An Approach to Computing Discrete Adjoints for MPI-Parallelized Models Applied to the Ice Sheet System Model

Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar, gravity, and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of ISSM. We present a comprehensive approach to 1) carry out type changing through ISSM, hence facilitating operator overloading, 2) bind to external solvers such as MUMPS and GSL-LU, and 3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the North-East Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, Central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential of enabling a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.

1 Introduction

Constant monitoring of polar ice sheets through remote sensing, in particular since the advent of altimeter, radar, and gravity sensors such as ICESat-1, CryoSat, RADARSAT-1, ERS-1 and ERS-2, Envisat, and GRACE, has created a large amount of data that has yet to find its way through Ice Sheet Models (ISMs) and hindcast reconstructions of polar ice sheet evolution. In particular, as evidenced by the wide discrepancy between ISMs involved in the SeaRISE and Ice2Sea projects (Nowicki et al., 2013; Bindschadler et al., 2013), significant improvements in modeled projections of the mass balance of polar ice sheets and their contribution to future sea-level rise have not resulted from the increase in availability of data, but rather from improvements in the type of physics captured in forward models. One reason for this is the lack of data assimilation capabilities embedded in the current generation of state-of-the-art ISMs. In the past 10 years, great strides have been made in improving model initialization by using steady-state model inversions of basal friction (MacAyeal, 1993; Morlighem et al., 2010; Larour et al., 2012; Price et al., 2011; Arthern and Gudmundsson, 2010), ice rheology (Rommelaere and MacAyeal, 1997; Larour et al., 2005; Khazendar et al., 2007) and bedrock elevation (Morlighem et al., 2014), among others.
However, these approaches aim at improving our knowledge of poorly constrained input parameters and boundary conditions as long as the ice-flow regime is captured in a steady-state configuration. These inversions rely on analytically derived adjoint states of forward stress-balance or mass-transport models, but do not extend to transient regimes of ice flow. Applications to transient models and long temporal time series, such as the ICESat/CryoSat continuous altimetry record from 2003 to present day, have been much rarer and, to our knowledge, are limited to a few studies such as Heimbach and Bugnion (2009), Goldberg and Sergienko (2011), Goldberg and Heimbach (2013), Larour et al. (2014) and Goldberg et al. (2015), among others. The main issue precluding widespread application of transient data assimilation lies in the difficulty of deriving temporal adjoints of transient models. In many cases, a manual derivation of the adjoint state of a forward model is not possible, especially where the ice-flow physics are not differentiable. This is the case, for example, in thermal transient models, where the melting point is a physical constraint (of the threshold type) imposed on temperature. This numerical issue can be mitigated by adopting different approaches, such as: 1) ensemble runs, as in Applegate et al. (2012), where model runs compatible with observations are selected; 2) methods similar to the flux-correction methods implemented in Aschwanden et al. (2013) and Price et al. (2011), where boundary conditions are corrected in order to match time series of observations (tuning approach); 3) quasi-static approaches, where snapshot inversions are carried out in time, as in Habermann et al. (2012, 2013); and 4) sampling methods, which have the main drawback of being computationally very expensive (each sample at the cost of one forward run). Though this is not an exhaustive list of all available methods, the main advantage of adjoint-driven inversions is that they rely on the exact sensitivity of a forward model to its inputs, hence ensuring a physically meaningful inversion.

Understanding sensitivities of a forward ice-flow model, which is needed to physically constrain a temporal inversion, requires computation of derivatives of model outputs with respect to model inputs. If such derivatives are approximated by finite-difference schemes, they are subject to the trade-off between approximation and truncation errors for the perturbation, which is aggravated for higher-order derivatives. If the derivatives are computed using algorithmic differentiation (AD) (Griewank and Walther, 2008), also known as automatic differentiation, then one can attain derivatives with machine precision, provided the underlying program implementation of the numerical model is amenable to the application of an AD tool. In particular, this approach does not depend on the type of physics relied upon, and it is transparent to the model equations, provided each step of the overall software is differentiable. Indeed, the AD method assumes a numerical model M that is a function y = f(x), mapping inputs x ∈ Rⁿ to outputs y ∈ Rᵐ, implemented as a computer program. The execution of the program implies the execution of a sequence of arithmetic operators such as +, −, * and intrinsics sin, eˣ, and so forth, to which the chain rule can be applied. For each such elemental operation r = φ(a, b, . . .) with result r and arguments a, b, . . . (in practice most φ are uni- or bivariate) we can write the total derivative as

ṙ = (∂φ/∂a) ȧ + (∂φ/∂b) ḃ + · · ·

For example, if φ is the multiplication operator *, i.e.
r = ab, then one gets the product rule ṙ = bȧ + aḃ. Applying the above rule to each elemental operation in the sequence gives a method to compute ẏ = Jẋ with the Jacobian J = [∂fᵢ/∂xⱼ], i = 1 . . . m, j = 1 . . . n, without explicitly forming the Jacobian. This method applies the chain rule in the computation order of the values in the program and is known as forward-mode AD. The opposite order of applying the chain rule to the elemental operations, known as reverse- or adjoint-mode AD, yields projections x̄ = Jᵀȳ. This is achieved by applying to the elemental operations φ, in reverse order of their original execution, the rule ā = ā + (∂φ/∂a) r̄, b̄ = b̄ + (∂φ/∂b) r̄, . . ., followed by r̄ = 0, where the bar operator denotes the corresponding adjoint variable. For the multiplication example one therefore gets ā = ā + b r̄; b̄ = b̄ + a r̄; r̄ = 0. In particular, for applications in which m ≪ n, the reverse mode is advantageous because its computational cost does not depend on n. Typical problems in Cryosphere science involve computations of diagnostics which are scalar-valued cost functions (m = 1), such as, for example, the spatio-temporally averaged misfit between modeled surface elevation and observed surface topography (Larour et al., 2014). For these cases, one can compute the gradient ∇f = Jᵀȳ with ȳ = 1 as a single projection. Thus, for high-resolution models implying very large n, the reverse mode is an enabling and potentially very efficient technique. This significant capability in AD is what makes its application to data assimilation so efficient. Instead of evaluating a Jacobian for each one of the outputs of the forward model with respect to each input, which would be significantly more expensive as it scales in n², the reverse mode evaluates the gradient of one specific output of interest with respect to the model inputs in only one sweep that scales in n.

Applying AD to large-scale frameworks such as ISSM (Larour et al., 2012), MITgcm (Heimbach, 2008), SICOPOLIS (Heimbach and Bugnion, 2009) or DassFlow (Monnier, 2010) is a difficult proposition, but one which enables significant improvements in the way models can be initialized (Heimbach and Bugnion, 2009), hindcast validated (Larour et al., 2012), and calibrated (Goldberg et al., 2015) towards better projections. Traditional approaches relying on source-to-source transformation have been developed, but for frameworks such as ISSM, which are C++ oriented and highly parallelized, this type of approach breaks down. Our goal here is to demonstrate how the so-called operator-overloading AD approach can be implemented and validated for a framework such as ISSM, and what developments were necessary to make this capability operational. Our approach is discussed in section 2 of this manuscript, with section 3 describing the method validation as well as applications with ISSM. In the last section we discuss and conclude on the applicability of such an approach to other frameworks, and on the opportunities these new developments afford for Cryosphere Science and data assimilation of remote sensing data in particular.

Type Change to Enable Overloaded Operators

Changing to the aforementioned active type is a significant effort to be undertaken in the model code. Among the choices to effect this type change, one should select one that is transparent to the model development process, is maintainable, and minimizes the manual effort.
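To make the operator-overloading mechanism concrete, the following is a minimal, self-contained sketch of taping a scalar cost function and obtaining its gradient in reverse mode with ADOL-C, the overloading tool used here. The tape tag, variable names and the toy cost function are illustrative only and are not taken from the ISSM sources.

// Minimal reverse-mode example with ADOL-C (illustrative, not ISSM code).
#include <adolc/adolc.h>
#include <cstdio>

int main() {
  const int n = 3;                 // number of independent variables
  double xp[n] = {1.0, 2.0, 3.0};  // point at which we differentiate
  double fp;                       // cost function value

  trace_on(1);                     // start recording operations on tape 1
  adouble x[n], f = 0.0;           // "active" variables of the overloaded type
  for (int i = 0; i < n; i++) {
    x[i] <<= xp[i];                // mark independents
    f += x[i] * x[i];              // toy scalar cost function f = sum of x_i^2
  }
  f >>= fp;                        // mark the dependent, copy its value out
  trace_off();                     // stop recording

  // One reverse sweep over the tape yields the full gradient (the m = 1 case).
  double g[n];
  gradient(1, n, xp, g);
  for (int i = 0; i < n; i++)
    printf("df/dx[%d] = %g (expected %g)\n", i, g[i], 2.0 * xp[i]);
  return 0;
}

In a framework the size of ISSM the same mechanism is hidden behind a model-wide floating-point type that is switched to the active type at compile time, so that the physics code itself is written only once.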
Interpreting the trace by R(T) means that the space holding the data for the variables r, a, b ∈ V* (the set of instantiated program variables at run time) occurring in each operation r = φ(a, b) must be represented by some mapping ω : V* → N⁺ to pseudo addresses. In ADOL-C these addresses are called locations and represent indices in a work array held by R(T). The pseudo addresses must be managed through the TA constructor and destructor in a fashion similar to the memory management of the actual program variables themselves, i.e. pseudo addresses are assigned from and returned to a pool Ω of available addresses. However, no distinction between heap and stack variables is made and generally data locality will not be preserved. On the other hand, for special operations φ_A with array arguments of size s, that is, calls to external solvers (see section 2.3) or MPI routines (see section 2.4), it would be counterproductive to record in T the pseudo addresses of each of the s array elements rather than a consecutive range. The latter, however, imposes that the pool be primed by some call Ω(s) such that ω returns consecutive pseudo addresses when called by the constructor for each element in the array. In ADOL-C this is done by calling ensureContiguousLocations immediately before an active array is instantiated. Avoiding copying of array values in M as an efficiency measure means that any given array has a good chance of being used in some φ_A, and to avoid littering the code with preprocessor-guarded calls to Ω(s), we decided in ISSM to instead add a call to ensureContiguousLocations in the TA specialization of xNew; an illustrative sketch of such a specialization is given at the end of this section.

External functions f_e that had been supported by ADOL-C had the signature f_e(l_x, x, l_y, y), with inputs x, outputs y for the original call, l_x, l_y their respective array lengths, and f̄_e(l_y, ȳ, l_x, x̄) for the adjoint counterpart. In the case of a linear system with A ∈ A, the input x is packed with both A and r, while y contains the solution s of the system on return. This, however, was insufficient for binding to solvers from the GNU Scientific Library (GSL) (Galassi, 2009).

Regarding Q1, we know that in ISSM A ∈ A, and regarding Q2, the parameters passed to f_e have no other uses and therefore, using the controls (E3), we avoid (re)storing their values. The direct solver from the GSL used here had no API control to back-solve with the factors for the transposed system, and we did not want to reverse engineer the permutation representation. Hence the refactoring was done as a matter of convenience for the sequential reference case, requiring E1. The parallel and therefore practically more efficient MUMPS solver operates on sparse, distributed A, therefore requiring E2. MUMPS offers both the ability to store the factors to file and to perform the back-solve for the transpose. However, the MUMPS portion of the runtime is comparatively small (see section 3). Consequently, given the factor data size after fill-in, the file I/O overhead of storing the factors is not expected to yield much practical benefit in this context, answering Q3. Finally, for Q4, the extension E4 is exploited because transient runs of ISSM need to account for changes in the system size, and a preallocation with the maximal buffer sizes therefore avoids some of the memory management overhead.
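As an illustration of the xNew specialization mentioned above, a minimal sketch follows. The function name xNew and the call to ensureContiguousLocations are taken from the text; everything else (the exact template signature, error handling, the surrounding ISSM infrastructure) is simplified and assumed for the purpose of the example.

// Illustrative sketch of an allocation helper with a specialization for the
// ADOL-C active type (not the actual ISSM source).
#include <adolc/adolc.h>
#include <cstddef>

template <typename T>
T* xNew(std::size_t size) {
  // Generic version: ordinary allocation for passive types (double, int, ...).
  return new T[size];
}

template <>
adouble* xNew<adouble>(std::size_t size) {
  // Prime ADOL-C's location pool so that the array elements receive a
  // consecutive range of pseudo addresses; external solver and AMPI calls can
  // then record the array on the trace as one contiguous block.
  // (ensureContiguousLocations is the ADOL-C call named in the text; its exact
  // signature should be checked against the installed ADOL-C headers.)
  ensureContiguousLocations(size);
  return new adouble[size];
}

A caller would then allocate active arrays as, for example, adouble* rhs = xNew<adouble>(ndof); without any AD-specific code at the call site.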
Handling Parallelism with the AdjoinableMPI Library

As is the case with ISSM, practically relevant science problems incur a computational complexity that necessitates execution on parallel hardware, often using MPI as the paradigm of choice. Sending data with MPI from a source buffer s to a target buffer t can be interpreted as a simple assignment t = s. This implies for the adjoint an increment s̄ = s̄ + t̄, that is, the adjoint is a communication of the adjoint values in the reverse direction, from the target back to the source. The AdjoinableMPI (AMPI) library provides wrapper routines that mirror their MPI counterparts, are distinguished by the prefix AMPI, and have a few additional parameters where needed to enable the adjoint functionality. AMPI also provides additional types and predefined symbols. Discussing the internal design of AMPI is outside the scope of this paper. However, the application to ISSM is the first large-scale practical use of AMPI, and in the following we will discuss the steps taken to use it in the ISSM code base.

For testing and small-scale experiments, the ISSM code, like many other models, treats MPI parallelization as a compile-time option controlled by preprocessor macros. Furthermore, turning the adjoint capability on and off as suggested in section 2.2 would imply additional switching between the original MPI and AMPI, with code duplication and the potential for errors. To avoid these undesirable consequences we decided to introduce in ISSM another wrapper layer (prefixed ISSM_MPI) of calls and definitions to encapsulate completely the four functionality variants listed in Table 1 (an illustrative sketch of such a wrapper is given at the end of this section). The approach is shown for MPI_Reduce in Fig. 1, and a further example is shown in Fig. 2. Assuming many AMPI calls in foo, this reduces the count of code locations where type errors may be introduced to the template instantiations.

Finally, a practical concern for using the parallelized adjoint is the handling of sensitivities to quantities that are uniformly initialized across ranks, such as parameters. Frequently, as was the case within ISSM, these quantities are initialized from files or otherwise per process in the parallel case the same way as in the sequential case. In the parallel case that implies a replication of the same quantity across ranks. However, to obtain the correct sensitivities, the quantity q in question should be unique; in other words, that quantity must be initialized at one root rank only and then broadcast to the other ranks. Otherwise, for r ranks, one would obtain at each rank only a part q̄ᵢ of the total q̄ and would have to "manually" sum up q̄ = Σᵢ q̄ᵢ. With an initial broadcast of q, however, the corresponding adjoint provided through AMPI by using AMPI_Bcast is exactly that sum reduction, and R(T) yields the correct adjoint at the broadcast root. This notion similarly applies to any situation where a conceptually unique quantity of active type is implicitly replicated on some ranks.

Validation

ISSM is validated in AD mode by continuously running a test suite within the Jenkins (Smart, 2011) integration and development framework (available at http://issm.jpl.nasa.gov/developers/). A detailed description of the suite of benchmarks is given in Table 3. The aim is to 1) compare forward runs in ISSM with their counterparts when overloaded operators are switched on, where the results should be identical within double precision tolerances; and 2) compare forward and reverse runs carried out with ISSM AD on and off, using the GSL and MUMPS solvers.
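As an illustration of such a wrapper layer, the sketch below shows one way an ISSM_MPI-style broadcast could dispatch between a serial fallback, plain MPI and AdjoinableMPI. The macro names, the typedef, the header path and the AMPI_Bcast parameter list are assumptions made for this example and are not taken from the ISSM sources; the AMPI call is assumed to mirror MPI_Bcast and should be checked against the AdjoinableMPI documentation.

// Sketch of an ISSM_MPI-style wrapper for a broadcast (illustrative only).
#if defined(HAVE_AD) && defined(HAVE_MPI)
#include <mpi.h>
#include <adolc/adolc.h>       // active type; AMPI_ADOUBLE assumed provided by
#include <ampi/ampi.h>         // ADOL-C's AMPI support (assumed header path)
typedef adouble IssmDouble;    // AD build: the model-wide type is active
#elif defined(HAVE_MPI)
#include <mpi.h>
typedef double IssmDouble;     // plain parallel build
#else
typedef double IssmDouble;     // serial build (an AD-only serial build would
                               // use adouble here and also fall through below)
#endif

int ISSM_MPI_Bcast(IssmDouble* buffer, int count, int root) {
#if defined(HAVE_AD) && defined(HAVE_MPI)
  // AD + MPI build: forward to AdjoinableMPI. The adjoint of a broadcast is a
  // sum reduction of the adjoints back onto the root rank, which AMPI records.
  return AMPI_Bcast(buffer, count, AMPI_ADOUBLE, root, MPI_COMM_WORLD);
#elif defined(HAVE_MPI)
  // Plain MPI build.
  return MPI_Bcast(buffer, count, MPI_DOUBLE, root, MPI_COMM_WORLD);
#else
  // Serial build: nothing to communicate, the data is already in place.
  (void)buffer; (void)count; (void)root;
  return 0;
#endif
}

Model code then calls ISSM_MPI_Bcast unconditionally, and the choice among the functionality variants is made once, at compile time, inside the wrapper.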
Comparisons of gradients computed in AD mode (essentially in reverse scalar mode, which is the mode of predilection in ISSM for data assimilation) with standard forward-difference methods are also carried out, to make sure the gradients computed in AD mode are accurate.

Application

The ongoing use of adjoint computations includes sensitivity studies and state estimation problems for transient model runs. Because this paper concentrates on technical aspects, we show, only as exemplary evidence of the practical usefulness, some sensitivities of cost functions calibrated for two sensors commonly encountered in Cryosphere Science: altimeters (that measure surface elevation) and radars (that measure surface displacement, or velocity); these sensitivities are shown in Fig. 3.

Table 3. ISSM AD validation suite integrated within Jenkins for continuous integration and delivery (Smart, 2011). Tests 3001 to 3010 and 3101 to 3110 test the repeatability of forward runs with and without ADOL-C compiled, but with no AD drivers specifically called. The forward runs involved are the standard stress balance, mass transport and thermal solutions, with 2D SSA (Shelfy-Stream Approximation; MacAyeal, 1989), 3D SSA, 3D HO (Higher-Order; Blatter, 1995; Pattyn, 1996) and 3D Full-Stokes (Stokes, 1845) formulations.

Timings with default optimization (-O2) are shown in Fig. 5 (upper frame). While this plot indicates a small overhead factor of ≈ 4.5, in particular for the largest mesh case (distance 12.5 km), the reason for this becomes apparent in the plot in the lower frame. It shows that the majority of the run time is consumed by the GSL solver (libgsl), completely overshadowing any of the overhead caused by the adjoint for the largest mesh. We want to emphasize that GSL was chosen not for its efficiency but for the simplicity of the setup, which quickly enabled adjoint computations. Introducing AMPI and thereby moving to a more appropriate solver (MUMPS) causes the adjoint overhead to become more prominent.

Our approach relies on operator overloading coupled with the AdjoinableMPI library, which is to our knowledge the first time this type of approach has been systematically applied to a software framework of this size and complexity. Despite the difficulties encountered rewriting the software, the overloaded approach is transparent to the user, which is critical given the size of the larger Cryosphere Science community that is not familiar with the adjoint work, and for which classic approaches such as source-to-source transformation have proven to be overly cumbersome. The flexibility of this approach allows in particular for quick turn-around in developing adjoint models of new parameterizations which are not easily hand derived. This is a major advantage in that it opens this approach to the wider community. This, given the large amount of remote sensing data currently being collected and under-utilized, could prove paramount if we are to hindcast validate projections of sea-level rise. Further work is of course required to bring in additional observations such as gravity sensors, or radar stratigraphy observations, which will involve development of new cost functions, and scalability in 3D. Though this is complex in that it requires integrated resiliency and adjoint checkpointing schemes for long-running transient modeling scenarios, our approach has proven flexible, and should lead to a brand new set of data assimilation capabilities that have already been available to other Earth Science communities for a long time.
Indeed, by allowing temporal data assimilation for a large number of sensors and models, as demonstrated here with the use of altimetry and radar sensors for mass transport and stress balance models respectively, ISSM paves the way for wider integration between the modeling and observational Cryosphere communities.

Code Availability

The ISSM code and its AD components are available at http://issm.jpl.nasa.gov. The instructions for the compilation of ISSM in AD mode, along with test cases, are presented in the supplement attached to this manuscript.
Long-range transport of volcanic aerosol from the 2010 Merapi tropical eruption to Antarctica

Abstract. Volcanic sulfate aerosol is an important source of sulfur for Antarctica, where other local sources of sulfur are rare. Midlatitude and high-latitude volcanic eruptions can directly influence the aerosol budget of the polar stratosphere. However, tropical eruptions can also enhance polar aerosol load following long-range transport. In the present work, we analyze the volcanic plume of a tropical eruption, Mount Merapi in 2010, and investigate the transport pathway of the volcanic aerosol from the tropical tropopause layer (TTL) to the lower stratosphere over Antarctica. We use the Lagrangian particle dispersion model Massive-Parallel Trajectory Calculations (MPTRAC) and Atmospheric Infrared Sounder (AIRS) SO2 measurements to reconstruct the altitude-resolved SO2 injection time series during the explosive eruption period and simulate the transport of the volcanic plume using the MPTRAC model. AIRS SO2 and aerosol measurements, as well as the aerosol cloud index values provided by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), are used to verify and complement the simulations. The Lagrangian transport simulation of the volcanic plume is compared with MIPAS aerosol measurements and shows good agreement. Both the simulations and the observations presented in this study suggest that volcanic plumes from the Merapi eruption were transported to the south of 60° S one month after the eruption and even further to Antarctica in the following months. This relatively fast meridional transport of volcanic aerosol was mainly driven by quasi-horizontal mixing from the TTL to the extratropical lower stratosphere, and most of the quasi-horizontal mixing occurred between the isentropic surfaces of 360 to 430 K. When the plume went to Southern Hemisphere high latitudes, the polar vortex was displaced from the South Pole, so that the volcanic plume was carried to the South Pole without penetrating the polar vortex. Although only 4 % of the sulfur injected by the Merapi eruption was transported into the lower stratosphere south of 60° S, the Merapi eruption contributed up to 8800 t of sulfur to the Antarctic lower stratosphere. This indicates that long-range transport under favorable meteorological conditions enables a moderate tropical volcanic eruption to be an important remote source of sulfur for the Antarctic stratosphere.
Introduction

Over the past two decades, multiple volcanic eruptions have injected sulfur into the upper troposphere and lower stratosphere, which has been the dominant source of the stratospheric sulfate aerosol load, preventing the background level from other sources from ever being seen. Stratospheric sulfate aerosol mainly reflects solar radiation and absorbs infrared radiation, causing cooling of the troposphere and heating of the stratosphere. Stratospheric sulfate aerosol also has an impact on chemical processes in the lower stratosphere (Jäger and Wege, 1990; Solomon et al., 1993), in particular on polar ozone depletion (e.g., McCormick et al., 1982; Solomon, 1999; Solomon et al., 1986, 2016; Portmann et al., 1996; Tilmes et al., 2008; Drdla and Müller, 2012). The presence of H2SO4 in the polar stratosphere in combination with cold temperatures facilitates the formation of polar stratospheric clouds (PSCs), which increase heterogeneous ozone depletion chemistry (Solomon, 1999; Zuev et al., 2015). Recent healing of Antarctic ozone depletion has constantly been disturbed by moderate volcanic eruptions (Solomon et al., 2016). Midlatitude and high-latitude explosive volcanic eruptions may directly influence the polar stratosphere and may have an effect on ozone depletion in the next austral spring. For example, the aerosol plume from the Calbuco eruption in 2015, including various volcanic gases, strongly enhanced heterogeneous ozone depletion at the vortex edge and caused an Antarctic ozone hole with the largest daily averaged size on record in October 2015 (Solomon et al., 2016; Ivy et al., 2017; Stone et al., 2017). Usually, Antarctica is relatively free of local aerosol sources, but aerosol from low latitudes can reach Antarctica through long-range transport (Sand et al., 2017). Part of the sulfate found in ice cores can be attributed to tropical volcanic eruptions (Gao et al., 2007).
Measurements of enhanced aerosol in the lower Antarctic stratosphere right above the tropopause were made in October and November 1983, 1984and 1985. These enhanced aerosol number concentrations were attributed to aerosol transported to Antarctica from the eruption of the tropical volcano El Chichón in 1982(Hofman et al., 1985Hofmann et al., 1988). Model results indicated that numerous moderate eruptions affected ozone distributions over Antarctica, including the Merapi tropical eruption in 2010 (Solomon et al., 2016). However, due to the limit of spatial and temporal resolution of satellite data and in situ observations, it is difficult to investigate the transport process as well as the influence of the location of the eruption, the plume height and the background meteorological conditions. The transport mechanism is not well represented in present global climate models and the uncertainties of the modeled aerosol optical depth in polar regions are large (Sand et al., 2017). Mount Merapi (7.5 • S, 110.4 • E; elevation: 2930 m) is an active stratovolcano located in Central Java, Indonesia. Merapi has a long record of eruptive activities. The most recent large eruption with a volcanic explosivity index of 4 occurred between 26 October and 7 November 2010 (Pallister et al., 2013), with SO 2 emission rates being a few orders of magnitude higher than previous eruptions. Following the Merapi eruption in 2010, evidence of poleward transport of sulfate aerosol towards the Southern Hemisphere high latitudes was found in time series of aerosol measurements by the Michelson Interferometer for Passive Atmospheric Sounding (MI-PAS) (Günther et al., 2018) and Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) (Khaykin et al., 2017;Friberg et al., 2018). There are three main ways that transport out of the tropical tropopause layer (TTL) occurs: the deep and shallow branches of the Brewer-Dobson circulation (BDC) and horizontal mixing (Vogel et al., 2011). There is considerable year-to-year seasonal variability in the amount of irreversible transport from the tropics to high latitudes, which is related to the phase of the quasi-biennial oscillation and the state of the polar vortex (Olsen et al., 2010). The BDC plays a large role in determining the distributions of many constituents in the extratropical lower stratosphere. The faster quasi-horizontal transport between the tropics and polar regions also significantly contributes to determining these distributions. The efficiency of transporting constituents quasi-horizontally depends on wave breaking patterns and varies with the time of the year (Toohey et al., 2011;Wu et al., 2017). Better knowledge of the transport pathways and an accurate representation of volcanic sulfur injections into the upper troposphere and lower stratosphere (UTLS) are key elements for estimating the global stratospheric aerosol budget, the cooling effects and the ozone loss linked to volcanic activity. The aim of the present study is to reveal the transport process and the influence of meteorological conditions by combining satellite observations with model simulations in a case study. We investigate the quasi-horizontal transport by tracing the volcanic plume of the Merapi eruption from the tropics to Antarctica and quantifying its contribution to the sulfur load in the Antarctic lower stratosphere. In Sect. 
2, the new Atmospheric Infrared Sounder (AIRS) SO2 measurements, the new MIPAS aerosol measurements (Höpfner et al., 2015; Griessbach et al., 2016) and the method for reconstructing the SO2 injection time series of the Merapi eruption are introduced. In Sect. 3 the results are presented: first, the reconstructed time series of the Merapi eruption is discussed; second, the dispersion of the Merapi plume is investigated using long Lagrangian forward trajectories initialized with the reconstructed SO2 time series; third, the simulation results are compared with MIPAS aerosol measurements and the plume dispersion is investigated using MIPAS aerosol detections. In Sect. 4 the results are discussed and the conclusions are given in Sect. 5.

2 Satellite data, model and method

2.1 MIPAS aerosol measurements

MIPAS (Fischer et al., 2008) is an infrared limb emission spectrometer aboard the European Space Agency's (ESA's) Envisat, which provided nearly 10 years of measurements from July 2002 to April 2012. MIPAS spectral measurements cover the wavelength range from 4.15 to 14.6 µm. The vertical coverage of the MIPAS nominal measurement mode during the optimized resolution phase from January 2005 to April 2012 was 7-72 km. The field of view of MIPAS was about 3 km × 30 km (vertically × horizontally) at the tangent point. The extent of the measurement volume along the line of sight was about 300 km, and the horizontal distance between two adjacent limb scans was about 500 km. On each day, ∼ 14 orbits with ∼ 90 profiles per orbit were measured. From January 2005 to April 2012, the vertical sampling grid spacing between the tangent altitudes was 1.5 km in the UTLS and 3 km at altitudes above. In 2010 and 2011, MIPAS measured for 4 days in the nominal mode followed alternately by 1 day in the middle atmosphere mode or upper atmosphere mode. In this study, we focused on measurements in the nominal mode.

For the aerosol detection, we used the MIPAS altitude-resolved aerosol cloud index (ACI) as introduced by Griessbach et al. (2016) for comparison with the model simulations and to analyze the poleward transport of the Merapi volcanic plume. The ACI is the maximum value of the cloud index (CI) and aerosol index (AI):

ACI = max(CI; AI). (1)

The CI is an established method to detect clouds and aerosol with MIPAS. The CI is the ratio between the mean radiances around 792 cm⁻¹, where a CO2 line is located, and the atmospheric window region around 833 cm⁻¹ (Spang et al., 2001):

CI = I1 / I2, (2)

where I1 and I2 are the mean radiances of each window. The AI is defined as the ratio between the mean radiance around the 792 cm⁻¹ CO2 band and the atmospheric window region between 960 and 961 cm⁻¹:

AI = I1 / I3, (3)

where I1 and I3 are the mean radiances of each window. The ACI is a continuous unitless value. Small ACI values indicate a high cloud or aerosol particle load, and large values indicate a smaller cloud or aerosol particle load. For the CI, Sembhi et al. (2012) defined a set of variable (latitude, altitude and season) thresholds to discriminate between clear and cloudy air. The most advanced set of altitude- and latitude-dependent thresholds allows for the detection of aerosol and clouds with infrared extinction coefficients larger than 10⁻⁵ km⁻¹. For the ACI, a comparable sensitivity is achieved when using a fixed threshold value of 7. Variations in the background aerosol are also visible with larger ACI values.
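As a compact illustration of how these indices combine, the following sketch computes the CI, AI and ACI from the three mean radiances and applies the fixed ACI threshold of 7 quoted above; the structure, variable names and the sample radiance values are purely illustrative.

// Illustrative computation of MIPAS cloud/aerosol indices from mean radiances
// (I1: ~792 cm^-1 CO2 region, I2: ~833 cm^-1 window, I3: 960-961 cm^-1 window).
#include <algorithm>
#include <cstdio>

struct MeanRadiances { double I1, I2, I3; };

// ACI = max(CI, AI) with CI = I1/I2 and AI = I1/I3; small values indicate a
// high cloud/aerosol particle load, large values indicate clear air.
double aerosolCloudIndex(const MeanRadiances& r) {
  const double CI = r.I1 / r.I2;
  const double AI = r.I1 / r.I3;
  return std::max(CI, AI);
}

int main() {
  const MeanRadiances sample = {1.0, 0.2, 0.25};  // made-up radiance values
  const double ACI = aerosolCloudIndex(sample);
  const double threshold = 7.0;                   // fixed ACI threshold from the text
  printf("ACI = %.2f -> %s\n", ACI,
         ACI <= threshold ? "cloudy/aerosol-loaded air" : "clear air");
  return 0;
}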
To remove ice clouds and volcanic ash from the MIPAS aerosol measurements, we first separated the data into clear air (ACI > 7) and cloudy air (ACI ≤ 7). Then we applied the ice cloud filter and the volcanic ash and mineral dust filter to the cloudy part and removed all ice or ash detections. This ACI-based ice and ash cloud filter has been successfully used in studies of volcanic sulfate aerosol, e.g., Wu et al. (2017) and Günther et al. (2018). However, since the ice and ash cloud filters are not sensitive to non-ice PSCs, the resulting aerosol retrieval results still contain non-ice PSCs. We keep the non-ice PSCs in the MIPAS retrieval results in this study to show the temporal and spatial extent of the PSCs, i.e., when and where the identification of volcanic aerosol is not possible.

AIRS

AIRS (Aumann et al., 2003) is an infrared nadir sounder with across-track scanning capabilities aboard the National Aeronautics and Space Administration's (NASA's) Aqua satellite. Aqua was launched in 2002 and operates in a nearly polar Sun-synchronous orbit at about 710 km with a period of 98 min. AIRS provides nearly continuous measurement coverage with 14.5 orbits per day, and with a swath width of 1780 km it covers the globe almost twice a day. The AIRS footprint size is 13.5 km × 13.5 km at nadir and 41 km × 21.4 km for the outermost scan angles. The along-track distance between two adjacent scans is 18 km. The AIRS measurements provide good horizontal resolution, which makes them ideal for observing the fine filamentary structures of volcanic SO2 plumes.

In this study, we use an optimized SO2 index (SI, unit: K) to estimate the amount of SO2 injected into the atmosphere by the Merapi eruption in 2010. The SI is defined as a brightness temperature difference in the 7.3 µm SO2 waveband,

SI = BT(ν_bg) − BT(ν_SO2),

where BT is the brightness temperature measured at wavenumber ν, ν_bg denotes the background channel and ν_SO2 a channel within the 7.3 µm SO2 waveband. This SI is more sensitive to low concentrations and performs better in suppressing background interfering signals than the SI provided in the AIRS operational data products. It is an improvement of the SI definition given by Hoffmann et al. (2014) by means of a better choice of the background channel (selecting 1412.87 cm⁻¹ rather than 1407.2 cm⁻¹). The SI increases with increasing SO2 column density and it is most sensitive to SO2 at altitudes above 3-5 km. SO2 injections into the lower troposphere are usually not detectable in the infrared spectral region because the atmosphere becomes opaque due to the water vapor continuum. A detection threshold of 1 K was used in this study to identify the Merapi SO2 injections. The SI was converted into SO2 column density using a correlation function described in Hoffmann et al. (2014), which was obtained using radiative transfer calculations. AIRS detected the Merapi SO2 cloud from 3 to 15 November 2010.

MPTRAC model and reconstruction of the volcanic SO2 injection time series of the Merapi eruption

In this study, we use the highly scalable Massive-Parallel Trajectory Calculations (MPTRAC) Lagrangian particle dispersion model to simulate the transport of the volcanic plume. Hoffmann et al. (2016) showed that ERA-Interim data provide the best trade-off between accuracy and computing time, so in this study our calculations are based on ERA-Interim data. Diffusion is modeled by uncorrelated Gaussian random displacements of the air parcels with zero mean and standard deviations σ_x = √(D_x Δt) (horizontally) and σ_z = √(D_z Δt) (vertically), where D_x and D_z are the horizontal and vertical diffusivities, respectively, and Δt is the time step for the trajectory calculations.
Depending on the atmospheric conditions, actual values of D x and D z may vary by several orders of magnitude (e.g., Legras et al., 2003Legras et al., , 2005Pisso et al., 2009). In our simulations, we follow the approach of Stohl et al. (2005) and set D x and D z to 50 and 0 m 2 s −1 in the troposphere, and 0 and 0.1 m 2 s −1 in the stratosphere, respectively. The same values of diffusivities were also used in Hoffmann et al. (2017) for trajectory calculations in the Southern Hemisphere stratosphere (September 2010 to January 2011), and in Wu et al. (2017) for trajectory calculations in the Northern Hemisphere UTLS region (June to July 2009). In addition, subgrid-scale wind fluctuations, which are particularly important for long-range simulations, are simulated by a Markov model (Stohl et al., 2005;Hoffmann et al., 2016). Loss processes of chemical species, SO 2 in our case, are simulated based on an exponential decay of the mass assigned to each air parcel. A constant half lifetime of 7 days is assumed for SO 2 for the stratosphere, and 2.5 days is assumed for the troposphere. Sensitivity tests were made to test the most appropriate SO 2 lifetime in this study. When a longer lifetime, e.g., 2 to 4 weeks is used, the simulated SO 2 density decay is much slower than the AIRS SO 2 measurement shows. We attribute this to the high water vapor and hydroxyl radical concentration in the tropical troposphere and lower stratosphere region. To estimate the time-and altitude-resolved SO 2 injections, we follow the approach of Hoffmann et al. (2016) and Wu et al. (2017) and use backward trajectories calculated with the MPTRAC model together with AIRS SO 2 measurements. Measurements from 3 to 7 November 2010 were used to estimate the SO 2 injection during the explosive eruption. Since the AIRS measurements do not provide altitude information, we established a column of air parcels at locations of individual AIRS SO 2 detections. The vertical range of the column was set to 0-25 km, covering the possible vertical dispersion range of the SO 2 plume in the first few days. The AIRS footprint size varies between 14 and 41 km; hence in the horizontal direction, we chose an average of 30 km as the full width at half maximum for the Gaussian scatter of the air parcels. In our simulations, a fixed total number of 100 000 air parcels was assigned to all air columns and the number of air parcels in each column was scaled linearly proportional to the SO 2 index. Then backward trajectories were calculated for all air parcels, and trajectories that were at least 2 days but no more than 5 days long and that passed the volcano domain were recorded as emissions of Merapi. The volcano domain was defined by means of a search radius of 75 km around the location of the Merapi and 0-20 km in the vertical direction, covering all possible injection heights. Sensitivity experiments have been conducted to optimize these pre-assigned parameters to obtain the best simulation results. Our estimates of the Merapi SO 2 injection are shown in Sect. 3. Starting with the reconstructed altitude-resolved SO 2 injection time series, the transport of the Merapi plume is simulated for 6 months. The trajectory calculations are driven by the ERA-Interim data (Dee et al., 2011) interpolated on a 1 • × 1 • horizontal grid on 60 model levels, with the vertical range extending from the surface to 0.1 hPa. The ERA-Interim data are provided at 00:00, 06:00, 12:00 and 18:00 UTC. 
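To make the diffusion and loss parameterizations described above concrete, here is a minimal sketch using the diffusivities and half-lives quoted in the text. The parcel structure, the time stepping and the treatment of the troposphere/stratosphere distinction are simplified assumptions; this is not the MPTRAC implementation.

// Sketch of the two parameterizations described above: Gaussian diffusion
// displacements with sigma = sqrt(D * dt) and exponential SO2 mass decay with
// a region-dependent half-life (illustrative only, not the MPTRAC source).
#include <cmath>
#include <cstdio>
#include <random>

struct Parcel { double x, z, so2Mass; bool inStratosphere; };

void stepParcel(Parcel& p, double dt, std::mt19937& rng) {
  // Diffusivities quoted in the text: troposphere Dx = 50, Dz = 0 m^2 s^-1;
  // stratosphere Dx = 0, Dz = 0.1 m^2 s^-1.
  const double Dx = p.inStratosphere ? 0.0 : 50.0;
  const double Dz = p.inStratosphere ? 0.1 : 0.0;
  std::normal_distribution<double> gauss(0.0, 1.0);
  p.x += std::sqrt(Dx * dt) * gauss(rng);   // horizontal random displacement [m]
  p.z += std::sqrt(Dz * dt) * gauss(rng);   // vertical random displacement [m]

  // Exponential decay of the SO2 mass assigned to the parcel: half-life of
  // 7 days in the stratosphere, 2.5 days in the troposphere.
  const double halfLifeSeconds = (p.inStratosphere ? 7.0 : 2.5) * 86400.0;
  p.so2Mass *= std::exp(-std::log(2.0) * dt / halfLifeSeconds);
}

int main() {
  std::mt19937 rng(42);
  Parcel p = {0.0, 18000.0, 1.0, true};     // unit SO2 mass at ~18 km
  for (int i = 0; i < 24 * 60; ++i)         // one day in 60 s steps
    stepParcel(p, 60.0, rng);
  printf("after 1 day: x = %.1f m, z = %.1f m, SO2 mass fraction = %.3f\n",
         p.x, p.z, p.so2Mass);
  return 0;
}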
Outputs of the model simulations are given every 3 h, at 00:00, 03:00, 06:00, 09:00, 12:00, 15:00, 18:00 and 21:00 UTC. The impact of different meteorological analyses on MPTRAC simulations was assessed by Hoffmann et al. (2016, 2017). In both studies the ERA-Interim data showed good performance.

Meteorological background conditions in Antarctica

The Merapi eruption in 2010 occurred during the seasonal transition from austral spring to summer, when the polar vortex typically weakens and the ozone hole shrinks. Figure 1 depicts the meteorological conditions in the polar lower stratosphere (150 hPa, ∼ 12 km) after the eruption. The minimum temperature south of 50° S (Fig. 1a) was much lower than the climatological mean during mid-November to mid-December but still higher than the low temperature necessary for the existence of PSCs. The polar mean temperature in Fig. 1b, defined as the temperature averaged over latitudes south of 60° S, stayed lower than the climatological mean from November 2010 until February 2011. Corresponding to the low temperatures, the average zonal wind speed at 60° S (Fig. 1c) was significantly larger than the climatological mean value from November 2010 to mid-January 2011. The eddy heat flux in Fig. 1d is the product of meridional wind departures and temperature departures from the respective zonal mean values. A more negative value of the eddy heat flux indicates that wave systems are propagating into the stratosphere and are warming the polar region (Edmon et al., 1980; Newman and Nash, 2000; Newman et al., 2001). There is a strong anticorrelation between temperature and the 45-day average of the eddy heat flux lagged prior to the temperature. Compared with the climatological mean state, the polar vortex was more disturbed during mid-July to the end of August, but from mid-October to late November the heat flux was much smaller than the long-term average, which meant a reduction in dynamical disturbances. Considering the temperature, the subpolar wind speed and the heat flux, the polar vortex was colder and stronger in November and early December 2010 than it was at the same time in other years (see Fig. 1e). Consistent with the large wind speed and low temperature, the polar vortex was stable after the Merapi eruption until early December 2010. Afterwards, it shrank abruptly and was destroyed by mid-January 2011. In accordance with the strength of the polar vortex, in November and early December 2010 the ozone hole area in Fig. 1f, defined as the region of ozone values below 220 Dobson units (DU) located south of 40° S, was larger than the climatological mean. Meanwhile, the low polar mean temperature and stable polar vortex resulted in a long-lasting ozone hole, which disappeared in the last week of December. The polar vortex broke down by mid-January 2011, when the subpolar wind speed decreased below 15 m s⁻¹ (Fig. 1c).

Fig. 1 (caption excerpt): The ozone hole area in (f) is determined from OMI ozone satellite measurements (Levelt et al., 2006). The red triangles indicate the time of the Merapi eruption.
Merapi eruption and SO 2 injection time series According to the chronology of the Merapi eruption that combined satellite observations from AIRS, the Infrared Atmospheric Sounding Interferometer, the Ozone Monitoring Instrument (OMI) and a limited number of ground-based ultraviolet differential optical absorption spectroscopy measurements (Surono et al., 2012), the explosive eruption first occurred between 10:00 and 12:00 UTC on 26 October and this eruption generated an ash plume that reached 12 km al- titude. A period of relatively small explosive eruptions continued from 26 to 31 October. On 3 November, the eruptive intensity increased again accompanied by much stronger degassing and a series of explosions. The intermittent explosive eruptions occurred during 4-5 November, with the climactic eruption on 4 November, producing an ash column that reached up to 17 km altitude. From 6 November, explosive activity decreased slowly and the degassing declined. Figure 2 shows the time-and altitude-resolved SO 2 injections of the Merapi eruption retrieved using the AIRS SO 2 index data and the backward trajectory approach. It agrees well with the chronology of the Merapi eruption as outlined by Surono et al. (2012). SO 2 was injected into altitudes below 8 km during the initial explosive eruptions on 26-30 October. Starting from 31 October the plume reached up to 12 km. During 1-2 November the SO 2 injections into altitudes below 12 km continued but the mass was less than the mass at the initial phase. On 3 November the intensity increased again and peaked on 4 November. Before 3 November the reconstruction indicates a minor fraction of SO 2 right above the tropopause. The SO 2 above the tropopause is not reported in the study of Surono et al. (2012), but is quite robust in our simulations. Further, CALIOP profiles show that some dust appeared at the height from about 14 to 18 km around Mount Merapi on 2, 3 and 5 November 2010, and between 3 and 17 km on 6 November 2010. It could be a fraction of volcanic plume elevated by the updraft in the convection associated with the tropical storm Anggrek. The center of the tropical storm Anggrek was on the Indian Ocean about 1000 km southwest of Mount Merapi. The SO 2 mass above the tropopause is very small compared with the total SO 2 mass. To study the long-range transport of the Merapi plume, we initialized 100 000 air parcels as the SO 2 injection time series shown in Fig. 2. A total SO 2 mass of 0.44 Tg is as-signed to these air parcels as provided in Surono et al. (2012). Then the trajectories are calculated forward for 6 months. Here, we only considered the plume in the upper troposphere and stratosphere, where the lifetime of both SO 2 and sulfate aerosol is longer than their lifetime in the lower troposphere. Further, the SO 2 was converted into sulfate aerosol within a few weeks (von Glasow et al., 2009; also confirmed by the AIRS SO 2 and MIPAS aerosol data), and we assumed that the sulfate aerosol remained collocated with the SO 2 plume. Figure 3 shows the evolution of the simulated Merapi plume and compares the plume altitudes to the aerosol top altitudes measured by MIPAS between 7 and 23 November. Immediately after the eruption, the majority of the plume moved towards the southwest and was entrained by the circulation of the tropical storm Anggrek. After Anggrek weakened and dissipated, the majority of the plume parcels in the upper troposphere moved eastward and those in the lower stratosphere moved westward. 
In general, the altitudes of the simulated plume agree with the MIPAS measurements. The remaining discrepancies of air parcel altitudes being higher than the altitudes of MIPAS aerosol detections can be attributed to the fact that the MIPAS tends to underestimate aerosol top cloud altitudes, which is about 0.9 km in the case of low extinction aerosol layers and can reach down to 4.5 km in the case of broken cloud conditions (Höpfner et al., 2009). Lagrangian simulation and satellite observation of the transport of the Merapi plume The early plume evolution until about 1 month after the initial eruption is shown on the maps in Fig. 3 together with MIPAS measurements of volcanic aerosol (only aerosol detections with ACI < 7 are shown). Within about 1 month after the initial eruption, the plume is nearly entirely transported around the globe in the tropics, moving west at altitudes of about 17 km. The lower part of the plume, below about 17 km, is transported southeastward and reaches latitudes south of 30 • S by mid-November. The simulated longterm transport of the Merapi plume is illustrated in Fig. 4, showing the proportion of air parcels reaching a latitudealtitude bin every half a month. The simulation results show that during the first month after the eruption (Fig. 4a-b), the majority of the plume was transported southward roughly along the isentropic surfaces. The significant pathways are above and under the core of the subtropical jet in the Southern Hemisphere. However, because of the transport barrier of the polar jet during austral spring, the plume was confined to the north of 60 • S. In December 2010 (Fig. 4c-d), a larger fraction of the plume was transported southward above the subtropical jet core and deep into the polar region south of 60 • S as the polar jet broke down. Until the end of January 2011, the majority of the plume entered the midlatitudes and high latitudes in the Southern Hemisphere. Substantial quasi-horizontal poleward transport from the TTL towards the extratropical lowermost stratosphere (LMS) in Antarctica was found from November 2010 to February 2011 ( Fig. 4a-h), approximately between 350 and 480 K (∼ 10-20 km). In March 2011 ( Fig. 4i-j), the proportion of the plume that went across 60 • S stopped increasing and the maxima of the proportion descended below 380 K. Besides this transport towards Antarctica, a slow upward transport could also be seen. The top of the simulated plume was below the 480 K isentropic surface at around 18 km in Fig. 4a and then the top of the plume went up to 25 km 5 months later in Fig. 4j. This slow upward transport was mainly located in the tropics and can be attributed to the tropical upwelling. MIPAS ACI values are used to study the plume dispersion and compare with the simulations. Figure 5 While individual aerosol detections can be shown using an ACI < 7 (Fig. 3), we used zonal ACI averages to compare with the simulations. To show the change of aerosol load in the Southern Hemisphere due to the Merapi eruption, we first removed the seasonal cycle from the MIPAS aerosol data. Therefore, we selected a time period from November 2007 to March 2008 with no major SO 2 emission in the Southern Hemisphere UTLS (as shown in Fig. 5) as a "reference state". We calculated the biweekly median ACI between November 2010 and March 2011 and subtracted it from the biweekly ACI median of the reference state. The results are shown in Fig. 6. 
Small MIPAS ACI values represent large aerosol load and large ACI values represent small aerosol load, so in In the first half of November, the zonal median (Fig. 6a) does not show a signal of the Merapi eruption because during the initial time period, the plume was confined to longitudes around the volcano (see Fig. 3), and the MIPAS tracks did not always sample the maximum concentration, so the median ACI values are large (low concentration or clear air). In the second half of November, the plume was transported zonally around the globe, and hence the largest aerosol increase appeared in the upper troposphere at the latitude of Mount Merapi (Fig. 6b) and then moved quasi-horizontally southward into the UTLS region at ∼ 30-40 • S (Fig. 6c-d), consistent with the simulation result in Fig. 4c-d. The increase of the aerosol load south of 60 • S started to become prominent after December 2010, and the poleward movement is most obvious above the 350 K isentropic surface (Fig. 6e-h). The Fig. 6 underestimates the increase of aerosol load in the tropical stratosphere after the Merapi eruption. The slow upward transport in the tropics shown in Fig. 4 is about 7 km in 5 months. It is not visible in Fig. 6, but the time series of the MIPAS measurements in the tropical stratosphere reveals a similar upward transport trend (see Fig. A1 in the Appendix). Quasi-horizontal transport from the tropics to Antarctica The MPTRAC simulations and the MIPAS measurements show the transport in the "surf zone" that reaches from the subtropics to high latitudes (Holton et al., 1995), where air masses are affected by both fast meridional transport and the slow BDC. The reconstructed emission time series in Fig. 2 and the MIPAS aerosol measurements in Figs. 3 and 6 show that the volcanic plume was injected into the TTL. Hence, the main transport pathway towards Antarctica is the quasihorizontal mixing in the lower extratropical stratosphere between 350 and 480 K (see Figs. 4 and 6). Figure 7 illustrates how the volcanic plume between 350 and 480 K approached Antarctica over time. Kunz et al. (2015) derived a climatology of potential vorticity (PV) streamer boundaries on isentropic surfaces between 320 and 500 K using ERA-Interim reanalyses for the time period from 1979 to 2011. This boundary is derived from the maximum product of the meridional PV gradient and zonal wind speed on isentropic surfaces, which identifies a PV contour that best represents the dynamical discontinuity on each isentropic surface. It can be used as an isentropic transport barrier and to determine the isentropic cross-barrier transport related to Rossby wave breaking (Haynes and Shuckburgh, 2000;Kunz et al., 2011a, b). In Fig. 7, gray dashed lines mark PV boundaries on the 350 K isentropic surface. On isentropic surfaces below 380 K, the PV boundaries represent the dynamical discontinuity near the core of the subtropical jet stream. Isentropic transport of air masses across these boundaries indicates exchange between the tropics and extratropics due to Rossby wave breaking. On isentropic surfaces above 400 K, PV boundaries represent a transport barrier in the lower stratosphere, in particular, due to the polar vortices in winter (Kunz et al., 2015). For comparison, we also show the 220 DU contour lines of ozone column density (black isolines in Fig. 7), obtained from OMI on the Aura satellite, indicating the boundary and size of the ozone hole. 
The PV boundary on 480 K is in most cases collocated with the area of the ozone hole, showing that both quantities provide a consistent representation of the area of the polar vortex. The Merapi volcanic plume first reached the transport barrier on the 350 K isentropic surface in mid-November and went close to the transport barrier on the 480 K isentropic surface in December. The long-lasting polar vortex prevented the volcanic plume from crossing the transport barrier at 480 K in early December. But from mid-December, the polar vortex became more disturbed and displaced from the South Pole, resulting in a shrinking ozone hole. As mentioned in Sect. 3.1, the ozone hole broke down at the end of December 2010 and the polar vortex broke down by mid-January 2011. The fractions of the volcanic plume that crosses the individual transport barrier or the latitude of 60 • S on each isentropic surface are shown in Fig. 8. In both cases, the pro- portion increased from November 2010 to January 2011. In November and December 2010 the largest plume transport across the transport barriers occurred between the 360 and 430 K isentropic surface (Fig. 8a), with a peak at 380-390 K. In January and February 2011 the peak was slightly elevated to 390-400 K. In November 2010, the volcanic plume did not cross the 480 K transport barrier of the polar vortex at high altitudes, especially above about 450 K. The high-latitude fraction increased from December 2010 to February 2011 as the weakening of the polar vortex made the transport barrier more permeable. In March and April 2011, the total proportion decreased and the peak descended to 370 K in March and further to 360 K in April. The proportion of the volcanic plume south of 60 • S (Fig. 8b) increased slightly from November to December 2010, and then increased significantly from December 2010 to January and February 2011 at all altitudes as the polar vortex displaced and broke down. Finally, the transport to the south of 60 • S started to decrease in March 2011. From November 2010 to February 2011 the peak was around 370-400 K, but in March and April 2011 the peak resided around 350-370 K. Figure 9a summarizes the simulated poleward transport of the volcanic plume between the isentropic surfaces of 350 and 480 K from November 2010 to March 2011 and the percentage of air parcels south of 60 • S. The percentage was calculated by dividing the number of SO 2 parcels between 350 and 480 K south of 60 • S by the total number of SO 2 parcels released for the forward trajectory simulation. Fig. 9b, where a positive ACI difference indicates an increase in the aerosol load, confirm the simulated transport pattern in Fig. 9a. Overall, simulations and observations indicate the largest increase of the aerosol load in the tropics and midlatitudes, but also show a significant enhancement over the south polar region after December 2010. Discussion The results presented in Sect. 3 show that the main transport pathway for the poleward transport of the Merapi volcanic plume to Antarctica was between the isentropic surfaces of 350 and 480 K (about 10 to 20 km), covering the TTL and the lower stratosphere at midlatitudes and high latitudes. Our findings using the MPTRAC model and MIPAS ACI are supported by the sulfate aerosol volume density retrievals by Günther et al. (2018). The aerosol enhancement due to volcanic eruptions between 12 and 18 km revealed in Fig. 5 agrees with the sulfate aerosol enhancement shown in Fig. 5 in Günther et al. (2018). Figure 5 in Günther et al. 
Figure 5 in Günther et al. (2018) shows that after the eruption of Merapi in 2010, a clear poleward dispersion of sulfate aerosol can be seen between 12 and 18 km, and the most prominent signal appears at 14 and 16 km. The sulfate aerosol mole fraction south of 60° S also increased from the end of December 2010, as shown in our Fig. 9, although the specific date is not clear because of the seasonal cycle in the data. The enhanced sulfate aerosol lasted until the middle of March 2011 at 12 and 14 km, and until early April at 16 km, which also generally agrees with our results. There are some differences between the MIPAS ACI data used in this study and the sulfate aerosol retrievals in Günther et al. (2018), which may result in some minor discrepancies regarding the spatial distribution of aerosols. The MIPAS ACI data in this study are given on a 1.5 km vertical sampling grid in the UTLS region, whereas the sulfate aerosol retrieval in Günther et al. (2018) had a vertical resolution of 3-4 km. Moreover, compared with the MIPAS ACI, Günther et al. (2018) applied further processing steps to deal with ice clouds and to filter out PSCs. Due to the vertical resolution of 3-4 km, once an ice cloud was detected, Günther et al. (2018) removed all retrieved data up to 4 km above the ice cloud. They filtered out PSCs based on latitude, time and temperature criteria; i.e., if the temperature between 17 and 23 km dropped below the nitric acid trihydrate existence temperature (195 K) poleward of 40° in the Northern Hemisphere winter (15 November to 15 April) and the Southern Hemisphere winter (1 April to 30 November), the retrieval results were discarded. Filtering out PSCs based on time, latitude and temperature criteria may lead to a loss of clear-air profiles, e.g., when the synoptic temperature is sufficiently low but no PSCs are present. Usually the PSC signals are much stronger than the aerosol signals, so we keep the non-ice PSC data to give an impression of the times and regions where the identification of volcanic aerosol is not possible.

Based on the simulation results in Sect. 3.4, up to 4 % of the volcanic plume air parcels were transported from the TTL to the lower stratosphere in the south polar region until the end of February 2011. We assigned a total mass of 0.44 Tg to all SO2 parcels released, which means the Merapi eruption contributed about 8800 t of sulfur to the polar lower stratosphere within 4 months after the eruption, assuming that the sulfate aerosol converted from the SO2 remained in the plume. The MPTRAC model we used to simulate the three-dimensional movement of air parcels in the volcanic plumes can estimate the conversion of SO2 to sulfate aerosol during the transport process; however, it does not resolve the chemical processes of aerosol formation. Hence, the estimated 8800 t of sulfur is a maximum value, since processes such as wet deposition remove sulfur from the atmosphere. In the lower stratosphere, however, the atmosphere is relatively dry and clean compared with the lower troposphere, so the sulfate aerosol is less likely to interact with clouds or to be washed out. In fact, in the polar lower stratosphere, sedimentation and downward transport by the BDC are usually the main removal processes. Clouds and washout processes usually cannot be expected in the lower stratosphere.
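The 8800 t figure can be reproduced from the numbers quoted above. The sketch below combines the parcel-counting definition of the transported fraction (Fig. 9a) with the conversion from SO2 mass to sulfur mass; the sulfur mass fraction of SO2 (≈ 32/64), the toy parcel ensemble and the function names are the only ingredients added here.

```python
import numpy as np

M_S_OVER_M_SO2 = 32.07 / 64.07   # sulfur mass fraction of SO2

def fraction_south_of_60S(lat, theta, theta_min=350.0, theta_max=480.0):
    """Fraction of all released SO2 parcels that lie between two isentropic
    levels and south of 60 deg S (the quantity plotted in Fig. 9a)."""
    in_layer = (theta >= theta_min) & (theta <= theta_max)
    return np.count_nonzero(in_layer & (lat <= -60.0)) / lat.size

def polar_sulfur_tonnes(fraction, total_so2_tg=0.44):
    """Upper-limit sulfur mass (t) delivered to the polar lower stratosphere,
    assuming all sulfate converted from the transported SO2 stays in the plume."""
    return total_so2_tg * 1.0e12 * fraction * M_S_OVER_M_SO2 / 1.0e6  # g -> t

# toy parcel ensemble (illustrative only); with the ~4 % fraction quoted in the
# text, the estimate reproduces the ~8800 t figure: 0.44 Tg * 0.04 * 32/64
rng = np.random.default_rng(0)
lat = rng.uniform(-90.0, 30.0, 100_000)
theta = rng.uniform(300.0, 550.0, 100_000)
print(polar_sulfur_tonnes(fraction_south_of_60S(lat, theta)))
print(polar_sulfur_tonnes(0.04))   # -> roughly 8.8e3 t
```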
However, the amount of sulfate aerosol in the plume could also be affected by other mechanisms that speed up the loss of sulfur, for example, coagulation in the volcanic plume or the absorption of sulfur onto fine ash particles. For the moderate eruption of Merapi in 2010, however, sulfuric particle growth may not be as significant as in a large volcanic eruption, so the scavenging efficiency of sulfur will be low. Our estimate may therefore be larger than the actual value, but this number can be considered an upper limit of the contribution of the Merapi eruption. In addition, a kinematic trajectory model like MPTRAC, in which the reanalysis vertical wind is used as the vertical velocity, typically shows higher vertical dispersion in the equatorial lower stratosphere compared with a diabatic trajectory model (Schoeberl et al., 2003; Wohltmann and Rex, 2008; Liu et al., 2010; Ploeger et al., 2010, 2011). However, the ERA-Interim reanalysis data used in this study to drive the model may constrain the vertical dispersion much better than older reanalyses (Liu et al., 2010; Hoffmann et al., 2017). The meridional transport in this study was mainly quasi-horizontal transport in the midlatitude and high-latitude UTLS region, so the effect of the vertical velocity scheme is limited. The aerosol transported to the polar lower stratosphere will finally descend with the downward flow and have a chance to become a nonlocal source of sulfur for Antarctica through dry and wet deposition, following the general precipitation patterns. Quantifying the sulfur deposition flux onto Antarctica is, however, beyond the scope of this study.

Model results of Solomon et al. (2016) suggest that the Merapi eruption made a small but significant contribution to the ozone depletion in the following year over Antarctica in the vertical range of 100-200 hPa, roughly between 10 and 14 km. This altitude range is in agreement with our results; we found transport into the Antarctic stratosphere between 10 and 20 km. When the volcanic plume was transported to Antarctica in December 2010, the polar synoptic temperature at these low height levels was already too high for the formation of PSCs. The additional ozone depletion found by Solomon et al. (2016), together with the fact that sulfate aerosol was transported from Merapi into the Antarctic stratosphere between November and February, when no PSCs are present during polar summer, may support the suggestion that significant ozone depletion can also occur on cold binary aerosol (Drdla and Müller, 2012). The Merapi eruption in 2010 could thus be an interesting case study for more sophisticated geophysical models to study the impact of volcanic eruptions on polar processes.

Summary

In this study, we analyzed the poleward transport of volcanic aerosol released by the Merapi eruption in 2010 from the tropics to the Antarctic lower stratosphere. The analysis was based on AIRS SO2 measurements, MIPAS sulfate aerosol detections and MPTRAC transport simulations. First, we estimated altitude-resolved SO2 injection time series during the explosive eruption period using AIRS data together with a backward trajectory approach. Second, the long-range transport of the volcanic plume from the initial eruption to April 2011 was simulated based on the derived SO2 injection time series. Then the evolution and the poleward migration of the volcanic plume were analyzed using the forward trajectory simulations and MIPAS aerosol measurements.
The simulations are compared with and verified by the MIPAS aerosol measurements. The results of this study suggest that the volcanic plume from the Merapi eruption was transported from the tropics to the south of 60° S within a timescale of 1 month. Later on, a fraction of the volcanic plume (∼4 %) in the UTLS region crossed 60° S and was transported even further towards Antarctica until the end of February 2011. As a result, the aerosol load in the Antarctic lower stratosphere was significantly elevated. This relatively fast meridional transport of volcanic aerosol was mainly carried out by quasi-horizontal mixing from the TTL to the extratropical lower stratosphere. Based on the simulations, most of the quasi-horizontal mixing occurred between the isentropic surfaces of 360 and 430 K. This transport was in turn facilitated by the weakening of the subtropical jet and the breakdown of the polar vortex in the seasonal transition from austral spring to summer. The polar vortex in late austral spring 2010 was relatively strong compared to the climatological mean state. However, in December 2010 the polar vortex was displaced off the South Pole and later on broke down when the plume reached the high latitudes, so the volcanic plume did not penetrate the polar vortex but reached the South Pole with the breakdown of the polar vortex. Overall, after the Merapi eruption, the largest increase of the aerosol load occurred in the Southern Hemisphere midlatitudes, and a relatively small but significant fraction of the volcanic plume (4 %) was further transported to the Antarctic lower stratosphere within 4 months after the eruption. As a maximum estimate, it contributed up to 8800 t of sulfur to the Antarctic stratosphere, which indicates that long-range transport under favorable meteorological conditions can make moderate tropical volcanic eruptions an important remote source of sulfur to Antarctica.

Code and data availability. AIRS data are distributed by the NASA Goddard Earth Sciences Data Information and Services Center. The SO2 index data used in this study are available for download at https://datapub.fz-juelich.de/slcs/airs/volcanoes/ (last access: 19 October 2018). Envisat MIPAS Level-1B data are distributed by the European Space Agency. The ERA-Interim reanalysis data (Dee et al., 2011) are provided by the European Centre for Medium-Range Weather Forecasts (ECMWF).

Figure A1 shows the 9-day running median values of the MIPAS ACI between 10° N and 10° S. During the time period of the reference state (November 2007 to March 2008), the aerosol load in the tropical stratosphere from 20 to 25 km is elevated by a couple of previous volcanic eruptions. The aerosol load in this vertical range after the Merapi eruption in 2010 is apparently smaller compared with the reference state. It should be noted that there are semiannual data oscillations in the MIPAS ACI aerosol detections. This periodic pattern is caused by the aerosol index using the atmospheric window region between 960 and 961 cm−1. Around this window region, there are CO2 laser bands. Due to the semiannual temperature changes at about 50 km (semiannual oscillation), the CO2 radiance contribution to this window region also oscillates. As this window is generally very clear of other trace gases, this oscillation is not only visible at higher altitudes but also in the lower stratosphere, because the satellite line of sight looks through the whole layer.
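Figure A1 is based on a 9-day running median of the ACI values in the inner tropics; a minimal sketch of such a smoothing is given below. The synthetic time series and the daily pre-averaging are assumptions for illustration, not the exact MIPAS processing.

```python
import numpy as np
import pandas as pd

def running_median_aci(time, aci, window_days=9):
    """Daily-mean ACI values smoothed with a centered 9-day running median."""
    daily = pd.Series(aci, index=pd.DatetimeIndex(time)).resample("1D").mean()
    return daily.rolling(window_days, center=True, min_periods=1).median()

# toy example: one year of synthetic ACI values with a semiannual oscillation
t = pd.date_range("2010-07-01", "2011-06-30", freq="6H")
aci = 25.0 + 2.0 * np.sin(2.0 * np.pi * np.arange(t.size) / (t.size / 2.0)) \
      + np.random.default_rng(1).normal(0.0, 0.5, t.size)
print(running_median_aci(t, aci).head())
```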
Even with the semiannual data oscillations, however, the upward transport of the aerosol from the Merapi eruption in the tropical stratosphere is still visible; the vertical ascent is estimated to be about 7-8 km in 5 months (November 2010 to March 2011).
A high-speed tunable beam splitter for feed-forward photonic quantum information processing

We realize quantum gates for path qubits with a high-speed, polarization-independent and tunable beam splitter. Two electro-optical modulators act in a Mach-Zehnder interferometer as high-speed phase shifters and rapidly tune its splitting ratio. We test its performance with heralded single photons, observing a polarization-independent interference contrast above 95%. The switching time is about 5.6 ns, and the maximal repetition rate is 2.5 MHz. We demonstrate tunable feed-forward operations of a single-qubit gate of path-encoded qubits and a two-qubit gate via measurement-induced interaction between two photons.

INTRODUCTION

In the rapidly growing area of photonic quantum information processing [1][2][3][4], the beam splitter is one of the basic building blocks [5]. It is crucial for building single-photon as well as two-photon gates. The measurement-induced nonlinearity realized by two-photon interference [6] on a beam splitter enables the interaction between photons and hence, along with single-photon gates, allows the realization of quantum dense coding [7], quantum teleportation [8], entanglement swapping [9,10], multi-photon interferometers [11] and many two-photon quantum gates [2]. In particular, quantum computation [12][13][14], quantum metrology [15] and quantum simulation [16,17] require high-speed, tunable quantum gates (beam splitters) controlled by the signals from feed-forward detections to enhance their efficiency and speed. These special beam splitters can also be used in a single-photon multiplexing scheme to improve the quality of single photons [18][19][20]. Since the polarization of a photon is a convenient and widely used degree of freedom for encoding quantum information, such a beam splitter should preferably be polarization-independent and conserve coherence. Therefore, a high-speed, polarization-independent, tunable beam splitter for photonic quantum information processing is highly desirable.

In classical photonics, although fast optical modulators based on electro-optical materials embedded in Mach-Zehnder interferometers (MZI) are already at an advanced stage [21], the polarization preference of these devices makes them unsuitable for quantum information processing. Recently, several photon switches have been demonstrated [22][23][24]. However, the demonstration of feed-forward operations on path-encoded qubits has remained a challenge.

The operational principle of our high-speed, tunable beam splitter (TBS) is the following. In order to build a tunable beam splitter (Fig. 1A), one needs a knob to adjust its splitting ratio. One of the possibilities is to vary the phase of a Mach-Zehnder interferometer (MZI), as shown in Fig. 1B. The splitting ratio can be tuned by adjusting the phase of the interferometer. This is represented as the intensity modulations of the outgoing beams (Fig. 1C). For instance, if the phase is 0 there is no beam splitting, and if the phase is π/2 the splitting ratio is 1.

FIG. 1 (caption): A. The splitting ratio (the ratio between the transmissivity and reflectivity) of a tunable beam splitter (TBS) can be adjusted. Therefore, the counts of detectors D1 and D2 vary according to the splitting ratio. B. The realization of a TBS with a Mach-Zehnder interferometer, which consists of two beam splitters (BS1 and BS2), two mirrors (M1 and M2) and a phase shifter (PS). The splitting ratio can be tuned by adjusting PS. C.
The normalized intensities of D1 (red solid curve) and D2 (black dashed curve) are plotted as a function of the phase.

EXPERIMENT

Here we present an experimental realization of such a high-speed, polarization-independent, tunable beam splitter based on bulk electro-optic modulators (EOMs) embedded in a Mach-Zehnder interferometer, as shown in Fig. 2.

FIG. 2 (caption): The up-converted pulses and the remaining fundamental pulses are separated with six dichroic mirrors (6 DMs). A correlated photon pair is generated from BBO2 via spontaneous parametric down-conversion. By detecting photon 1 in the transmitting arm of the polarizing beam splitter (PBS) with an avalanche photon detector (D3), we herald the presence of photon 2 in the reflecting arm of the PBS. Photon 2 is delayed with an optical fiber and the polarization rotation of the fiber is compensated by a polarization controller (PC). Then the photon is sent to the tunable beam splitter (TBS). Both photons are filtered by using interference filters (IF) with 3 nm bandwidth centered around 808 nm. The output signal of the detection of photon 1 is used to trigger the two EOMs. The time delay between the trigger pulse and the arrival of photon 2 at the EOMs is adjusted via the field-programmable gate array (FPGA) logic, which allows the feed-forward operations. Photon 2 is detected by D1 or D2 at the output of the TBS. Note that the interferometer is actively stabilized by using an auxiliary He-Ne laser (He-Ne), a silicon photon detector (PD), an analogue proportional-integral-derivative (PID) control circuit and a ring-piezo transducer.

The TBS consists of two 50:50 beam splitters, mirrors and, most importantly, two EOMs (one in each arm of the interferometer). EOMs vary the phase of transmitted light by the application of an electric field inducing a birefringence in a crystal. Here we use Rubidium Titanyl Phosphate (RTP) crystals as the electro-optic material for our EOMs. Since RTP lacks piezo-electric resonances for frequencies up to 200 kHz [25] and shows only rare resonances up to 2.5 MHz, it is suitable for driving the EOMs at a high repetition rate. The interferometer is built in an enclosed box made from acoustic isolation materials in order to stabilize the phase passively. Additionally, an active phase stabilization system is implemented by using an auxiliary beam from a power-stabilized He-Ne laser counter-propagating through the whole MZI, as shown by the red dashed line in Fig. 2. This beam has a small transversal displacement from the signal beam and picks up any phase fluctuation in the interferometer. The corresponding intensity variations of the output He-Ne laser beam are measured with a silicon photon detector (PD) and the signal is fed into an analogue proportional-integral-derivative (PID) control circuit. A ring-piezo transducer attached to one of the mirrors in the MZI is controlled by this PID and compensates the phase fluctuations actively. Since the wavelength of the He-Ne laser (about 633 nm) is smaller than the wavelength used in the quantum experiment, it implies a higher sensitivity to fluctuations of the phase. The optical axes of both RTP crystals are oriented along 45° and the voltages applied to them are always of the same amplitude but with opposite polarities. The scheme of using the two EOMs is crucial, because the tunability of this high-speed tunable beam splitter relies on first-order interference.
By employing EOMs in both arms, the polarization states of photon 2 in path c and in path d remain indistinguishable and hence allow first-order interference. The voltages applied to the EOMs tune the relative phase of the MZI. The phase of the MZI is defined to be 0 when all the photons entering from input path a exit into the output path f. This also corresponds to the phase locking point of the He-Ne beam.

The functionality of the TBS is best seen by describing how the quantum states of polarization and path of the input photon evolve in the MZI. Since the optical axes of the EOMs are along +45°/−45°, we decompose the input polarization into the |+⟩/|−⟩ basis, with |+⟩ (|−⟩) being the eigenstate of the +45° (−45°) direction. The arbitrarily polarized input state in spatial mode a is |Ψ⟩ = (α|+⟩ + β|−⟩)|a⟩, where α and β are complex amplitudes and normalized (|α|² + |β|² = 1). It evolves according to Eq. (1), where φ(U) is the voltage-dependent birefringence phase given by the EOMs. From Eq. (1), it is straightforward to see that one can tune the splitting ratio by varying φ(U). The transmissivity (T) and the reflectivity (R) are defined as the probabilities of detecting photons, entering from mode a, in the outputs of spatial modes f and e: T = cos²(φ(U)/2) and R = sin²(φ(U)/2). Additionally, the input polarization will be rotated from α|+⟩ + β|−⟩ to α|+⟩ − β|−⟩ in the spatial mode e. This polarization rotation can be dynamically compensated with an additional EOM on path e driven with a half-wave voltage.

We test the performance of our TBS with a heralded single-photon source sketched in Fig. 2. By using the correlation of the emission times of the photon pair generated via spontaneous parametric down-conversion (SPDC), we herald the presence of one photon with the detection of its twin, the trigger photon. In order to employ the feed-forward technique, the detection of the trigger photon is used to control the EOMs in the TBS, which operates on the heralded single photon. Femtosecond laser pulses from a Ti:Sapphire oscillator are up-converted with a 0.7 mm β-barium borate crystal (BBO1) cut for type-I phase-matching. This produces vertically polarized second-harmonic pulses. The up-converted pulses and the remaining fundamental pulses are separated with six dichroic mirrors (6 DMs). The collinear photon pairs are generated via SPDC from a 2 mm BBO crystal (BBO2) cut for collinear type-II phase-matching. The UV pulses are removed from the down-converted photons with a dichroic mirror (DM) and a long-pass filter (LPF). The photon pair is separated by a polarizing beam splitter (PBS) and each photon is coupled into a single-mode fiber (SMF). The transmitted photon, photon 1, is detected by an avalanche photon detector (D3), which serves as the trigger to control the EOMs. The reflected photon, photon 2, is delayed in a 100 m single-mode fiber and then sent through the TBS.

FIG. 3 (caption): Demonstration of the tunable and polarization-independent feed-forward operation of a single-qubit gate of path-encoded qubits. Experimental results for A. horizontally and B. +45° polarized input single photons. The black squares (red circles) are the values of the transmissivity (reflectivity) of the ultrafast tunable beam splitter, which are calculated from the coincidence counts between D1 and D3 (D2 and D3) and fitted with a black solid (red dashed) sinusoidal curve. Error bars represent statistical errors of ±1 standard deviation and are comparable with the size of the data points.
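A minimal numerical sketch of the splitting-ratio relations quoted above and of the visibility definition used in the following is given below; the phase sampling and the function names are illustrative assumptions only.

```python
import numpy as np

def splitting_ratio(phi):
    """Transmissivity and reflectivity of the MZI-based tunable beam splitter
    as a function of the EOM-induced phase phi = phi(U)."""
    t = np.cos(phi / 2.0) ** 2
    r = np.sin(phi / 2.0) ** 2
    return t, r

def visibility(r):
    """Interference visibility V = (Rmax - Rmin) / (Rmax + Rmin)."""
    return (r.max() - r.min()) / (r.max() + r.min())

phi = np.linspace(0.0, 2.0 * np.pi, 201)
t, r = splitting_ratio(phi)
print(splitting_ratio(np.pi / 2.0))   # balanced 50:50 operation
print(visibility(r))                  # ideal case: V = 1
```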
The on-time of the EOMs is 20 ns, which requires a fine time-delay adjustment of the detection pulse from D3 used to trigger the EOMs. It is achieved with a field-programmable gate array (FPGA). Using an on-time of 20 ns is a compromise between the performance of the EOMs and the repetition rate of our pulsed laser system. On the one hand, the rising and falling times of the EOMs are about 5 ns (see below) and hence the on-time of the EOMs has to be longer than 10 ns. Experimentally, we find that it has to be at least 20 ns for achieving a polarization switching contrast above 98%. On the other hand, the repetition rate of our laser is about 80 MHz, which corresponds to a period of 12.5 ns. To avoid switching the uncorrelated photons generated from distinct laser pulses, the on-time should be comparable to 12.5 ns. In our experiment, we use a relatively low pump power for SPDC and the chance of generating two pairs of photons in consecutive pulses is extremely low. Therefore, a 20 ns on-time is a suitable choice for our application. In order to maximize the operation quality of the TBS, the path length difference between the two arms of the MZI has to be minimized. In addition, two pairs of cross-oriented BBO crystals (BBO3 and BBO4) are placed in each arm of the MZI in order to compensate the unwanted birefringence induced by optical elements. Photon 2 is detected by avalanche photon detectors D1 or D2 at the output of the TBS.

One important feature of our TBS is polarization independence, i.e., it works in a similar way for all polarizations. This is tested by preparing the polarization state of photon 2 with a fiber polarization controller (PC), as shown in Fig. 3A for horizontal polarization and Fig. 3B for +45° polarization. We fit sinusoidal curves to the measurement results and determine that the visibilities for horizontal and +45° polarization are 95.9% ± 0.2% and 95.3% ± 0.3%, respectively. The visibility is defined as V = (Rmax − Rmin)/(Rmax + Rmin), where Rmax and Rmin are the maximum and minimum of the reflectivity. For input path b, we have observed the same results, which confirms the usefulness of this device for performing two-photon interference experiments. The polarization fidelity of the switchable beam splitter for an arbitrary state is above 98%.

Two further important requirements of a TBS are high-frequency operation and short rise and fall times. These are challenging to achieve with today's electro-optically active crystals and the driving electronics. We use the particular advantages of RTP crystals and drive the EOMs with frequencies up to 2.5 MHz. We measure the rise/fall time (10% to 90% of the signal amplitude transition time) of the TBS with a continuous-wave laser, a Si photon detector and an oscilloscope. As shown in Fig. 4, the rise time of a π-phase modulation is about 5.6 ns. Note that the fall time is about the same.

For an optical two-qubit entangling gate, Hong-Ou-Mandel (HOM) two-photon interference [6] is at the heart of many quantum information processing protocols, especially for photonic quantum computation experiments (C-phase gates [29][30][31], entanglement swapping [9,10], etc.). In order to use our optical circuitry for more complicated quantum logical operations, it is crucial to demonstrate such two-photon interference phenomena with it. Therefore, we also carried out a tunable two-photon interference experiment with our TBS. We use a pair of photons generated from one source and send them to the TBS.
We vary the path length difference between these two photons with a motorized translation stage mounted on one of the fiber coupling stages and measure the two-fold coincidences between two detectors placed directly behind the two outputs of the TBS (e and f in Fig. 2). The phase of the MZI, and hence the reflectivity (and transmissivity) of the TBS, is varied by applying different voltages to the EOMs. In the case of a π/2 phase, the TBS becomes a balanced beam splitter and the distinguishability of the two input photons' spatial modes is erased. As a consequence, the minimum of the coincidence counts occurs for the optimal temporal overlap (with the help of suitable individual fiber delays) of the two photons, and HOM two-photon interference with a visibility of 88.7% ± 3.8% has been observed. In the case of a 0 phase, the whole TBS represents a highly transmissive beam splitter and the two photons remain distinguishable in their spatial modes. Correspondingly, the coincidence counts are insensitive to the path length difference. This is in agreement with complementarity, where in principle no HOM interference can be observed. In Ref. [20], some results are presented in a different context. The insertion loss of the TBS is about 70%, which is mainly due to single-mode fiber coupling and the Fresnel loss at optical surfaces. With current technology, it is in principle possible to improve the transmission of a TBS to about 95%, including the Fresnel loss on each optical surface (0.5% per surface) and a finite fiber coupling efficiency of 98%.

CONCLUSION

In conclusion, we experimentally demonstrate tunable feed-forward operations of a single-qubit gate of path-encoded qubits and a two-qubit gate via measurement-induced interaction between two photons by using a high-speed, polarization-independent, tunable beam splitter. We have shown its unique advantages, such as excellent fidelity, high speed, and high repetition rate, operating for photons at about 800 nm. The active stabilization allows continuous usage of this tunable beam splitter over periods of many days. This system is well suited for experimental realizations of quantum information processing and fundamental quantum physics tests. A future challenge will be to utilize state-of-the-art micro-optics technology, such as integrated photonic quantum circuits on a silicon chip [32,33], for developing compact and scalable quantum technologies based on photons.
Stability Analysis of Methane Hydrate-Bearing Soils Considering Dissociation : It is well known that the methane hydrate dissociation process may lead to unstable behavior such as large ground deformations, uncontrollable gas production, etc. A linear instability analysis was performed in order to investigate which variables have a significant effect on the onset of the instability behavior of methane hydrate-bearing soils subjected to dissociation. In the analysis a simplified viscoplastic constitutive equation is used for the soil sediment. The stability analysis shows that the onset of instability of the material system mainly depends on the strain hardening-softening parameter, the degree of strain Introduction Recently, methane hydrates (MHs) have been viewed as a potential energy resource since a large amount of methane gas is trapped within ocean sediments and permafrost regions.A unit volume of methane hydrate dissociates into approximately 160-170 times the volume (at 0 °C and 1 atmosphere) of methane gas.However, we do not have enough knowledge about the behaviors of sediments caused by dissociation of hydrates in the ground.Some researchers have pointed out that gas hydrates may be a trigger of submarine geohazards which could impact global climate change. Kvenvolden [1] presented several examples of a possible connection between gas hydrate dissociation and submarine slides, and slump surfaces were recognized.Many of these slides are on gentle slopes which should be stable.Other authors have also reported that gas hydrates, mostly methane hydrates, might affect submarine slides due to dissociation [2][3][4].Submarine landslides can be caused by an increase in applied shear stress or a reduction in the strength of the soil.When gas hydrates form in sediments, the pore spaces are occupied by the solid phase of gas hydrates, although the gas hydrates themselves can act as a cementation (bonding) agent between soil particles.The reduction in hydrostatic pressure of the hydrate reservoir, or increases in the temperature of the reservoir leads to the dissociation of the gas hydrates.The solid phase of the gas hydrates is lost and the hydrates change into the fluid phase, i.e., water and gas.When this released fluid pressure is trapped inside an area of low permeability, the effective stress, which is one of the factors of describing strength of the sediments, should be reduced and slope failure can be triggered, resulting in submarine landslides.The submarine landslides may lead to even worse situation, for example, cutting submarine cables, and tsunami disasters.Figure 1 shows a schematic view of possible hazards in marine sediments induced by gas hydrate dissociation.Slope failure in marine sediments can cause enormous turbidity current, and it may generate a tsunami.The tsunami produced by slope failure will hit coastal areas and offshore structures, and in the worst scenario, many people might be killed by a tsunami. Many experimental and numerical studies have been conducted on the deformation behavior associated with methane hydrate dissociation [5][6][7][8][9][10].Nevertheless, the behavior of methane hydrate-bearing soils during dissociation is still a subject of active research, and theoretical analyses to investigate the onset of instability, such as linear stability analyses, have not been performed. 
It has been well recognized that strain localization in soil materials is closely related to the onset of material failure.Slope failure is one of the typical strain localization problems in which deformation occurs in a narrow area.Rice [11] indicated that the phenomena of this problem can be treated in a general framework of bifurcation problems.Rice [11] also indicated the importance of the influences of pore fluid on the instability, and investigated the stability of fluid-saturated porous material in quasi-static conditions.Anand et al. [12], and Zbib and Aifantis [13] conducted linear perturbation stability analyses for the onset of shear localization.Loret and Harireche [14] investigated the acceleration waves in inelastic porous media, and Benallal and Comi [15] showed material instabilities in saturated material under a dynamic state using a perturbation stability analysis.Oka et al. [16] have been dealing with the strain localization problem of water-saturated clay through the use of viscoplastic constitutive equations because of the rate-dependent nature of cohesive soil.Higo et al. [17] have studied the effect of permeability and initial heterogeneity on the strain localization of water-saturated soil.Kimoto et al. [18] have performed a linear instability analysis on the thermo-hydro-mechanical coupled material system, and have indicated that strain softening and temperature softening are the main reasons for the material instability.Recently, Garcia et al. [19] have performed a linear stability analysis in order to investigate which variables have a significant effect on the onset of the instability of an unsaturated viscoplastic material subjected to water infiltration.They have found that the onset of the growing instability of the material system mainly depends on the specific moisture capacity, the suction and the hardening parameter. In a similar manner to the past researches, we regard the slope failure and submarine landslides as instability problems induced by gas hydrate dissociation.In the first part of the present study, we conduct a linear stability analysis to investigate the onset of instability during the dissociation process.Then, in second part, a series of numerical analysis has been conducted in order to confirm the effect of the parameters detected by the linear stability analysis on the material instability.Figure 2 shows an illustration of the stable and unstable regions of methane hydrate-bearing sediments with and without hydrate dissociation.We discuss which parameters or variables have a significant effect on the instability of methane hydrate-bearing materials when they are subjected to a dissociation process.In the linear stability analysis, we extend the method by Oka et al. [16], and Garcia et al. [19] to a chemo-thermo-mechanically coupled material considering hydrate dissociation. Figure 2. Illustrative view of stable and unstable regions of methane hydrate-bearing sediments with and without dissociation. One-Dimensional Instability Analysis of Methane Hydrate Bearing Viscoplastic Material In this section, the linear stability analysis of methane hydrate-bearing soil considering dissociation is shown.We follow the method by Garcia et al. [19], and extend the method by considering the energy balance and hydrate reaction process(es) in order to deal with the dissociation phenomenon.The governing equations for the chemo-thermo-mechanically coupled behavior are based on Kimoto et al. 
[9], and a viscoplastic constitutive model is used for the soil skeleton.The details of the governing equations for the stability analysis are shown in the following sections. General Settings The multiphase material Ψ is composed of four phases, namely, soil (S), water (W), gas (G), and hydrates (H) which are continuously distributed throughout space: in which S is soil phase, W is water phase, G is gas phase and H is hydrate phase, respectively.For simplicity, we assume that hydrates move with soil particles, in other words, the solid phase, denoted as SH, is composed of soil and hydrates which exist around the soil particles.Total volume V is obtained from the sum of the partial volumes of the constituents, namely: The water phase and gas phase are expressed as the fluid phase, and the total volume of fluids VF is given by: The volume fraction n α is defined as the local ratio of the volume element with respect to the total volume given by: With dissociation Without dissociation The volume fraction of the void, n, is written as: The volume fraction of the fluid, F n , is given by: In addition, the fluid saturation is given by: Hereafter, the water saturation s W will be denoted as s: Density of each material ρ α , and the total phase ρ is denoted by: in which M α is the mass of each phase α. Stress Variables The stress variables are defined in the following one-dimensional form.Total stress σ is obtained from the sum of the partial stresses, namely: where superscripts S, H, W, G indicate the soil, hydrate, water, and gas phases, respectively.The partial stresses for the fluid phases can be written as: where P W and P G are the pore water pressure and the pore gas pressure, respectively.Tension is positive for the stresses.For simplicity, we assume that the soil phase and the hydrate phase are in the same phase, namely, the solid phase.Thus, the partial stress of the solid phase is defined as: where σ is called the skeleton stress in the present study; it acts on the solid phase and is used as the stress variable in the constitutive equation.The terms n S and n H are the volume fractions of the soil phase and the hydrate phase, respectively, and P F is the average fluid pressure given by: where s is the water saturation.Substituting Equations ( 11)-( 14) into Equation (10), the skeleton stress is obtained as: Terzaghi [20] defined the effective stress for water-saturated soil.In the case of unsaturated soil, however, the effective stress needs to be considered in order to include a third phase, namely, the gas phase which is considered to be compressible.In general, we need the suction in the unsaturated soil model.Hence, we do not use the name of the effective stress.In the present formulation, the skeleton stress tensor σ is used; Jommi [21] defined it as the average soil skeleton stress. 
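As a small illustration of the stress variables defined above, the sketch below evaluates the average fluid pressure of Eq. (14) and a one-dimensional skeleton stress. Since Eq. (15) is only referenced here, the relation sigma' = sigma + P^F (tension positive) used in the code is an inference consistent with the stated definitions and should be read as an assumption, not the paper's exact expression.

```python
def average_fluid_pressure(s, p_w, p_g):
    """Average pore fluid pressure P^F = s P^W + (1 - s) P^G (Eq. 14)."""
    return s * p_w + (1.0 - s) * p_g

def skeleton_stress(sigma_total, s, p_w, p_g):
    """One-dimensional skeleton stress (tension positive).

    Assumed form: sigma' = sigma + P^F, i.e. the total stress with the
    average fluid pressure added back (interpretation of Eq. 15)."""
    return sigma_total + average_fluid_pressure(s, p_w, p_g)

# example: 13 MPa pore water pressure, fully water saturated,
# 14 MPa total compression -> about 1 MPa effective compression
print(skeleton_stress(sigma_total=-14.0e6, s=1.0, p_w=13.0e6, p_g=13.0e6))
```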
Conservation of Mass The conservation of mass for the soil, the water, the gas, and the hydrate phases are given by the following equations: in which α ρ is the material density for α phase, α v is the velocity vector of each phase, and α m  is the mass-increasing rate per unit volume due to hydrate dissociation.α D Dt denotes the material time derivative following particles of α phase.Assuming that the densities of the soil, the water and the hydrate are constant, that is, ρ ρ ρ 0 , where the superimposed dot denotes the material time derivative, Equation ( 16) yields: in which the mass-increasing rate for the soil phase is zero, i.e., 0 S m   .We assume that the soil particles and hydrates move together, namely: Under the small strain condition, the spatial gradient of velocity for the solid phase is equal to the strain rate: According to Equation ( 5), the volume fraction of soil phase S n should be equal to   1 n  , and the time derivative of S n can be written as follows: Substituting Equations ( 5) and ( 23) into Equation ( 17) provides: From Equations ( 7) and ( 8), the volume fractions for the water phase and gas phase can be expressed as: Considering Equation ( 25), the time derivative of W n and G n are given by the following equations:   Substituting Equations ( 25)-( 27) into Equations ( 18) and (19) gives: Dividing both sides of Equation ( 20) by ρ H , and rearranging the equation, we obtain: The apparent velocities of the water and the gas, with respect to the solid phase, are defined as: In order to describe the changes in gas density, the equation for ideal gas is used, i.e.: where M is the molar mass of gas and θ is the temperature.Multiplying Equation ( 24) by   adding Equation (28), and considering Equation (31), the continuity equation for the water phase can be written as: . Balance of Momentum The one-dimensional equilibrium equation can be written as: in which the acceleration term is disregarded. Darcy Type of Law For gas-water-solid three phase materials, we adopt a Darcy type of law for the water and gas phases that can be obtained from the balance of linear momentum for each phase as described below.The Darcy type law for the flows of the water and the gas can be described as follows: It should be noted that the above equations are only valid for creeping flow which has a small Reynolds number (Re).Some researchers have pointed it out that ground water flow can be treated as a laminar flow at Re < 1-10 [22,23].In the case that the velocity becomes very high, especially for the gas flow, e.g., the Forchheimer law may describe the flow motion. Conservation of Energy In the present study, we consider heat conductivity and the heat sink rate as being associated with the hydrate dissociation.The one-dimensional equation of energy conservation is written as: where c  is the specific heat of phase α , θ is temperature for all phases, and H Q  is time rate of dissociation heat per unit volume due to the hydrate dissociation. 
744 where H N  is dissociation rate of hydrates.Heat flux follows Fourier's law as: in which θ k is the thermal conductivity for all phases.Substituting Equations (39) and (40) into Equation (38), and assuming that the term σ ε  , which is related to the viscoplastic work, is very small and negligible, Equation (38) can be rewritten as: where n is a hydrate number and that is assumed to be equal to 5.75 in natural hydrates.For the methane hydrate dissociation rate H N  , we use Kim-Bishnoi's equation [24], namely:   where NH is the moles of hydrates in the volume V, NH0 is the moles of hydrates in the initial state, P F is the average pore pressure and P e is an equilibrium pressure at temperature θ.When the dissociation occurs, the dissociation rate is negative, i.e., 0 The rates of generation of water and gas are given by: 5 75 The increasing mass rates for hydrates and the water and the gas phases, required in Equations ( 33) and (34), can be obtained from the above equations: where MH, MW, and MG are the molar mass of the methane hydrates, the water, and the methane gas, respectively. Simplified Viscoplastic Constitutive Model In the analysis, a simplified viscoplastic constitutive model is used.The stress-strain relation can be expressed as: where ε is the strain, ε  is the strain rate, H is the strain hardening-softening parameter and μ is the viscoplastic parameter.We ignore the dependency of the hardening-softening parameter H on the skeleton stress σ , namely, we assume that the strain hardening-softening parameter H is a function of suction C P and hydrate saturation H r S for simplicity.Viscoplastic parameter μ is a function of the Suction and the hydrate saturation are defined as: where v V is the volume of void. Perturbed Governing Equations Next, in order to estimate the instability of the material system, we consider the equilibrium equation, the continuity equation, the energy balance equation, the constitutive equations, and the equation of hydrate dissociation rate in a perturbed configuration.In Equations ( 33)-( 35), ( 41), (43), and (46), the unknowns are pore water pressure W P , pore gas pressure G P , strain ε , temperature θ , and moles of hydrate H N .For each unknown, we suppose that: where the first terms on right side in Equation ( 50) indicate the values which satisfy the governing equations and the second terms are the perturbations of each variable.For the perturbations, we assume the following periodic form: where q is the wave number   , ω is the rate of the fluctuation growth, and superscript   * indicates the amplitude of each variable. 
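Before moving to the perturbed equations, a compact sketch of the dissociation source terms introduced above may help. The hydration number of 5.75 and the resulting stoichiometry (one mole of gas and 5.75 moles of water per mole of dissociated hydrate) follow the text; the equilibrium-pressure correlation, the surface-area factor and the rate constant in the Kim-Bishnoi-type rate are placeholders and not the paper's Eq. (43).

```python
import math

N_HYDRATE = 5.75                      # hydrate number used in the text
M_G, M_W = 16.04e-3, 18.02e-3         # molar masses of CH4 and H2O (kg/mol)
M_H = M_G + N_HYDRATE * M_W           # molar mass of CH4 * 5.75 H2O (kg/mol)

def equilibrium_pressure(theta):
    """Equilibrium pressure P^e(theta) in Pa (Kamath-type fit, kPa -> Pa).
    Illustrative correlation only; not necessarily the curve used in the paper."""
    return 1.0e3 * math.exp(38.98 - 8533.8 / theta)

def dissociation_rate(n_h, n_h0, p_f, theta, k_d=1.0e-12):
    """Sketch of a Kim-Bishnoi-type rate (mol/s): negative while P^F < P^e.
    The factor N_H0^(1/3) N_H^(2/3) and the constant k_d are assumptions."""
    driving = max(equilibrium_pressure(theta) - p_f, 0.0)
    return -k_d * n_h0 ** (1.0 / 3.0) * n_h ** (2.0 / 3.0) * driving

def mass_rates(n_dot_h, volume):
    """Mass-increasing rates per unit volume for hydrate, water and gas
    (kg m^-3 s^-1), following CH4*5.75H2O -> CH4 + 5.75 H2O."""
    n_dot_w, n_dot_g = -N_HYDRATE * n_dot_h, -n_dot_h
    return (M_H * n_dot_h / volume, M_W * n_dot_w / volume, M_G * n_dot_g / volume)

rate = dissociation_rate(n_h=50.0, n_h0=100.0, p_f=6.0e6, theta=287.0)
print(rate, mass_rates(rate, volume=1.0))
```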
Disregarding the changes in material density and considering the body force as constant, the perturbation of the equilibrium equation, Equation (35), can be written in perturbed form, where the perturbed variables are indicated by a tilde. From Equation (46), the perturbation of the skeleton stress σ̃ follows from the perturbed strain, strain rate, hardening parameter and viscoplastic parameter. Since the strain hardening-softening parameter H is a function of the suction and the hydrate saturation, and the viscoplastic parameter μ is a function of temperature, the perturbations of H and μ are expressed through the perturbed suction, hydrate saturation and temperature. The parameter H increases with an increase in the suction P^C; consequently, the slope ∂H/∂P^C is positive. Similarly, the perturbation of the average pore pressure P̃^F can be written in terms of the perturbed pore water pressure, pore gas pressure and water saturation (Equation (58)). In Equation (58), the degree of water saturation s is a function of the suction P^C, so the perturbation of the saturation is given by the slope ∂s/∂P^C of the water retention curve. Using Equations (51), (58), and (59), we obtain the gradient of the perturbed average pore pressure. By substituting Equations (57) and (58) into Equation (52) and rearranging the terms, we obtain the perturbed equilibrium equation. In a similar manner, the continuity equations for water and gas, Equations (33) and (34), can be rewritten in the perturbed configuration, and considering Equations (41) and (51) gives the perturbation of the conservation of energy. Since the dissociation rate of the methane hydrates is a function of the temperature θ, the average pore pressure P^F, and the moles of hydrate N^H, the perturbation of the dissociation rate can be written accordingly; substituting Equations (51) and (58) into Equation (65) gives its final form. Finally, we rewrite the perturbed governing equations, Equations (61)-(64) and (69), in matrix form. Setting det A = 0, we obtain a polynomial function of ω. The details of the coefficients are given in Appendix A. Note that in Equation (71) the quantities b, P^C, P^G, θ, and V are always positive, whereas B^C and the dissociation rate Ṅ^H are negative. The sign of the strain ε is positive in expansion and negative in compression, and the strain rate ε̇ can be positive or negative. The strain softening-hardening parameter H is positive for viscoplastic hardening and negative for viscoplastic softening. Considering the sign of each term and Equation (A.1), the sign of a_5 is always positive.
Conditions of Onset of Material Instability In the following, we discuss the instability of the material system.If the growth rate of the perturbations ω , which is the root of Equation ( 73), has a positive real part, the perturbation diverges, and finally, the material system is unstable.On the contrary, if the real part of ω is negative, the material system is stable.The necessary and sufficient conditions that the all roots have negative real parts are given by the Routh-Hurwitz criteria.The necessary and sufficient conditions whereby the all the roots have negative real parts are given by the Routh-Hurwitz criteria.When the coefficient of the highest order of ω is positive, that is, 5 0 a  , the necessary and sufficient conditions that all the roots have negative real parts are to satisfy all the equations from (i) to (iv) expressed below.Let us discuss the first condition (i), because the material system might be unstable if at least one of the conditions described above is not satisfied.As for the other conditions (ii)-(iv), the coefficients are quite complicated as shown in Appendix A, and hence we discuss Equation (74). Sign for the Coefficients 5 a and 0 a First, we will compare the sign of coefficients 5 a and 0 a , because it is relatively easy to compare the sign.Considering the sign for the terms described above, 5 a is always positive: Hence, when 0 a becomes negative, the first conditions of Routh-Hurwitz criteria is not satisfied, namely, the material system may become unstable: From Equation (79), the material stability depends on strain hardening-softening parameter H , SH H strain ε , and the volume fraction of void n and hydrate H n .We will consider two cases; the first case is the compressive strain ε 0  and the second is the expansive strain ε 0  . (A) ε 0  : compressive strain (1) When parameter H is positive, that is, the viscoplastic hardening, the term in 0 a is always positive.The sign for 0 a always becomes positive: (2) When the parameter H is negative, that is, the viscoplastic softening, 0 a becomes negative, if it satisfies the following inequality: The material instability might occur even if it is viscoplastic hardening material. (B) ε 0  : expansive strain (3) When parameter H is positive, that is, the viscoplastic hardening, the term in 0 a becomes negative, if it satisfies the following inequality: (4) When parameter H is negative, that is, viscoplastic softening, the term  is always negative.Thus, the sign for 0 a is always negative.This may lead to the material instability, because it does not satisfy the first condition of the Routh-Hurwitz criteria: If there are no hydrates in the material, namely, 0 H r S  , 0 a can be written as: The sign of the coefficients 0 a depends on only whether the material is viscoplastic hardening or softening. In general, the material system is likely to be stable when it is in the viscoplastic hardening regions [18,19].Of course, conditions of the onset of the material instability depends on not only the sign for 5 a and 0 a but also the other coefficients.The main point of this analysis is that the material instability may occur even in the viscoplastic hardening regions, in the case of the expansive strain.In other words, the expansive strain will make the material system unstable. 
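The stability test described above can also be checked numerically once the coefficients a5, ..., a0 of the characteristic polynomial have been evaluated. The sketch below is a minimal check under made-up coefficients (the Appendix A expressions are not reproduced here): computing the roots directly is equivalent to evaluating the Routh-Hurwitz criteria, and the sign test corresponds to condition (i).

```python
import numpy as np

def is_linearly_stable(coeffs):
    """True if all roots of a5*w^5 + a4*w^4 + ... + a0 = 0 have negative real
    parts (the Routh-Hurwitz requirement); coeffs = [a5, a4, a3, a2, a1, a0]."""
    return bool(np.all(np.roots(coeffs).real < 0.0))

def condition_i(coeffs):
    """Condition (i) discussed in the text: with a5 > 0, the constant term a0
    must also be positive, otherwise the system may become unstable."""
    return coeffs[0] > 0.0 and coeffs[-1] > 0.0

# made-up coefficient sets (placeholders, not the Appendix A expressions)
stable = [1.0, 8.0, 25.0, 38.0, 28.0, 8.0]      # roots -1, -1, -2, -2, -2
unstable = [1.0, 8.0, 25.0, 38.0, 28.0, -8.0]   # a0 < 0 -> a positive real root exists
print(is_linearly_stable(stable), condition_i(stable))      # True True
print(is_linearly_stable(unstable), condition_i(unstable))  # False False
```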
Sign for the Coefficients 5 a and 4 a Next, we will discuss the sign for 4 a .Coefficient 4 a is given in Equation (A.2), and it can be regarded as a second order polynomial of the wave number q : a a q a q   (85) 1 1 where 4 2 , a and 4 0 , a indicate the coefficients associated with 2 q and 0 q , respectively.The coefficient of the highest order term of 4 a always positive, that is, 4 2 , a is always positive.When the term 0 4 0 , a q is positive, the sign for 4 a becomes positive.It is difficult, however, to clarify the sign for 4 0 , a because of complexity.Even when 0 4 0 , a q is negative and the term 2 4 2 , a q is larger than 0 4 0 , a q , 4 a becomes positive, that is: Considering Equations ( 86) and (87), the conditions for which 4 a can be positive are obtained as follows:  In the case of large values for W k , G k , and θ k , 4 a can become positive more easily.In contrast, low permeabilities for water and gas make the material system unstable. In this section, we have discussed the conditions for the onset of the instability of methane hydrate-bearing sediments by means of an analytical method using a simplified viscoplastic model and linear instability analysis.From the analysis, we found that material instability may occur in the case of both viscoplastic hardening and softening, and that the parameters H , SH H , and the hydrate saturation S have a significant effect on the onset of the material instability.Furthermore, the sign for strain, that is, compression ε 0  or expansion ε 0  also has a significant effect on material instability, and expansive strain will make the material system unstable.The permeabilities for water W k , and gas G k are also essential parameters for material instability. Since it is rather difficult to discuss the sign of coefficients a1-a4 and the other conditions of Routh-Hurwitz criteria due to the complexity, the material instability will be studied numerically.In the next section, the results of various numerical simulations of the dissociation-deformation problem using the one-dimensional finite element mesh will be presented in order to study the material instability by using the chemo-thermo-mechanically coupled model proposed by Kimoto et al. [9,10].The results will be compared to those of the instability analysis. Numerical Simulation of Instability Analysis by an Elasto-Viscoplastic Model Considering Methane Hydrate Dissociation The effect of material parameters, especially the permeability, should be investigated.This is because in the previous section it was found that larger permeability makes the material system more stable.In the linear stability analysis, only the first condition of Routh-Hurwitz criteria have been discussed, because the other conditions are too complicated to be analyzed theoretically due to the complexity of the coefficients.In order to compensate insufficiency of the instability analysis and confirm the effect of permeability on the material instability, a series of parametric studies on the permeability is conducted in this part.The detailed conditions for the numerical simulation are described in the following section. 
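The sign argument for a4 sketched above can be made concrete: since the coefficient of q^2 is positive, a4 becomes positive for all wave numbers above a threshold whenever the q-independent part is negative. The code below only illustrates this threshold with made-up stand-in values for the Appendix A coefficients.

```python
import numpy as np

def a4(q, a42, a40):
    """Coefficient a4 viewed as a second-order polynomial of the wave number q
    (Eq. 85): a4 = a_{4,2} q^2 + a_{4,0}, with a_{4,2} > 0."""
    return a42 * q ** 2 + a40

def critical_wavenumber(a42, a40):
    """Smallest wave number above which a4 > 0 when a_{4,0} < 0
    (a4 > 0 whenever a_{4,2} q^2 exceeds |a_{4,0}|)."""
    return 0.0 if a40 >= 0.0 else np.sqrt(-a40 / a42)

# made-up values standing in for the Appendix A expressions
a42, a40 = 2.0e-6, -8.0e-2
q_c = critical_wavenumber(a42, a40)
print(q_c, a4(0.5 * q_c, a42, a40) > 0.0, a4(2.0 * q_c, a42, a40) > 0.0)
```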
One-Dimensional Finite Element Mesh and Boundary Conditions Figure 3 illustrates a schematic drawing of the target area of MH-bearing sediments for the numerical simulations, and the finite element meshes and boundary conditions used in the simulations are shown in Figure 4.The seabed ground, at a depth of about 350 m from the top of the seabed ground surface and at a water depth of 1010 m, is modeled.The MH-bearing layer lies at a depth of 290 m from the top of the ground surface with a thickness of 44 m.We assume the depressurization method for the hydrate dissociation, and the depressurizing source is set at the left boundary of the model from the surface of the seabed ground.For the finite element mesh, a homogeneous soil column in the horizontal direction with a thickness of 1 m is employed, as shown in Figure 4.The modelled seabed ground was just used to determine distributions of the initial stress, the initial temperature, and the initial pore pressure.The numerical simulations were performed to confirm the results of the linear stability analysis, not to fully simulate the real situation of the methane hydrate production. The linear stability analysis was conducted under one-dimensional conditions for simplicity.In the numerical simulations part we have solved a one-dimensional problem in order to confirm the consistency of the results between the linear stability analysis and the numerical analysis, while the program code is written in two-dimensional plane-strain conditions.Consequently, the water and gas flow is limited to the one-dimensional flow by setting the no-flow boundary on the top and bottom surface.The left boundary is set to be permeable for water and gas and the adiabatic boundary.The top and bottom boundaries are set to be impermeable boundary so that the water and gas flow are limited to one-dimensional flow.The right boundary is also set to be permeable; however, the boundary conditions of the right boundary are set to be isothermal, namely, the temperature is kept constant at 287 K. 
Initial and Simulation Conditions The initial state of the pore pressure and temperature at the depressurization source are shown in Figure 5 with the methane hydrate equilibrium curve.The initial pore water pressure is 13 MPa for all elements, which is the hydrostatic pressure, and the pore water pressure at the depressurization source is linearly reduced to the target pressure.The target value varies with respect to the initial pore pressure at depressurization source Pini; the degree of depressurization ΔP varies from 20% to 80% with increments of 10%, as shown in Figure 5.By changing the magnitude of the depressurization, it becomes possible to control the MH dissociation.The depressurizing rate for each case is the same, that is 0.116 kPa/s (10 MPa/day); thus, the time when the pore pressure at the depressurization source reaches the target value is different for the different cases.The total time of the simulation is 100 h for each case.In the simulations, the total calculation time (100 h) is determined by considering the depressurization rate and depressurization level.The depressurization rate is 0.116 kPa/s (10 MPa/day), which is almost the same as that of offshore methane hydrate production trial conducted by Japan Oil, Gas, and Metals National Corporation (JOGMEC) in March 2013 [25].According to the rate, the depressurization will finish at about 6.2 h in the case of 20% of depressurization level, and even in the case of the maximum depressurization level, i.e., 80%, it will end at about 25 h.Another reason is that in each case the computation time takes more than 10 h, and we need vast amounts of h to calculate total 42 cases and more.Thus, the total simulation time is set to be 100 h.The initial conditions and material parameters are listed in Tables 1 and 2, respectively.Initial void ratio 0 e is 1.00, and the initial hydrate saturation in the voids, 0 H r S , is 0.51, where the void ratio e is defined by: The material parameters for the constitutive model are summarized in Table 2.The material parameters for the constitutive equation are mainly determined from the results of triaxial tests and its parametric studies, whose samples were obtained from the seabed ground at the Nankai Trough [26].Constitutive equations used in this simulation follow the equations presented by Kimoto et al. [9], and details can be obtained in their paper. In order to study the instability of the MH-bearing material system, different permeability values are considered as well as different levels of depressurization.The permeability for water changes from 1.0 × 10 −3 to 1.0 × 10 −8 (m/s), and the permeability for gas phases is set to 10 times the water permeability.The permeability coefficient for water 0 W k and gas 0 G k can be written as follows: where g is gravity acceleration, K is intrinsic permeability, μ W and μ G are the viscosity for water and gas, ρ W and ρ G are the density of water and gas, respectively.In the simulation, the seabed ground at 1300 m water depth is modelled, and the pore pressure at the initial state is around 13 MPa.Considering that the gas is treated as an ideal gas, the dynamic viscosity for gas phase v G becomes about 0.1 times that of the water phase, although it varies depending on the temperature.Consequently, the permeability coefficient for the gas phase becomes about 10 times larger than that of the water phase. 
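The times quoted above for reaching the target pressure follow directly from the depressurization rate and the initial pore pressure of 13 MPa at the source; a quick check:

\[
t_{20\%} \approx \frac{0.2 \times 13\times10^{3}\,\mathrm{kPa}}{0.116\,\mathrm{kPa\,s^{-1}}} \approx 2.2\times10^{4}\,\mathrm{s} \approx 6.2\,\mathrm{h},
\qquad
t_{80\%} \approx \frac{0.8 \times 13\times10^{3}\,\mathrm{kPa}}{0.116\,\mathrm{kPa\,s^{-1}}} \approx 9.0\times10^{4}\,\mathrm{s} \approx 25\,\mathrm{h}.
\]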
In addition, the permeability for water and gas depends on several parameters, i.e., the phase saturation, the void ratio e, the hydrate saturation S_r^H, and the temperature. In the simulation, however, we focus in particular on the effects of the void ratio e and the hydrate saturation S_r^H, and we use the following equations for the permeability. This is because we do not consider the interaction between the gas and the water; we only consider the interactions between the gas and the solid skeleton and between the water and the solid skeleton:

The permeability ratio α varies from 1 to 0 with respect to the hydrate saturation, as indicated in Figure 6. The initial state of the sediments is fully saturated with water, and it changes to unsaturated conditions due to the hydrate dissociation. In fact, little is known about how the gas behaves under fully or almost fully water-saturated conditions; clarifying the initiation of gas flow in water-saturated ground is a difficult and important research subject. This is one of the reasons why the effect of the phase saturation is not considered. The combinations of the permeability and the level of depressurization are listed in Table 3. The total number of cases is 42, that is, six permeability values and seven depressurization levels. The case name "Case-i-j" indicates that the permeability for the water phase is 1.0 × 10^-i m/s and the depressurization level is j (%). In the following section, the results of the numerical simulations are presented, along with a discussion intended to show the trend in the material instability.

Results of the Stable-Unstable Behavior during MH Dissociation

Figure 7 shows the results of the simulations for the different values of the permeability and the depressurization levels. In the figure, the solid circles (•) indicate stable simulation results, while the (×) marks indicate unstable simulation results. The judgment of stable or unstable is made according to whether at least one mechanical, thermal or chemical variable diverges. From Figure 7, it can be said that the material system basically becomes more stable with an increase in permeability. This is because a large permeability makes it easier for the fluid pressure produced by the MH dissociation to dissipate, although the reduction in pore pressure also reaches farther from the depressurization source, so that more MHs dissociate, when the permeability is large. In the cases of relatively low permeability, that is, k_W = 1.0 × 10^-7 and k_W = 1.0 × 10^-8 m/s, the system is more stable at large depressurization levels than in the cases of k_W = 1.0 × 10^-5 and k_W = 1.0 × 10^-6 m/s. The reason why the lower permeability makes the material system more stable is that the low permeability limits the spreading of the depressurization; consequently, the area in which the MHs dissociate becomes smaller and the production of pore gas pressure is reduced. The balance between the permeability and the depressurization is thus one of the important factors in material instability. In order to investigate the onset of material instability in more detail, several cases are selected, namely two stable cases and two unstable cases. For the stable cases, we choose Case-4-30 and Case-7-30, which have the same depressurization level and different permeabilities, and for the unstable cases, Case-4-40 and Case-7-40 are chosen. In Case-4-30 and Case-7-30, the depressurization finishes after about 9.4 h, while in Case-4-40 and Case-7-40 it ends after 12.5 h.
Figure 8a-d shows the time profiles of the pore gas pressure P^G (MPa) in elements 1, 2, and 3 for each case. The pore gas pressure is calculated in the elements where the MHs begin to dissociate. In Case-4-30, which is illustrated in Figure 8a, the pore gas pressure decreases with the progress of the depressurization. The production of pore gas pressure in elements 2 and 3 is initiated soon after that in element 1. This is because the depressurization spreads easily to the next element due to the large permeability. After that, the pore gas pressure in each element takes the same value, which is consistent with the depressurized one. In Case-4-40, on the other hand, the pore gas pressure diverges just after 9.4 h, and the calculation stops, as shown in Figure 8b. The large depressurization level may enhance gas production, and the permeability is not sufficient to allow the pore gas pressure to dissipate. The time profiles of the pore gas pressure are the same as in Case-4-30 until 9.4 h; the pore gas pressure in each element coincides with the depressurized value due to the larger permeability.

The time profiles of the pore gas pressure in the cases of low permeability, that is, Case-7-30 and Case-7-40, are different from those of larger permeability, as illustrated in Figure 8c,d. The pore gas pressure in both element 2 and element 3 is produced later than that of element 1, because it is difficult for the depressurization to spread due to the low permeability. In the case of large permeability, the pore gas pressure in each element decreased to the same level. In Case-7-30 and Case-7-40, however, the pore gas pressure becomes higher in the elements farther away from the depressurization source. It sometimes increases rapidly, mainly because it is more difficult for the pore gas pressure to dissipate than in the case of large permeability. In Case-7-40, the pore gas pressure reaches a high level after about 60 h. Finally, it diverges at 78 h and the calculation stops, as shown in Figure 8d. The behavior of the pore gas pressure is unstable. The large degree of depressurization produces a larger amount of gas than in Case-7-30, and this makes the material unstable. From the pore pressure and pore gas pressure results, we calculated the Reynolds numbers of the water and the gas flow in element 1 of Case-4-30. First, the water velocity is 8.4 × 10^-4 m/s, which can be estimated from the gradient of the pore pressure and the permeability indicated in Equation (90). As a characteristic diameter for the groundwater flow, the average grain size D_50 is often used in the geomechanics field; we therefore use D_50 = 0.15 mm, which is a value typical of fine sand or silt. The kinematic viscosity ν^W of water is 1.52 × 10^-6 m²/s at a temperature of 5 °C. The parameters for the Reynolds number are listed in Table 4.
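The numerical value of the water-flow Reynolds number did not survive in the extracted text, but it can be recomputed from the parameters quoted above. A minimal sketch, assuming the standard definition Re = v·D_50/ν with the mean grain size as the characteristic length:

```python
# Recomputing the water-flow Reynolds number from the values quoted in the text.
# Assumes the standard definition Re = v * D50 / nu, with D50 as the characteristic length.
v_w  = 8.4e-4     # estimated water velocity, m/s
d50  = 0.15e-3    # average grain size D50, m (0.15 mm)
nu_w = 1.52e-6    # kinematic viscosity of water at 5 degC, m^2/s

re_w = v_w * d50 / nu_w
print(f"Re (water flow) ~ {re_w:.3f}")   # ~0.08, i.e. well below the 1-10 threshold
```

A value of this order is consistent with the statement in the next paragraph that both Reynolds numbers fall below the 1-10 range within which Darcy's law is considered valid; the gas-flow value would be obtained in the same way from the parameters in Table 5.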
Substituting those values into the Reynolds number equation, we obtain the Reynolds number for the water flow. As for the gas flow, we also calculated the Reynolds number in the same manner as for the water flow; Table 5 indicates the parameters used for calculating Re of the gas flow. For both the water and the gas flow, the Reynolds number is less than 1-10, although this is a rough estimation. Consequently, Darcy's law, introduced in Equations (36) and (37), is valid to describe the fluid flow. In order to see the behavior of the MH dissociation, the time profiles of the MH remaining ratio are illustrated in Figures 9a-d. The MH remaining ratio is defined as the percentage of the current moles of MHs with respect to the initial moles. In the case of large permeability, the ratio decreases equally in each element. This means that the effect of the depressurization spreads to a similar extent because of the large permeability. When the depressurization stops at 9.4 h, the MH remaining ratio becomes almost constant in Case-4-30. In the case of lower permeability, on the other hand, the behavior of the ratio differs among the elements. As for the remaining MH ratio in element 2 and element 3, the dissociation starts later than in element 1; the MH dissociation begins after 15 h in element 2 and after 24 h in element 3. This is because it is hard for the depressurized area to spread due to the low permeability, and the amount of dissociated MHs is lowered. This supports the finding that the material system can become more stable even in the case of the lower permeability. The reduction of the ratio continues moderately even after the depressurization finishes at 9.4 h in Case-7-30, whereas it becomes constant after 9.4 h in Case-4-30. This indicates that a lower permeability can reduce the total amount of MH dissociation more than a higher permeability; however, the dissociation will then continue over the long term. Both Case-4-40 and Case-7-40 are unstable: the calculation stops after 9.4 h in Case-4-40 and after 78 h in Case-7-40. It can be said that material instability may occur in the short term, namely during dissociation, in the case of the larger permeability, whereas for the lower permeability the material instability has to be considered over a long span, namely even after the depressurization. Figures 10a-d and 11a-d show the time profiles of the average pore pressure P^F and the mean skeleton stress σ'_m, respectively. The average pore pressure P^F is defined by Equation (14), and the mean skeleton stress σ'_m is defined as follows:

The average pore pressure decreases with increasing depressurization in each case. In Case-4-30 and Case-4-40, it declines linearly, because the pore pressure at the depressurization source is reduced linearly. In Case-7-30 and Case-7-40, the average pore pressure in element 2 and element 3 decreases later than in element 1, and the pressure gradient increases between element 1 and element 2 or element 3. When the material instability occurs in Case-4-40 and Case-7-40, the average pore pressure diverges in the same manner as the pore gas pressure shown in Figure 8. These abrupt increases in the pore gas pressure P^G and the average pore pressure P^F lead to a drastic decrease in the mean skeleton stress, as illustrated in Figure 11. The result that a large pore fluid pressure produces a great reduction in the mean skeleton stress is the same as that obtained from the experiments [27]. Figure 12 indicates the time profiles of the total volumetric strain ε_v for each element.
It is worth noting that the volumetric strain is positive in expansion and negative in compression. Comparing Case-4-30 with Case-7-30, both cases are stable, and the total volumetric strain in Case-4-30 becomes larger than that in Case-7-30. This is because the amount of drained water in Case-4-30 is larger than that in Case-7-30 due to the larger permeability. Finally, the total volumetric strains in elements 1, 2, and 3 become -2.0%, -1.7%, and -1.5%, respectively, in Case-4-30. In Case-7-30, the total volumetric strains in elements 1, 2, and 3 become -1.2%, -0.4%, and -0.5%, respectively. In Case-4-30, the depressurization stops at 9.4 h, and the subsequent changes in the average pore pressure and the mean skeleton stress are small; however, the total volumetric strain keeps increasing until 100 h. The results indicate that the deformation is likely to continue increasing even after the changes in the pore pressure and the mean skeleton stress become small. In the unstable cases, that is, Case-4-40 and Case-7-40, expansive volumetric strain is observed at the time the calculation stops. The large pore fluid pressure leads to the large expansive strain, and the material system becomes unstable. The observation that expansive strain makes the material system more easily unstable is consistent with the results of the linear stability analysis.

Conclusions

In the first part of this paper, a linear stability analysis was performed in order to investigate the effects of the parameters on the onset of the instability of MH-bearing sediments induced by dissociation. The governing equations of the MH-bearing sediments used in that part are based on the chemo-thermo-mechanically coupled model proposed by Kimoto et al. [9], and for the constitutive equation for the soil skeleton we used a simplified viscoplastic constitutive model. The main conclusions obtained from the stability analysis are as follows:

1. The parameters which have a significant influence on the material instability are the viscoplastic hardening-softening parameter, its gradient with respect to hydrate saturation, the permeability for the water and the gas, and the strain.

2. Material instability may occur in both the viscoplastic hardening region and the softening region, regardless of whether the strain is compressive or expansive. However, when the strain is expansive, material instability can occur even if the material is in the viscoplastic hardening region. The expansive strain makes the possibility of instability higher in the model.

3. Permeability is one of the most important parameters associated with material instability.
The larger the permeability for the water and the gas becomes, the more stable the material system becomes. In other words, the lower the permeability, the higher the possibility that material instability will occur. These results are consistent with the results obtained from the experimental studies.

In the second part of the paper, some examples of the numerical simulation of the dissociation-deformation problem using the one-dimensional finite element mesh were presented in order to study the material instability with the chemo-thermo-mechanically coupled model. The effect of the material parameters, especially the permeability, was investigated. In order to clarify the relationship between the permeability and the degree of hydrate dissociation, a series of parametric studies on the permeability was conducted. For the dissociation method, we adopted the depressurization method. The main results of the numerical simulations are summarized as follows:

4. Basically, the simulation results become more stable with increases in permeability. However, they also become stable in the region of lower permeability. This is because the depressurized area is limited by the low permeability and, consequently, the amount of MH dissociation is also reduced.

5. When the calculation became unstable, the pore gas pressure diverged, and the mean skeleton stress then decreased drastically. A larger expansive volumetric strain was also observed. These results are consistent with those obtained from the linear stability analysis.

6. In the case of a higher permeability and a larger depressurization level, the divergence occurred during depressurization and MH dissociation. On the other hand, in the case of the lower permeability, the instability was observed around the end of the simulation, when the MH dissociation had almost converged. It is important to consider the material instability over the long term, that is, even after the dissociation calms down.

7. The compressive volumetric strain kept increasing after the depressurization finished and the changes in the pore pressure and the mean skeleton stress became small. This also demonstrates the importance of considering long-term stability.

Appendix A

The coefficients of the polynomial, Equation (73), are written as follows:

Appendix B

The Routh-Hurwitz criterion for a general n-th degree polynomial equation is given as follows. Given an equation of the form:

Figure 1. Schematic view of possible hazards in marine sediments induced by gas hydrate dissociation.

curve is positive, i.e., ∂H/∂P^C > 0. Similarly, H increases with an increase in the hydrate saturation S_r^H, i.e., ∂H/∂S_r^H > 0. On the other hand, the viscoplastic parameter μ decreases with an increase in the temperature due to thermo-viscoplasticity; hence, ∂μ/∂θ is negative. Consequently, the term in Equation (55) is positive. Using Equation (51) and Equations (53)-(56), and considering dε/dt = ωε, we obtain the spatial gradient of the perturbed skeleton stress as:

Figure 3. Schematic view of the target area of MH-bearing sediments for the numerical simulations.

Figure 4. Simulation model and boundary conditions.

Figure 5. Conditions of the change in pore pressure at the depressurization source.
Figure 4 (boundary condition labels): 1, 2, 3, A; depressurizing source; permeable for water and gas; adiabatic boundary; constant water pressure (13 MPa); permeable for water and gas; constant temperature (287 K); impermeable for water and gas.

Figure 7. Stable and unstable regions of permeability and the depressurization level during the MH dissociation process.

Table 5. Parameters for calculating the Reynolds number of gas flow.

Figure 9. Time profiles of the remaining MH ratio.

Table 1. Initial conditions of the soil material.

Table 2. Material parameters for the constitutive equation.

Table 4. Parameters for calculating the Reynolds number of water flow.
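The equations belonging to Appendix B did not survive extraction. For reference, a standard statement of the Routh-Hurwitz criterion is sketched below in generic notation; the specific coefficients of the paper's Equation (73) are not recoverable, so this is offered only as background, not as the authors' exact formulation.

```latex
% Standard Routh-Hurwitz statement (generic notation; not the paper's Eq. (73) coefficients).
% For the polynomial
\[
  a_0 \lambda^{n} + a_1 \lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n = 0, \qquad a_0 > 0,
\]
% all roots satisfy Re(lambda) < 0 if and only if every leading principal minor of the
% Hurwitz matrix is positive:
\[
  \Delta_k = \det
  \begin{pmatrix}
    a_1 & a_3 & a_5 & \cdots \\
    a_0 & a_2 & a_4 & \cdots \\
    0   & a_1 & a_3 & \cdots \\
    0   & a_0 & a_2 & \cdots \\
    \vdots &     &     & \ddots
  \end{pmatrix}_{k \times k} > 0,
  \qquad k = 1, \dots, n,
\]
% with the convention a_j = 0 for j > n. Violation of any of these conditions corresponds
% to a root with positive real part, i.e. growth of the perturbation in analyses of this type.
```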
v3-fos-license
2014-10-01T00:00:00.000Z
2010-02-24T00:00:00.000
7882173
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcneurol.biomedcentral.com/track/pdf/10.1186/1471-2377-10-15", "pdf_hash": "0c6b7e7d6fbfb916b09cffb543dfab49660fc9a7", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41089", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "0c6b7e7d6fbfb916b09cffb543dfab49660fc9a7", "year": 2010 }
pes2o/s2orc
Improving adherence to medication in stroke survivors (IAMSS): a randomised controlled trial: study protocol

Background: Adherence to therapies is a primary determinant of treatment success, yet the World Health Organisation estimates that only 50% of patients who suffer from chronic diseases adhere to treatment recommendations. In a previous project, we found that 30% of stroke patients reported sub-optimal medication adherence, and this was associated with younger age, greater cognitive impairment, lower perceptions of medication benefits and higher specific concerns about medication. We now wish to pilot a brief intervention aimed at (a) helping patients establish a better medication-taking routine, and (b) eliciting and modifying any erroneous beliefs regarding their medication and their stroke.

Methods/Design: Thirty patients will be allocated to a brief intervention (2 sessions) and 30 to treatment as usual. The primary outcome will be adherence measured over 3 months using Medication Event Monitoring System (MEMS) pill containers which electronically record openings. Secondary outcomes will include self-reported adherence and blood pressure.

Discussion: This study shall also assess uptake/attrition, feasibility, ease of understanding and acceptability of this complex intervention.

Trial Registration: Current Controlled Trials ISRCTN38274953

Background

Adherence to therapies is a primary determinant of treatment success. Poor adherence attenuates optimum clinical benefits and therefore reduces the overall effectiveness of health systems, yet it is estimated that in developed countries, only 50% of patients who suffer from chronic diseases adhere to treatment recommendations [1]. In the treatment of hypertension, it has been estimated that only 30-50% of patients regularly take their antihypertensive drugs as prescribed, and that non-adherence may cause half of antihypertensive drug "failures" [2]. A recent illustrative example from the field of cardiovascular disease is provided by the Duke Databank for Cardiovascular Disease for the years 1995 to 2002, which assessed the annual prevalence and consistency of self-reported use of aspirin, β-blockers, lipid-lowering agents, and combinations of the 3 drugs in patients with coronary artery disease. Rates of consistent self-reported medication use were sobering: aspirin (71%), β-blocker (46%), lipid-lowering agent (44%), aspirin and β-blocker (36%), and 21% for all 3 medications. Overall, consistent use was associated with lower adjusted mortality, although in this study the authors were unable to differentiate patient non-adherence from physician non-prescription [3]. In a further study of drug adherence and mortality in 31,455 survivors of myocardial infarction who were taking statins and β-blockers, patients were divided into 3 adherence categories: high, intermediate and low. After 1 year, compared with the high-adherence group, low adherers to statin therapy had a 25% increased risk of mortality [4]. Thus, polypharmacy is the norm, self-reported adherence is often suboptimal, and this is associated with elevated mortality risk [5]. The most recent Cochrane review of interventions to improve medication adherence concluded that, "Current methods of improving adherence for chronic health problems are mostly complex and not very effective, so that the full benefits of treatment cannot be realized.
High priority should be given to fundamental and applied research concerning innovations to assist patients to follow medication prescriptions for long-term medical disorders" [6]. Our aim is to improve medication adherence in the secondary prevention of stroke. Stroke is the third most common cause of death in the UK, and is the most common cause of severe physical disability amongst adults. The National Audit Office recently estimated that the annual cost of caring for people with stroke is £7 billion per year in the UK alone [7]. The risk of a recurrent stroke is 30-43% within 5 years, and it is estimated that, currently, 11,626 strokes occur annually in the Scottish population [8]. Large randomised controlled trials and meta-analyses have identified several drugs which significantly reduce the risk of future vascular events after stroke. The Scottish Intercollegiate Guidelines Network (SIGN) guidelines for secondary prevention after stroke now recommend antiplatelet therapy and reduction of both blood pressure and cholesterol level [9]. The estimated efficacy of these drugs in helping prevent a further stroke in Scotland is outlined in Table 1. A major risk factor for recurrent vascular events or death is therefore non-adherence to medication, but only limited data are available on patient adherence to medication intended to prevent recurrent stroke. However, there is no reason to believe that stroke patients should demonstrate better adherence than patients with other chronic conditions. In fact, the reverse is more plausible, given that stroke often causes memory impairment, which is known to cause adherence problems. In a study of over 3,000 patients in Germany, Hamann et al. [10] reported that 84% were still taking aspirin at one year post stroke, 77% oral anticoagulants, but only 61% of those who were prescribed clopidogrel at discharge were still taking it one year later. Sappok et al. [11] also reported on a follow-up study one year after stroke and found that only 70% of patients were still taking cholesterol-reducing treatment. Data from the Netherlands revealed that by 1 year after ischaemic stroke, 22% of patients who had been taking oral anticoagulation had stopped, half of whom did so "for non-medical reasons" such as perceived adverse effects, patient request etc. [12]. Thus, the available data on stroke patients suggest that adherence is often sub-optimal, and that many patients are consequently at a significantly increased risk of a further stroke and/or cardiovascular event.

Fractionating Adherence

Adherence is the end result of a complex set of perceptions, attitudes, cognitive abilities, intentions and behaviours. It has proved useful to distinguish between deliberate non-adherence (intentional) and non-deliberate non-adherence (non-intentional) [1]. The aim of this pilot project is to improve intentional adherence by addressing beliefs that act as a barrier to adherence, and to reduce non-intentional non-adherence by developing plans to help reduce forgetting.

Intentional non-adherence

Our theoretical framework is based around Leventhal's self-regulation theory [13]. This theory posits that patients have a common-sense model of their illness in terms of beliefs regarding how long a condition will last, whether it is acute or chronic, what sort of treatments will help, etc. Superimposed upon this framework of illness beliefs are beliefs about treatment, particularly the perceived necessity of medication versus concerns about possible harmful effects of medication [14].
Our recently completed study [15] on determinants of adherence in stroke patients supported self-regulation theory: it identified that stroke patients' concerns about their medication (e.g. dependence, toxicity, too many tablets) were key determinants of poor adherence. We therefore aim to elicit and attempt to modify erroneous beliefs about medication and stroke in this pilot randomised trial of a brief intervention.

Non-intentional non-adherence

Many patients forget to take their medication as directed. Our previous study on adherence in stroke patients established that cognitive impairment (as measured by the Mini Mental State Examination (MMSE)) was significantly associated with poor adherence [15]. An impressive body of evidence has accumulated showing that brief and easy-to-complete implementation intentions interventions are effective at reducing forgetting and improving medication adherence [16]. These involve patients writing down exactly when and where they will take their medication, using the format of an if-then plan ("If it is time X in place Y and I am doing Z, then I will take my pill dose"). The evidence clearly indicates that if-then planning makes people highly sensitive to the cues that they have written down, and means that they can act swiftly and effortlessly as soon as these cues are encountered; thus, environmentally cued habits are established. Implementation intentions remove the burden of having to think about and remember when to act by using environmental cues to trigger the desired behaviour. The load on prospective memory is reduced as habitual responses are established (e.g. the first cup of tea at breakfast in the kitchen cues taking the morning medication). In a recent example, Brown et al. [17] showed that a simple if-then plan in epilepsy patients resulted in intervention participants showing improved adherence relative to controls on all three outcomes: doses taken in total (93.4% vs. 79.1%), days that the correct dose was taken (88.7% vs. 65.3%), and doses taken on schedule (78.8% vs. 55.3%), all p < .01. Importantly, participants with the greatest degree of cognitive impairment benefited most from the intervention [17].

Framework

We have developed our intervention using the new Medical Research Council (MRC) Guidance on developing and evaluating complex interventions [18]. We have completed the development work of the MRC framework and have identified the evidence base, utilised an appropriate theoretical model of adherence (self-regulation), and identified process variables that relate to both intentional and non-intentional adherence. We now wish to embark on feasibility/piloting, where we will test a brief intervention in terms of recruitment, retention, acceptability and efficacy, and use the results of the pilot to inform the sample sizes required for a larger, more definitive trial. The intervention has two components, tackling both intentional and non-intentional non-adherence, and each component has a strong, supportive evidence base. We recently found that 30% of stroke patients reported sub-optimal medication adherence at interview. Approximately one third of the variance in self-reported poor adherence was predicted by the following four variables: (1) younger age, (2) greater cognitive impairment, (3) lower perceptions of medication benefits, and particularly, (4) greater specific concerns about medications (toxicity, side effects etc.) [15].
Our qualitative interview findings confirmed the questionnaire results by showing that (a) medication concerns were key determinants of medication-taking behaviour and (b) the establishment of a habitual routine for medication taking was seen as vital. The findings from that study justify the evaluation of a pilot intervention trial aimed at targeting both intentional and non-intentional components, with the goal of improving medication adherence.

Aims

To pilot the feasibility of a brief intervention in stroke patients exhibiting sub-optimal adherence with the aim of: (a) establishing a better medication-taking routine using an implementation intentions intervention; (b) eliciting and modifying any emergent erroneous beliefs regarding the patient's medication and their stroke. We will test whether medication routines and beliefs are changeable, and if the results are promising, this will pave the way for a larger randomised controlled trial (RCT) to determine whether adherence is improved, physiological risk is changed (e.g. via reduction in blood pressure), and the rate of recurrent vascular events is reduced.

Research questions

(a) Is the brief intervention feasible, understandable and acceptable (e.g. regarding uptake/attrition)? (b) Does the intervention improve adherence? (c) Is improvement in adherence mediated by (i) changes in illness and medication beliefs and/or (ii) reduced forgetting? (d) What effect size is observed, to inform the power calculation for a larger, more definitive study?

Methods/Design

Recruitment

In order to maximise recruitment and the representativeness of the sample, we shall attempt to recruit and obtain consent from consecutive patients who are discharged from the Edinburgh Western General Hospital stroke units and clinics and who are prescribed secondary preventative antihypertensive medication. Our previous experience suggests that this approach will significantly improve trial recruitment. Currently, approximately 300 in-patients and 400 out-patients are discharged per year, with over 60% prescribed antihypertensive medication. We will screen 400 first-time stroke (ischaemic and haemorrhagic) or Transient Ischaemic Attack (TIA) patients, and expect a 75% response rate (300). We plan to include both stroke and TIA patients, since both groups of patients are treated in a similar way with secondary prevention drugs and are likely to have similar issues with respect to non-adherence [9]. We have decided to focus on the early months following stroke in order to maximise the likelihood of preventing a further stroke. Poor adherence will have a much greater effect on stroke risk in the first few months, because the risk of stroke (and thus the absolute risk reductions associated with drug treatment) is highest at this stage [19]. Furthermore, following Petrie et al. [20], we believe that the efficacy of the intervention will be enhanced by eliciting and correcting dysfunctional stroke and medication beliefs soon after preventative drugs are started. However, we have to strike a careful balance between early intervention and allowing enough time for participants to demonstrate variance in adherence, so that we can specifically target those showing sub-optimal adherence; thus our decision to assess adherence at 3 months post stroke or TIA. Consenting participants will therefore be contacted by post 3 months after their event and asked to complete the Medication Adherence Self Report Scale (MARS) [21] and return it in a stamped addressed envelope.
In this mailing, we will also ask participants to complete the Brief Illness Perceptions Questionnaire [22] and the Beliefs about Medication Questionnaire [23]. This will allow us to economically test the robustness and replicability of the association we previously observed between specific medication concerns and self-reported adherence in a new large sample of TIA/stroke patients. We will then invite all those reporting sub-optimal adherence on the MARS (score of 24 or less) to participate. Based on our earlier study, we conservatively estimate that approximately 30% will report some degree of poor adherence, and that, of those, up to 30% may then decline to participate. We therefore estimate that if we invite 90 patients, 60 will agree to participate. They will then be randomly allocated to brief intervention or treatment as usual (TAU). We are allowing for a further 10% attrition rate during the trial; however, one of the main purposes of this pilot is to obtain data on uptake, acceptability and attrition.

Inclusion criteria

We aim to be as inclusive as possible and recruit all patients who had their first ischaemic or haemorrhagic stroke or TIA 3 months earlier, were discharged from the ward or clinic on any secondary preventative medication, and are living at home.

Exclusion criteria

We will only exclude people who are not on antihypertensive medication 3 months after their stroke/TIA, or whose degree of aphasia (Frenchay screen <13/20) or MMSE <23 makes completion of the study measures not feasible. Those who report already using Dosette boxes to improve their adherence, or who are not responsible for taking their own medication, will also be excluded.

Design

A pilot RCT with patients allocated to intervention versus treatment as usual (TAU). Web-based randomisation at the patient level into intervention or TAU will be provided by the Edinburgh Clinical Trials Unit, using minimisation with a random element to ensure that the two trial arms are not significantly different on three key variables: age, number of pills taken per day and baseline MARS adherence. A CONSORT flowchart of the trial design is shown in Figure 1.

Setting

A single-centre trial at a large teaching hospital in Scotland.

Ethical Approval

Ethical approval has been granted by Lothian NHS Board, South East Scotland Research Ethics Committee (REC ref. no. 09/S1102/36).

Measuring Adherence

There is no agreed "gold standard" when measuring adherence [21]. Our previous study clearly established that assay of urinary aspirin levels in stroke patients lacked sensitivity and was unhelpful [15]. Garber et al. [24] showed that appropriately framed self-report questionnaires show good concordance with electronic cap monitors and blood and urine measurement. The MARS attempts to reduce social desirability effects by framing questions so as to make non-adherent responses socially acceptable. The MARS has high internal and test-retest reliability, and has been shown to predict clinical outcome (blood pressure within range [21]). We used the MARS as our primary outcome measure in our previous study, establishing that stroke patients found it easy to use and understand, and we demonstrated that MARS scores were not correlated with a social desirability measure but were prone to ceiling effects [15].
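As a concrete illustration of the allocation scheme described under Design above, the following sketch implements minimisation with a random element over the three stated balancing variables. It is only an assumption-laden illustration of the principle: the actual web-based service used by the Edinburgh Clinical Trials Unit is not described in the protocol, and the category cut-offs and the 80% biased-coin probability below are invented for the example.

```python
import random
from collections import defaultdict

ARMS = ("intervention", "TAU")
counts = {arm: defaultdict(int) for arm in ARMS}  # counts[arm][(factor, level)]

def levels(age, pills_per_day, mars):
    """Categorise the three balancing variables (cut-offs are illustrative only)."""
    return (("age", "<70" if age < 70 else ">=70"),
            ("pills", "<=4" if pills_per_day <= 4 else ">4"),
            ("mars", "<=20" if mars <= 20 else "21-24"))

def allocate(age, pills_per_day, mars, p_best=0.8):
    """Assign the arm that minimises marginal imbalance, with a random element."""
    lv = levels(age, pills_per_day, mars)
    # Imbalance score: how many already-allocated patients share this patient's
    # factor levels in each arm; the less-loaded arm is preferred.
    score = {arm: sum(counts[arm][x] for x in lv) for arm in ARMS}
    if score[ARMS[0]] == score[ARMS[1]]:
        arm = random.choice(ARMS)                          # tie: simple randomisation
    else:
        best = min(ARMS, key=score.get)
        other = ARMS[0] if best == ARMS[1] else ARMS[1]
        arm = best if random.random() < p_best else other  # biased coin (random element)
    for x in lv:
        counts[arm][x] += 1
    return arm

print(allocate(age=66, pills_per_day=3, mars=19))
```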
We will therefore use the MARS as a viable and economic screen for patients demonstrating sub-optimal adherence, but will then use MEMS (Medication Event Monitoring System, MEMS® Aardex Ltd, Switzerland) pill containers which electronically record openings as our primary outcome measure in the evaluation of this pilot RCT. Intervention Two brief sessions, two weeks apart with a trained Research Fellow, lasting approximately 30-45 minutes each. Participants will be given the choice of having home visits or coming into a local hospital-based Clinical Research Facility. Session 1 will focus on helping each patient draw up a specific plan, so as to establish a better medication-taking routine using an implementation intentions approach. Patients will be helped to complete an individualised worksheet plan for each scheduled daily dose of antihypertensive medication, following Brown et al. [17]. The participant and Research Fellow will both keep a copy of the plan. Baseline blood pressure readings will also be taken during Session 1. Session 2. The effectiveness of the implementation intentions plan and any barriers/difficulties in following the plan will be reviewed in session 2, with individuallytailored coping strategies/plans developed collaboratively, following the methods outlined by Sniehotta et al. [25]. This session will also focus on eliciting and, if appropriate, challenging patients' beliefs regarding their medication, e.g. beliefs regarding toxicity, dependence, fears regarding medications interacting harmfully etc., using the participants' responses on the BIPQ and the BMQ as a basis. The aim here will be to correct any misperceptions and provide evidence so that participants' medication necessity beliefs come to outweigh their medication concerns beliefs. Previous work has demonstrated that better adherence results when medication necessity beliefs outweigh concerns [14]. Modification of erroneous beliefs about stroke will be based on the model of Petrie et al. [20] who elicited and modified patients' dysfunctional beliefs regarding their recent myocardial infarction. This resulted in faster return to work and lower angina symptoms at 3 months. If the Research Fellow is unable to answer any specific questions regarding the patient's stroke or medication, then immediately following the interview, the RA will email the query to one of the stroke consultant experts on the research team, and the RA will then telephone the patient with the information within 7 days of the interview. At the end of session 2, the Research Fellow will fill each participant's MEMS medication bottle with the following month's supply of antihypertensive medication. (We propose using the patients' existing supply of antihypertensive medication. A check will be taken at session 1, and if supplies are running low, participants will be asked to obtain their repeat prescription in advance of session 2). For each of the next three months (Sessions 3-5), the Research Fellow will repeat this process, and also take an electronic reading from the MEMS cap, downloading the data on to a laptop PC for later analysis. At the first of these follow-up visits (Session 3), participants will again complete the Brief Illness Perceptions Questionnaire (BIPQ) [22] and the Beliefs about Medications Questionnaire (BMQ) in order to test whether the intervention has resulted in changes to stroke and/or medication beliefs [23]. At the final visit, at 3-month follow-up (Session 5), the outcome measures will be administered. 
Control condition

Participants in the control group will receive the same number of Research Fellow visits and will complete the same questionnaires at the same timepoints as the intervention group. MEMS readings and BP recordings will also be taken, as detailed in the intervention arm. During the first 2 sessions, the Research Fellow will also engage control group participants in non-medication related conversation, e.g. how they are feeling, how they are spending their time etc., in an attempt to provide some control for non-specific effects of attention/social contact. In both conditions, all interviews will be timed and digitally audio-recorded and transcribed for supervision feedback and a check on treatment fidelity.

Primary outcomes

Medication adherence will be recorded using MEMS (Medication Event Monitoring System, MEMS® Aardex Ltd, Switzerland) pill containers, which electronically record openings. Following Brown et al. [17], and in line with previous studies using this method, we shall use the following main outcomes, counting each opening as a presumptive dose: (a) percentage of doses taken (versus doses prescribed), (b) percentage of days on which the correct number of doses was taken, and (c) percentage of doses taken on schedule. Again following Brown et al. [17], we designate doses as having been taken on schedule if the MEMS bottle was opened within a 3-hour (plus or minus) time window for each dose. The electronic monitoring caps can be connected to a personal computer that reads the data from the pill caps' microprocessors and generates a printout of every pill bottle opening over an extended time period (in this case, the preceding month). Because patients will usually be on a variety of medications, we will target antihypertensive medication for MEMS measurement, particularly as there is clear evidence that poorly treated blood pressure significantly increases the risk of future vascular events and, as stated in the introduction, it is estimated that only 30-50% of patients regularly take their antihypertensive drugs as prescribed [2]. If patients are taking more than one antihypertensive, we shall target the drug that is taken most frequently. MEMS have been successfully used in a variety of medication adherence interventions, e.g. Brown et al. [17]. However, as with most methods in this area, MEMS measurement is not immune from the Hawthorne effect, i.e. medication-taking behaviour is often improved in the short term as a direct consequence of it being measured [26]. We shall therefore use MEMS containers to record medication taking in both intervention and control arms for 3 months. We predict that in the control arm adherence will gradually drop off over the 3 months (as the Hawthorne effect fades), whereas there will be an increase or no change over the 3 months in the intervention arm. A recent three-month RCT aimed at improving adherence to medication in HIV-affected individuals also used this MEMS evaluation of outcome, and in the intention-to-treat analysis showed no change in the TAU arm but a significant improvement in the psychological intervention arm, with a controlled effect size of 1.0 [27]. We acknowledge the limitation in this pilot that the Research Fellow will not be blind to treatment arm; however, as our primary outcome is the MEMS automated recording of days per month that the correct dose of antihypertensive was taken, the potential for bias is significantly reduced.
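The three MEMS outcomes defined under Primary outcomes above could be derived from the cap-opening timestamps along the following lines. This is only a sketch of the computation as described in the protocol: the data structures are invented, and a real MEMS export would first need to be parsed into lists of opening times and scheduled dose times.

```python
from collections import Counter
from datetime import datetime, timedelta

def mems_outcomes(openings, schedule, window=timedelta(hours=3)):
    """openings / schedule: lists of datetimes (bottle openings / prescribed doses).
    Returns (% doses taken, % days with correct number of doses, % doses on schedule)."""
    n_prescribed = len(schedule)

    # (a) each opening counts as one presumptive dose, capped at the prescribed number
    pct_taken = 100.0 * min(len(openings), n_prescribed) / n_prescribed

    # (b) days on which the number of openings equals the number of scheduled doses
    sched_per_day = Counter(t.date() for t in schedule)
    open_per_day = Counter(t.date() for t in openings)
    correct_days = sum(1 for d, n in sched_per_day.items() if open_per_day.get(d, 0) == n)
    pct_correct_days = 100.0 * correct_days / len(sched_per_day)

    # (c) scheduled doses with at least one opening inside the +/-3 h window
    on_schedule = sum(1 for s in schedule if any(abs(o - s) <= window for o in openings))
    pct_on_schedule = 100.0 * on_schedule / n_prescribed

    return pct_taken, pct_correct_days, pct_on_schedule

# Example: one dose per day at 08:00 over three days; one dose taken 2 h late, one missed.
schedule = [datetime(2020, 1, d, 8, 0) for d in (1, 2, 3)]
openings = [datetime(2020, 1, 1, 8, 5), datetime(2020, 1, 2, 10, 0)]
print(mems_outcomes(openings, schedule))  # all three outcomes ~66.7%
```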
Secondary outcomes will include (a) MARS self-reported adherence for all secondary preventative medication, and (b) systolic and diastolic blood pressure.

Analysis

MEMS data will be analysed using an intention-to-treat protocol in a repeated-measures mixed design (2 groups × 3 time points). One of the primary aims of this pilot study is to determine the effect size achieved, to inform the sample size calculation for a larger, more definitive multi-centre study. (We acknowledge that the effect size observed may differ from what one would see in a large trial; however, the effect size sought in the large trial should be within the 95% CI of the estimate derived from this pilot. The pilot will thus give a useful measure of the variability of the primary outcome measures.) Using G-Power [28] for this pilot, we calculate that with 2 arms of 30 participants, focusing on the treatment group by time interaction term, we should be able to detect an effect size of 0.2, with a power of 0.80 and alpha set at 0.05 (i.e. a 0.2 of a standard deviation improvement in the brief intervention arm in the number of days per month that the correct antihypertensive medication is taken, relative to the TAU arm).

Control variables

At Session 1, following Trewby et al. [29] and our previous study, all participants will be asked to complete a baseline measure of their perception of the benefit (0-100%) provided by their current stroke prevention medication, using a simplified graphical presentation card. Participants will also complete the Mini-Mental State Examination (MMSE), as we previously demonstrated that younger age, lower MMSE and low perceived benefit of medication were all related to poor adherence [15]. Finally, we will determine whether these, and other baseline characteristics (e.g. prior use of reminder packaging and partner involvement in reminding to take medication), are related to treatment outcome.

Evaluation

The effects of the intervention will be evaluated in all participants via measurement of the primary and secondary outcome variables listed above. The Research Fellow will also take baseline and 3-month follow-up blood pressure measurements using an OMRON M10-IT BP monitor, following a standardised protocol (mean of 3 recordings). The blood pressure results will be fed back to the participants and their GPs in both intervention and TAU arms, together with simple information regarding ideal values. (NB we do not expect significant change in these physiological variables in this pilot, but they may change, albeit over a longer time period, in a larger trial, and we wish to assess feasibility and patient acceptability.) On completion, patients in the intervention arm only will also take part in a brief semi-structured interview to assess their views regarding the intervention. They will also be asked to complete Likert-type scales assessing (a) ease of understanding, (b) acceptability and (c) perceived benefit for each of the intervention components, namely 1) the medication routine planning worksheet, 2) discussion/information regarding medication, 3) discussion/information regarding stroke, and 4) blood pressure measurement. Additional file 1: Table S1 details all patient contacts and assessments at each time point.

Process evaluation

We will test whether any change in adherence is mediated by changes in medication beliefs (Beliefs about Medications Questionnaire - BMQ) and/or illness beliefs (Brief Illness Perceptions Questionnaire).

Timetable

This is a 31-month project.
The Research Fellow will spend months 1-3 learning techniques for eliciting and modifying patients' dysfunctional beliefs regarding their stroke and their medication. Patients will be invited to participate from month 3 onwards. It is anticipated that the running of the trial will take 24 months. Postal recruitment will be used to collect baseline data on the MARS, BMQ and BIPQ self-report measures, and to identify participants who are suitable for the intervention. Each participant recruited into the intervention will participate for roughly four months; thus we envisage six 4-month blocks, with approximately 10 patients being run concurrently in any one of these blocks. They will each require 5 face-to-face contacts (choice of home visit or meeting in the local Clinical Research Facility) over the participation period. Session 1 is when the remaining baseline measures will be completed and the implementation worksheet plan drawn up. Session 2, two weeks later, is when the plan is reviewed, elicitation and modification of illness and medication beliefs are conducted, and the MEMS containers are filled for the following month. Sessions 3 and 4 are brief monthly meetings to take a MEMS cap reading and refill the MEMS bottle. BMQ and BIPQ scores will also be collected at Session 3. The final outcome measures will be taken at Session 5. Months 28-31 will be spent analysing the data and preparing the final report and papers for conference presentation and publication.

Discussion

Improving adherence to appropriate prescriptions of existing efficacious treatments may well represent the best investment for improving self-management of long-term medical conditions [1]. This work has the potential to improve adherence and treatment efficacy and to reduce health service waste. A health economic evaluation would be central to any subsequent large-scale trial. This pilot trial is a logical development from our recent study of determinants of medication adherence in stroke patients [15], and is novel in that it targets both intentional and non-intentional non-adherence via belief modification and implementation intentions, respectively. The intervention is based on self-regulation theory and is brief, practicable and capable of being delivered by trained non-specialist health workers in an NHS setting. In a recent critical evaluation of adherence interventions [21], 6 consistent weaknesses in the field were identified: 1) a narrow focus for intervention, in particular a failure to consider both intentional and non-intentional non-adherence, 2) a "one size fits all" approach, i.e. not patient-centred, 3) failure to specify the content of the intervention, 4) "black box" evaluation, 5) lack of a theoretical framework, and 6) little or no process evaluation. Our proposed study addresses each of these weaknesses in that we are tackling intentional and non-intentional non-adherence, are using a patient-centred approach by eliciting individualised concerns, are describing the nature and content of the intervention, thus allowing for replication, are using Leventhal's self-regulatory model as our theoretical framework, and are testing process by assessing whether change in adherence is attributable to changes in illness and/or medication beliefs and/or reduced forgetting. This work has the potential to significantly improve the efficacy of a broad range of treatments in the NHS.
If the results of the pilot are promising, a larger, more definitive study in stroke patients would be planned, and similar evaluations in other chronic conditions should also be considered.
v3-fos-license
2020-06-18T09:05:54.461Z
2020-06-01T00:00:00.000
219948248
{ "extfieldsofstudy": [ "Medicine", "Political Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.thelancet.com/article/S254251962030125X/pdf", "pdf_hash": "f9e33b2044a42d72b7fc646bf0482c0ccb6e8ef9", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41090", "s2fieldsofstudy": [ "Environmental Science", "Sociology" ], "sha1": "0384e94167601efe845a8c804333241a6858979f", "year": 2020 }
pes2o/s2orc
Human mobility, climate change, and health: unpacking the connections

During the past decades, there has been increasing interest to understand the climate change–migration–health nexus. 1,2 Building on this work, this Comment underscores the importance of understanding and addressing the health of climate-related migrants as well as the health of people who migrate into or remain in sites with climate-related health risks.

Efforts to understand the connections within the climate change–migration–health nexus typically start with climate change as a driver of mobility. The UCL–Lancet Commission on Migration and Health, 3 for example, states that climate change could trigger substantial increases in migration. First, climate change is understood to shape mobility, with consequences for health (figure). Mobility responses to climate change impacts (eg, sea level rise, extreme weather, and disrupted livelihoods) are delineated through three categories: migration, planned relocation, and forced displacement. The current and potential scale of climate-related mobility is contested, and a large body of literature points out that climatic changes interact with political, economic, social, demographic, and environmental drivers to alter and amplify the scale and patterns of migration. 3,4 Nonetheless, climate-related mobility is an issue of sociopolitical and humanitarian concern, with mobile populations variously positioned as frontline victims, a security risk, or adaptive agents responding to climate change impacts. 5 It is important to understand and address health outcomes for people on the move. Outcomes will be diverse, and depend on the nature of mobility and health determinants in sites of origin and return, transit, and migration. Notably, most climate-related mobility will occur within low-income countries and regions where there are existing population health challenges.

Second, climate change-related health risks could shape mobility decisions (figure). Climate change affects health through direct exposures, such as heatwaves or extreme weather conditions, and through complex exposure pathways, such as altered food yields, water insecurity, and changes in disease transmission and vector ecology. 6 There is some evidence that climate-related health risks contribute to migration decisions. Some people living in drought contexts have been found to migrate temporarily or permanently to improve food security. 7 Yet, many studies indicate that key drivers of migration from sites of food shortage are livelihood diversification and structural determinants. 8 Noting that many people globally live and remain in places with considerable health risk, the extent to which climate-related health risks will drive out-migration is uncertain.

However, two pathways in the climate change–migration nexus are considerably underexplored and undertheorised. These pathways start with human migration and immobility, rather than climate change impacts, to trace connections within the climate change–migration–health nexus. First, people move into sites where climate change impacts have consequences for health (figure). In 2019, there were around 272 million international migrants (3·5% of the global population) and more than 740 million internal migrants. The overwhelming majority of people migrate internationally for reasons related to work, family, and study. Others leave their homes and countries for sociopolitical reasons including conflict and persecution. 9 An emerging body of research documents health risks for migrants who move into sites of climate vulnerability. For example, a study of Nepali migrant workers employed in construction in Qatar in 2009-17 documented deaths associated with excessive heat exposure. 10 Most migrants worked in high temperatures (>31°C), with cardiovascular disease being the major cause of death. The study found that most deaths were probably due to serious heat stroke, with extreme heat due to climate change increasing health risks. In this example, climate change impacts are understood to amplify health risks for populations that migrate for broader social, economic, political, and demographic reasons.

Second, immobile populations live in sites of climate risk with associated health consequences (figure). This includes what are termed trapped populations, who do not have the resources, assets, or networks to enable migration, and voluntarily immobile populations, who choose to remain for reasons of place attachment, sociocultural continuity, and values. 4,5,11 Little empirical research has examined the health impacts on immobile populations. However, some researchers argue that immobile populations living in sites of climate vulnerability might experience adverse health impacts that emerge from changes in water and food security, disease ecology, flooding and saltwater intrusion, and the psychosocial impacts of disrupted livelihoods. 12

Frameworks that connect climate change, migration, and health can shape research agendas and policy responses. The framework proposed here highlights health outcomes for climate-related migrants. Importantly, it also includes those who move into or remain (voluntarily or involuntarily) in sites with climate-related health impacts. This inclusion matters because, globally, most human (im)mobility occurs for reasons other than climate change. This framework seeks to broaden population health concerns beyond so-called climate refugees to consider more complex connections between human (im)mobility, climate change, and human health.
v3-fos-license
2021-10-17T15:13:03.556Z
2021-10-15T00:00:00.000
239013548
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://cepsj.si/index.php/cepsj/article/download/1150/538", "pdf_hash": "6d7b2454b35036ab35c80003300a0b895aeed618", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41091", "s2fieldsofstudy": [ "Education" ], "sha1": "95f98be7592ff16a0d5664bcb3a350dceb9a4cbf", "year": 2021 }
pes2o/s2orc
Lost Trust? The Experiences of Teachers and Students during Schooling Disrupted by the Covid-19 Pandemic

This paper aims to help understand how relational trust between students and teachers embedded in the teaching-learning process unfolded during the emergency distance and flexible hybrid education in Serbia in 2020. It also identifies niches in student-teacher relationships that hold potential for repairing and building trust. For the student-teacher relationship to be trust-based and thus conducive to students' learning and wellbeing, a consensus about role expectations must be achieved. As the Covid-19 crisis interrupted schooling and education, participants faced uncertainties and ambiguities in role enactment, and the cornerstones of relational trust were disrupted. In an effort to understand 1) the context in which trust was challenged, 2) the ways in which trust was disrupted, and 3) the opportunities for its restoration, we relied on a multi-genre dynamic storytelling approach to data collection and values analysis for data processing. A total of 136 students and 117 teachers from 22 schools wrote 581 narratives in three genres: stories, letters and requests. The analysis yielded 22 codes that allowed further understanding of how changes in structural and institutional conditions affected both students' and teachers' expectations of each other, and how incongruence of these expectations fed into feelings of helplessness for both students and teachers, disengagement from learning for students, and heavy workload and poor performance for teachers. In addition, the narratives account for positive outcomes when these expectations were met, and for opportunities for trust-building if students' and teachers' perspectives are brought to each other's attention and negotiated locally. Finally, recommendations for restoring trust are given.

Introduction

The year 2020 was exceptional for education systems globally. The Covid-19 pandemic led to unprecedented disruptions in schooling around the world (Bertling et al., 2020), which exacerbated already known education policy fractures and put the quality, equity and effectiveness of education at risk (Schleicher, 2020). It forced school closures, the introduction of emergency distance education, and experimentation with various hybrid and blended educational models, thus leaving more than 1.5 billion children temporarily out of school (UNESCO, 2020a) and exposing all education participants to an incommensurable degree of uncertainty and ambiguity (Bergdahl & Nouri, 2020;Gudmundsdottir & Hathaway, 2020;Hodges et al., 2020;Trust & Whalen, 2020).
Teachers were at the forefront in the abrupt change in their everyday practice (Reimers & Schleicher, 2020).The dissolution of the education setting, their professional role and their work habits are all potential high stressors for teachers (Kim & Asbury, 2020).Teachers were the first to face the challenges of technological, content, pedagogical and monitoring readiness (UNESCO, 2020a), all subsumed in conducting distance education.In doing so, teachers' basic psychological needs for autonomy, competence and relatedness were jeopardised (Kim & Asbury, 2020).They were coping with a multitude of new requirements under conditions that stripped them of feedback from students and drove them to a high risk of burnout (Hargreaves & Fullan, 2020).At the same time, teachers were also citizens, facing the same dilemmas, threats and anxiety as everyone else, and overseeing the home schooling of their own children while enacting their school-teacher role.Nevertheless, teachers' positive experiences were also detected, such as "increased flexibility in learning and teaching, more opportunities for differentiation in lessons, and increased efficiency in working, teaching, and learning" (van der Spoel et al., 2020, p. 629).Teachers also reported transformative experiences of finding a way out of uncertainty (Kim &Ashbury, 2020), discovering that "schooling is about much more than learning" (Moss et al., 2020, p. 4) and understanding the socio-emotional needs of students and the community both during school lockdown and in discovering how to deliver a blend of physical and remote teaching (Moss et al., 2020). Students' experiences are less often documented.Especially concerning was the fact that at least one third of students globally were excluded from distance education (UNICEF, 2020).Many studies point to the consequences of unequal access to distance education in terms of losses in students' learning and wellbeing (Bertling et al., 2020), and consequently to potential long-term lapses in countries' economic development (Hanushek, 2020).Mostly studies focusing on students' mental health and wellbeing have been conducted thus far, pinpointing emerging behavioural problems (Loades et al., 2020;Orgiedes et al., 2020), while students' experiences of the learning process in the context of school lockdown are still rare.Studies that include students portray the educational hardships they face in distance education as well.For example, Niemi and Kousa (2020) report how the tasks and requirements reaching the students were felt as demanding and overwhelming throughout several weeks, while teachers did not register or acknowledge the burden students felt.Distance education can also trigger reliance on student self-regulation and self-motivation (Kovacs Cerović et al., 2021), but not if students are without adequate support (Černe & Jurišević, 2018). 
The disruption and transformation of schooling elicited strong reactions from all education participants.The medical threat and feelings of frustration and/or exploration regarding the new situation had the potential to strongly unite teachers and students and create a strong transformational partnership in facing and overcoming the hardships.On the other hand, the isolation and remoteness resulting from school closure and stress had the potential to move the participants in quite the opposite direction.The disruption also created new niches to explore how the education process twists, transforms or deteriorates in unforeseen ways and gets reinterpreted and (hopefully) reconstructed by the education participants. Relational trust According to Bryk and Schneider (2002), relationships between teachers and students, teachers and other teachers, teachers and parents, and between all of these actors and the principal, are characterised by mutual dependencies in the effort to achieve desired outcomes.These dependencies are attached to school actors' understanding of the roles and obligations of others, as well as to expectations they hold of each other.Therefore, for a school community to be successful, a consensus about roles, obligations and expectations must be achieved in all role relationships.Such relational trust is grounded in respectful exchanges between school actors, genuine listening and taking others' views into consideration in subsequent actions, which make all school actors feel valued and respected.Competence in core role responsibilities is what produces desired outcomes and thus meets others' expectations.Moreover, holding each other in personal regard discerns trust as it spurs from the willingness of actors to enact more than just what the professional role requires (e.g., openness to others).In addition, perceptions about personal integrity (e.g., keeping one's word, moral-ethical perspective) affect school actors' judgments of trustworthiness (Bryk & Schneider, 2002). Benefits of relational trust An abundance of research results demonstrate multiple benefits of relational trust in the context of educational changes, school improvement and student achievement and wellbeing. Relational trust is at the core of teachers' experience of educational change (Bryk & Schneider, 2002;Cranston, 2011).Trust-based relationships with colleagues and principals make professional learning communities adapt to the continuously changing demands (Cranston, 2011;Tschannen-Moran, 2009), while teachers' perceptions of personal integrity (Louis, 2007;Tschannen-Moran & Gareis, 2015) as well as their perception of the professional competence of change administrators influence teachers' willingness to take risks and to test untested hypotheses (Bryk, 2010;Tschannen-Moran, 2009). Collegial trust among teachers has also proven to influence teachers' commitment to students (Lee et al., 2011); the collaborative community provides opportunities for teachers to share experiences, ask for support and get feedback from colleagues, which in turn enhances their efficacy on instructional strategies and student discipline. Consequently, students benefit from teachers' trust and their achievements are likely to increase, even in poverty-stricken schools (Goddard et al., 2001;van Maele & van Houtte, 2011).Moreover, students' sense of wellbeing strengthens when schools encourage and promote authentic forms of students' voices that warrant students' psychological and emotional involvement in schooling (Smyth, 2006). 
Students also benefit from their own trust in teachers in terms of learning and achievement (Goddard et al., 2001;Goddard, 2003) as well as prevention and mitigation of discipline problems (Gregory & Ripski, 2008); when students are confident in their expectation that teachers act reliably and competently, their engagement within the learning processes is higher. However, relational trust is moderated by contextual factors.A history of untrustful role-relationships within a school community is a barrier to the development of relational trust in the present.Institutionalised mistrust, such as negative long-term experiences with school leadership (Tennenbaum, 2018) or distrust in the system in general (Louis, 2007), prevent the establishment of relational trust between education participants. Another moderator of trust between school actors is the positional power they bear.In other words, more powerful actors hold trust for others based on the perceptions of their competence, while more dependent actors give trust based on perceptions of more personal characteristics (Weinstein et al., 2018).For example, teachers' trust in students is associated with their perceptions of the students' ability to meet their expectations, while students' trust in teachers is predicted by their experiences of trust teachers attribute to them (van Maele & van Houtte, 2011). The Covid-19 crisis and relational trust Relational trust becomes even more important in times of crisis, such as the Covid-19 pandemic, as the risks are greater and the stakes are higher (Myung & Kimner, 2020).Relational trust between school actors embedded in the culture of safety and respect is therefore of utmost importance for organised, quick and effective change, as it is conducive to the participants' resilience and school improvement (Myung & Kimner, 2020). To the best of our knowledge, research on relational trust in the context of the Covid-19 crisis does not yet exist.However, appeals for its establishment and maintenance, throughout school closure and especially during school reopening, have been noted.Myung and Kimner (2020) call for shared purpose, mutual trust, structures and resources that foster collaborative work.Viner et al. (2021) advocate health and protection protocols that maintain the trust of teachers, students and the public in education institutions.Darling-Hammon and Hyler (2020) appeal to policymakers to develop strategies that support educators in meeting the socio-emotional and academic needs of students (e.g., supporting mentoring and the development of new teacher roles, and creating time for educators to collaborate with each other and key partners). 
Education during the pandemics in Serbia During the 2020 pandemic, Serbian schools used two different approaches.As in many other countries in Europe and worldwide (UN, 2020), full school closure with distance education started in mid-March and lasted until the end of the school year.In autumn, a flexible hybrid approach was introduced, combining contact instruction (albeit with reduced hours and class size) with distance learning, allowing schools to design the option that fitted their students' needs and school capacities best, and allowing parents to individually choose the type of instruction they preferred for their child.The two approaches to schooling were not only different in organisation, but were also embedded in two different contexts.In spring, a six-week state of emergency with a major lockdown and a harsh curfew was enforced in parallel with school closure, while autumn brought a return to near normal organisation of life, albeit with social distancing, masks and no nightlife.Distance education during the school closure and as part of hybrid education included a combination of low-tech and high-tech tools from the UNESCO suggested list (UNESCO, 2020b), such as TV instruction, Viber groups, email, Messenger, Google Classroom, Google Meet or Zoom, but occasionally also no-tech solutions of providing printouts for parents or students to pick up at the school entrance. Research goals and questions Given the pricelessness of relational trust in times of crisis and the dynamics of education provision in Serbia during 2020, we wondered how teacher-student interactions unfolded and what expectations were involved.Did the transformed education process instil relational trust and trust in education itself, or did it challenge it? This paper puts the spotlight on the intricacies of the teaching-learning process and the relational trust embedded in it from the perspective and through the experiences of schoolchildren and teachers during the school closure and reopening in Serbia in 2020.The paper aims to help understand how relational trust between students and teachers was unfolding, distilling, diminishing, or reconstructing itself during emergency distance and flexible hybrid education experienced. The research questions that guided this research endeavour are: 1. How did students and teachers experience distance and hybrid education?2. Was trust disrupted and, if so, how? 3. What are the opportunities for repairing and strengthening relational trust in this challenging context? 
Method

As the aim of our study called for exploring nascent experiences saturated with feelings and a search for meaning, we selected a narrative methodology that utilises the dynamic storytelling approach (Daiute & Kovač-Cerović, 2017) as a data collection framework, and Values Analysis (VA) (Daiute, 2013;Daiute et al., 2020) as a type of qualitative analysis that fully respects the narrators' stances. This analysis builds on the understanding that narration is a communicative act and that narrative expressions communicate messages and meanings that the narrator chooses as important and valuable to share. Therefore, VA does not refer to the social-psychological notion of value, but to the communicative value of a message. Furthermore, guided by our interest in delving into the intricacies of the interactions between key education participants, we opted for multi-genre narratives. As prior research has shown (Daiute & Kovač-Cerović, 2017), different narrative genres provide opportunities for narrators to relate to different actual or imagined audiences with dynamic, different and even contradicting stances and voices, thus enriching the perspectives conveyed.

Data collection: Instrument and procedure

We constructed an online instrument containing prompts for narratives and basic group identifiers. The prompts for both teachers and students were designed to elicit narration in two different genres: in the form of a story about schooling in the altered conditions, and in the form of a letter to a peer who is about to face schooling in altered conditions. We additionally prompted students to narrate in a request genre by writing about what they would like to be different in the current schooling conditions. Both instruments were disseminated online to schools in two waves: first in June 2020, during the lockdown and distance education only, and several months later, in December 2020 and January 2021, when education was organised in a flexible hybrid model. In both waves, links to online instruments were distributed via school management and participation was voluntary.

Sample of participants and narratives

A total of 136 students and 117 teachers from 22 schools completed the questionnaires in two waves. In the first wave (June 2020), 45 students (64% female, average age 14.3) and 59 teachers (94% female, average work experience 15.9 years) took part in the study. In the second wave, another 91 students (59% female, average age 11) and 58 teachers (85% female, average work experience 15.5 years) participated in the research (no first-wave participants took part in the second wave of data collection). The participants wrote a total of 581 narratives. Table 1 shows the sample of narratives per wave, subsample of participants, and narrative genre.

Analysis

The narrative materials were segmented into thought units, usually consisting of one sentence per unit, which were coded. A coding manual was developed after the first wave of data collection. Three researchers collaboratively read a sample of materials to identify the organising principles and important messages, i.e., "values" communicated through each unit, and to assign codes and then fine-tune the coding system on another sample of materials. A sample of the narratives from the second wave was used to adjust the coding manual to newly emerged values. Prior to final coding, a reliability check was carried out. Cohen's Kappa coefficient showed strong agreement between the two coders (κ = 0.82, p < .001), who then coded the materials from both waves.
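As an illustrative aside on the reliability check described above, the sketch below shows how Cohen's kappa for two coders can be computed from parallel lists of assigned codes. This is a minimal Python sketch for illustration only: the coder labels and code names are hypothetical, and the original analysis may have used a statistics package rather than custom code.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Chance-corrected agreement between two coders rating the same thought units.
    assert len(labels_a) == len(labels_b), "coders must rate the same units"
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # Expected agreement if both coders assigned codes independently at their observed rates.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to the same ten thought units.
coder_1 = ["Heavy workload", "Learning process", "Teacher support", "Heavy workload",
           "Communication styles", "Learning process", "Heavy workload", "Teacher support",
           "Learning process", "Communication styles"]
coder_2 = ["Heavy workload", "Learning process", "Teacher support", "Heavy workload",
           "Communication styles", "Teacher support", "Heavy workload", "Teacher support",
           "Learning process", "Communication styles"]

print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.87 for this toy example

In this toy example the two coders disagree on one of ten units, giving an observed agreement of 0.9 and a kappa of about 0.87 once chance agreement is removed; the value reported in the study (κ = 0.82) would be computed in the same way over the sampled materials.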
Results A total of 2,346 thought units were coded with 22 codes, subsequently grouped into two broad themes: context and trust.The context theme is organised around codes that refer to the experiences of conditions set by the pandemics during distance and hybrid education, while trust is thematised through codes representing relations, perceptions and evaluations of self and others, around which trust is devised. The distribution of the 22 codes in the complete sample of coded thought units (across both stakeholders and both waves of data collection) is shown in Figure 1.The three pivotal codes emerging from students' and teachers' narratives, regardless of the education model, are Learning process and outcomes (14.1% of all coded units), Unusual way of schooling (11%), and Heavy workload (9.6%), while eight codes are distributed with frequencies close to 5%, and another eleven with lower frequencies, all calling for further detailed analysis. The detailed meaning and content of all of the codes within the two themes is described in the next section, where answers to the research questions are presented.Distributions of codes by stakeholders and education models for context and trust are shown in Figures 2 and 3, respectively.Both students' and teachers' narratives most frequently point to the heavy workload during distance education (Figure 2; code: Heavy workload; 36% of units in students' narratives and 23% of units in teachers' narratives), and how unusual (code: Unusual way of schooling) and confusing (code: Organisational confusion) it was.Students' expressed that they were overwhelmed with homework, and that too many different online platforms had to be used, which made it impossible to keep up with all of the teachers.One student summed this up as follows: "every week is more challenging than the other, every assignment harder than the previous, every piece of homework bigger, and all that without teachers to help you" (student Marija).Similarly, teachers described a surplus of obligations resulting from unclear top-down guidance, unequal outreach to students, and their lack of digital skills.One teacher portrayed this period as a "virtual darkness with no access to feedback of any kind" (teacher Vesna).Additionally, many teachers struggled to balance two very important roles (code: Multiplicity of roles) -teacher and parent -and, more frequently than students, they narrate about the anxiety caused by contradictory discourses about the virus (code: Virus -uncertainty and restrictions).Both students and teachers faced many technical problems: lack of equipment or outdated equipment, poor internet connection, sharing ICT devices with siblings and family (code: Technical requirements). School reopening and the transition to a hybrid model of schooling was less demanding and confusing for students, similarly demanding and confusing for teachers, but significantly more unusual for both (Figure 2): "this hybrid model makes no sense" as one student said (student Borko).Unusual and ambiguous students' experiences are related to the restrictive health measures (code: Health measures) of in-person instruction (e.g., short in-person lessons and half-empty classrooms), and to the mismatch between TV lessons and school-based work (code: Organisational confusion), which all somehow conveyed a lack of much needed contact with peers and teachers (code: Reduced peer contact).For teachers, this model triggered feelings of "helplessness in a controlled chaos" (teacher Slavica). 
These results show that the transition to distance teaching as well as the hybrid model of education brought many uncertainties and ambiguities in the enactment of students' and teachers' roles (Figure 2; code: Overwhelming negative emotions). Was trust disrupted and, if so, how? The narratives describe how all four core dimensions of trust in student-teacher relationships were disrupted: competence, respect, personal regard and integrity (Bryk & Schneider, 2002).In this section, we describe students' accounts of student-teacher relationships in relation to the four trust cornerstones and contrast them with teachers' views. Competence This is the most elaborated dimension of trust by both students and teachers and can be traced through 7 out of 13 codes: Coping strategies, Self-evaluations, Evaluations of others, Necessity for self-regulation, Perception of others' competence, Learning process and outcomes (a code prominent in all subsamples), and Active and creative uses of ICT. Students refer to the quality of distance and hybrid education (Figure 3, code: Learning process and outcomes), the (in)competence of teachers (code: Perception of others' competence), the need for self-regulation of learning (code: Necessity for self-regulation), and to much needed creative uses of ICT in education (code: Active and creative uses of ICT).They assess the changed schooling as not very effective, as they generally felt "less actively involved in learning" (student Alisa), and perhaps even more so in hybrid education.The incompetence of teachers is seen through underused features of digital platforms, applications and video chats.On the other hand, they describe cases of teachers' creative uses of ICT that helped them learn, develop new skills and feel good about education (codes: Active and creative uses of ICT and Self-evaluations). The code Learning process and outcomes in teachers' narratives is most frequently elaborated in terms of the equity of distance education as well as their own contribution to this dimension.For example, they explain how unequal students' participation was due to the unequal distribution of resources, and how now more than ever it was important for teachers to get to know their students, to praise their work and engagement, to find ways to motivate them, and to make careful and creative choices about teaching methods and learning materials.Additionally, teachers speak about their digital (in)competence (codes: Coping strategies, Self-evaluations, Active and creative uses of ICT): how the digital environment is unfamiliar to them, how collegial exchanges helped them find their way around digital platforms, how they need to further develop digital skills, how (un)creatively they used digital technologies, and why they should be used more in the future. Some narratives illustrate how organisational, technical and logistical day-to-day problems prevented teachers from responding to students' needs on time and from devoting themselves more to capacity building (codes: Necessity for self-regulation, Coping strategies): "each time you think you're on the right track, you know that an incoming instruction from the Ministry will push you into a ditch" (teacher Borjana). Respect Students' and teachers' accounts of mutual respect differ in their focus.Students point more frequently to disruptions, while teachers also elaborate their efforts to establish caring exchanges.Codes that inform understanding of respect are Need for understanding and Communication styles. 
The sources of disruption pointed out by students are similar across time (distance and hybrid model).Disruptions occurred when teachers expressed anger, resentment or lack of patience (Figure 3, code: Communication styles, e.g., when homework was not submitted on time) or when they expressed demands that the students perceived as beyond their capacities (code: Need for understanding, e.g., long tests in a very short time).In their narratives, students asked for more understanding and patience, and a better mood from teachers, as "some teachers are more hostile than before" (student Nenad).The notions of disrespect in communication found in the students' narratives may indicate disruptions in this dimension (code: Communication styles): "If students don't respect you, this is because you disrespect them" (student Lena). Teachers narrate about attempts to find ways to include all students in daily exchanges, especially at the beginning of lockdown, to maintain caring communication, to show understanding, to convey the importance of mutual support, and to teach them empathy (codes: Communication styles and Need for understanding).Teachers also suggest how respectful communication on the part of students is lacking (code: Communication styles) -politeness, nice behaviour, discipline, listening to what teachers are saying -which is mostly related to behaviour during lessons.However, teachers' considerations of respectfulness of their own communication related to instruction or assessment were not noted. Personal regard Students and teachers do not talk explicitly about the willingness of others to enact more than what their roles require.However, this can be understood through codes such as Teacher support, Peer support, Expectations of role enactment, and Communication styles. Students thus talked about the readiness of teachers to support them in resolving the same problems repeatedly, and about their availability for communication at all hours (Figure 3, code: Teacher support): "The good thing is that our teacher did his best to explain the lesson before and then again after the test" (student Neda). Some teachers describe their perception of students' responsibility (code: Expectations of role enactment).In doing so, they refer to their expectations that education should have been the students' priority during the pandemic, and that students should be motivated and have working habits.It was therefore especially rewarding for teachers when students, despite all of the obstacles, succeeded in responding to all of the assignments and in staying motivated throughout the lockdown, while their failing to live up to these expectations made teachers feel less valued and respected: "I feel bad and helpless because children won't turn their cameras on during lessons and they cheat with homework" (teacher Dara). Personal integrity Perceptions of personal integrity are also related to instruction and assessment in both the students' and teachers' narratives, and are highlighted through the code Ethics and moral of the other.Almost all notions refer to experiences during school closure and distance education. Students speak of a difference in power positions and point to times when "some teachers make claims and act inappropriately, just because they can" (student Veljko), which they see as unjust.In turn, they tend to withdraw from interaction and to meet only minimum requirements for that particular subject. 
Teachers note the unethical conduct of students, such as family members doing the homework instead of students or students making up unrealistic excuses for their absence in online education.One teacher stated that they have been assessing "moms, dads, older siblings, good aunts and helpful neighbours" (teacher Bojana).This was very challenging for teachers, as they had to intensify communication with parents, who negate such behaviours, while at the same time trying to avoid negative assessment in order to prevent adding burden and negative feelings in already hard times. What are the opportunities for repairing and strengthening relational trust? Although the results presented thus far point to disruptions of all four cornerstones of trust, niches that safeguard opportunities for repairing trust between students and teachers are registered too. Similarities and differences in the discernment of codes that are relevant to relational trust across students' and teachers' narratives can be noted across waves.During school closure, comparable discernment through the narratives was found in relation to Learning process and outcomes (about 20% of the coded units in students' and teachers' narratives).During hybrid education, however, teachers' accounts of Necessity of self-regulation become more frequent, thus approaching its distribution in students' narratives (about 10% of the coded units).Similarly, the frequency of students' accounts of Coping strategies came closer to its distribution across teachers' narratives in the second wave (hybrid education). These niches hold potential for restoring trust if they are brought to the attention of and negotiated by students and teachers. Commitment to learning goals and outcomes should be negotiated Caring communication on the part of teachers and openness towards providing socio-emotional and learning support makes students feel more comfortable and safer in times of crisis and strengthens their confidence in teachers' devotion to students' advancement.For example, students' words such as "we are not going to bother you… please don't get angry with us… it hasn't been a month since we started school… I hope we will get on with each other well" (student Marko) or "don't get angry with us if we don't complete assignments on time, not all of us have equal access to online platforms" (student Jovana) can be translated into teachers' notions such as "we should have in mind that some students are not digitally competent and that they need more support… also, not all students have equal access to online teaching… be patient because they will ask for support a lot!" (teacher Fatima). Expectations of role relationships should be agreed Students' demands for creative uses of online platforms and digital technologies by teachers, as this actively engages them in learning, should be met with understanding and competence by teachers.As one teacher explained: "Don't expect students to be online every day at the same time.They should learn at their own pace.Set realistic deadlines.Give new assignments on particular days, not every day, and choose them according to the outcomes you want to achieve… take kids to virtual museums… enjoy students' work and products…" (teacher Sofija). 
On the other hand, students' owning the responsibility for their own learning and ethical conduct leverages teachers' trust and commitment to students.Some students explained how distance learning was not too hard, as they "followed certain rules during the whole semester: behave respectfully towards teachers, actively participate in lessons, and study regularly" (student Milica). Discussion and policy recommendations In this section, we discuss the meaning and significance of the results in light of prior research on relational trust.In addition, we relate the results to considerations of systemic measures that can contribute to trust building between students and teachers. Discussion This research described how relational trust between students and teachers -as defined by Bryk and Schneider (2002), i.e., consensus about roles, obligations, and mutual expectations -became ruptured during the emergency distance and hybrid education in Serbia in 2020. Firstly, our data bears witness to the manifold challenges for the enactment of both students' and teachers' roles created by the transition to distance and hybrid education.Students narrate about overwhelming amounts of homework and negative and ambiguous emotions, while teachers speak about a surplus of obligations and feeling of helplessness.At the same time, they all encounter numerous technical problems.Similar experiences were noted in other research that included students and teachers during the Covid-19 crisis: losses in wellbeing, feeling of belonging, and confidence in their competences (Bertling et al., 2020;Kim & Asbury, 2020;Niemi & Kousa, 2020;Trust & Whallen, 2020). The fear and uncertainty that everyone faced needed to be mended and overcome through peer, collegial and teacher-student exchanges.However, as our findings suggest, trustful social exchanges were rarely available to students and teachers.All four cornerstones of relational trust were compromised during the period of distance and hybrid education: competence, respect, personal regard and integrity (Bryk & Schneider, 2002). Students described how a lack of confidence in teachers' competences diminished relational trust.They assessed that teachers' underdeveloped competences in the online environment negatively influenced students' engagement in meaningful learning, much as has been found in the case of face-toface instruction (e.g., Goddard et al., 2001).Moreover, teachers' abuse of their power position (lack of personal integrity) negatively affected students' trust in teachers: students withdrew from interaction and met only minimal class requirements.On the other hand, students described positive emotions and engagement in schooling when they perceived teachers' respectful communication and readiness to support students (respect and personal regard).These results suggest that when teachers encourage students' expression and psychological and emotional involvement in schooling, students' sense of wellbeing is strengthened (Smyth, 2006), including during education in times of crisis. 
For teachers, students' inability to meet the expectations of being motivated and persistent in learning during distance and hybrid education prevented them from holding students in high personal regard and consequently affected relational trust between them.The same happened when teachers perceived a lack of personal integrity among students (e.g., unethical conduct) or disrespect in communication (e.g., impoliteness).According to the literature on relational trust (van Maele & van Houtte, 2011;Weinstein et al., 2018), teachers' divulging trust to students based on the perception of their competence is a common feature of contact instruction, while students more often give trust to teachers based on their personal characteristics (e.g., personal integrity).However, our research showed that both competence and more personal cornerstones of trust, such as personal regard, respect and integrity, are a very powerful basis for trustful role-relationships between teachers and students in times of crisis. An important finding is how collegial exchanges during this time helped teachers in relation to capacity building and navigating the rapidly changing and overwhelming context.As previous research has shown, horizontal exchange and collaboration correlate with teachers' resilience in times of crisis (Cranston, 2011;Tschannen-Moran, 2009). The findings also point to how contextual features moderated relational trust (Louis, 2007;Tennenbaum, 2018).According to teachers, the frequent but conflicting top-down demands created a surplus of administrative obligations to be fulfilled within tight deadlines.These tasks often prevented them from responding to students' needs or establishing caring and empathic communication with them.Teachers report knowing that this led to students' disappointment, but they also felt a lack of understanding on the part of students, feelings that altogether jeopardised relational trust. As other authors have noted, shared purpose and mutual trust (Myung & Kimner, 2020) as well as meeting the socio-emotional and academic needs of students (Darling-Hammond & Hyler, 2020) are conducive to school actors' resilience and school improvement in times of crisis.In line with this, the results of the present research point to opportunities for strengthening the culture of safety and respect in schools, which is profoundly important for the Covid-19 education process.Negotiation of commitment to learning goals and outcomes, as well as consensus on expectations of role relationships, can contribute to students' and teachers' resilience and wellbeing during crisis, students' greater engagement in schooling, teachers' commitment to students, and more positive overall teacher experience of rapid change.With this in mind, we provide policy recommendations below. 
Policy recommendations Clear and timely guidance from education authorities.According to teachers' narratives, frequently changing and confusing top-down instructions prevailed even during hybrid education.Policymaking should thus establish a clear framework for emergency and remote teaching in terms of goals and outcomes, curriculum, platforms and assessment, with margins for possible directions of changes due to the evolving health situation.Guiding and support sessions and materials for teachers are also needed in order to reduce uncertainty and confusion.In turn, we expect, students' perceptions of teachers' competence and integrity would not be as compromised as they are now (Lee et al., 2011;van Maele & van Houtte, 2011), and their engagement in learning would increase (Goddard et al., 2001;Goddard, 2003). Building pedagogical digital competences of teachers.Both teachers and students pointed to the drawbacks and benefits of (ill-)prepared instruction, and (un)transparent assessment in distance and hybrid education, in terms of learning outcomes.Well-designed inclusive instruction in the digital environment is related to the culture that teachers build around implementing technology (McMahon & Walker, 2019), which is embedded in the school context and local realities (Kovacs, 2018), as well as in wider societal discourses on digitalisation in education (Vivitsou, 2019).Therefore, capacity building should aim to develop pedagogical skills in the digital environment, formative assessment, and a relational approach to instruction and learning (UNESCO, 2020a), and it should be articulated in horizontal exchange and collaboration within schools, allowing for the exploration of teachers' preconceptions and previous practices of integrating digital technologies into classroom instruction. Distance and hybrid education should offer opportunities for negotiation of role expectations.This research has demonstrated how a lack of transparent, respectful, timely and meaningful communication left students and teachers unaware of each other's needs and capacities, resulting in learning losses and reduced wellbeing.In order to avoid such negative effects of relational mistrust, education in times of crisis and rapid change should offer frequent opportunities for students and teachers to talk about their positions and to jointly define and plan the education process: defining obligations, establishing rules of conduct, planning course schedules, choosing the time and space for support in learning and socio-emotional support, etc. Negotiation of role-relationships should be institutionally supported (Louis, 2007;Tennenbaum, 2018) by enabling resources that students and teachers can use as needed (e.g., "learning hubs" for students who struggle with digital learning and lack of interaction during lockdown - Darling-Hammond et al., 2021). Conclusions The aim of this paper was to understand why and how relational trust between students and teachers was challenged during distance and hybrid education in Serbian primary schools.Furthermore, it illuminated niches of role relationships that hold potential for repairing and strengthening trust as they emerge from the data, and offered recommendations for trust-building that target students, teachers and policymakers. 
The results showed how students expected to rely on teachers to address the uncertainties and resolve the ambiguities that distance and hybrid education brought, through coordinated instruction at the school level, creativity and diversity of instruction, provision of support for learning, transparent and just assessment, and caring communication. When teachers met these expectations, students narrated positive learning outcomes and benefits for their wellbeing; otherwise, they felt overwhelmed, burdened and confused, and narrated learning losses. Therefore, lapses in resolving uncertainty and ambiguity made students question teachers' competence, credibility, integrity and respect. On the other hand, our research highlighted teachers' experiences in the ruptured education system, their perspectives on relational trust, and the structural and institutional conditions that affected their conduct and competence. Considering this, the recommendations for trust building suggest raising awareness among both students and teachers of each other's perspectives and negotiating them locally, as well as policy support to create opportunities for trustful student-teacher relationships in the course of emergency distance education and other crises.

Finally, we consider the limitations of our findings and implications for further research. Since the data collection in our study was conducted online, the percentage of narratives obtained from students with limited access to the internet and ICT devices is not proportional to the structure of the student body in the chosen schools. Consequently, the relational trust of these students and their teachers was not well explored. The methodology for future studies on relational trust needs to be more inclusive of students from vulnerable groups. Furthermore, this research did not take into consideration histories of institutional trust and previous accounts of role relationships in the schools from which our sample came, nor did it consider teachers' prior experiences of new technologies in education. Therefore, we were not able to discuss their contribution to the current state of role relationships, even though the relevant literature emphasises its importance.

Figure 1: Distribution of coded thought units across two broad themes (Characteristics of context, and Dimensions of trust)
Table 1: Sample of narratives per wave, subsample of participants, and narrative genre
v3-fos-license
2023-01-25T15:18:44.984Z
2021-05-26T00:00:00.000
256201388
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1186/s13578-021-00603-7", "pdf_hash": "1c90d1d8453bc6ac6e6b44095ee3d653ff4434e0", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41093", "s2fieldsofstudy": [ "Biology" ], "sha1": "1c90d1d8453bc6ac6e6b44095ee3d653ff4434e0", "year": 2021 }
pes2o/s2orc
Semaphorin3A increases M1-like microglia and retinal ganglion cell apoptosis after optic nerve injury The mechanisms leading to retinal ganglion cell (RGC) death after optic nerve injury have not been fully elucidated. Current evidence indicates that microglial activation and M1- and M2-like dynamics may be an important factor in RGC apoptosis after optic nerve crush (ONC). Semaphorin3A (Sema3A) is a classic axonal guidance protein,which has been found to have a role in neuroinflammation processes. In this study, we investigated the contribution of microglial-derived Sema3A to progressive RGC apoptosis through regulating paradigm of M1- and M2-like microglia after ONC. A mouse ONC model and a primary microglial-RGC co-culture system were used in the present study. The expression of M1- and M2-like microglial activation markers were assessed by real-time polymerase chain reaction (RT-qPCR). Histological and Western blot (WB) analyses were used to investigate the polarization patterns of microglia transitions and the levels of Sema3A. RGC apoptosis was investigated by TUNEL staining and caspase-3 detection. Levels of Sema3A in the mouse retina increased after ONC. Treatment of mice with the stimulating factor 1 receptor antagonist PLX3397 resulted in a decrease of retinal microglia. The levels of CD16/32 (M1) were up-regulated at days 3 and 7 post-ONC. However, CD206 (M2) declined on day 7 after ONC. Exposure to anti-Sema3A antibodies (anti-Sema3A) resulted in a decrease in the number of M1-like microglia, an increase in the number of M2-like microglia, and the amelioration of RGC apoptosis. An increase in microglia-derived Sema3A in the retina after ONC partially leads to a continuous increase of M1-like microglia and plays an important role in RGC apoptosis. Inhibition of Sema3A activity may be a novel approach to the prevention of RGC apoptosis after optic nerve injury. Introduction Optic nerve injury resulting in progressive retinal ganglion cell (RGC) death is a serious and irreversible cause of blindness [1]. Retinal neuroinflammation is a leading factor limiting the recovery of RGC after primary optic nerve impairment [2,3]. Microglia makes up a significant portion of the resident glial population in the retina and are key mediators of neuroinflammation [2,4,5]. In previous studies, we have investigated the essential role of microglia in triggering retinal inflammation [6,7]. Optic nerve injury is followed by migration, activation, and proliferation of microglia [8,9]. Activated microglia, including retinal microglia, can be divided into two major types: pro-inflammatory type M1-like microglia and antiinflammatory type M2-like microglia [10,11]. M1-like microglia secrete pro-inflammatory cytokines, including TNFα, IL-23, IL-1, and IL-12, which contribute to neuronal damage. M2-like microglia, activated by IL-4 or IL-13, are anti-inflammatory and promote tissue repair and wound healing [4]. Sufficient evidence has suggested that reciprocal transformation of M1-like and M2-like microglia occurs under certain conditions. This reciprocal transformation can lead to either the increase or the subsidence of neuronal inflammation [12][13][14]. However, the initiating factors governing the polarization of microglia in RGC secondary injury is still not fully elucidated. 
Sema3A is involved in the negative regulation of neuronal axon and dendrite polarity [15-19] and also acts as an effective regulator of some essential stages of inflammation and the immune response [16,20-22]. For example, patients suffering from late-stage proliferative diabetic retinopathy have elevated levels of vitreous Sema3A, which attracts neuropilin-1 (NRP-1)-positive mononuclear phagocytes [23]. Sema3A also activates the transcription factor NF-κB via the TLR4 signaling pathway in macrophages to increase pro-inflammatory cytokine production and to augment inflammatory responses in a sepsis-induced cytokine storm [24]. Previous studies have confirmed that the level of Sema3A (mainly distributed in the RGC layer of the retina) increases significantly during the 3 days after optic nerve crush (ONC). Its expression can persist for 14-28 days after injury [19,25]. Anti-Sema3A antibodies (anti-Sema3A) can rescue RGCs from the apoptosis that occurs after optic nerve axotomy [26]. Neuropilin 1 (Nrp1) is a receptor of Sema3A. It has been shown that levels of microglia in the Nrp1-floxed mouse retina are significantly lower than in the retinas of WT mice [23]. Nrp1+ microglia are present throughout the retina during vascular development, although they are more prevalent in nonvascularized retinal tissue [27]. This suggests a potential paracrine effect of microglial Sema3A/Nrp1 signaling in retinal development and pathogenesis. However, the role of Sema3A in retinal neuroinflammation and its interaction with retinal microglia remain unclear.

In the current study, we investigated the regulatory effect of Sema3A on M1- and M2-like microglia dynamics in vivo and in vitro. In a mouse ONC model, we demonstrate that a significant amount of Sema3A is secreted by retinal microglia post-ONC. The dynamic changes of M1/M2-like microglia and neuronal apoptosis were evaluated. The levels of M1/M2-like microglia and the extent of RGC apoptosis were further verified in a co-culture model of primary microglia and RGCs. We found that increased Sema3A expression increased the pro-inflammatory M1-like phenotype and decreased the anti-inflammatory M2-like phenotype. In consequence, as pro-inflammatory cytokine release increased, RGCs underwent apoptosis. Anti-Sema3A treatment in vitro and in vivo decreased the M1-like phenotype and increased the M2-like phenotype, contributing to the amelioration of RGC apoptosis.

Animals and surgery

All animals were treated according to the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. Experimental procedures were approved by the Institutional Animal Care and Use Committee of the Army Medical University. C57BL/6 mice were purchased from the Army Medical University. The mice were housed at an animal care facility with a 12/12-h light/dark cycle and ad libitum access to food and water. The classic model of optic nerve crush (ONC) was performed as previously described [6,28]. Adult C57BL/6J mice (male, aged 6-8 weeks; weight 20-24 g) were anesthetized by intraperitoneal injection of pentobarbital sodium (30 mg/kg). The exposed optic nerve of the left eye was crushed for 10 s at a distance of 1.5 mm from the eye globe with ultrafine self-closing forceps, without damage to the retinal vessels or the blood supply. The right eye was used as a sham control (Fig. 1a). The mice were killed at 3 and 7 days post-ONC. Primary microglia were cultured from newborn C57BL/6 mouse cortex for the in vitro studies.
Microglia depletion

The colony-stimulating factor 1 receptor antagonist PLX3397 was used for the pharmaceutical depletion of microglia. Male mice aged 8-10 weeks were given AIN-76A chow containing 290 mg/kg PLX3397 [29,30]. Age-matched controls were given AIN-76A chow without PLX3397. After 3 days of diet administration, the mice underwent the ONC procedure and were sacrificed.

Intravitreous injections

Adult mice received intravitreous injections of anti-Sema3A (1 µl, neutralizing antibody) [31] in their left eyes before the ONC procedure. The right eyes were used as sham controls and received saline injections. The intravitreal injection procedure was performed as described previously [32]; no elevated intraocular pressure was detected in any of the eyes after surgery.

Whole-mounted retinal immunofluorescence

Retinas were fixed in 4% PFA for 1 h, then dissected as whole-mounts. An orientation record was maintained as previously described [28]. Briefly, the intact retinas were incubated in PBS containing 5% BSA and 3% Triton X-100 at 4 °C overnight. Primary mouse monoclonal anti-Tuj1 antibody (Covance, Cat. MMS435P) was added (1:500) and incubated overnight at 4 °C. The retinas were then incubated with secondary anti-mouse IgG antibodies conjugated to Alexa Fluor 488 overnight at 4 °C and examined by confocal microscopy with the appropriate filters (SP8, Leica, Germany).

Microglial cell culture

Mixed glial cultures were isolated from the cerebral cortices of 1-day-old C57BL/6 mice as previously described [33]. Cells were dissociated under aseptic conditions, suspended in DMEM-F12 with 10% FBS, and seeded at a density of 62,500 cells/cm2 [34]. Cells were cultured at 37 °C and 5% CO2 for 15 days. Mixed glial cells were then shaken at 200 rpm in a rotary incubator overnight at 37 °C to dissociate the cells. The suspended cells were collected and replated in DMEM-F12 with 10% FBS. The purity of the microglia was confirmed by immunostaining with anti-Iba1. Three independent microglial cultures were then treated for 6 h with vehicle (cell culture medium), Sema3A (R&D), or 100 ng/mL LPS (Escherichia coli serotype O26:B6, Sigma-Aldrich), and primary microglia were harvested after 1 day and 3 days for further experiments.

RT-qPCR

Total RNA was extracted from retinas and primary microglia using the RNeasy RNA isolation kit (Qiagen). Reverse transcription reactions were performed using the PrimeScript RT reagent kit (Takara) according to the manufacturer's instructions. RT-qPCR was performed using the SYBR method (Quanta BioSciences) on a CFX96 Real-Time PCR Detection/C1000 Thermal Cycler system (Bio-Rad). The results were normalized to β-actin. Relative quantification was calculated using the 2^(-ΔΔCq) method. The primer sequences used are listed in Table 1.

Statistical analysis

Statistical analyses were performed using GraphPad Prism 7.0 software (GraphPad, La Jolla, CA, USA) and results are reported as mean ± SEM. Student's t-test, one-way ANOVA, and least significant difference post-hoc tests were employed. All experiments were performed independently with n > 3. P values ≤ 0.05 were considered statistically significant. Significant values are marked as * (P < 0.05), ** (P < 0.01), and *** (P < 0.001).

After optic nerve crush, Sema3A increased, accompanied by microglial activation

WB results showed that Sema3A expression in the mouse retina increased significantly and peaked at 3 days post-ONC.
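As an aside on the relative quantification described in the RT-qPCR section above, the sketch below shows how 2^(-ΔΔCq) fold changes are typically computed from quantification cycle (Cq) values normalized to a reference gene such as β-actin. This is a minimal Python sketch for illustration only: the function name and the Cq values are hypothetical, and the original analysis may have been performed in the instrument software or GraphPad Prism.

def fold_change_ddcq(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control):
    # Normalize the target gene Cq to the reference gene within each condition.
    dcq_treated = cq_target_treated - cq_ref_treated
    dcq_control = cq_target_control - cq_ref_control
    # Difference of differences; a negative ddCq indicates up-regulation of the target.
    ddcq = dcq_treated - dcq_control
    return 2 ** (-ddcq)

# Hypothetical Cq values: target gene vs beta-actin, injured retina vs sham control.
fc = fold_change_ddcq(cq_target_treated=24.1, cq_ref_treated=17.0,
                      cq_target_control=26.3, cq_ref_control=17.1)
print(round(fc, 2))  # ~4.29-fold increase relative to the sham control

The design choice behind the method is that each exponentiation step assumes roughly 100% amplification efficiency, so a one-cycle difference in Cq corresponds to a two-fold difference in starting template.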
On day 7 post-ONC, Sema3A levels declined but remained higher than the control (Fig. 1b), consistent with previous results [19]. At 3-and 7-days post-injury, there was a concomitant increase in expression of Iba1, indicating that microglia were significantly activated (Fig. 1d). Based on the RT-qPCR analysis, there was a significant increase in mRNA expression of Sema3A and its receptor Nrp1 at 3 days and 7 days post-ONC (Fig. 1e, f ). Sema3A immunofluorescence staining was primarily located in the Ganglion Cell layer (GCL) (Fig. 1g) while Iba1 was distributed throughout the GCL, inner plexiform (IPL), and outer plexiform (OPL) layers. Detailed morphological analysis at 3-and 7-days post-ONC indicated that Iba1 + cells adopted a more irregular soma shape, with increased cell body size and retracted processes. Microglia depletion reduces Sema3A levels after ONC Sema3A can be produced by stressed neurons, activated astrocytes, and microglia [35][36][37]. To determine the contribution of Sema3A from microglia in the mouse model, mice treated with PLX3397, which depleted microglia in retina [30,38,39], underwent ONC. Immunofluorescence staining revealed a significant 50 % reduction in retinal microglia after PLX3397 treatment (Fig. 2a, b). WB results showed that Sema3A and Iba1 levels in retinal microglia from PLX-3397 treated mice were both significantly lower than that in untreated mice (Fig. 2c, d), suggesting that microglia serve as one of the sources of Sema3A protein. However, the remaining microglia were activated, similar to microglia in untreated ONC mice (Fig. 2e). Immunostaining of Sema3A and Iba1 in retinal cells showed that Sema3A protein co-localized with Iba1 in microglial cells (Fig. 2f ). M1-like microglia levels increase and M2-like microglia levels decrease in retinal tissues post-ONC, similar to cultured primary microglia treated with Sema3A There was robust induction of the M1-like marker CD16/32 and the M2-like marker CD206 in the retina at 3 days post-ONC (Fig. 3a-c). At 7 days post-ONC, the level of CD16/32 (M1-like) was observed to be higher than that observed at 3 days post-ONC. However, the expression levels of CD206 were greatly reduced (Fig. 3ac), which was confirmed by immunofluorescence staining (Fig. 3d). RT-qPCR revealed that cytokines secreted by M1-like microglia, including IL1-β, IL-6, iNOS, and TNF-α, were significantly up-regulated, while those of M2-like microglia, such as CD206 and Ym1, were downregulated (Fig. 3e). Cell staining with P2ry12 co-labeled with CD16/32 and CD206 was used to distinguish resident microglia from infiltrating macrophages. The results suggested that microglia polarization after optic nerve injury mainly occurred in resident microglia (Fig. 4). In vitro studies demonstrated that the treatment of primary microglial cells with Sema3A significantly increased the population of M1-like microglia (Fig. 5a). Doubleimmunofluorescence staining of CD16/32 and CD206 in primary microglia showed that 1 day after Sema3A and LPS treatment, CD16/32 + and CD206 + cells were both increased. However, 3 days after Sema3A and LPS treatments, an extra immunofluorescent staining in primary microglia showed increased M1-like microglia and decreased M2-like microglia, which supported the RT-qPCR results (Fig. 5b, c). These in vitro results indicate that Sema3A is involved in the pathological processes that increase the population of M1-like microglia and decrease the population of M2-like microglia after ONC. 
Anti-Sema3A administration significantly reduces RGCs apoptosis in vivo and in vitro Sema3A has been shown to rescue RGCs from cell death following optic nerve axotomy [26], yet the mechanisms vary. We performed intravitreal injections of anti-Sema3A and observed that, on day 7 post-ONC, the increase in the number of M1-like polarized cells slowed but the number of M2-like cells continued to increase (Fig. 6a). In the absence of anti-Sema3A treatment, whole-mounted retinal immunofluorescence staining with anti-Tuj1 antibody indicated that the number of RGCs in the retina had decreased significantly at days 3 and 7 post-ONC. Although no significant difference in RGC numbers was detected between the ONC3D group (3-day post-ONC) and the anti-Sema3A + ONC3D group, the numbers of RGCs in the ONC7D group were significantly lower than that of the anti-Sema3A + ONC7D group (Fig. 6b). These results indicate that intravitreal treatment with anti-Sema3A reduces M1-like microglia polarization, increases M2-like level, and reduces the loss of RGCs following ONC. Discussion Visual impairment caused by optic neuropathy is a primary cause of blindness. The mechanisms of irreversible RGCs death from conditions such as optic nerve injury and glaucoma remain unclear. Previous investigations have suggested that Sema3A plays a crucial role in the development of retinal inflammation and RGCs apoptosis post-ONC [24,26]. Sema3A is continuously expressed in the retina from development through adulthood [41,42]. It is known that Sema3A is significantly up-regulated in the retina post-ONC [19]. Our results demonstrated that the peak expression of retinal Sema3A occurs at day 3 post-ONC and continues to be expressed for 14-28 days postinjury [43] (Fig. 1b-f ). Astrocytes, endothelial cells, and activated microglia contribute to the increase of Sema3A secretion [44,45] (Fig. 2). This is confirmed by the observation that as the number of retinal microglia declined with PLX3397 treatment, a significant decrease of Sema3A was observed in the retina. Sema3A causes the collapse of the growth cone of regenerated axons and inhibits axon elongation by binding to the receptor complex Nrp1/PlexinA [19]. Blocking Sema3A binding to Nrp1/PlexinA effectively reduces growth cone collapse of the dorsal root ganglion after spinal cord injury and promotes nerve regeneration in a rat olfactory nerve axotomy model [46]. Emerging evidence suggests that Sema3A regulates B and T lymphocytes and contributes to the progression and development of diseases such as Systemic Lupus Erythematosus and cancer [47,48]. Sema3A also plays a vital role in the migration and transportation of dendritic cells and the recruitment of mononuclear phagocytes [49]. Our results show that microglia are activated and the number of pro-inflammatory M1-like cells are increased with increasing retinal Sema3A. At the same time, the number of anti-inflammatory M2-like microglia is decreased, accommodating the microglial polarization M1-/M2-like dynamic response and primarily occurring in resident microglia (Figs. 3,4 and 5). This finding extends the current understanding of the interaction between Sema3A and microglia and demonstrates the important role of Sema3A in neuroinflammation following optic nerve injury. Anti-Sema3A inhibition of Sema3A significantly reduced the rate of M1-like cell polarization and increased the rate of M2-like microglia polarization post-ONC in the retina (Fig. 6a). 
Similar results were obtained in vitro using primary microglia cultured with Sema3A protein (Fig. 7). RGCs apoptosis in mouse retina post-ONC and in primary microglial-RGCs co-culture was reduced by anti-Sema3A treatment (Figs. 5, 6 and 7, Additional file 1: Fig. S1). These findings demonstrate that elevated Sema3A induces RGCs apoptosis and inhibits the regeneration of RGCs axons by regulating M1/ M2-like microglia dynamics post-ONC (Figs. 1, 3, 4, 5, 6 and 7). Our results shed light on part of the RGCs apoptosis pathway induced by Sema3A post-ONC and suggest that Sema3A regulation is a possible new approach for the treatment of optic nerve injury. Microglia can be polarized along a continuum toward an inflammatory (M1) or a non-inflammatory (M2) state and microglial reciprocal transformation may participate in the progression of neurodegenerative diseases, such as Alzheimer's disease [14,50]. After spinal cord injury, TNF prevents the phagocytosis-mediated conversion of M1-like to M2-like cells and mediates an increase in iron-induced changes of IL-4-polarized M2 cells to M1 cells, which is detrimental to recovery [13,51]. Our results show that an increase in Sema3A levels post-ONC leads to a significant up-regulation of M1-like microglia and a concomitant decrease in M2-like microglia, consistent with the dynamic changes of M1/M2 phenotype after traumatic brain injury and stroke. The regulation of M1/M2-like microglia dynamics by Sema3A may play a role in M1/M2 transformation in the retina. The signaling pathways through which Sema3A regulates M1/M2-like microglia transformation requires further study. Evidence has shown that the NF-κB signaling cascade regulates the production of pro-inflammatory mediators and contributes to the M1/M2-like microglia transition [52,53]. Sema3A enhances LPS-induced acute kidney injury by increasing Rac1 (a key factor for activation of NF-κB) and p65 and augments LPS-induced macrophage activation and cytokine production in a plexin-A4-dependent manner [24,54]. Previous studies confirmed that retinal TLR4 expression is increased in a mouse model of ONC [55]. TRIF knockout (KO) inactivates the NF-κB signaling pathway and reduces pro-inflammatory cytokine release by inhibiting activation of microglia in mice retina [6]. The mechanism of endogenous degeneration of RGCs remains unclear. We speculate that Sema3A/Nrp1 is involved in activating the TLR4/ NF-κB signaling pathway, inducing the polarization of microglia toward the M1-like phenotype. However, the molecular pathways remain to be verified. In brief, we find that Sema3A directly affects neuron polarity and inhibits their regeneration, participates in M1/M2-like microglia dynamics regulation, and increases RGCs apoptosis. Sema3A has both direct and indirect effects on RGCs, presenting a potential therapeutic target for optic nerve injury treatment. Conclusions Our results provide the first evidence that retinal microglia are an important source of Sema3A protein post-ONC. Sema3A is associated with an increase in pro-inflammatory M1-like microglia, a decrease in anti-inflammatory M2-like cells, and increased RGCs apoptosis. Inhibition of Sema3A ameliorates RGCs apoptosis and promotes RGCs regeneration. Therefore, Sema3A could be a new therapeutic target for RGCs protection after optic nerve injury. (See figure on next page.) Fig. 7 Anti-Sema3A treatment in primary microglia increased M2-like microglia and decreased RGCs apoptosis in primary microglia and RGC co-culture. 
a Schematic diagram of primary microglia and RGC co-culture system. b Representative confocal images showing immunofluorescent staining of CD206 (green) and CD16/32 (red) with Hoechst (blue) nuclear staining in Sema3A treated primary microglia with or without anti-Sema3A treatment. Scale bar = 100 μm. Quantitative analysis indicates a significant increase of CD206 with anti-Sema3A treatment. (Mean ± SEM, n = 6). c TUNEL staining of RGCs in control, Sema3A + anti-Sema3A, Sema3A, and LPS groups. Scale bar = 100 μm. d Quantification of TUNEL-positive cells vs. Hoechst staining (n = 4). e-h Western Blot and quantitative analysis show expression of SMI32, MAP2, pro-caspase3, and cleaved-caspase3 in RGCs co-cultured with microglia. β-actin was used as a loading control. (Mean ± SEM, n = 5)
Pine Stand Density Influences the Regeneration of Acacia saligna Labill Mediterranean plantations are the most suitable areas to assess vegetation dynamics and competitive interactions between native and exotic woody species. Our research was carried out in a coastal pine plantation (Sicily) where renaturalization by native species (Pistacia lentiscus L. and Olea europaea var. sylvestris) and invasion by Acacia saligna (Labill.) H.L.Wendl. simultaneously occur. The regeneration pattern of woody species in the pine understory was evaluated in six experimental plots along a stand density gradient, from 200 to approximately 700 pines per hectare. Both pine stand density and regeneration by native species had a significant negative relationship with Acacia natural regeneration. Olea regeneration was positively correlated with stand density, while Pistacia showed a non-significant relationship. Saplings of both native species were mostly less than 1 m high, whereas approximately 70% of Acacia individuals were higher than 1 m. We found that 400 pines per hectare should be considered a minimum stand density to keep Acacia under control, while favouring the establishment of native species in the understory. The successful control of Acacia requires an integrated management strategy, including different forest interventions according to stand density: thinning, control measures against Acacia, and renaturalization actions. Introduction The massive afforestation activities carried out on a large scale in the Mediterranean basin during the 20th century made extensive use of alien tree species [1,2].The need to ensure rapid soil cover and the highly degraded conditions of substrates and slopes over large areas determined a clear preference for trees belonging to the genera Acacia, Eucalyptus, and, predominantly, Pinus [3], due to their fast-growing, pioneer traits, ecological plasticity, and easy propagation [4].Another important reason was the supposed role of preparatory species: Introduced species were considered ideal for improving site conditions to allow recovery of native woody species.However, in many cases, afforestation/reforestation projects have slowed down, hindered, or completely stopped the natural evolutionary dynamics and the maturation of more complex, diversified, and stable Mediterranean forest ecosystems [5,6].The role played by forest plantations has been a matter of intense debate in the Mediterranean [7,8], due to the marked heterogeneity of the ecological conditions and management practices.Furthermore, owing to prevailing purposes of soil protection, autoecological aspects have not been adequately considered in the choice of plant species and propagation material, thus determining the unsuitability of many afforested sites.Hence, in most cases, forest interventions are necessary to allow the gradual replacement of alien planted species by native ones.However, such necessary interventions (e.g., thinning) have not been performed or they have been performed without the guidance of clear forest planning and management objectives.In some cases, this has led to the complete failure of plantations, which have neither adequately protected the soil, enhanced local succession dynamics, nor provided adequate biodiversity conservation.As a result, the current main management choice for Mediterranean plantations is renaturalization, i.e., the gradual conversion towards more stable, complex, and diversified ecosystems, dominated by native broadleaved and evergreen trees; the manifestation 
of potential climax vegetation [9,10].This objective is an absolute priority for protected coastal habitats, which have undergone rarefaction and marked alteration because of intense human exploitation of the territory. Massive reforestation activity has provided a huge, experimental forest laboratory where we can observe and evaluate the evolution and dynamics of woody vegetation in the Mediterranean, as influenced by competition and/or facilitation interactions between native and exotic woody species [8,9].New combinations between different tree species with varied characteristics, such as life-history strategies, origin, reproductive, and physiological traits, have resulted in artificial ecosystems, characterized by novel competitive interactions whose outcomes are not easily predictable nor can they be considered to have reached steady state [11].They are strongly influenced by both local environmental and management conditions, as well as by the type of species that come into contact.In some cases, introduced species have shown a remarkable ability to adapt to the new ecosystem, achieving performance levels greater than that of the native co-existing species, with abundant regeneration abilities and quick soil coverage, thus, displaying invasive behaviour [12].Such evidence further complicates the management of afforested areas in which interventions carried out to promote renaturalization may, at the same time, trigger the invasion process.This so-called paradox of ecological restoration occurs when the disturbance required to restore a forest ecosystem, which has been highly simplified by human intervention, may, on the contrary, favour invasive spread, bringing about significant negative ecological impacts [13].Generally, invasive alien species are particularly able to exploit increases in available resources (light, nutrients) found in gaps or areas with reduced forest cover, such as those resulting from forest utilizations or restoration treatments, or even through wildfires and grazing [13,14].Hence, selective cuttings are performed to counteract alien tree species of the overstory, while favouring the progressive replacement by native woody species, starting from the underlying layers [3]. Thinning within plantations is strictly necessary.On the one hand, such periodic interventions allow the enhancement of available resources for remaining trees, including light, soil water, and nutrients.Individual trees may grow better and increase in diameter size, and they may be healthier and less susceptible to pathogens, disease, and abiotic factors, thus rendering the whole forest stand more resistant and resilient.On the other hand, thinning has been proved to ameliorate understory microclimatic conditions, especially in terms of light available for native seedling establishment, growth, and development in coastal pine forests [15].However, invasive tree species may also benefit from interventions, greatly hindering the natural dynamics and recovery of the functionality of forest ecosystems.Therefore, for the aforementioned reasons, active forest management, which includes appropriate silvicultural interventions, is mandatory. 
Due to their high invasive potential many Acacia species are considered as ecosystem engineers [12], causing serious and long-lasting ecological impacts on entire ecosystems [16], including below-ground soil microbial communities [17] and native pollinators [18], as well as on soil and vegetation characteristics [16].Some general traits seem to have favoured the worldwide distribution of Acacias as invasive species: Nitrogen symbiotic fixation, rapid initial growth rates, large seed production, and reproductive potential.The introduction of a new functional trait or of a new biochemical process into a native ecosystem is bound, generally, to bring about greater and longer-lasting ecological consequences on ecosystems [19].The more diverse an invasive plant species is, compared to native communities, the greater the likelihood of it having larger negative impacts.Even after eradication, the altered conditions caused by an invasive species may remain for a longer period of time [20].Emblematic cases are the invasion of Mediterranean coastal ecosystems by Carpobrotus spp.[21] and by Acacias [17,22]. Acacia saligna (Labill.)H.L.Wendl.(hereinafter, A. saligna) is one of the most widely used tree species in coastal-areas afforestation, covering approximately 600,000 hectares worldwide of which almost 10% is in South Africa alone [23].It has been preferentially used in Mediterranean-climate areas to bind dunes and stabilize soils, along road systems, and to recover bare and degraded lands, in addition to use as a windbreak [24].Whilst functioning reasonably well in soil protection, a number of peculiar traits make A. saligna one of the invasive species most suited to Mediterranean coastal ecosystems, even with limited nutrient availability.Traits include high growth rates, early achievement of sexual maturity, striking resprouting ability following cutting or other physical disturbance factors, and drought and abiotic stress tolerance, as well as its ability to establish symbiosis for nitrogen fixation [25,26].A. saligna may have a profound effect on the structure and functioning of recipient ecosystems and seriously hamper vegetation dynamics of woody native species, significantly reducing their richness and cover [27].A. saligna may cause a significant shift in native community assemblages, reducing the occurrence of guiding or focal species, whilst favouring the spread of opportunistic and ruderal species, including other invasive species [28,29].In Mediterranean pine plantations, negative effects caused by A. saligna have been reported on small mammal communities and were attributed to the structural simplification and homogenization of the forest stand [30]. Only recently, in Italy, A. saligna has been listed amongst alien species posing a threat to native species and to almost all habitats of community interest linked to sandy shores and dune systems, including wooded dunes with Pinus pinea L. and/or Pinus pinaster Aiton (Habitat 2270) [29,31].A. saligna was introduced to Sicily in the late nineteenth century [32], whilst the first cases of naturalization date back to the early years of the last century [33]. A. 
saligna has increasingly spread in the last decades, mainly invading the coastal dune habitats of SW Sicily and seriously threatening the vegetation communities dominated by species of considerable scientific and biogeographic interest, such as Retama raetam subsp.gussonei (Webb) Greuter [34] and Juniperus oxycedrus subsp.macrocarpa (Sm.)Ball [35].The same plant communities are the most exposed to invasion by A. saligna in other Italian regions [31], as well as in other Mediterranean-type ecosystems [36].Hence, urgent interventions of control and eradication have been invoked [37,38]. We assessed the regeneration of A. saligna and of co-occurring native woody species in the understory of a Mediterranean coastal plantation along a gradient of pine stand density, ranging from 200 to 700 stems per hectare.In our study site, two natural processes of opposing significance occur simultaneously: On the one hand, active processes of renaturalization of native woody species, particularly Olea europaea L. var.sylvestris (Mill.)Lehr.and Pistacia lentiscus L. (hereinafter, Olea and Pistacia) and, on the other hand, invasion by an alien tree species (A.saligna).Whilst the most favorable pine density values for the establishment and recruitment of Mediterranean native woody species are known, at least in part [39], only a limited body of knowledge exists on density values that allow the renaturalization process to occur without triggering plant invasion.Yet, this is crucial information for ecosystems where native and alien species coexist in the understory and compete.We hypothesized that there could be a threshold density value which favours the renaturalization process, whilst preventing invasive spread.Overall, we provide the most recommended management options for forest pine plantations, including specific forest interventions addressing the pine canopy (thinning), A. saligna (eradication), and native woody species (renaturalization). Study Area and Vegetation Field surveys were carried out within the nature reserve "Foce del fiume Platani", a 3.5 km long coastal belt consisting of 207 hectares, localized in the Agrigento Province (SW Sicily, Figure 1).Average annual precipitation is 496.7 mm, while the average annual temperature is 18.7 • C (Osservatorio delle Acque, 2016).Hence, the study area falls within the Thermomediterranean upper dry bioclimatic belt [40].The dominant soil types in the study area are Typic Xeropsamments entisoils, with a sandy texture in the first meter of soil depth [41].The reserve includes the alluvial plain originated by the delta of the Platani River.This coastal strip has not been affected by marked anthropic alterations and still hosts residual aspects of coastal dune and Mediterranean maquis vegetation, with high potential biodiversity, linked to the presence of the sea, coastal dunes, and backdune habitats.The woody vegetation is characterized by the coexistence of Mediterranean native species, such as Ephedra fragilis Desf., Pistacia lentiscus L., Chamaerops humilis L., and Olea europaea var.sylvestris, with planted alien tree species, especially Pinus spp., Acacia saligna and Eucalyptus camaldulensis Dehnh., and alien shrub species like Myoporum tenuifolium G.Forster [41].The study area also harbours plant species of particular conservation value, such as Retama raetam subsp.gussonei and Juniperus turbinata Guss., characteristic of the association Ephedro-Juniperetum macrocarpae Bartolo, Brullo & Marcenò 1982 [34]. 
Forest Management and Disturbance History
The forest plantation in the study area was established in 1952 [41], with the main aim of stabilizing sand dunes and backdunes, and as a windbreak to protect agricultural areas inland [24]. The plantation was mostly made with exotic tree species: Pines (Pinus pinea, Pinus halepensis Mill. and Pinus canariensis C.Sm.), eucalypts (Eucalyptus camaldulensis and Eucalyptus occidentalis Endl.), and Acacias, such as Acacia cyclops G.Don and Vachellia karroo (Hayne) Banfi & Galasso. A. saligna was also planted, although, interestingly, its felling began just 15 years later to favour the establishment of the pine forest [42]. However, until 1975, forest interventions were mainly limited to the elimination of dead trees. Subsequently, thinning was periodically carried out with the elimination of one tree row every three rows. The actual nature reserve was then established in 1984, with the decree 216 of the Regional Department of Territory and Environment, and, in 1988, it was assigned to the Regional Department of Rural and Territorial Development (DRSRT). The main goals were, "to ensure the conservation of bird communities, to promote the re-establishment of Mediterranean maquis vegetation, of halophilic plant communities and of dune fauna" (free authors' translation from the original decree). Over the last two decades, periodic understory clearing and thinning of the exotic species were carried out to promote the gradual conversion towards native-species-dominated forest ecosystems. However, the increasing spread of A. saligna in the understory is making the achievement of this desirable goal extremely difficult. Continuous selective thinning interventions were always performed without the guidance of a forest management plan, whilst the control of A. saligna was never clearly addressed. No wildfires or grazing have occurred in the protected site over the last 50 years.
Experimental Plots and Sampling Design
To assess the natural regeneration performed by A. saligna, field surveys were carried out in 2017 over six experimental plots along a density gradient of dominant pine (Pinus pinea and Pinus halepensis), ranging from 200 to about 700 stems per hectare. Such values fall among the most commonly observed in thinned Mediterranean pine afforestations [43][44][45]. Pinus spp. are the only tree species composing the upper layer. Each study plot consisted of a 21 × 21 m square (441 m2) and was established after an in-depth investigation of all the study area to ensure homogeneity of main environmental factors (e.g., soil characteristics, slope, stoniness, herbaceous cover, distance from the sea) and silvicultural history. Specifically, the altitudinal difference was <5 m, the distance between plots was <500 m, all plots were located in a range of 250-350 m from the sea, and they were established with a minimum distance of 20 m from unpaved roads to minimize potential edge effects. All target tree species were present in all plots and in their surroundings. Hence, we considered the pine stand density as the most relevant driving factor of tree regeneration in the understory. Within each plot, besides identification of the Pinus species, the following dendrometric parameters were surveyed for each individual tree: diameter at breast height (DBH = D1.30 m), total tree height, and basal area. To assess natural regeneration by native woody species and by A. saligna, three 5 × 5 m subplots (25 m2) were established within each plot. Natural regeneration was evaluated in terms of species identity, density (N seedlings or saplings m−2), origin (seed-borne or vegetative sprout), and development (diameter at root collar, height of the largest saplings). We considered only regenerating individuals higher than 5 cm and the following four height (H) classes: Class 1: H ≤ 50 cm; Class 2: 50 cm < H ≤ 100 cm; Class 3: 100 cm < H ≤ 200 cm; Class 4: 200 cm < H ≤ 400 cm.
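The field protocol above lends itself to simple plot-level arithmetic. The following is a minimal sketch, using hypothetical tallies rather than the authors' data, of how stand density and basal area can be derived from the DBH records and how regenerating individuals are binned into the four height classes.

```python
import math

PLOT_AREA_M2 = 21 * 21          # 441 m^2 study plot
SUBPLOT_AREA_M2 = 5 * 5         # 25 m^2 regeneration subplot

# Hypothetical pine tally for one plot: DBH (cm) of each stem.
dbh_cm = [32.5, 28.0, 41.2, 35.7, 30.1, 27.4, 38.9, 33.3, 29.8, 36.4]

stems_per_ha = len(dbh_cm) / PLOT_AREA_M2 * 10_000
# Per-tree basal area: pi * (DBH/2)^2, with DBH converted from cm to m.
basal_area_m2_ha = sum(math.pi * (d / 200.0) ** 2 for d in dbh_cm) / PLOT_AREA_M2 * 10_000
print(f"Stand density: {stems_per_ha:.0f} stems ha^-1, basal area: {basal_area_m2_ha:.1f} m^2 ha^-1")

def height_class(h_cm):
    """Assign a regenerating individual (H > 5 cm) to the four height classes defined above."""
    if h_cm <= 5:
        return None                  # below the minimum size considered
    if h_cm <= 50:
        return 1
    if h_cm <= 100:
        return 2
    if h_cm <= 200:
        return 3
    if h_cm <= 400:
        return 4
    return None

# Hypothetical sapling heights (cm) recorded in one 25 m^2 subplot.
heights = [12, 48, 75, 160, 230, 35, 90, 410]
classes = [c for c in (height_class(h) for h in heights) if c is not None]
density_per_m2 = len(classes) / SUBPLOT_AREA_M2
print(f"Regeneration density: {density_per_m2:.2f} individuals m^-2, classes: {classes}")
```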
Within the upper height classes (3 and 4), we measured the diameter at the root collar in 10 randomly chosen individuals from each class. To determine the age and the growth rate of A. saligna, we collected three discs from the stem of mature individuals above ground level (diameters = 3.0, 8.5 and 9.5 cm). The surface of the stem discs was sanded with progressively finer grades of sandpaper (up to 1000 grits) to produce a flat, polished surface on which tree ring boundaries were clearly visible under magnification. Growth rings were counted under the microscope (Leica Microsystems, Heerbrugg, Switzerland, Leica DFC 420C©) along three radii per disc, with close attention paid to the occurrence of wedging, missing, and/or false rings.
Statistical Analysis
Before performing statistical analysis, the assumption of data normality was assessed through the Anderson-Darling test. Both A. saligna (p > 0.15) and the native species (p > 0.15) regeneration density, as well as that of the pine stand (p > 0.15), followed a normal distribution. Linear regression analysis (p < 0.05) was used to assess the correlation between the density of A. saligna and the pine stand, and between the density of A. saligna and the native woody species. We also assessed differences between the height classes, regardless of woody species, and within each woody species between the height classes. As A. saligna data achieved normality after log transformation (Anderson-Darling test; p = 0.114), we performed a one-way ANOVA followed by a Tukey's HSD (Honestly Significant Difference) test. As Olea data achieved normality after log transformation (Anderson-Darling test; p = 0.127), but variance was unequal, we performed a one-way ANOVA followed by a Tamhane's T2 test. As Pistacia data did not achieve normality, we performed a Kruskal-Wallis test (non-parametric) followed by a Conover-Inman test. As data within each height class were not normally distributed, we again performed a Kruskal-Wallis test (non-parametric) followed by a Conover-Inman test. Statistical analysis was performed using Systat Software, Inc., San Jose, CA, USA, 2009 (Version No. 13.00.05).
Dominant Pine Cover
Density and dendrometric parameters (DBH, height, and basal area) of the pine canopy cover showed a clear pattern (Table 1). First, a strong positive (r² = 0.99) correlation between tree density and tree size, in terms of basal area, was found. A strong negative exponential relationship (r² = 0.770) was found between the regeneration density of the native woody species and A. saligna (Figure 3). The intersection between the two trend lines allowed us to find a reference value, corresponding to a pine stand density of 356.75 ha−1.
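For readers who wish to reproduce this kind of threshold, the following is a minimal sketch, using hypothetical plot values rather than the authors' data, of how the crossover between a linear fit for native regeneration and a negative exponential fit for A. saligna regeneration against pine stand density can be located numerically.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# Hypothetical plot data: pine stand density (stems ha^-1) and
# regeneration density (individuals m^-2) for the two groups.
pine_density = np.array([200.0, 320.0, 410.0, 500.0, 610.0, 700.0])
native_regen = np.array([0.9, 2.1, 3.4, 5.0, 7.8, 12.3])   # Olea + Pistacia
acacia_regen = np.array([6.5, 4.0, 2.3, 1.2, 0.6, 0.3])    # A. saligna

# Native regeneration: simple linear trend vs. stand density.
b1, b0 = np.polyfit(pine_density, native_regen, 1)
native_fit = lambda d: b0 + b1 * d

# A. saligna regeneration: negative exponential trend vs. stand density.
def neg_exp(d, a, k):
    return a * np.exp(-k * d)

(a, k), _ = curve_fit(neg_exp, pine_density, acacia_regen, p0=(10.0, 0.005))
acacia_fit = lambda d: neg_exp(d, a, k)

# The reference stand density is where the two fitted curves cross.
threshold = brentq(lambda d: native_fit(d) - acacia_fit(d), 200.0, 700.0)
print(f"Estimated crossover density: {threshold:.1f} pines ha^-1")
```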
Table 2. Distribution in height class (%) of the woody species regeneration. The relative contribution (%) of each species to each height class is reported within round brackets. Means are followed by the standard error.
Overall, regeneration was found to be mainly concentrated in the smallest height classes (1 and 2), together accounting for more than 85% of all the individuals (Table 2). Height classes 3 and 4 accounted for a little over 12% of all the individuals, which was almost exclusively represented by A. saligna. Olea reached the highest regeneration density (mean value: 4.7 individuals m−2), followed by A. saligna (mean value: 2.2 individuals m−2), and Pistacia (mean value: 0.6 individuals m−2). However, large differences, in terms of height class, were found between species. Olea regeneration was clearly dominant in the smallest height classes (1 and 2), whilst A. saligna was by far the most prevalent in height classes 3 and 4. In terms of abundance, height classes 1 and 2 were not different from each other, but both were significantly higher than height classes 3 and 4, which, in turn, were equal with each other (Kruskal-Wallis 23.815, p < 0.001). Regarding A. saligna regeneration, we found very little variation in the diameter at the root collar in relation to plant height, with values ranging from 1 to 1.5 cm in height class 3 and from 2 to 3 cm in height class 4. No significant differences between the height classes were found in A. saligna Labill. regenerating individuals (F = 0.975, p = 0.414) (Figure 4). In Olea, height classes 1 and 2 were not different from each other and both were significantly higher than height classes 3 and 4 which, in turn, were equal with each other (F = 16.498, p < 0.001) (Figure 4). In Pistacia, height class 1 was significantly higher than the other height classes. Height class 2 was significantly higher than height classes 3 and 4 which, in turn, were equal with each other (Kruskal-Wallis 22.971, p < 0.001).
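The height-class comparisons above can be sketched as follows with hypothetical subplot counts. Note that the authors used Systat (with Tamhane's T2 and Conover-Inman post-hoc tests); this illustration uses scipy and statsmodels and shows Tukey's HSD as the post-hoc step, since the other two tests are not available in those libraries.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical counts per height class (individuals per 25 m^2 subplot).
olea = {1: [38, 41, 29, 35], 2: [30, 27, 33, 25], 3: [1, 0, 2, 1], 4: [0, 1, 0, 0]}
pistacia = {1: [6, 4, 7, 5], 2: [2, 1, 3, 2], 3: [0, 1, 0, 0], 4: [0, 0, 0, 0]}

# Olea counts are log-transformed before the one-way ANOVA, as in the paper
# (log1p is used here so that zero counts remain defined).
olea_log = [np.log1p(np.array(v, dtype=float)) for v in olea.values()]
f_stat, p_anova = stats.f_oneway(*olea_log)
print(f"Olea one-way ANOVA: F = {f_stat:.3f}, p = {p_anova:.4f}")

# Post-hoc pairwise comparison (Tukey HSD shown for illustration).
values = np.concatenate(olea_log)
labels = np.concatenate([[k] * len(v) for k, v in olea.items()])
print(pairwise_tukeyhsd(values, labels))

# Pistacia counts were not normal, so a Kruskal-Wallis test is applied instead.
h_stat, p_kw = stats.kruskal(*pistacia.values())
print(f"Pistacia Kruskal-Wallis: H = {h_stat:.3f}, p = {p_kw:.4f}")
```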
Regarding wood samples, it must be underlined that there are divergent results from different authors about the presence of growth rings in A. saligna. According to the InsideWood database [46] and El-Sahhar et al. [47], the growth rings in this species are indistinct, whereas Crivellaro [48] showed that the growth ring boundaries are distinct. In our wood samples, the growth rings of A. saligna were visible in the wood anatomy as bands of marginal parenchyma that run around the entire disc. Counts of parenchyma bands produced disc ages of 6-12 years. We found that the average growth rate was 0.7 cm per year; similar results were found in Kenya by Jama [49].
Discussion
Acacia saligna is one of the most widespread invasive species in the Mediterranean basin, where it has been widely used for afforestation purposes and it now threatens native biodiversity and alters ecosystem structure and functioning of large areas. On the other hand, in many areas, Mediterranean afforestations are significantly affected by renaturalization processes by native woody species. Both processes are increasingly common in the Mediterranean basin and they require appropriate management practices. In the understory of a Mediterranean pine plantation, the regeneration pattern of woody native species and the invasive A. saligna were found to be significantly affected by pine stand density and, also, by forest management. The overall management of pine afforestation within the Platani Reserve would require varying types of forest interventions: thinning of the dominant pine canopy, control measures against A. saligna, and renaturalization actions to favour native woody species.
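As a quick arithmetic check on the growth-rate estimate derived above from the stem discs, the sketch below uses hypothetical ring counts (within the reported 6-12 year range) and assumes the 0.7 cm per year figure refers to the mean annual diameter increment.

```python
# Hypothetical ring counts for the three stem discs (diameters 3.0, 8.5 and 9.5 cm);
# the reported disc ages were 6-12 years.
discs = [
    {"diameter_cm": 3.0, "rings": 6},
    {"diameter_cm": 8.5, "rings": 11},
    {"diameter_cm": 9.5, "rings": 12},
]

# Mean annual diameter increment: disc diameter divided by the ring (age) count.
rates = [d["diameter_cm"] / d["rings"] for d in discs]
print(f"Per-disc increments (cm/yr): {[round(r, 2) for r in rates]}")
print(f"Mean diameter growth rate: {sum(rates) / len(rates):.2f} cm per year")
```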
Pine Management and Thinning Stand density and dominant tree size both contribute to determining the amount of available resources in the understory, thus, affecting the competitive dynamics between woody species and altering the complex balance of facilitation and inhibition interactions [5,7,8,50].Light, as well as soil water and nutrient content are known to play a crucial role in pine understory dynamics [5,43,51].This is particularly true in Mediterranean coastal pine forests where water supply during summer is considered the main limiting factor of plant growth [9].Very high stand densities (approximately 1150 pines per hectare) have strong inhibitory effects for shrub species, resulting from a reduction in light, soil water, and nutrient availability [6].There is a large consensus and much field evidence regarding the negative role played by an excessive pine cover in the renaturalization process of Mediterranean pine plantations, as well as in the richness of the understory [8,13,30].For this reason, thinning is, generally, performed with the aim of accelerating secondary succession and increasing the overall biodiversity, enhancing the heterogeneity and structural diversity of the forest stand and improving the status of the remaining trees [7,13,52].Different thinning intensities are usually required for different forest types and species [52,53]. Within Mediterranean pine afforestations, moderate thinning is considered the most suitable option to enable the establishment of most native woody species, including Olea and Pistacia, with a recommended density ranging from approximately 330 to 550 pines per hectare [39,54,55].Densities greater than 500 pines per hectare are generally required for thermophilous oaks, such as Quercus ilex L., which are shade-demanding at the seedling stage [8,56,57].However, previous research was carried out in the absence of light-demanding invasive species, such as A. saligna.The variation in pine stand density in our study site depends on periodic thinning that felled, mainly, the largest tree individuals, while the remaining trees have not yet had enough time to grow.Indeed, larger pines were found in areas with higher stand density.Conversely, the number of individuals per unit area and average tree size are inversely correlated in natural forest stands [58].Our research suggests the need to adopt different forest management practices according to the pine stand density.With densities of up to 400 pines per hectare, no thinning should be performed as a further decrease in stand density would cause the undisputed spread of A. saligna.A density greater than 400 is the minimum threshold to contain the invasive potential of A. saligna, while favouring the recruitment and establishment of native species.Densities higher than 500 pines per hectare were related to an increase in the regeneration density of approximately 6.5-fold for Olea and 2-fold for Pistacia, together with a concomitant decrease in A. saligna of approximately 8-fold.Low-intensity, localized, and gradual thinning is strongly recommended for densities exceeding 600 pines per hectare in which higher shading has kept A. saligna relatively under control.Mediterranean woody species, such as Quercus spp.and Olea, which may survive under a closed canopy during the early stages of life, need more light in subsequent sapling and juvenile phases [57].Such necessary interventions should be preceded by the clearing of any adult A. saligna plants to avoid the risk of a new invasion. 
Regeneration Pattern in the Understory In our study site, we assessed the current regeneration pattern of Acacia saligna and native woody species in the understory of a Mediterranean coastal pine plantation.We found evidence of a species-specific response, with A. saligna being negatively correlated to the stand density, Olea being positively correlated, and Pistacia being not significantly correlated.At high pine densities (>500 pines per hectare), Olea regeneration is by far the most prevalent, while A. saligna regeneration is prevalent under low pine densities (<400 pines per hectare).At intermediate conditions (400-500 pines per hectare), Pistacia tends to prevail slightly, while both A. saligna and Olea are represented by just over 21% of regenerating individuals.As only current regeneration has been assessed, it should be considered that long-term fluctuations could arise and cannot be excluded. Olea and Pistacia showed quite different behaviour based on the stand density.Olea attained an impressive regeneration density that linearly increased with stand density, with the maximum value exceeding 120,000 seedlings per hectare (mean: 47,110 N ha −1 ).Previous regeneration data obtained from field investigations carried out in the same protected area suggest that Olea has experienced a marked increase in the last 10-15 years [41].Such a sizable change may be due to an increase in the effective seed dispersal by birds and to possible shifts in their natural population [59].Conversely, Pistacia showed a much lower regeneration density (mean: 5,700 N ha −1 ), with non-significant variations based on the stand density.Such a pattern was not completely unexpected as Pistacia was found to dominate the understory in high pine cover conditions [30] and its considerable ecological plasticity to solar radiation is recognized [60].However, saplings of both native species were much less developed than A. saligna, being mostly less than one meter high in Olea (>99%) and Pistacia (>91%).Conversely, approximately 70% of A. saligna saplings were taller than one meter, suggesting a clearly higher reproductive potential. Acacia Management Our field surveys and widespread evidence collected worldwide suggest that the control and eradication of A. saligna is a difficult and time-consuming task.After heavy thinning or a natural disturbance event, this species seems to be better equipped than native species to occupy free space and exploit available resources, thus proving more competitive.Under such circumstances, A. saligna may exert strong negative effects on the growth and establishment of native woody species, seriously hindering the renaturalization process and any chance of an autonomous evolution towards more complex and diversified forest stands.A. saligna has proven to be particularly favoured by high light availability, such as can be found in natural gaps or recently cut areas [29].However, its regeneration has also been found in high pine densities, underlining its remarkable adaptability to a wide spectrum of light conditions [50].Furthermore, pine cover may have played a key role in the initial stages of invasion, offering suitable microclimatic conditions for regeneration in the backdunes and providing protection from direct sun exposure, marine salt-spray, and frequent winds, as well as, presumably, higher soil water and nutrient availability [45,51]. The temporal dynamics of A. 
saligna invasion should be considered carefully as time has significant consequences for the efficacy of control actions.The ecological impacts A. saligna exerts on the recipient ecosystem [16] and the average time needed for ecosystem recovery may vary significantly with the duration of the invasion.The alterations in soil chemical characteristics, such as increased pH and soil organic matter and nutrients (N and P), are time-dependent effects that are generally associated with A. saligna [27,61,62].For fast-growing trees, like Acacias, a time span of 20-25 years is already considered a long-term invasion [61,62].A factor explaining the massive regeneration of A. saligna in recent decades could be the improvement in edaphic conditions due to Acacia litter accumulation, as was found for Acacia cyclops on the Island of Lampedusa [63].Time has also allowed A. saligna to express its maximum reproductive potential.Annually, A. saligna mother plants may release approximately 5,400 seeds per square meter to the soil [64].While less than 10% of seeds germinate in the first year, the remaining seeds help to build up a consistent and long-lasting soil seed bank [65,66].In South African invaded stands, more than 44,000 seeds per square meter may be found in the seed bank, including the soil and litter layers [67,68].This huge amount depends on the long viability of the seed in the soil, resisting up to 50 years in the absence of mechanical scarification [65] and even 10 years after clearing [62, 66,68].Resistance to fire events is also highly influential for Mediterranean ecosystems [65]. A. saligna could have reached its maximum potential soil seed bank in the Platani nature reserve as this is related to the stem diameter reaching 6 cm [68], a value commonly found in our field surveys. The successful management of A. saligna can be achieved only if its reproductive potential is destroyed or at least severely limited.The interventions required should aim to reduce the population of adult individuals and, therefore, to curb the current seed production and to deplete seed banks by destroying seeds and/or triggering mass germination [65].Regarding clearing treatments, excluding the use of herbicides, a possible option could be felling and burning for the combined effect on A. saligna living trees and regeneration, as well as on the soil seed bank [27,64,66].However, in our protected site, the use of prescribed burning is not recommended due to difficulties in controlling the spread of fire outside well-confined areas and, also, due to negative ecological effects on the overall biodiversity.Simply stopping seed fall may lead to reductions in the soil seed bank of 80% in four to six years [67].Hence, it is crucial to intervene with systematic cuttings of A. saligna mature plants before dissemination, i.e., every year before summer.This is not simple as A. saligna reaches reproductive maturity extremely quickly: We observed individuals at just one meter tall already fruiting in the study area.It is also essential to eliminate seedlings that may emerge after a disturbance event [68].The absence of continuous forest interventions may lead to the comeback of the seed bank at pre-intervention levels in only seven years [67].After stand clearing, the altered soil conditions caused by the invasion of A. 
saligna can last up to 10 years [62] and may, especially, threaten the presence of native species which have a narrow ecological niche and are accustomed to ecosystems with limited nutrient availability [69].In our study site, it is highly likely that no fewer than 10 years of continuous and capillary interventions will be necessary for the effective eradication of A. saligna.Considering this, an effective possibility could be the production of woodchips of A. saligna and other wood residuals (i.e., from pruning, thinning, ecc.). Another factor that further complicates the management of pine afforestation is the concomitant occurrence of other non-native tree species, such as Acacia cyclops A.Cunn.ex G.Don and Pinus canariensis [70], Eucalyptus occidentalis Endl.[71], Vachellia karroo (Hayne) Banfi & Galasso, and Myoporum tenuifolium G.Forst.All these species experienced full naturalization in the protected site and could spread further in the near future, completely altering local vegetation dynamics, with unpredictable consequences.Such potentially invasive species may benefit from the increased nutrient availability provided by A. saligna and the clearing treatments against it.The threat also posed by potential secondary invasion has to be carefully evaluated during the necessary periodic monitoring of the study area [72]. Renaturalization After eradication, especially in areas with sufficient cover of native woody species, specific interventions addressing renaturalization of the forest plantation could be carried out.It is necessary to assess whether local species can autonomously recover their role in the ecosystem, starting from the understory.This opportunity depends on various ecological, management, and landscape characteristics.Three main aspects must be considered: Soil seed bank dynamics, the dispersal ability of native species, and the occurrence and abundance of suitable seed dispersers.Firstly, it should be noted that A. saligna is able to reduce the soil seed bank of native species over time [61].Forest ecosystems in the surrounding areas of the Platani reserve are very rare and fragmented; mature woody individuals, therefore, are too few and/or too far away, thus compromising the chance of passive restoration processes via bird communities [73].Active restoration through direct sowing or planting of native woody species is required [8,27,62].The effective renaturalization is, however, quite difficult to achieve as many biotic and abiotic factors limiting seed and seedling establishment and affecting mortality rates need to be considered.For instance, Li et al. [74] found that the successful strategy to reintroduce native woody species in the understory of plantations is using seedlings older than six months or fencing the area. Then, as native woody species in the Platani reserve all bear fleshy fruits or acorns, their dissemination strongly depends upon the occurrence of seed dispersers, especially frugivorous birds [73,75,76].Gómez-Aparicio et al. [8] found that 4 km is the maximum seed dispersal distance by Garrulus glandarius L. 
to enable the effective establishment of Holm oak in Mediterranean contexts. Some scattered regenerating individuals of Quercus ilex and Quercus pubescens Willd. s.l. have been found in the study area, probably due to dissemination by the jay [76]. The recovery of woody vegetation could be relatively fast as many bird seed dispersers, such as the Eurasian jay (Garrulus glandarius Linnaeus), the song thrush (Turdus philomelos Brehm), the common starling (Sturnus vulgaris Linnaeus), and the blackbird (Turdus merula Linnaeus) are already present in the protected site or in its surrounding area (T. La Mantia and R. Bueno, pers. obs.).
Conclusions
In Mediterranean pine plantations threatened by A. saligna, forest interventions should differ according to pine stand density. The present case study may offer many useful pointers for the management of similar Mediterranean afforestations where thinning, aimed at favouring renaturalization by native woody species, could, otherwise, trigger the spread of invasive alien species, thus counterbalancing the positive effects of necessary interventions. In the absence of any active management in the Platani reserve, an inexorable increase in the invasion process by A. saligna is likely. Adaptive forest management capable of maintaining sufficient pine cover in the areas most at risk of invasion, while increasing the light available in areas with excessive stand density, appears to be the best trade-off to foster natural succession processes, whilst preventing invasion by A. saligna. The complex understory dynamics and competitive interactions between native species and A. saligna highlight the need for periodic interventions, as well as for follow-up treatments and regular and constant monitoring.
Figure 1. The geographic location of the "Foce del Fiume Platani" Nature Reserve on the southwestern coast of Sicily.
Figure 2. Linear regression analysis between the pine stand density and the native woody species and Acacia saligna regeneration.
Figure 3. Relationship between the density of the native woody species and Acacia saligna regeneration.
Figure 4. Height class distribution of the woody species regeneration in the understory. Vertical bars show the standard error of the mean (±SE).
Table 1. Main dendrometric parameters of the pine canopy layer of the study plots. Means are followed by standard deviations.
Table 2. Distribution in height class (%) of the woody species regeneration. The relative contribution (%) of each species to each height class is reported within round brackets. Means are followed by the standard error.
Design and Characterization of Electrochemical Sensor for the Determination of Mercury(II) Ion in Real Samples Based upon a New Schiff Base Derivative as an Ionophore The present paper provides a description of the design, characterization, and use of a Hg2+ selective electrode (Hg2+–SE) for the determination of Hg2+ at ultra-traces levels in a variety of real samples. The ionophore in the proposed electrode is a new Schiff base, namely 4-bromo-2-[(4-methoxyphenylimino)methyl]phenol (BMPMP). All factors affecting electrode response including polymeric membrane composition, concentration of internal solution, pH sample solution, and response time were optimized. The optimum response of our electrode was obtained with the following polymeric membrane composition (% w/w): PVC, 32; o-NPOE, 64.5; BMPMP, 2 and NaTPB, 1.5. The potentiometric response of Hg2+–SE towards Hg2+ ion was linear in the wide range of concentrations (9.33 × 10–8−3.98 × 10–3 molL–1), while, the limit of detection of the proposed electrode was 3.98 × 10–8 molL–1 (8.00 μg L–1). The Hg2+–SE responds quickly to Hg2+ ions as the response time of less than 10 s. On the other hand, the slope value obtained for the developed electrode was 29.74 ± 0.1 mV/decade in the pH range of 2.0−9.0 in good agreement with the Nernstian response (29.50 mV/decade). The Hg2+–SE has relatively less interference with other metal ions. The Hg2+–SE was used as an indicator electrode in potentiometric titrations to estimate Hg2+ ions in waters, compact fluorescent lamp, and dental amalgam alloy and the accuracy of the developed electrode was compared with ICP–OES measurement values. Moreover, the new Schiff base (BMPMP) was synthesized and characterized using ATR–FTIR, elemental analysis, 1H NMR, and 13C NMR. The PVC membranes containing BMPMP as an ionophore unloaded and loaded with Hg(II) are reported by scanning electron microscope images (SEM) along with energy-dispersive X-ray spectroscopy (EDX) spectra. Introduction Mercury exists in nature at trace and ultra-trace levels. However, it is one of the most toxic heavy metals on earth. Among the different valence states, Hg (II) is the most toxic even when present in very trace amounts [1]. Mercury can enter and accumulate in the human body through the food chain causing severe health problems such as vital organ damage, nervous system impairment, kidney failure, and cancer [2][3][4][5]. Thus, monitoring trace concentrations of this toxic element has become a vital necessity. Various analytical techniques have been developed and used for the determination of mercury species in different samples. These techniques include spectroscopic measurements in the UV-Vis region [6], cold vapor atomic absorption spectrometry (CV-AAS) [7], atomic emission spectroscopy (AES) [8], cold vapor atomic fluorescence spectrometry (CV-AFS) [9], measurements using inductively coupled plasma mass spectrometry (ICP-MS) [10], X-ray fluorescence [11], ion chromatography [12,13] and electrochemical sensors [14]. Despite the 2 of 17 fact that these techniques have high sensitivity and accuracy, they have some disadvantages in terms of high costs, maintenance, and complicated data analysis. Furthermore, highly trained skilled technicians are needed for data interpretation and operation [15]. As a result, developing a simple, fast, low-cost, accurate, sensitive, and selective analytical technique is necessary. 
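As a small cross-check of the detection limit quoted in the abstract above, the following snippet converts the molar LOD into a mass concentration using the molar mass of mercury; the values are taken from the abstract and the conversion itself is standard.

```python
M_HG = 200.59          # g/mol, molar mass of mercury
LOD_MOLAR = 3.98e-8    # mol/L, limit of detection reported for the Hg2+-SE

lod_ug_per_L = LOD_MOLAR * M_HG * 1e6   # g/L -> ug/L
print(f"LOD: {LOD_MOLAR:.2e} mol/L = {lod_ug_per_L:.2f} ug/L")   # about 8 ug/L, as reported
```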
Since it is relatively inexpensive, simple to operate, and provides a real-time measurement, the ion-selective electrode (ISE) is one of the most popular electroanalytical techniques used to determine the concentration of a wide variety of metal ions in various samples such as food, soil, and waters. Therefore, an ISE can monitor the change of ion activity with time [16][17][18]. It is well known that Schiff bases can form stable complexes with most transition metal ions, including mercury(II) ions [40,41]. Thus, many of these bases were prepared and used as ionophores for developing mercury(II) ISEs [42][43][44][45][46]. Due to the good sensitivity of selective electrodes based on Schiff bases, the present work aims to prepare a new Schiff base and use it as a neutral ionophore for the construction of an Hg 2+ selective electrode. The electrode membrane containing the new Schiff base will be characterized before and after loading with Hg 2+ ions using SEM micrographs and EDX spectra. The developed electrode will be used for the determination of mercury in some real samples. Reagents and Chemicals All reagents and chemicals used in this work were of analytical grade and used without further purification. Bromo-2-hydroxybenzaldehyde, 4-methoxyphenyl amine, o-nitro-phenyloctylether, sodium tetraphenylborate, high molecular weight polyvinyl chloride (PVC), and metal salts (as nitrates) were purchased from MilliporeSigma (Saint Louis, MO, USA). Organic solvents were obtained from Thermo Fisher Scientific (Hampton, NH, USA). All solutions were prepared using deionized water. Instrumentation A Perkin Elmer analyzer model 2400 (Waltham, MA, USA) was used for determining the elemental composition of the BMPMP ligand. The 1 H NMR and 13 C NMR spectra of the BMPMP ionophore were obtained in DMSO-d 6 solvent using a Bruker FT-NMR spectrometer (model DRX-500, Billerica, MA, USA). ATR-FTIR spectra of the membrane used in the Hg 2+ -SE were recorded in the range of 400-4000 cm -1 using a JASCO 4600 FT-IR spectrometer (Tokyo, Japan). Samples were directly introduced using the attenuated total reflectance unit, model ATR PRO ONE Single-Reflection. Morphological studies and elemental distributions on the synthesized electrode surface were investigated using a JEOL JEM-6390 scanning electron microscope combined with an energy-dispersive X-ray spectroscopy unit (Peabody, MA, USA). The concentration of mercury in aqueous solutions was determined by Perkin Elmer ICP-OES spectrometry, model Optima 2100 DV (Waltham, MA, USA). pH measurements were performed using an advanced bench pH meter (model 3510, Jenway, Staffordshire, UK). The pH meter was calibrated with three different buffer solutions (pH 4.01-AD7004, pH 7.01-AD7007, ...). Construction of Membrane Electrode Nine membranes were prepared using the technique described in the literature [42], with different concentrations of polymer (PVC), ionophore (BMPMP), ionic additive (NaTPB), and plasticizer (o-NPOE). A mixture of the previous components with the various percentages shown in Table 1 was dissolved in 6 mL tetrahydrofuran (THF) with shaking for 5 min. The solution was transferred into a petri dish and left at room temperature (25 ± 2 °C) until the solvent evaporated. Thereafter, a tube with a diameter of 15 mm was immersed into the previous mixture for about 10 s to obtain a transparent membrane with the aid of an adhesive solution prepared by dissolving PVC in THF.
After 24 h, the tube was separated from the mixture and filled with an internal solution of saturated KCl containing Hg(NO 3 ) 2 (1 × 10 -3 molL -1 ). The internal reference electrode was an Ag/AgCl electrode. The membranes were conditioned overnight in a solution of Hg(NO 3 ) 2 with a concentration of 1 × 10 -2 molL -1 . Potentiometric Measurements Potentiometric measurements were performed on the engineered PVC membranes, with the prepared Hg 2+ selective electrode and reference electrode inserted in 50 mL of Hg(NO 3 ) 2 solution at concentration levels of 1.00 × 10 -2 to 1.00 × 10 -8 molL -1 , until the potential reading became stable. All measurements were carried out at 25 ± 2 °C and pH 6 with magnetic stirring. The potential of the electrochemical cell including the Hg 2+ -SE was calculated using the Nernst equation: E cell = E° + (2.303RT/zF) log a, where E cell , E°, R, T, and F are the electrochemical cell potential, the standard potential, the gas constant, the absolute temperature, and the Faraday constant, respectively, while z is the ion charge and a is its activity. The ion activities were calculated using the Debye-Hückel equation [48]. Calibration curves of the tested Hg 2+ -SEs were obtained by plotting E cell in mV versus -log a Hg 2+ . Selectivity Measurements The potentiometric selectivity coefficients (K pot Hg,M ) of the proposed Hg 2+ -SE against interfering ions were determined according to the separate solution method (SSM) [49]. It was conducted as follows: the pH value of a solution of the primary ion Hg(NO 3 ) 2 (2.23 × 10 −4 molL −1 ) was adjusted to an optimal value of 6.0 using HCl 1 molL −1 and/or NaOH 1 molL −1 . A constant concentration of the interfering ion solution (2.23 × 10 −4 molL −1 ) was added to a solution of the primary ion Hg(NO 3 ) 2 (2.23 × 10 −4 molL −1 ) until the same potential change (∆E) was achieved.
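As a rough numerical illustration of the calibration procedure just described, the sketch below builds a Nernstian calibration line for a divalent cation, with activities estimated from the Debye-Hückel limiting law, and fits its slope. The concentrations, the standard potential E0, and the limiting-law constant are assumed placeholder values, not data from this work.

```python
import numpy as np

# Minimal sketch of a Nernstian calibration for a divalent cation (z = 2).
# Constants in SI units; all sample values below are hypothetical.
R, T, F, z = 8.314, 298.15, 96485.0, 2

def debye_huckel_log_gamma(z_ion, ionic_strength):
    """Debye-Hueckel limiting law: log10(gamma) = -A * z^2 * sqrt(I), A ~ 0.511 at 25 C."""
    return -0.511 * z_ion**2 * np.sqrt(ionic_strength)

conc = np.array([1e-7, 1e-6, 1e-5, 1e-4, 1e-3])   # molL-1, hypothetical standards
ionic_strength = 3.0 * conc                        # I = 0.5 * sum(c_i z_i^2) for Hg(NO3)2
activity = conc * 10**debye_huckel_log_gamma(z, ionic_strength)

slope_theory = 2.303 * R * T / (z * F) * 1e3       # ~29.6 mV per decade
E0 = 400.0                                         # mV, hypothetical standard potential
E_cell = E0 + slope_theory * np.log10(activity)    # E = E0 + (2.303RT/zF) log a

# Fit the experimental-style calibration: E (mV) versus -log10(a)
slope_fit, intercept = np.polyfit(-np.log10(activity), E_cell, 1)
print(f"fitted slope magnitude: {abs(slope_fit):.2f} mV/decade")   # ~29.6, close to 29.5
```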
For each interferent the selectivity factor K pot Hg,M was calculated using the following equation: log K pot Hg,M = (E M − E Hg ) z Hg F/(2.303RT) + (1 − z Hg /z M ) log a Hg , where E M is the potential measured for the interfering ion at the activity a M and E Hg is the potential measured for the primary ion at the same activity. Preparation of Real Samples Four real samples of tap water, sea water, a compact fluorescent lamp, and a dental amalgam alloy were used to evaluate the efficiency of the Hg 2+ -SE as an indicator electrode for the potentiometric determination of Hg 2+ ions. Preparation of Water Samples Tap water was sampled from the laboratories of the Department of Chemistry, Taif University, KSA. Sea water was collected from the Red Sea, Jeddah City, western Saudi Arabia. Water samples were filtered through Whatman filter paper (No. 1, with a diameter of 150 mm). Each water sample was transferred into a clean 50-mL volumetric flask. The pH of the samples was adjusted to 6 using a mixture of HCl and NaOH. The determination of Hg 2+ ions in the water samples was carried out at 25 ± 0.2 °C by potentiometric titration using the developed Hg 2+ -SE as an indicator electrode. For comparison, ICP-OES measurements were performed according to the method mentioned in [50]. Preparation of Compact Fluorescent Lamp A compact fluorescent lamp sample was obtained from Alfanar Company, Riyadh city, KSA. The obtained sample was treated according to the mentioned procedure [51]. Briefly, the sample was digested using a mixture of concentrated nitric acid and H 2 O 2 with a concentration of 30% for 1 h. The solution obtained after digestion was neutralized with NaOH (5 molL -1 ) and diluted to 50 mL. A part of the solution was subjected to potentiometric titration using the Hg 2+ -SE as an indicator electrode for the determination of mercury in the compact fluorescent lamp. Preparation of a Dental Amalgam Alloy A dental amalgam capsule alloy was purchased from Dentsply Sirona Company. According to the manufacturer, the alloy contains 33.0% Ag, 8.5% Sn, 16.5% Cu, and 42.0% Hg. An accurate weight of the alloy was digested using HNO 3 (20 mL, 60%) at 60-70 °C for 2 h. The residue was washed with deionized water and filtered into a 50-mL volumetric flask. The solution pH was adjusted to 6 using a mixture of HCl and NaOH solutions. The content of mercury in the alloy was analyzed by the standard addition method, in which the change in voltage is monitored after each addition of the standard solution of Hg(NO 3 ) 2 (3.0 × 10 -3 molL -1 ). Moreover, the mercury concentration in the dental amalgam sample was also determined using the ICP-OES method [6,45]. Optimization of PVC Membrane Compositions The BMPMP ligand synthesized in this study contains active sites (imine and phenolic OH) that may react with Hg(II) ions to form a stable complex. Therefore, this Schiff base was used as a new ionophore to prepare the selective electrode for Hg 2+ ions. It is well known that the potential of ion-selective electrodes (ISEs) fundamentally depends on the amount and nature of the ionophore, plasticizer, and lipophilic additives [52]. On the other hand, the plasticizer/PVC ratio plays a major role in obtaining an optimized response [53]. o-NPOE was chosen as the plasticizer due to the good solubility of the membrane components and its moderate dielectric constant, while NaTPB was used as a lipophilic additive owing to its important role in increasing the sensitivity and selectivity of the electrode as well as reducing anionic interference.
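Returning to the selectivity protocol described at the beginning of this section, a minimal sketch of how a separate-solution-type selectivity coefficient can be evaluated from two measured potentials is given below. The potential values and ion charges are hypothetical, and the expression coded here is the textbook SSM formula, assumed rather than quoted from this work.

```python
import math

R, T, F = 8.314, 298.15, 96485.0

def log_k_pot_ssm(E_M, E_Hg, z_Hg, z_M, a_Hg):
    """Separate solution method (textbook form, equal activities assumed):
    log K = (E_M - E_Hg) * z_Hg * F / (2.303 R T) + (1 - z_Hg / z_M) * log10(a_Hg)."""
    return (E_M - E_Hg) * z_Hg * F / (2.303 * R * T) + (1.0 - z_Hg / z_M) * math.log10(a_Hg)

# Hypothetical example: the interfering ion gives a potential 90 mV lower than Hg2+
logK = log_k_pot_ssm(E_M=0.210, E_Hg=0.300, z_Hg=2, z_M=2, a_Hg=2.23e-4)
print(f"K_pot = {10**logK:.2e}")   # well below 1, i.e. the electrode prefers Hg2+
```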
Thus, the impact of membrane composition on the performance of the proposed Hg 2+ -SE was investigated by designing nine ISEs containing different membranes, as shown in Table 1. Calibration curves were plotted for each ISE and are shown in Figure 1. The findings in Table 1 and Figure 1 reveal that the electrodes ISE1 and ISE2 did not respond to the change in Hg 2+ concentration. This behavior is most likely attributed to the absence of the BMPMP ionophore in the membrane matrix. However, membranes grafted with BMPMP as an ionophore provided better responses towards the Hg 2+ ion (ISEs 3-9). The response of these ISEs to the Hg 2+ ion may be attributed to the interaction between Hg 2+ ions and the BMPMP molecule. ISE7 provided the optimized response, with a Nernstian slope of 29.78 ± 0.15 mV/decade, in good agreement with the value of 29.5 mV/decade expected for divalent ions. Moreover, ISE7 gives a fast and linear response over a wide range of Hg 2+ ion concentrations (9.33 × 10 -8 -3.98 × 10 -3 molL -1 ) with a detection limit of 3.98 × 10 -8 molL -1 under optimized experimental conditions (Figure 2). Thus, the PVC membrane of ISE7 was studied using ATR-FTIR, SEM micrographs, and EDX spectra before and after soaking in an aqueous solution of Hg 2+ ions. The electrodes ISE3, ISE4, ISE5, ISE6, ISE8, and ISE9 provided lower performance compared with that of ISE7. This behavior is most likely attributed to saturation of the membrane and its inhomogeneity [54]. The composition of ISE7 was selected for designing an ion-selective electrode for mercury determination in a variety of real samples. (Table 1 notes: all slope values reported represent the mean ± SD of three measurements; D.L., L.R., and R.T. denote the lower detection limit, linear working range, and response time, respectively; the results are based on three replicate measurements.) ATR-FTIR Investigation of Hg 2+ -SE Membrane Based on BMPMP as an Ionophore ATR-FTIR spectra of the Hg 2+ -SE membrane containing the optimized composition were recorded before and after use for sensing Hg 2+ ions to obtain information on the ion-ligand interaction and to specify the active sites available in the BMPMP molecule that can coordinate with the Hg 2+ ion. A thin layer of membrane was used to record the ATR-FTIR spectra shown in Figure 3. The ATR-FTIR spectrum of a PVC membrane that does not contain BMPMP was recorded and subtracted from the spectra shown in Figure 3A,B. Characteristic bands of the PVC membrane before soaking with the analyte solution (Figure 3A) are observed at 3400, 1642, 1523, and 1407 cm −1 , corresponding to ν(O-H), ν(C = N), ν(C = C), and δ(O-CCH 3 ), respectively. Significant changes in the spectrum of the membrane loaded with Hg(II) indicate an interaction between BMPMP and Hg 2+ ions in the PVC membrane matrix (Figure 3B). The disappearance of the peak at 3400 cm −1 corresponding to ν(OH) and the appearance of a new peak of ν(Hg-O) at 549 cm −1 reveal that the BMPMP ligand has coordinated with Hg 2+ ions through the phenolic OH. Coordination of Hg 2+ ions through the nitrogen atom of the BMPMP molecule is confirmed by the red shift of the ν(C = N) band from 1642 to 1600 cm −1 and the appearance of a new peak at 510 cm −1 corresponding to ν(Hg-N) [55].
SEM-EDX Investigations of Hg 2+ -SE Membrane The morphology of the PVC membrane containing BMPMP as the ionophore was studied by SEM images before being incorporated in the ISE. The SEM micrograph shows a microporous membrane (Figure 4A). The surface of the membrane is somewhat smooth with few small protrusions. There are significant changes in the membrane morphology after soaking in an aqueous Hg 2+ solution, as demonstrated in Figure 4B. The surface of the electrode is rougher, with white patches spreading over the surface of the membrane, providing an indication of the presence of the analyte in the membrane matrix. The presence of the Hg 2+ cation in the membrane used for sensing Hg 2+ was confirmed by EDX analysis of the SEM micrograph displayed in Figure 4B. The characteristic peaks of mercury at 1.7, 2.3, and 10.2 keV were observed in Figure 5A. The EDX spectrum of the control membrane was recorded and is shown in Figure 5B for comparison. It should be noted that the absence of the Cl peak may be due to the leaching of anionic impurities [56,57]. The Influence of the Internal Solution Concentration An aqueous solution of Hg(NO 3 ) 2 was employed as the internal solution in the developed Hg 2+ -SE. Therefore, the influence of the concentration of this solution on the potential of the Hg 2+ -SE was studied in the range of 1.0 × 10 -2 -1.0 × 10 -4 molL -1 . The results outlined in Table 2 (Figure 6) reveal that the optimized slope, wide linear concentration range, lower detection limit, and fast response time were obtained with the concentration of 1.0 × 10 -2 molL -1 . This is likely due to the high activity of the Hg(NO 3 ) 2 solution at this concentration, which enhanced the potential of the Hg 2+ -SE. Thus, this concentration was employed in subsequent work.
The pH Effect on the Proposed Electrode Response Two standard solutions containing 1.0 × 10 -2 and 1.0 × 10 -3 molL -1 of Hg 2+ ion were used to test the pH effect. The pH of the test solutions was adjusted to the desired values (0.5-10.0) by adding HCl or NaOH (0.1 molL -1 ). A linear decrease in the potential response was noticeable in the pH range of 0.5 to 2 (Figure 7). However, the potential remains constant from pH 2.0 to 8.5. Then, a sharp decrease was observed at pH values higher than 9 (Figure 7). The precipitation of Hg 2+ ions as Hg(OH) 2 at pH higher than 9 is a possible cause of this decrease [58,59]. pH 6.0 was selected as the optimized value for adjusting the sample pH in subsequent work due to the fast response at this value. Response Time of the Proposed Hg 2+ -SE The response time of the proposed Hg 2+ -SE was investigated at different concentrations of Hg(NO 3 ) 2 (1.0 × 10 −7 -1.0 × 10 −4 molL -1 ). The potential versus response time is plotted in Figure 8. The response of the developed Hg 2+ -SE becomes faster as the concentration increases. However, the response of the electrode reached a steady-state potential in less than 10 s after analyte addition. The steadiness attained in such a short response time indicates fast kinetics of the interaction of Hg 2+ ions with the ionophore (BMPMP) at the test solution-membrane interface to reach chemical equilibrium [60]. Life Time of the Proposed Hg 2+ -SE Generally, the lifetime of an ISE basically relies on the electrode composition and the number of times it is used [61]. The lifetime of our electrode was investigated by measuring the slope value weekly over a 16-week period (112 days). The results shown in Figure 9 reveal that there is no significant change in the slope value (29.78 mV/decade) during the first 10 weeks. Therefore, the developed Hg 2+ -SE can be used successfully during this period for the determination of Hg 2+ ions. However, the slope value of the Hg 2+ -SE decreased dramatically from 22.85 mV/decade after the twelfth week to 6.15 mV/decade at 16 weeks.
The expected reason for the decrease in the value of the electrode slope over time is the leaching of the plasticizer, ionophore, additive, or PVC matrix from the membrane into the sample solution during use [62]. Response of the Proposed Hg 2+ -SE towards Other Ions (Selectivity) The selectivity coefficient K pot A,M is used to describe the influence of interfering ions on the response of ISEs. When K pot A,M is less than 1, the ISE preferentially responds to the primary ion (analyte). The selectivity coefficient of the Hg(II)-SE, K pot Hg,M , was calculated according to IUPAC recommendations using the matched potential method. From the findings in Table 3 and Figure 10, all tested metal ions have selectivity coefficients less than 3.0 × 10 −3 . Therefore, our electrode is highly selective to the Hg 2+ ion in the presence of a wide variety of cations.
Potentiometric Titrations Using Hg(II)-ISEs Based on BMPMP The proposed Hg 2+ -SE based on BMPMP as an ionophore was employed as an indicator electrode in potentiometric titrations to test the electrode's ability for monitoring the mercury(II) concentration in aqueous solutions. Figure 11 shows the potentiometric titration curve of 60 mL of Hg(NO 3 ) 2 (2.0 × 10 -3 molL -1 ) titrated with 3.0 × 10 -2 molL -1 EDTA as a titrant. As seen in Figure 11, the potential response before the end point remains almost steady, due to the low concentration of EDTA in the solution. After the end point, the potential response remains constant, which is attributed to the low concentration of free Hg 2+ ions in the solution. The end point of the titration was found to be ~4.0 mL. This indicates that the developed Hg 2+ -SE at optimized conditions is a suitable analytical tool for the potentiometric determination of the Hg 2+ ion in aqueous solutions. Analytical Applications The accuracy of the Hg 2+ -SE designed in this work was tested using a dental amalgam capsule alloy (Dentsply Sirona Company). The mercury concentration in this alloy estimated by the proposed Hg 2+ -SE was 41.8% (w/w), with a small difference from the certified value (42.0%). However, Student's t-test showed no significant difference between the two concentrations at the 95% confidence level, because the tabulated value of t (2.78) is greater than the calculated one (2.65) for five replicate measurements. Therefore, the electrode accuracy is acceptable from the point of view of analytical chemistry. Four real samples shown in Table 4 were used to evaluate the developed electrode. All samples were treated as mentioned above, and part of their aqueous solutions was subjected to potentiometric measurements using the developed Hg 2+ -SE as an indicator electrode. All samples were analyzed before and after spiking with known concentrations of Hg(II) ions. According to the results shown in Table 4, it is clear that the recovered amounts of mercury from the real samples were almost quantitative. Moreover, the samples were analyzed using ICP-OES. The results of the ICP-OES measurements were in good agreement with those of the Hg 2+ -SE, as shown in Table 4.
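A minimal sketch of the kind of t-test comparison described above is shown below; the five replicate values are hypothetical placeholders (only the certified value of 42.0% is taken from the text), so the computed statistic is illustrative rather than a reproduction of the reported t of 2.65.

```python
import numpy as np
from scipy import stats

# Hypothetical five replicate determinations (% w/w) by the Hg2+-SE; only the
# certified reference value (42.0 %) is taken from the text.
ise_replicates = np.array([41.6, 41.9, 42.1, 41.7, 41.7])

t_stat, p_val = stats.ttest_1samp(ise_replicates, popmean=42.0)
print(f"mean = {ise_replicates.mean():.1f} %, t = {t_stat:.2f}, p = {p_val:.3f}")
# "No significant difference" corresponds to |t| below the tabulated critical
# value (2.78 for 4 degrees of freedom at the 95% confidence level).
```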
A statistical evaluation using the F-test was also applied, and the results revealed no statistical difference between the two methods, since the calculated values of F were always less than the tabulated F value (3.179) for ten replicate measurements. Therefore, we can say that the precision of both methods is statistically acceptable and there is no significant difference between them at the 95% confidence level. Conclusions For the first time, the Schiff base BMPMP was synthesized and used as a neutral carrier to design a new PVC membrane for Hg 2+ ions. The interaction between BMPMP and Hg 2+ ions in the PVC membrane matrix was studied by ATR-FTIR spectra, SEM images, and EDX spectra. The study of the ATR-FTIR spectra recorded using the electrode membrane revealed that the Hg 2+ ion could be coordinated with a BMPMP molecule through nitrogen and oxygen atoms. The analysis of SEM images and EDX spectra confirmed the presence of the analyte in the membrane matrix. The membrane composition of 32% PVC, 64.5% o-NPOE, 2% BMPMP, and 1.5% NaTPB provides better analytical performance, with high selectivity towards Hg 2+ ions over a wide concentration range of 9.33 × 10 -8 -3.98 × 10 -3 molL -1 (0.0933-3980 µM). The electrode developed in this work offers a relatively fast response, low interference, reasonable long-term stability, and good potential stability. The fabricated electrode was successfully applied for the determination of Hg(II) in real samples.
v3-fos-license
2019-05-06T14:07:28.944Z
2013-04-01T00:00:00.000
145067273
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://journals.sagepub.com/doi/pdf/10.1177/2158244013489689", "pdf_hash": "7c90b2d81ee22b6667024da9bd55c227271a850c", "pdf_src": "Sage", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41099", "s2fieldsofstudy": [ "Psychology" ], "sha1": "14ae674528ad485ed4891562e62c4445b9bd4c3c", "year": 2013 }
pes2o/s2orc
Examination for Professional Practice in Psychology Licensing exams on the Examination for Professional Practice in Psychology (EPPP) taken by professional (Vail Model) graduates had a failure rate of 30.82%, in contrast to 7.60% for EPPP licensing exams taken by traditional (Boulder Model) graduates of clinical psychology programs. Thus, exams taken by professional graduates were 4.06 times more likely to result in failure. It was acknowledged that it is not known whether EPPP performance is related to psychotherapy outcome. It has now been more than a decade since the correlates of performance on the national licensing exam, the Examination for Professional Practice in Psychology (EPPP), of clinical psychology graduates were reported (Templer & Tomeo, 1998b; Yu et al., 1997). In that research with 45 professional school (Vail Model) programs and 83 traditional (Boulder Model) programs, EPPP scores correlated .78 with 1988-1995 median Verbal + Quantitative + Analytical Graduate Record Examination (GRE) scores, .65 with minimum Verbal + Quantitative + Analytical GRE, .56 with American Psychological Association approval, −.49 with ratio of full-time students to core faculty, −.46 with number of doctorates awarded, .46 with minimum Analytical GRE, and −.44 with number of full-time students enrolled. The findings that were most controversial and most professionally relevant involved the high EPPP scores of the traditional program graduates, which were especially obvious at both ends of the distribution. The top of the distribution was dominated by traditional programs and the bottom of the distribution was dominated by professional programs. The traditional programs were superior not only on the EPPP total score and the Research subtests but also on the Diagnostic, Intervention, and Professional/Ethics subtests. Subsequent research demonstrated that a salient difference still existed for EPPP scores 1997-2005 (Templer, Stroup, Mancuso, & Tangen, 2008). The findings are especially noteworthy to professional schools because they purport to produce a superior professional product. Independent professional practice is not possible without passing the EPPP. Unfortunately for the image of professional schools, their graduates tend not to do well on other indicators of professional functioning. They are less likely to be a diplomate in the American Board of Professional Psychology (ABPP), less likely to be president of state psychological associations, and less likely to be directors of the Association of Psychology Postdoctoral and Internship Centers (APPIC). The generally less satisfactory performance of professional school graduates has led to the inference that there are two tiers of clinical training in the United States (Templer, 2005b). The present study went beyond the previous EPPP research comparing professional (Vail Model) and traditional (Boulder Model) clinical psychology graduates in two ways. (a) The Educational and Reporting Service had reported mean EPPP scores of programs, and these were used in the previous research. The Educational and Reporting Service currently provides the number and percentage of exams taken and passed for each program. Kenkel, DeLeon, Albino, and Porter (2003) suggested that pass rate may be more important than mean score. This appears to be a reasonable perspective. The psychologist with a perfect EPPP score receives the same license as the one who barely passes.
(b) The previous research on the EPPP comparing mean scores of professional and traditional graduates actually attenuated the true differences because professional school means are based on a much larger N. The larger N is a function both of professional schools graduating more students per program and of graduates having to take the EPPP again. In the present study, we not only report mean program pass/fail rates but also consider the number of exams taken and passed/failed by the composite of professional programs and traditional programs. Method The EPPP pass rate information was obtained from the Association of State and Provincial Psychology Boards (ASPPB; 2010) for 2005-2009 for 217 clinical psychology programs: 154 traditional programs (140 American and 14 Canadian) and 63 professional (all American) programs. ASPPB only included programs that had three or more exams. An additional caveat is that a pass is based on a score of 500, which is the passing score of the preponderance of states and provinces. If one wishes to determine the exact pass rate of a jurisdiction, one should communicate with that jurisdiction. It should be borne in mind that N pertains to the number of exams taken and not necessarily the number of different graduates having taken the exam. That is, the number of exams taken includes an unknown number of retakes. The 11 Alliant/California School of Professional Psychology (CSPP)-Berkeley exams were combined with the 268 Alliant/CSPP-Alameda exams because there was a change in location but not in program. Only straight clinical psychology programs were included. Clinical-neuropsychology, clinical-health, clinical-ethnic minority, clinical-family child, clinical psychodynamic, combined clinical/counseling/school, clinical-community, child-clinical, clinical school, combined professional scientific, clinical psychology-health and medical, clinical psychology-law, physiological-developmental clinical, and adult-clinical were excluded. There were 31 programs excluded due to being combined with other areas, in contrast to the 217 solely clinical programs whose data were analyzed. It should be further noted that the American Psychological Association accredits not only clinical psychology programs but also counseling and school programs. The latter two categories were not used because virtually all of the professional schools have clinical psychology programs, although some may have counseling and school elements. Results The reader should bear in mind that the pass/failure rates pertain to the exams taken and not the number of graduates. It is not known whether a failure is for the 1st time the graduate took the test or the 20th time. It is conceivable that a small percentage of professional graduates experience multiple failures but that the overwhelming preponderance of professional graduates pass the EPPP on the first attempt. Table 1 contains the number of exams taken and pass rates for the 63 professional (Vail Model) and 154 traditional (Boulder Model) programs. Tied programs are listed in alphabetical order. Professional programs have a "P" in front of them and are capitalized. Traditional programs have a "T" in front of them. Nine (14.30%) of the 63 professional programs and 128 (83.02%) of the 154 traditional programs had mean pass rates higher than the overall mean pass rate of 86.97%, χ2(1, N = 217) = 163.42, p < .001.
The 63 professional schools had a mean pass rate of 73.41% (SD = 14.02) and the traditional programs had a mean pass rate of 92.77% (SD = 8.49), t(df = 84.77) = 10.35, p < .001. As stated above, however, such analyses mask the true differences between professional and traditional clinical psychology programs because most professional schools have many more EPPP exams taken and because these data include retakes. In the present study, in one state, there were six professional programs that each had more than 200 exams taken, for a total of 1,775 exams and 690 exams failed. This is over twice the number of failed exams of the 154 traditional programs combined. Of the 3,447 EPPP exams taken by traditional clinical psychology graduates, 3,185 (92.20%) were passed, and of the 7,202 exams taken by professional clinical psychology graduates, 4,946 (69.20%) were passed, χ2(1, df = 10,647) = 5,321.53, p < .001. The professional graduates' failure rate on the EPPP of 30.82% was 4.06 times greater than the traditional graduates' failure rate of 7.60%. Discussion The vastly superior quality of the students of the traditional programs must certainly contribute to the higher EPPP pass rate of the traditional programs. Traditional programs accept 10% and professional schools 44% of applicants (Templer, 2005a, 2005b; Templer & Bedi, 2003). The mean grade point average of traditional programs is two standard deviations higher (Templer, 2005b). Their mean GRE scores are more than two standard deviations higher (Templer & Arikawa, 2004). The GRE difference may be especially important because it correlated substantially with Wechsler Adult Intelligence Scale-Revised IQ (Carvajal & Pauls, 1995). This is especially unfortunate in view of the current demand for evidence-based assessment and treatment (e.g., Levant, 2004). Less talented people are less capable of examining scientific evidence. It should be noted that not all professional schools have high failure rates. Five professional programs have a pass rate of more than 95%. For example, the Rutgers professional program mean pass rate of 95.80% is almost identical to that of the Rutgers traditional program rate of 95.50%. The Rutgers professional program acceptance rate of 6% compares most favorably with the general professional school rate of 44% and is even lower than the traditional acceptance rate of 10%. Templer and Tomeo (1998a) warned Canadian psychologists, "Don't let it happen in Canada." They said that if Canada starts professional schools, they should "be located in very strong universities and they should have the most elite admission standards possible" (p. 4). Rutgers appears to fit such a description better than most professional schools. Unfortunately, most professional programs are free-standing or in smaller universities. They must accept less than optimal applicants for financial survival. Forty-five percent (N = 1,009) of the total of 2,220 professional schools' failed exams can be accounted for by 8 of the 65 professional programs. A fundamental problem, in view of the huge differences in selection ratio and GRE score, is that there is simply insufficient very high-level talent to go around. It would appear to be difficult to find creative solutions for this deficiency without reducing the number of students admitted or phasing out some of the weaker programs.
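The headline comparisons reported above can be recomputed from the summary numbers; a sketch is given below. It assumes scipy is available, and statistics recomputed from rounded summary values and tabulated counts give the same qualitative conclusions without necessarily reproducing the published test statistics exactly.

```python
import numpy as np
from scipy import stats

# Welch's t-test recomputed from the reported program-level summary statistics
m_prof, sd_prof, n_prof = 73.41, 14.02, 63
m_trad, sd_trad, n_trad = 92.77, 8.49, 154
se = np.sqrt(sd_prof**2 / n_prof + sd_trad**2 / n_trad)
t_welch = (m_trad - m_prof) / se          # approx. 10, cf. the reported t = 10.35

# Exam-level 2x2 table: rows = (traditional, professional), cols = (passed, failed)
table = np.array([[3185, 3447 - 3185],
                  [4946, 7202 - 4946]])
chi2, p, dof, expected = stats.chi2_contingency(table)

fail_prof = table[1, 1] / table[1].sum()
fail_trad = table[0, 1] / table[0].sum()
print(t_welch, chi2, p, fail_prof / fail_trad)   # failure-rate ratio ~ 4
```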
A troubling situation is that the National Council of Schools of Professional Psychology (NCSPP) lists 19 associate members who do not meet the criteria for full membership. The EPPP information on these programs is not provided by the ASPPB and therefore is not contained in Table 1. The NCSPP also lists four programs that are in the process of developing doctoral programs in clinical psychology. All 23 of these programs appear to be free-standing or in nonprominent universities. The prognosis for psychology's professional school problem does not appear to be remarkably good. A final caveat is that the EPPP assesses mastery of academic knowledge but may or may not be related to the very important therapeutic ingredient of the client-clinician relationship. An even more fundamental gap in our knowledge is that we do not know if there is a relationship between EPPP performance and client outcome. Research on this issue is recommended. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research and/or authorship of this article.
v3-fos-license
2022-01-31T16:11:21.798Z
2022-01-28T00:00:00.000
246423888
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1996-1944/15/3/1019/pdf", "pdf_hash": "0a80a889e31a4954e1cd23844bee3dd497ccd0ef", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41101", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "sha1": "0fa45d0aab42a9f66dd82aa331a893f1c5ba09a8", "year": 2022 }
pes2o/s2orc
Tuning Ferromagnetism in a Single Layer of Fe above Room Temperature The crystallographic and magnetic properties of an Fe monolayer (ML) grown on a 2 ML Au/W(110) substrate are studied with spin-polarized low-energy electron microscopy, density functional theory, and relativistic screened Korringa–Kohn–Rostoker calculations. The single layer of iron atoms possesses hexagonal symmetry and reveals a ferromagnetic order at room temperature. We experimentally demonstrate the possibility of tuning the Curie temperature and the magnitude of magnetization of the Fe monolayer by capping with Au. Taking into account several structural models, the calculation results mostly show ferromagnetic states with enhanced magnetic moments of Fe atoms compared to their bulk value and a further increase in their value after covering with Au. The theoretically calculated Curie temperatures are in fair agreement with those obtained in the experiments. The calculations, furthermore, found evidence for the presence of frustrated isotropic Fe–Fe exchange interactions, and a discussion of the structural effects on the magnetic properties is provided herein. Introduction The Curie temperature and the magnitude of magnetization are among the most important properties of ferromagnetic ultrathin films, which define their usefulness in technology. It is well known that they strongly depend on the layers' crystallographic properties, including the type of structure (fcc, hcp, etc.), the crystal face, and the distance between atoms. It has also been found that both of these features depend on the surrounding nonmagnetic materials [1]. Moreover, it is known that as the thickness of the film decreases down to the single-atom limit, its magnetic properties drastically change. In principle, in the monolayer case the ferromagnetic order should be suppressed at nonzero temperatures, as stated by the Mermin-Wagner theorem [2]. However, in the presence of a strong enough anisotropy, thermal fluctuations can be overcome, resulting in the appearance of a long-range order of the magnetic moments. In real systems ferromagnetic films are formed on substrates, which act as a source of anisotropy, but due to finite-size effects, the Curie temperature becomes strongly reduced and usually is well below room temperature (RT). A model system often used in the studies of ferromagnetism of single layers is a monolayer of Fe epitaxially grown on substrates with different symmetry, including W(110) and Au(001), which possess rectangular and square lattices, respectively. Its magnetic properties, including Curie temperatures of about 210 K [3] and about 315 K [4], respectively, have been reported and confirmed in a number of papers. It has also been shown that coating with other metals changes the Curie temperature and the magnetic moments of the Fe monolayer [1,3]. In the case of single layers without coating, there are only several exceptional cases of ferromagnetic films with the Curie temperature exceeding RT. These include the recently discovered VSe2 (strictly speaking, the system is built of three atomic layers) with strong magnetization above 300 K [5] and, among transition metals, the Fe monolayer on Au(001) [4] and on a double layer of Au on W(110) [6]. In the latter case, the substrate morphology strongly influences the magnetic properties of the Fe layer.
The width of the tungsten terraces that are separated by monatomic steps determines the Curie temperature (T C ) of 1 ML Fe as follows: on wide terraces T C clearly exceeds room temperature, while on very narrow ones it is close to or even below RT [6]. In this report we explore the basic magnetic properties of a single layer of iron with hexagonally arranged atoms. We demonstrate the tuning of the Curie temperature and the magnitude of magnetization of a single layer of Fe atoms by the adsorption of Au on its top. We experimentally show that T C increases with Au coverage up to 2 ML. Simultaneously, the magnitude of magnetization significantly increases up to 1 ML of Au, then decreases. Our calculations reveal that the single layer of Fe atoms surrounded by Au can be ferromagnetically ordered at temperatures above room temperature. Experiment The experiments were performed in a spin-polarized low-energy electron microscope (SPLEEM) (ELMITEC, Clausthal-Zellerfeld, Germany) with a base pressure in the high 10 −11 mbar range. The instrument is a conventional low-energy electron microscope (LEEM) equipped with a spin-polarized electron gun emitting polarized electrons and a spin manipulator that allows rotation of the polarization vector P of the incident electron beam in any desired direction [7][8][9][10]. In order to obtain a magnetic image, two images with opposite P and intensities I ↑ and I ↓ are taken. Both I ↑ and I ↓ images contain structural and magnetic information, the latter being proportional to P·M, where M is the local sample magnetization. The structural contribution is removed by image subtraction, leaving a purely magnetic image, which is normalized by the sum of the intensities and the degree of polarization P of the incident beam, resulting in the so-called exchange asymmetry A ex = (1/P)·(I ↑ − I ↓ )/(I ↑ + I ↓ ). Depending upon M and P, which with standard GaAs photoemission cathodes is between 20 and 30%, the time necessary to obtain an image with a good signal-to-noise ratio is in the range of hundreds of ms to several seconds. This limits the time resolution of SPLEEM. The system is equipped with water-cooled, resistively heated Au and Fe evaporators. During Au and Fe deposition, the pressure was kept below 5·10 −10 mbar. The W(110) crystal was cleaned in the usual way by heating in oxygen at about 1400 K and then flashing off the remaining oxygen at about 2000 K. First, two monolayers of Au were deposited at 600 K. The elevated temperature during the gold growth ensures a monolayer-by-monolayer growth mode. The completion and appearance of the first and second Au layers are visible as a contrast change in LEEM. Next, the Fe monolayer was deposited at about 550 K. At this temperature Fe initially forms two-dimensional islands. Strong contrast between these islands and the substrate allows precise determination of the completion of the monolayer. In addition, the second iron layer produces a strong quantum-size effect contrast. These contrast mechanisms allow for control of the total coverage, with an accuracy better than 5% in the monolayer coverage range. Finally, a gold layer with a thickness up to about 2 ML was deposited at RT onto the Fe monolayer. Deposition of Au at RT did not cause a contrast change in the LEEM image at the energy used for imaging. Therefore, the thickness of the Au capping layer was determined from the rate calibration during Au deposition onto the bare W(110) surface, resulting in an uncertainty of the thickness of the total Au capping layer of 0.15 ML.
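As a schematic illustration of how the magnetic signal is extracted from a pair of SPLEEM images, the sketch below evaluates the exchange asymmetry defined above pixel by pixel; the image arrays, count levels, and the beam polarization value are placeholders rather than measured data.

```python
import numpy as np

def exchange_asymmetry(I_up, I_down, P):
    """Pixel-wise exchange asymmetry A_ex = (1/P) * (I_up - I_down) / (I_up + I_down).
    Subtracting the two spin channels removes the structural contrast, leaving the P.M signal."""
    I_up = np.asarray(I_up, dtype=float)
    I_down = np.asarray(I_down, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        a = (I_up - I_down) / (I_up + I_down)
    return np.nan_to_num(a) / P

# Placeholder example: a two-domain pattern imaged with beam polarization P = 0.25
rng = np.random.default_rng(0)
domain = np.ones((256, 256))
domain[128:] = -1.0                                   # opposite M in the lower half
I0 = 1000.0                                           # mean counts per pixel
I_up = rng.poisson(I0 * (1 + 0.002 * domain)).astype(float)
I_down = rng.poisson(I0 * (1 - 0.002 * domain)).astype(float)
A_ex = exchange_asymmetry(I_up, I_down, P=0.25)
print(A_ex[:128].mean(), A_ex[128:].mean())           # close to +0.008 and -0.008
```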
In order to determine the Curie temperature of the Fe monolayer, asymmetry images were recorded while the sample was heated in steps by radiation from a filament placed behind the tungsten substrate. The temperature was measured in a separate experiment with an Fe-constantan thermocouple attached directly to the side of the W(110) crystal, keeping exactly the same heating conditions, time, and power as during SPLEEM image recording [6]. Calculations Theoretical calculations were carried out using density functional theory (DFT). The optimizations of the atomic structure of the various considered Au n /Fe 1 /Au 2 /W (n = 0,1,2; all subscripts mean the number of atomic layers) interface models (see Section 3.4 for more details) were performed by using the Vienna Ab Initio Simulation Package (VASP) [11,12] within the Perdew-Burke-Ernzerhof (PBE) parametrization [13] of the Generalized Gradient Approximation (GGA) for the exchange-correlation functional. The bottom two atomic layers of the slab models in the supercell were always fixed, and the atomic positions of the other atomic layers were freely relaxed in three dimensions for the (8 × 1) supercells (having a lateral dimension of 25.40 Å × 4.49 Å) and were out-of-plane relaxed for the smaller supercells (bcc and fcc surface unit cell models containing 1 atom per atomic layer, see Section 3.4 for more details). A minimum vacuum thickness of 15 Å, perpendicular to the slabs, was applied in all cases. For the considered (8 × 1) supercells a 7 × 1 × 1 k-point sampling, and for the smaller supercells a 21 × 21 × 1 k-point sampling of the Brillouin zone, were considered. The energy cutoff for the plane-wave expansion of the single electron wave functions was set to 300 eV. The magnetocrystalline anisotropy energies (MAE) were calculated as the total energy differences between configurations having ferromagnetic Fe spins pointing along different crystallographic directions. The optimized interlayer distances of the smaller supercell models were taken into account in the subsequent self-consistent relativistic screened Korringa-Kohn-Rostoker (SKKR) [14][15][16] calculations, whereas the large (8 × 1) supercells were excluded from further analysis due to the limitations of the complex geometry treatment within SKKR. The magnetic interface models sandwiched between the semi-infinite vacuum (at the free surface side) and metallic substrates (at the substrate side) were treated self-consistently. The local spin-density approximation of DFT, as parameterized by Vosko et al. [17], and the atomic-sphere approximation with an angular momentum cutoff of l max = 2 were used. In order to describe the magnetic interactions in the Fe monolayer in the interface models, a generalized classical Heisenberg spin model was used following the sign conventions of Ref. [18], with the Hamiltonian H = (1/2) Σ i≠j s i J ij s j + Σ i s i K s i . Here, s i is the spin unit vector at atomic site i, J ij is the magnetic exchange coupling tensor containing the isotropic Heisenberg exchange interaction (J ij = (1/3) Tr J ij ), the antisymmetric Dzyaloshinsky-Moriya (DM) interaction ((1/2) s i (J ij − J ij T ) s j = D ij (s i × s j ), with D ij the DM vector), and a traceless symmetric part [19,20], and K is the on-site anisotropy matrix.
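To make the tensor notation concrete, the sketch below decomposes a 3 × 3 exchange tensor J_ij into its isotropic, Dzyaloshinsky-Moriya, and traceless symmetric parts and evaluates the corresponding pair energy for given spin directions. The numerical tensor is an arbitrary placeholder rather than a coupling computed for this system, and the prefactor follows the form of the Hamiltonian as written above.

```python
import numpy as np

def decompose_exchange_tensor(J):
    """Split J_ij into isotropic exchange, DM vector, and traceless symmetric part."""
    J = np.asarray(J, dtype=float)
    J_iso = np.trace(J) / 3.0
    A = 0.5 * (J - J.T)                        # antisymmetric part
    D = np.array([A[1, 2], A[2, 0], A[0, 1]])  # s_i . A . s_j = D . (s_i x s_j)
    J_sym = 0.5 * (J + J.T) - J_iso * np.eye(3)
    return J_iso, D, J_sym

def pair_energy(s_i, s_j, J):
    """Energy contribution (1/2) s_i . J . s_j of one ordered pair in the spin model."""
    return 0.5 * np.asarray(s_i) @ np.asarray(J) @ np.asarray(s_j)

# Placeholder tensor (meV) with a small antisymmetric (DM) component
J = np.array([[-10.0,  0.4,  0.0],
              [ -0.4, -10.0, 0.0],
              [  0.0,  0.0, -9.5]])
J_iso, D, J_sym = decompose_exchange_tensor(J)
print(J_iso, D)
print(pair_energy([0, 0, 1], [0, 0, 1], J))    # collinear ferromagnetic pair
```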
The magnetic interaction tensors (J ij ) were determined for all pairs of atomic Fe spins up to a maximal distance of 4a 2D in terms of the relativistic torque method, based on calculating the energy costs due to infinitesimal rotations of the spins at selected sites with respect to the ferromagnetic state oriented along different crystallographic directions [20]. Based on the DFT-parametrized spin model, the magnetic ground states of the corresponding systems were determined at zero temperature by using atomistic spin-dynamics simulations following Ref. [21]. The Curie temperature for ferromagnetic ground states was obtained from the magnetization curve M 2 (T) resulting from finite temperature atomistic spin-dynamics simulations [22]; a recent example using this method is reported in Ref. [23]. Growth and Crystallographic Structure The growth conditions, the substrate surface cleanness, and its temperature during Au and Fe growth are extremely important for the evolution of the magnetic order in the Fe monolayer at RT. The criterion for the cleanness of the substrate surface was the step-flow growth of iron on the bare W(110) surface at an elevated temperature. Moreover, the Au double layer grown on the clean W(110) surface shows additional, low intensity, and sharp double scattering diffraction spots around the integer spots of the LEED pattern, Figure 1b. Small amounts of contamination, not visible in LEED, cause the appearance of many nucleation centers on the W(110) terraces during the initial stages of Fe or Au deposition. A large number of nucleation centers causes the growth of a defective layer. In the case of the Au double layer grown on a partially clean surface, there are no additional spots or they are barely visible. The spot marked by the arrow in Figure 1b comes from the tungsten substrate; the two strong spots next to it belong to the Au double layer. They are associated with two equivalent Au domains rotated by 3.3° ± 0.4° relative to each other. The positions of the double scattering spots are shown in a double scattering construction for the 2 ML thick Au film in Ref. [24]. The Au double layer has slightly distorted hexagonal symmetry, in which the W [001] direction approximately coincides with the Au 110 direction. At first glance it can be considered as a 2.9% compressed Au(111) layer with the lattice constant of 3.96 ± 0.08 Å, instead of 4.08 Å of bulk Au.
Detailed analysis indicates that the distance between Au atoms along Au 〈112〉, which is approximately parallel to W 110, is 4.93 ± 0.08 Å and is almost the same as in bulk Au (5.00 Å). Along the other 〈112〉 directions in the Au(111) plane the corresponding distance is 4.82 ± 0.08 Å, which is about 2.2% less compared to the formerly mentioned direction (4.93 Å). Whether the Au double layer is perfect or not is also visible during the growth of the Fe monolayer on top of Au. The growth is more random, with many nucleation centers, when there are no additional diffraction spots around the (00) spot in the LEED pattern. The deposition of Fe on the well-prepared Au film makes the growth smoother, and the whole 1 ML Fe/2 ML Au/W(110) system reveals double scattering spots. Within the experimental error bar, the Fe monolayer has the same crystallographic structure as the gold film below, Figure 1c. This means that the Fe monolayer has hexagonal symmetry with a lattice constant of 3.94 ± 0.08 Å, corresponding to a nearest-neighbor distance of 2.79 Å. The distance between the Fe atoms along all 〈112〉 directions is the same within the error bar, although the obtained values of 4.86 Å and 4.80 Å along W 110 and the other directions, respectively, indicate a slight distortion along the W 110 direction. Similar to the Au double layer, the Fe monolayer is built of two domains.
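The strain figures quoted above follow from simple ratios of the measured lattice constants; a short sketch of that arithmetic is given below, using the values stated in the text (small differences with the quoted percentages come from rounding).

```python
# Lattice-constant arithmetic behind the strain values quoted in the text.
a_Au_bulk = 4.08          # Angstrom, bulk Au
a_Au_film = 3.96          # Angstrom, Au double layer on W(110)
compression = (a_Au_bulk - a_Au_film) / a_Au_bulk * 100
print(f"Au film compression: {compression:.1f} %")                         # ~2.9 %

d_long, d_short = 4.93, 4.82   # Au-Au distances along inequivalent <112> directions
print(f"in-plane distortion: {(d_long - d_short) / d_long * 100:.1f} %")   # ~2.2 %

a_Fe_hex = 3.94           # Fe monolayer lattice constant
nn_Fe = a_Fe_hex / 2**0.5 # nearest-neighbor distance in the hexagonal layer
print(f"Fe nearest-neighbor distance: {nn_Fe:.2f} Angstrom")               # ~2.79
```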
Interestingly, the angle between both of the domains changes to 6.8 ± 0.4 • , thus, there is an obvious rearrangement in the layer morphology. The nearest-neighbor distance between the Fe atoms (2.79 Å) is much larger than the nearest-neighbor distance in bulk bcc and fcc Fe (2.48 Å and 2.54 Å, respectively), which suggests that the Fe layer is in the high spin state. The high spin state has been reported as a consequence of large lattice constant of fcc Fe [25]. It appears that the subsequent growth of Au at RT on the top of the Fe monolayer proceeds with the same crystallographic orientation, keeping the same two-domain morphology and preserving about the same lattice constant of 3.92 ± 0.08 Å and 3.93 ± 0.08 Å for 1 ML and 2 ML Au on the top of Fe, respectively, Figure 1d. The only observed change is a slight increase in the angle between the two domains, which is 7.5 ± 0.4 • and 8.0 ± 0.4 • for 1 ML and 2 ML Au, respectively. Magnetic Structure of Fe Monolayer The Fe monolayer on the Au double layer that shows double scattering spots around the (00) spot is ferromagnetic, even at room temperature. Figure 2a shows the SPLEEM image of two magnetic domains with opposite magnetization directions (bright and dark areas) with an exchange asymmetry A ex of 0.008. The Fe monolayer grown on the Au double layer that shows weaker double scattering spots in the LEED pattern exhibits lower A ex values (between 0 and 0.008) and/or magnetic contrast only on a part of the surface (dark patches in Figure 2b) due to the local differences in the Curie temperature [6]. If the Au double layer shows no double scattering spots at all, then the Fe monolayer is in a paramagnetic state at room temperature, Figure 2c. The same happens when the sample temperature is too low during Fe deposition, resulting in the formation of many nucleation centers, which causes the growth of small grains. This is in agreement with the generally observed decrease of the Curie temperature with decreasing coverage and grain size-a finite-size effect [6,26,27]. between both of the domains changes to 6.8° ± 0.4°, thus, there is an obvious rearrangement in the layer morphology. The nearest-neighbor distance between the Fe atoms (2.79 Å) is much larger than the nearest-neighbor distance in bulk bcc and fcc Fe (2.48 Å and 2.54 Å, respectively), which suggests that the Fe layer is in the high spin state. The high spin state has been reported as a consequence of large lattice constant of fcc Fe [25]. It appears that the subsequent growth of Au at RT on the top of the Fe monolayer proceeds with the same crystallographic orientation, keeping the same two-domain morphology and preserving about the same lattice constant of 3.92 Å ± 0.08 Å and 3.93 Å ± 0.08 Å for 1 ML and 2 ML Au on the top of Fe, respectively, Figure 1d. The only observed change is a slight increase in the angle between the two domains, which is 7.5° ± 0.4° and 8.0° ± 0.4° for 1 ML and 2 ML Au, respectively. Magnetic Structure of Fe Monolayer The Fe monolayer on the Au double layer that shows double scattering spots around the (00) spot is ferromagnetic, even at room temperature. Figure 2a shows the SPLEEM image of two magnetic domains with opposite magnetization directions (bright and dark areas) with an exchange asymmetry Aex of 0.008. 
The Fe monolayer grown on the Au double layer that shows weaker double scattering spots in the LEED pattern exhibits lower Aex values (between 0 and 0.008) and/or magnetic contrast only on a part of the surface (dark patches in Figure 2b) due to the local differences in the Curie temperature [6]. If the Au double layer shows no double scattering spots at all, then the Fe monolayer is in a paramagnetic state at room temperature, Figure 2c. The same happens when the sample temperature is too low during Fe deposition, resulting in the formation of many nucleation centers, which causes the growth of small grains. This is in agreement with the generally observed decrease of the Curie temperature with decreasing coverage and grain size-a finite-size effect [6,26,27]. The continuous Fe monolayer has uniaxial in-plane anisotropy with the easy-axis pointing in the tungsten 11 0 direction, as indicated by the arrow in Figure 2a, which approximately coincides with the 112 direction of the Au double layer. The observed in-plane anisotropy disagrees with the reported out-of-plane anisotropy found in the Fe monolayer grown on a thick Au film [28] and in 1.5 ML Fe grown on bulk Au(111) [29,30]. It is also in contradiction to the observed out-of-plane anisotropy of Fe layers between about 0.2 and 1 ML thickness on the bulk Au(111) [31]. However, the magnetocrystalline anisotropy energy measured in the latter study was close to zero, but positive, favoring in-plane magnetization. The main difference between the present and the previous experiments can be attributed to the following different substrates: compressed Au double layer on W(110) versus Au single crystal or thick Au film [32]. It is well known that ultrathin Fe films on the bare W(110) surface have in-plane uniaxial anisotropy with the easy-axis The continuous Fe monolayer has uniaxial in-plane anisotropy with the easy-axis pointing in the tungsten 110 direction, as indicated by the arrow in Figure 2a, which approximately coincides with the 112 direction of the Au double layer. The observed in-plane anisotropy disagrees with the reported out-of-plane anisotropy found in the Fe monolayer grown on a thick Au film [28] and in 1.5 ML Fe grown on bulk Au(111) [29,30]. It is also in contradiction to the observed out-of-plane anisotropy of Fe layers between about 0.2 and 1 ML thickness on the bulk Au(111) [31]. However, the magnetocrystalline anisotropy energy measured in the latter study was close to zero, but positive, favoring in-plane magnetization. The main difference between the present and the previous experiments can be attributed to the following different substrates: compressed Au double layer on W(110) versus Au single crystal or thick Au film [32]. It is well known that ultrathin Fe films on the bare W(110) surface have in-plane uniaxial anisotropy with the easy-axis along the tungsten 110 direction [33]. Our earlier experiments show that this does not change with 2 ML Au between W and Fe [32,34]. The observed distortion of the Au double layer along the W 110 direction can be the source of additional magnetoelastic anisotropy, which forces easy-axis along that direction. The same direction of the easy-axis was found during the RT growth of Fe on the Au double layer [32,34] in the initial coverage range. The experiments show that iron is also ferromagnetic at submonolayer coverages. 
One monolayer thick Fe islands reveal ferromagnetic order at room temperature with in-plane uniaxial anisotropy and magnetization along the tungsten 110 direction, as observed in the complete iron monolayer. Note that earlier SKKR calculations also reported an inplane easy-axis along the W 110 direction for Fe ML on W(110), which, however, can be reoriented to out-of-plane [110] by changing the out-of-plane Fe atomic layer relaxation [35]. Magnetic Structure of Au-Capped Fe Monolayer During the Au deposition onto the Fe monolayer the exchange asymmetry changes, as shown in Figure 3a. It starts from the value of the uncapped Fe ML, changes sign at about 0.25 ML Au, and increases approximately linearly up to about 1 ML Au. The SPLEEM images, recorded on both sides of A ex = 0, at the thicknesses indicated by the arrows in Figure 3a, show the same domain configuration but with reversed magnetization direction (reversed dark and bright regions). The contrast reversal without a change in the domain shapes, except for the small fluctuations observed occasionally at the domain boundaries during Au deposition, is reproducible in all experiments. This indicates a quantum-size effect (QSE) origin of the observed change in the sign of A ex , as previously observed in the Fe/W(110) system [36][37][38], rather than a spin reorientation transition (SRT). In SRT, the size and distribution of the magnetic domains change as observed in other studies [34,[39][40][41] or as observed here upon heating and cooling during the ferromagnetic-paramagneticferromagnetic phase transitions, Figure 3b along the tungsten 11 0 direction [33]. Our earlier experiments show that this does not change with 2 ML Au between W and Fe [32,34]. The observed distortion of the Au double layer along the W 11 0 direction can be the source of additional magnetoelastic anisotropy, which forces easy-axis along that direction. The same direction of the easy-axis was found during the RT growth of Fe on the Au double layer [32,34] in the initial coverage range. The experiments show that iron is also ferromagnetic at submonolayer coverages. One monolayer thick Fe islands reveal ferromagnetic order at room temperature with inplane uniaxial anisotropy and magnetization along the tungsten 11 0 direction, as observed in the complete iron monolayer. Note that earlier SKKR calculations also reported an in-plane easy-axis along the W 11 0 direction for Fe ML on W(110), which, however, can be reoriented to out-of-plane [110] by changing the out-of-plane Fe atomic layer relaxation [35]. Magnetic Structure of Au-Capped Fe Monolayer During the Au deposition onto the Fe monolayer the exchange asymmetry changes, as shown in Figure 3a. It starts from the value of the uncapped Fe ML, changes sign at about 0.25 ML Au, and increases approximately linearly up to about 1 ML Au. The SPLEEM images, recorded on both sides of Aex = 0, at the thicknesses indicated by the arrows in Figure 3a, show the same domain configuration but with reversed magnetization direction (reversed dark and bright regions). The contrast reversal without a change in the domain shapes, except for the small fluctuations observed occasionally at the domain boundaries during Au deposition, is reproducible in all experiments. This indicates a quantum-size effect (QSE) origin of the observed change in the sign of Aex, as previously observed in the Fe/W(110) system [36][37][38], rather than a spin reorientation transition (SRT). 
In SRT, the size and distribution of the magnetic domains change as observed in other studies [34,[39][40][41] or as observed here upon heating and cooling during the ferromagnetic-paramagnetic-ferromagnetic phase transitions, Figure 3b The areas which show no magnetic contrast are often associated with the high step density of the tungsten substrate. As the terraces become narrower the Curie temperature decreases due to the finite-size effect, as discussed in detail in Ref. [6]. Apparently, the same effect causes a decrease in Tc in the samples that are grown at lower temperatures, The areas which show no magnetic contrast are often associated with the high step density of the tungsten substrate. As the terraces become narrower the Curie temperature decreases due to the finite-size effect, as discussed in detail in Ref. [6]. Apparently, the same effect causes a decrease in Tc in the samples that are grown at lower temperatures, at which the Fe layer does not grow in the step-flow way and, instead, it forms smaller grains. In the case of the Fe monolayer that does not show magnetic contrast at RT, the Au overlayers induce ferromagnetic order, as illustrated in Figure 3a (blue curve)-the magnetic order appears at about 0.55 ML Au. Depending on the preparation conditions of the Fe monolayer (substrate cleanness and temperature), the onset of magnetism was observed between 0 and about 0.6 ML Au. After the onset, the exchange asymmetry rapidly increases reaching at about 0.7 ML Au the same value as in the case of the samples with the magnetic contrast existing before the Au deposition starts. The sudden increase in the exchange asymmetry parameter suggests the change in the Curie temperature of the Fe film. Apparently, the addition of a certain number of Au atoms increases Tc above RT, resulting in the steep increase in A ex . An approximately linear increase in A ex with Au coverage of up to about 1 ML suggests its dependence on the number of Au atoms sitting on the Fe monolayer. The linear increase in A ex stops at the coverage of about 1 mL and then starts to decrease with further increasing Au coverage. Simultaneously, the Curie temperature of the Fe monolayer changes quite significantly. Without Au adsorbed on the top of Fe, it is about 327 ± 3 K, with 1 ML Au-at 335 ± 3 K and with 2 ML Au it is about 346 ± 3 K. The observed changes in the exchange asymmetry with sample temperature are shown in Figure 4. at which the Fe layer does not grow in the step-flow way and, instead, it forms smaller grains. In the case of the Fe monolayer that does not show magnetic contrast at RT, the Au overlayers induce ferromagnetic order, as illustrated in Figure 3a (blue curve)-the magnetic order appears at about 0.55 ML Au. Depending on the preparation conditions of the Fe monolayer (substrate cleanness and temperature), the onset of magnetism was observed between 0 and about 0.6 ML Au. After the onset, the exchange asymmetry rapidly increases reaching at about 0.7 ML Au the same value as in the case of the samples with the magnetic contrast existing before the Au deposition starts. The sudden increase in the exchange asymmetry parameter suggests the change in the Curie temperature of the Fe film. Apparently, the addition of a certain number of Au atoms increases Tc above RT, resulting in the steep increase in Aex. An approximately linear increase in Aex with Au coverage of up to about 1 ML suggests its dependence on the number of Au atoms sitting on the Fe monolayer. 
The linear increase in Aex stops at the coverage of about 1 mL and then starts to decrease with further increasing Au coverage. Simultaneously, the Curie temperature of the Fe monolayer changes quite significantly. Without Au adsorbed on the top of Fe, it is about 327 K ± 3 K, with 1 ML Au-at 335 K ± 3 K and with 2 ML Au it is about 346 K ± 3 K. The observed changes in the exchange asymmetry with sample temperature are shown in Figure 4. Results of Calculations We modeled the Aun/Fe1/Au2/W (n = 0; 1; 2) interfaces in five different ways. Models A, B, and C include a W substrate having four W, two Au, one Fe, and n Au atomic layers in the VASP slab optimization, and for models B and C the same structures with a semiinfinite W substrate in the SKKR calculations are taken. Models D and E have no W substrate included, and they have six Au, one Fe, and n Au atomic layers in the VASP slab optimization, and the same structure with a semi-infinite Au substrate in the SKKR calculations. Details on the multilayer models are summarized as follows: Model A. includes 9:8 superstructure, where the bcc(110) W substrate is considered with a DFT-optimized lattice constant of 3.175 Å, and a (8 × 1) supercell is taken (containing 16 W atoms and 18 Au or Fe atoms in the corresponding atomic layers in the surface cell), where the supercell size is 3.175 • √2 = 4.49 Å in the W 11 0 direction and 3.175·8 = 25.40 Å in the W 001 direction (along which nine Au atoms are for eight W, which the reason for the 9:8 superstructure), as an illustration see Figure 5a,b for n = 0. Here, it is assumed that the W-Au interface changes crystallographic growth type from bcc(110) for W to close to fcc(111) for Au. The in-plane nearest-neighbor Au-Au distances of the initial configurations are as follows 2.82 Å (= 25.40/9, this value is close to the experimentally determined distance along the W [001] direction), 2.75, and 2.56 Å, slightly distorted from a perfect hexagonal fcc(111) atomic layer. Above the first Au atomic layer the structure is assumed to grow epitaxially following the slightly distorted fcc(111)-type Results of Calculations We modeled the Au n /Fe 1 /Au 2 /W (n = 0; 1; 2) interfaces in five different ways. Models A, B, and C include a W substrate having four W, two Au, one Fe, and n Au atomic layers in the VASP slab optimization, and for models B and C the same structures with a semiinfinite W substrate in the SKKR calculations are taken. Models D and E have no W substrate included, and they have six Au, one Fe, and n Au atomic layers in the VASP slab optimization, and the same structure with a semi-infinite Au substrate in the SKKR calculations. Details on the multilayer models are summarized as follows: Model A. includes 9:8 superstructure, where the bcc(110) W substrate is considered with a DFT-optimized lattice constant of 3.175 Å, and a (8 × 1) supercell is taken (containing 16 W atoms and 18 Au or Fe atoms in the corresponding atomic layers in the surface cell), where the supercell size is 3.175 × √ 2 = 4.49 Å in the W 110 direction and 3.175 × 8 = 25.40 Å in the W [001] direction (along which nine Au atoms are for eight W, which the reason for the 9:8 superstructure), as an illustration see Figure 5a,b for n = 0. Here, it is assumed that the W-Au interface changes crystallographic growth type from bcc(110) for W to close to fcc(111) for Au. 
The in-plane nearest-neighbor Au-Au distances of the initial configurations are as follows 2.82 Å (= 25.40/9, this value is close to the experimentally determined distance along the W [001] direction), 2.75, and 2.56 Å, slightly distorted from a perfect hexagonal fcc(111) atomic layer. Above the first Au atomic layer the structure is assumed to grow epitaxially following the slightly distorted fcc(111)-type structure. This largest supercell atomic model is simplified to smaller supercell models with one atom per surface unit cell in each atomic layer with the indicated crystal structure throughout the full multilayer structure. structure. This largest supercell atomic model is simplified to smaller supercell models with one atom per surface unit cell in each atomic layer with the indicated crystal structure throughout the full multilayer structure. Model D. includes fcc(111) structure, without W substrate with experimental inplane lattice constant of 2.82 Å. In this model the local geometry of the Au/Fe/Au is taken from the experiment. Model E. includes fcc(111) structure without W substrate with a DFT-optimized inplane Au lattice constant of 2.953 Å. In the following we refer to the above structural models (A-E) together with the Au capping layers (n = 0; 1; 2), for example, A0 refers to model A with no Au capping layer on Fe. Altogether, fifteen (5:A-E × 3:n = 0; 1; 2) multilayer structures were constructed, and analyzed theoretically. After geometry optimization, for the 9:8 superstructures the expected (flat) multilayer structure was obtained for model A1 only (see Figure 5c), where one of the initial inplane nearest-neighbor Au-Au distances was left unchanged at 2.82 Å (this value is close to the experimentally determined distance along the W [001] direction), and the other two Au-Au distances were both changed to 2.65 Å, still distorted from a perfect hexagonal fcc(111) atomic layer, but were more symmetric after the optimization. The obtained Fe spin moments are almost uniformly 2.84 µB and the induced spin moments of Au at Feneighboring atomic layers are smaller than 0.04 µB. The MAE z − x = −0.5 meV/Fe (x = W 11 0 , Model D. includes fcc(111) structure, without W substrate with experimental in-plane lattice constant of 2.82 Å. In this model the local geometry of the Au/Fe/Au is taken from the experiment. Model E. includes fcc(111) structure without W substrate with a DFT-optimized inplane Au lattice constant of 2.953 Å. In the following we refer to the above structural models (A-E) together with the Au capping layers (n = 0; 1; 2), for example, A0 refers to model A with no Au capping layer on Fe. Altogether, fifteen (5:A-E × 3:n = 0; 1; 2) multilayer structures were constructed, and analyzed theoretically. After geometry optimization, for the 9:8 superstructures the expected (flat) multilayer structure was obtained for model A1 only (see Figure 5c), where one of the initial in-plane nearest-neighbor Au-Au distances was left unchanged at 2.82 Å (this value is close to the experimentally determined distance along the W [001] direction), and the other two Au-Au distances were both changed to 2.65 Å, still distorted from a perfect hexagonal fcc(111) atomic layer, but were more symmetric after the optimization. The obtained Fe spin moments are almost uniformly 2.84 µ B and the induced spin moments of Au at Fe-neighboring atomic layers are smaller than 0.04 µ B . The MAE z − x = −0.5 meV/Fe (x = W 110 , z = W[110]), which means an out-of-plane magnetocrystalline anisotropy. 
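As an illustrative aside on the M²(T) procedure mentioned at the beginning of this section, the following minimal sketch shows how an ordering temperature can be read off a simulated magnetization curve. It uses a plain classical Heisenberg model with Metropolis Monte Carlo on a small square lattice rather than the relativistic atomistic spin dynamics and DFT-derived interaction tensors used here; the exchange constant, lattice size, and temperature grid are assumptions chosen only for illustration.

```python
# Minimal sketch (not the authors' method): estimate where <M^2>(T) drops for
# a classical nearest-neighbour Heisenberg ferromagnet via Metropolis Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
L = 16                 # lattice is L x L with periodic boundaries (assumed)
J = 25.0               # nearest-neighbour exchange in meV, chosen only so the
                       # crossover lands near room temperature (assumed)
KB = 0.08617           # Boltzmann constant in meV/K

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def neighbour_sum(spins, i):
    # Sum of the four nearest-neighbour spins of site i on the L x L torus.
    x, y = divmod(i, L)
    nbrs = [((x + 1) % L) * L + y, ((x - 1) % L) * L + y,
            x * L + (y + 1) % L, x * L + (y - 1) % L]
    return spins[nbrs].sum(axis=0)

def mean_m_squared(T, sweeps=800, thermalize=400):
    """Average squared magnetisation per spin at temperature T (kelvin)."""
    n = L * L
    spins = random_unit_vectors(n)
    acc, samples = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(n):
            i = rng.integers(n)
            trial = random_unit_vectors(1)[0]
            # Energy E = -J * sum over nearest-neighbour pairs of S_i . S_j
            dE = -J * np.dot(trial - spins[i], neighbour_sum(spins, i))
            if dE <= 0.0 or rng.random() < np.exp(-dE / (KB * T)):
                spins[i] = trial
        if sweep >= thermalize:
            m = spins.mean(axis=0)
            acc += float(np.dot(m, m))
            samples += 1
    return acc / samples

for T in (100, 150, 200, 250, 300, 350, 400):
    print(f"T = {T:3d} K   <M^2> = {mean_m_squared(T):.3f}")
```

Because an isotropic two-dimensional Heisenberg model has no true long-range order, the drop in ⟨M²⟩ produced by this sketch is only a finite-size crossover, so the numbers are indicative rather than quantitative.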
Interestingly, for the A0 and A2 models we found that a mixing of Fe and Au atoms is energetically favored compared to the multilayers, and the corresponding structural models are shown in Figure 5d-f. For these models, the Fe spin moments vary in the range of 2.74-2.98 µB, and the induced spin moments of Au are always smaller than 0.07 µB. The MAE was not calculated for these intermixed interfaces. In the following we focus on the smaller supercell models B-E and analyze their magnetic properties in more detail. The obtained Fe spin moments and MAE values are summarized in Table 1. The Fe spin moments vary in the range of 3.08-3.43 µB, and the induced spin moments of Au at the Fe-neighboring atomic layers are always smaller than 0.04 µB. We observe slightly enlarged Fe spin moments for model E, where the DFT-optimized in-plane lattice constant of Au (2.953 Å) was employed, which is almost 5% larger than the experimentally observed Au-Au distance of 2.79 Å. Interestingly, for n = 0 the MAE results are quite insensitive to the variation of the structural model. For n = 1, models B and C (with W substrate) result in a larger MAE than models D and E (without W substrate). For n = 2, model B shows the largest MAE in absolute value among all the considered interface models, which is clearly related to the imposed bcc(110) structure, and the fcc(111) structures (models C-E) result in the lowest MAE values. Importantly, we obtained out-of-plane magnetocrystalline anisotropy in all of the considered systems in our calculations. We cannot conclude on the total anisotropy (MAE plus other terms) based on our ab initio calculations. The difference from the experimentally determined magnetic anisotropy might stem from other contributions that are present in the real samples but not captured in our ab initio modelling. Another reason could be a mismatch in the magnetic layer relaxation in comparison to the experiment. By using the SKKR method it was shown that the easy axis can be reoriented by tuning the Fe layer relaxation in Fe/W(110) [35], clearly indicating the sensitivity of the magnetic properties to the structural model. Figures 6 and 7 show the calculated Fe-Fe isotropic interactions and the magnitude of the DM interactions, respectively, for structural models B-E. The nearest-neighbor (NN) isotropic couplings are ferromagnetic for all of the multilayer systems. The second-nearest-neighbor (2NN) isotropic couplings and beyond are considerably smaller than the NN isotropic couplings. The 2NN isotropic couplings are ferromagnetic only for model B with the imposed bcc(110) structure; for the fcc(111) geometries, i.e., models C-E, the 2NN isotropic couplings are antiferromagnetic (AFM), where n = 1 systematically has smaller AFM couplings than n = 0 or n = 2. Beyond the third nearest neighbors (3NN), the isotropic couplings decay rapidly, except for model B, where some of the farther neighbors have larger values than the 3NN isotropic couplings. Typically, the seemingly small changes in the 4-8 NN (for bcc) and 2-4 NN (for fcc) isotropic Fe-Fe interactions upon Au capping should be responsible for the observed variation of the magnetic properties of Fe [21] (spin moment and Curie temperature, see later text for explanation), although a quantitative correlation is difficult to establish from such complex data.

For the DM interactions (Figure 7), the decay with the Fe-Fe distance is less rapid, and the farther neighbors can also play an important role when forming a complex magnetic structure through the rotation of the spins. Qualitatively similar results were obtained for Co/Pt interfaces [18,19]. Using the determined spin model parameters reported in Table 1 and Figures 6 and 7, the magnetic ground state of Fe was calculated by atomistic spin dynamics at zero temperature. We found ferromagnetic ground states for all of the considered systems, except for models C2 and E1, where spin-spiral ground states were found. For the ferromagnetic states, we calculated the magnetization curves M²(T) resulting from finite-temperature atomistic spin-dynamics simulations [22]. Examples of such magnetization curves for the structural models D are shown in Figure 8. From these, the Curie temperatures can be obtained following Ref. [23]. The results for the Curie temperatures are summarized in Table 2. Overall, the calculated Curie temperatures are roughly in the range of the experimentally determined phase transition temperatures; however, the development of the transition temperatures upon Au addition reported in Figure 4 cannot be explained by the presented extensive model calculations. We observe oscillating (model B) or linearly changing (model D) Curie temperatures upon Au capping, depending on the structural model. We note that antiferromagnetic correlations (originating, e.g., from frustrated exchange interactions) in particular magnetic layers tend to reduce the Curie temperature in thicker magnetic films, as was evidenced for uncapped and W-capped Fe/W(001) multilayer systems [42], an effect which might play a role here as well. We find the best quantitative agreement of the Curie temperature values for model D, where the local geometry of the Au/Fe/Au is taken from the experiment (without W substrate). Interestingly, the determined Curie temperatures of models D0 and E0 are the same, even though there is a close to 5% difference in their considered Au fcc(111) lattice constants. In general, our theoretical results also shed light on the sensitivity of the magnetic properties to the fine details of the underlying structural models.

Table 2. Calculated Curie temperatures (Tc in K units) for the structural models B-E of Aun/Fe1/Au2/W. SSP indicates that the ground state is not ferromagnetic, but a spin spiral is found.

Conclusions

We have investigated changes in the Curie temperature and magnitude of magnetization of 1 ML Fe grown on 2 ML Au/W(110) induced by the adsorption of Au. Spin-polarized low-energy electron microscopy experiments indicated that the Curie temperature of a single layer of iron exceeded room temperature (327 K) and that it could be further increased by covering with Au atoms. In addition, capping with the gold layer initially increased the magnetic moments of the Fe atoms and, above 1 ML Au, the magnetization started to decrease. A number of applied theoretical models delivered results that qualitatively described the considered complex system, as follows: a ferromagnetic state was found for most of the structural models, also providing new physical insights; frustrated isotropic exchange interactions were identified, whereas the NN DM interaction was much smaller than its isotropic counterpart. Depending on the particular model, the Curie temperature was either close to or exceeded room temperature.
The calculated magnetic moments were in the range of 3.08 to 3.43 µB, clearly indicating the high-spin state of the Fe atoms.
Development and Verification of an Immune-Related Long Non-Coding RNA Prognostic Signature in Bladder Cancer

Background: Bladder cancer is the second most common malignant tumor of the urogenital system. This research aimed to investigate the prognostic role of immune-related long non-coding RNAs (lncRNAs) in bladder cancer. Methods: We extracted 411 bladder cancer samples from The Cancer Genome Atlas database. Single-sample gene set enrichment analysis was employed to assess the immune cell infiltration of these samples. We recognized differentially expressed lncRNAs between tumors and paracancerous tissues, and differentially expressed lncRNAs between the high and low immune cell infiltration groups. Venn diagram analysis detected the differentially expressed lncRNAs that intersected the above groups. LncRNAs with prognostic significance were identified by regression analysis and survival analysis. Multivariate Cox analysis was used to establish the risk score model. The nomogram was established and evaluated by receiver operating characteristic (ROC) curve analysis, concordance index (C-index) analysis, calibration chart, and decision curve analysis (DCA). Additionally, we performed gene set enrichment analysis to explore the potential functions of the screened lncRNAs in tumor pathogenesis. Results: Three hundred and twenty differentially expressed lncRNAs were recognized. We randomly divided patients into a training data set and a testing data set at a 2:1 ratio. In the training data set, 9 immune-related lncRNAs with prognostic significance were identified. The risk score model was constructed to classify patients into high- and low-risk cohorts. Patients in the low-risk cohort had better survival outcomes than those in the high-risk cohort. The nomogram was established based on indicators including age, gender, TNM stage, and risk score. The model's predictive performance was confirmed by ROC curve analysis, C-index analysis, calibration chart, and DCA. The testing data set achieved similar results. Bioinformatics analysis suggested that the 9-lncRNA signature was involved in the modulation of various immune responses, antigen processing and presentation, and the T cell receptor signaling pathway. Conclusions: The immune-related lncRNA signature has prognostic value for bladder cancer patients and may be involved in multiple immune-related aspects of cancer biology.

Background

Bladder cancer is the second most common urological malignancy in the world, with approximately 549,000 new cases and 200,000 deaths in 2018 [1]. It was reported that the prognosis of bladder cancer patients is strictly related to the immune microenvironment of tumor tissues [2]. Accumulated evidence has verified the therapeutic effect of immune checkpoint inhibitors in bladder cancer, including atezolizumab, avelumab, durvalumab, nivolumab, and pembrolizumab [3]. A recent study demonstrated that pembrolizumab could prolong the progression-free survival of patients with high RNA-based immune signature scores [4]. This suggests that immune-related prognostic indicators might be identified to improve the prognosis of bladder cancer patients and guide their treatment. Long non-coding RNAs (lncRNAs) are a group of RNAs that participate in human physiological and pathological processes by interacting with specific RNAs and proteins. In recent years, lncRNAs were discovered to participate in tumor growth and progression [5]. In bladder cancer, lncRNAs play a vital role in lymphatic metastasis, epithelial-mesenchymal transformation, proliferation, migration, and apoptosis of tumor cells [6,7].
LncRNA SOX2OT could maintain the stemness phenotype of bladder cancer stem cells and served as an adverse indicator of patients' clinical outcomes and prognosis [8]. Furthermore, exosomal lncRNA LNMAT2 in bladder cancer could stimulate tube formation and migration of lymphatic endothelial cells and intensify cancer lymphangiogenesis and lymphatic metastasis [9]. Therefore, lncRNAs, as novel biological markers, offer broad prospects for early diagnosis and prognosis prediction of bladder cancer. Studies have demonstrated that immune-related lncRNAs have a unique value in the prognosis of several cancers. Heterogeneous expression of lncRNAs was identified among different immune-infiltrating groups in muscle-invasive bladder cancer [10]. The potential value of immune-related lncRNAs as prognostic indicators has been validated in several cancers. Shen et al. recognized 11 immune-related lncRNAs as prognostic markers for breast cancer, and the 11-lncRNA signature was related to the infiltration of immune cell subtypes [11]. Li et al. screened 7 immune-related lncRNAs in low-grade glioma and confirmed that these lncRNAs have prognostic value in patients [12]. However, immune-related lncRNAs in bladder cancer have not been revealed before. In this study, we analyzed the lncRNA data set and corresponding clinical information from The Cancer Genome Atlas (TCGA) and screened for immune-related lncRNAs by single-sample gene set enrichment analysis (ssGSEA). We established a prognostic model based on these lncRNAs and explored their potential biological functions in bladder cancer.

Materials and Methods

Bladder cancer sample sources and grouping. Gene expression data (RNA-Seq), lncRNA sequencing data, and corresponding clinical data of bladder cancer were downloaded from the TCGA database (https://portal.gdc.cancer.gov). Following published research [13], 29 immune cell data sets were applied to evaluate the infiltration level of immune cells through the ssGSEA method. After that, patients were classified into the high and low immune cell infiltration groups using the hclust package. The stromal score, immune score, and tumor purity score were calculated by the ESTIMATE algorithm to verify the effectiveness of the ssGSEA groupings [14]. In addition, we assessed the difference between the two groups by analyzing the expression of human leukocyte antigen (HLA) genes. The CIBERSORT algorithm was employed to determine the infiltration of various immune cells in the tumor samples and to verify the potency of the immune groupings again [15].

Screening of immune-related lncRNAs in bladder cancer. We set |log2FC| > 0.5 and p < 0.05 as the standard to recognize the differentially expressed lncRNAs between the high and low immune cell infiltration groups with the edgeR package. Differentially expressed lncRNAs between bladder cancer and paracancerous tissue were also identified by the same method. Venn diagram analysis was used to screen out immune-related lncRNAs in bladder cancer from the above two sets.

Construction of the risk score model based on immune-related lncRNAs. In the training data set, univariate Cox regression was performed on the immune-related lncRNAs to identify 38 prognosis-associated lncRNAs (Fig. 4a). LASSO regression analysis further screened 9 crucial lncRNAs (Fig. 4b, c). Survival analyses of the immune-related lncRNAs revealed that 9 lncRNAs were significantly related to OS, including AC126773. The multivariate Cox coefficients of these lncRNAs (Table 1) were used to establish the risk score model: Risk score = Σi (coefficient of lncRNAi × expression of lncRNAi).
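As a schematic illustration (not the authors' code), the risk-score construction and the median split used in the next step can be sketched in Python as follows; the expression matrix, the nine lncRNA names, the Cox coefficients, and the survival times are synthetic placeholders, whereas in the study they come from the TCGA data and the fitted multivariate Cox model.

```python
# Hedged sketch: risk score = sum(coefficient_i * expression_i), median split,
# and Kaplan-Meier comparison of the two risk groups.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n_patients = 411
lncrnas = [f"lncRNA_{i}" for i in range(1, 10)]                       # hypothetical identifiers
coef = pd.Series(rng.normal(0.0, 0.4, len(lncrnas)), index=lncrnas)   # placeholder Cox coefficients

expr = pd.DataFrame(rng.normal(size=(n_patients, len(lncrnas))), columns=lncrnas)
time = pd.Series(rng.exponential(scale=36.0, size=n_patients))        # follow-up in months (synthetic)
event = pd.Series(rng.integers(0, 2, size=n_patients))                # 1 = death observed (synthetic)

# Risk score and median cut-off, as described in the text.
risk = expr.mul(coef, axis=1).sum(axis=1)
high = risk > risk.median()

kmf_high, kmf_low = KaplanMeierFitter(), KaplanMeierFitter()
kmf_high.fit(time[high], event_observed=event[high], label="high risk")
kmf_low.fit(time[~high], event_observed=event[~high], label="low risk")

res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print("median survival (high risk):", kmf_high.median_survival_time_)
print("median survival (low risk):", kmf_low.median_survival_time_)
print("log-rank p-value:", res.p_value)
```

With real coefficients and expression values, the same median split reproduces the high- and low-risk grouping whose survival curves are compared below.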
We set the median risk score as the cutoff and divided the 411 patients into high-risk and low-risk groups. The Kaplan-Meier curve disclosed that OS in the low-risk group was significantly better than that in the high-risk group (p = 7.542e-05) (Fig. 6a). The risk curve and scatter plot indicated that the risk coefficient and mortality of patients in the high-risk group were higher than those in the low-risk group (Fig. 6b, c). The heat map exhibited the expression profiles of the 9-lncRNA signature in the high-risk and low-risk groups (Fig. 6d). The correlations of B cells, CD4+ T cells, CD8+ T cells, dendritic cells, macrophages, and neutrophils with the risk score were plotted in Fig. 7 (Fig. 7a-f for the training data set and Fig. 7g-i for the testing data set). Similar results were obtained using the same method on the testing data set (Fig. 6e-h).

Table 1. The prognostic significance of the 9-lncRNA signature.

We evaluated the prognostic significance of the risk score and clinical variables such as age, gender, and TNM stage by univariate and multivariate Cox regression analyses. The nomogram was established according to the results of the multivariate Cox regression to predict each patient's 3- and 5-year OS. We conducted ROC curve analysis, the concordance index (C-index) method, the calibration curve method, and decision curve analysis (DCA) to assess the model's accuracy. Finally, the testing data set was used to evaluate the above results.

Gene Set Enrichment Analysis

We performed GO enrichment analysis and KEGG pathway analysis on the differentially expressed genes between the high-risk and low-risk groups. GO enrichment analysis indicated that the genes were enriched in the ephrin receptor signaling pathway, the epidermal growth factor receptor (EGFR) signaling pathway, the ERBB signaling pathway, mRNA splice site selection, DNA ADP-ribosyltransferase activity, and T cell selection (Fig. 10a). KEGG pathway analysis showed that these genes were involved in amino sugar and nucleotide sugar metabolism, antigen processing and presentation, extracellular matrix (ECM) receptor interaction, focal adhesion, primary immunodeficiency, and the T cell receptor signaling pathway (Fig. 10b). These findings may help researchers further explore the mechanisms by which immune-related lncRNAs affect the pathogenesis of bladder cancer.

Construction and verification of bladder cancer groupings

The flowchart of our research is plotted in Fig. 1. Data for 411 bladder cancer tissues and 19 paracancerous tissues were obtained from the TCGA database. The transcriptome data of the bladder cancer samples were analyzed with the ssGSEA method to assess the immune cell infiltration level. An unsupervised hierarchical clustering algorithm was employed to divide patients into the high immune cell infiltration group (n = 85) and the low immune cell infiltration group (n = 326) (Fig. 2a). The ESTIMATE algorithm was used to calculate the ESTIMATE score, immune score, stromal score, and tumor purity of all samples. Compared to the low immune cell infiltration group, the high immune cell infiltration group presented a higher ESTIMATE score, higher immune score, higher stromal score, and lower tumor purity (p < 0.001) (Fig. 2b-e). The expression of HLA family genes in the high immune cell infiltration group was higher than that in the low immune cell infiltration group (p < 0.001) (Fig. 2f). In addition, the CIBERSORT method revealed that the high immune cell infiltration group had a higher density of immune cells (Fig. 2g).
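For orientation, the general logic of the grouping step can be illustrated with a much-simplified stand-in for ssGSEA (a normalized-rank score per immune gene set) followed by hierarchical clustering. The gene sets and expression values below are synthetic placeholders; the study itself used the published 29 immune cell gene sets together with the hclust, ESTIMATE, and CIBERSORT tooling described above.

```python
# Hedged sketch: crude per-sample enrichment scores plus hierarchical clustering
# into two groups, mirroring the high/low immune cell infiltration split.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
genes = [f"gene_{i}" for i in range(2000)]
samples = [f"sample_{i}" for i in range(411)]
expr = pd.DataFrame(rng.normal(size=(len(genes), len(samples))), index=genes, columns=samples)

# Hypothetical immune signatures standing in for the 29 published gene sets.
immune_sets = {"T_cells": genes[:50], "B_cells": genes[50:100], "NK_cells": genes[100:150]}

# Rank genes within each sample and average the normalized ranks of each set.
ranks = expr.rank(axis=0) / len(genes)
scores = pd.DataFrame({name: ranks.loc[members].mean(axis=0)
                       for name, members in immune_sets.items()})

# Ward clustering of the per-sample score profiles, cut into two clusters.
Z = linkage(scores.values, method="ward")
groups = fcluster(Z, t=2, criterion="maxclust")
print(pd.Series(groups).value_counts())
```

A proper analysis would replace the rank-average score with ssGSEA or GSVA and then validate the split with ESTIMATE scores, HLA expression, and CIBERSORT fractions, as reported in the paper.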
Overall, our results indicated that the bladder cancer grouping was feasible.

Identification of Immune-Related lncRNAs

We recognized 2067 differentially expressed lncRNAs between tumors and paracancerous tissues (Fig. 3a) and 1076 differentially expressed lncRNAs between the high and low immune cell infiltration groups (Fig. 3b). The Venn diagram analysis detected 320 differentially expressed lncRNAs that intersected the above groups (Fig. 3c). Taken together, we screened 320 immune-related lncRNAs in bladder cancer.

Establishment and Evaluation of the Prognostic Model

Univariate Cox regression showed that the risk score and clinical indicators including age, gender, and TNM stage were firmly related to OS (Fig. 8a). We further conducted the multivariate Cox analysis and found that the 9-lncRNA signature was an independent prognostic factor for bladder cancer (p < 0.001) (Fig. 8b). ROC curve analysis validated the predictive performance of the signature (Fig. 8c). We then established a nomogram including age, gender, TNM stage, and risk score (Fig. 9a). The areas under the curve (AUCs) for 3-year and 5-year OS predicted by the model were 0.784 and 0.790, respectively (Fig. 9b). The C-index of the nomogram was 0.751. The calibration curves and DCAs of the prognostic model showed that the model had an excellent predictive effect (Fig. 9c-f). We acquired similar results using the same method on the testing data set (Fig. 8d-f, 9g-l).

Discussion

Bladder cancer is characterized by a high recurrence rate and poor prognosis [1]. Accurately predicting the prognosis of bladder cancer patients is of great importance in guiding their treatment. Transcriptome sequencing results have been widely used in establishing prognostic models for tumor patients. Prognostic models based on immune-related genes have been developed and proved to have excellent predictive efficacy in bladder cancer patients [16,17]. Using lncRNAs to construct a prognostic model may be an important supplement to the prognosis prediction of bladder cancer. Targeted therapy directed at immune checkpoints can inhibit the immune escape of tumor cells and eliminate them by activating the immune system [18]. Many immune checkpoint blockers have achieved gratifying therapeutic effects in bladder cancer patients. Accumulating studies have shown that lncRNAs play an essential role in the tumor immune microenvironment. LncRNA NKILA could promote activation-induced cell death of T cells and thereby facilitate tumor immune escape [19]. LncRNA SNHG1 could regulate the differentiation of Treg cells by targeting miR-448 and thus affected the immune escape of breast cancer [20]. It was reported that pre-existing immune cell infiltration in tumor tissue was a crucial factor determining the treatment response to immune checkpoint inhibitors [21]. The above studies indicated that these lncRNAs might have prognostic value in cancer patients. Zhou et al. identified 6 immune-related lncRNAs in glioblastoma and confirmed that these lncRNAs had prognostic value in glioblastoma patients [22]. However, the prognostic value of immune-related lncRNAs in bladder cancer has not been studied before. In this research, we analyzed the lncRNA data set from the TCGA database and screened 320 immune-related lncRNAs. Nine immune-related lncRNAs with prognostic significance were ultimately identified. Multivariate Cox analysis was used to construct the risk score model. We found that patients in the low-risk group had longer OS than those in the high-risk group.
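The multivariate Cox model and the discrimination metrics reported above (C-index and time-specific AUC) can be illustrated with the following hedged sketch; the clinical covariates are synthetic placeholders rather than TCGA data, and a proper time-dependent ROC analysis would additionally account for censoring.

```python
# Hedged sketch: multivariate Cox regression with clinical covariates plus the
# risk score, followed by a concordance index and an approximate 3-year AUC.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 411
df = pd.DataFrame({
    "time": rng.exponential(scale=36.0, size=n),   # follow-up in months (synthetic)
    "event": rng.integers(0, 2, size=n),           # 1 = death observed (synthetic)
    "age": rng.normal(68, 10, size=n),
    "male": rng.integers(0, 2, size=n),
    "stage": rng.integers(1, 5, size=n),           # TNM stage coded 1-4 (assumed coding)
    "risk_score": rng.normal(size=n),              # lncRNA signature score (placeholder)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])

# Concordance index on the same data (optimistic; the study evaluates a
# held-out testing set as well).
c_index = concordance_index(df["time"], -cph.predict_partial_hazard(df), df["event"])
print("C-index:", round(c_index, 3))

# Crude 3-year "AUC": death within 36 months as the positive label, partial
# hazard as the score; censored patients followed < 36 months are excluded.
known = (df["time"] >= 36) | (df["event"] == 1)
label = ((df["time"] < 36) & (df["event"] == 1)).astype(int)
auc = roc_auc_score(label[known], cph.predict_partial_hazard(df)[known])
print("approximate 3-year AUC:", round(auc, 3))
```

The nomogram, calibration curves, and decision curve analysis discussed next build on exactly this kind of fitted multivariate model.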
Subsequently, we established a nomogram including age, gender, TNM stage, and risk score. The ROC curve analysis, C-index, calibration curves, and DCA confirmed the model's predictive power. Compared to models based on other sequencing data, the prognostic model constructed from immune-related lncRNAs presented better efficacy according to the ROC curve method [23-26]. We then performed GO enrichment analysis and KEGG pathway analysis to explore the potential functions of the 9-lncRNA signature in bladder cancer. The results showed that these lncRNAs were involved in various immune responses, antigen processing and presentation, the T cell receptor signaling pathway, the epidermal growth factor receptor signaling pathway, the ERBB signaling pathway, ECM receptor interaction, focal adhesion, and primary immunodeficiency. Epidermal growth factor was reported to activate the androgen receptor and increase the expression of TRIP13 to promote bladder cancer progression [27,28]. Notably, ECM modification can not only promote the escape of tumor cells but also help generate and maintain the cancer stem cell niche [29]. Moreover, high infiltration of memory activated CD4+ T cell subsets was associated with prolonged OS and a reduced risk of tumor recurrence in bladder cancer [30]. Chobrutskiy et al. demonstrated that a lower CDR3 region isoelectric point in the T cell receptor was associated with better survival outcomes in bladder cancer [31]. In brief, our data suggested that the 9-lncRNA signature modulates the development and progression of bladder cancer in a variety of ways. There are some limitations in our research. It is a retrospective study whose data were obtained from the TCGA database, which lacked information such as treatment and recurrence records. Further in vivo or in vitro experiments and prospective clinical studies are needed to validate our conclusions.

Conclusion

In summary, the present study identified 9 immune-related lncRNAs, and the 9-lncRNA signature possessed prognostic value for bladder cancer patients. Bioinformatics analysis suggested that immune-related lncRNAs may regulate tumor pathogenesis through modulation of various immune responses, antigen processing and presentation, and the T cell receptor signaling pathway. Our research proposes a potential model and biomarkers for immune-related work and personalized treatment in bladder cancer patients.

Data availability: All data used in this study were acquired from the TCGA database.
Ethics approval and consent to participate: Not applicable.
Consent for publication: Not applicable.
Conflicts of interest: The authors declare no conflicts of interest.

Figure 2. Establishment and verification of the bladder cancer grouping. (a) The heatmap shows the unsupervised clustering of the 29 immune cell types in the high immune cell infiltration group (Immunity_H) and the low immune cell infiltration group (Immunity_L). Parameters including the tumor purity, ESTIMATE score, immune score, and stromal score are also displayed. (b-e) The box plots reveal the statistical differences in tumor purity, ESTIMATE score, immune score, and stromal score between the high and low immune cell infiltration groups. (f) The expression of HLA family genes in the high immune cell infiltration group was higher than that in the low immune cell infiltration group. (g) The CIBERSORT method demonstrated that a higher density of immune cells was found in the high immune cell infiltration group compared to the low immune cell infiltration group. *p < 0.05; **p < 0.01; ***p < 0.001.
Figure 6. Construction of the risk score model based on immune-related lncRNAs. (a, e) Kaplan-Meier analysis evinced that patients in the high-risk group suffered worse OS compared to the low-risk group in the training and testing data sets, respectively. (b, f) Overviews of the survival time for each patient in the training and testing data sets, respectively. (c, g) The distributions of the risk score for each patient in the training and testing data sets, respectively. (d, h) Heatmaps of the expression profiles of the 9-lncRNA signature between the low-risk and high-risk groups in the training and testing data sets, respectively. Warm colors represent high expression, and cold colors represent low expression.

Figure 8. The prognostic value of the risk score and clinical variables. (a, d) Univariate Cox analysis showed that the risk score and clinical variables including age, gender, and TNM stage were significantly related to OS in the training and testing data sets, respectively. (b, e) Multivariate Cox analysis manifested that the 9-lncRNA signature was an independent prognostic indicator for bladder cancer in the training and testing data sets, respectively. (c, f) ROC curve analysis of the 9-lncRNA signature demonstrated that the AUCs in the training data set and in the testing data set were 0.727 and 0.752, respectively.

Figure 10. Gene set enrichment analysis of the differentially expressed genes between the high-risk and low-risk groups. (a) GO enrichment analysis indicated that the genes were enriched in the ephrin receptor signaling pathway, the EGFR signaling pathway, the ERBB signaling pathway, mRNA splice site selection, DNA ADP-ribosyltransferase activity, and T cell selection. (b) KEGG pathway analysis showed that these genes were involved in amino sugar and nucleotide sugar metabolism, antigen processing and presentation, ECM receptor interaction, focal adhesion, primary immunodeficiency, and the T cell receptor signaling pathway.
Visions of Illness, Disease, and Sickness in Mobile Health Applications Popular media and public health care discourses describe an increasing number of mobile health technologies. These applications tend to be presented as a means of achieving patient empowerment, patient-centered care, and cost-reduction in public health care. Few of these accounts examine the health perspectives informing these technologies or the practices of the users of mobile health applications and the kind of data they collect. This article proposes a critical approach to analyzing digital health technologies based on different visions of disease, namely disease, illness, and sickness. The proposed analytical classification system is applied to a set of “mobile health solutions” presented by the Norwegian Technology Council and juxtaposed with the reported use and non-use of several mobile health applications among young patients with Inflammatory Bowel Disease (IBD). The discussion shows how visions on health and disease can affect a patient’s embodied experiences of a physical condition, and, secondly, illustrates how the particular vision inscribed in a mobile health technology can be negotiated to include the patient’s vision. Introduction In the past couple of decades, health care sectors worldwide have experienced an increasing number of demands for more advanced treatment and care, earlier discovery of diseases and chronic conditions, and the treatment of a growing number of patients [1]. At the same time, self-care and patient empowerment have emerged as important phenomena in the context of health and disease management and for supporting the work of health care providers. Within these two related concepts, the patients become active participants who engage in self-care through assuming a central role in the "action-taking" in their health and health care [2]. Coupled with the advancements in and the spread of digital technologies, digital health technologies [3] are being designed and developed in the hope of providing better care [4]. Mobile health technologies are argued to "translate everyday processes into information" [5] (p. 80), including physical activity and bodily dys/functions. Because of people's tendency to carry their mobile phones everywhere and close to their bodies [6], mobile health applications offer new and interesting possibilities for health promotion [3], self-diagnosing [7], as well as promoting patient empowerment and reducing health care costs [3,8]. These technologies have caught the attention of health care sector policymakers, politicians, and the media, who tend to present digital health technologies in largely uncritical ways [9]. In Norway, this discourse portrays mobile health applications as sources of empowerment, health management, and a more efficient and patient-centered health care [10], without demonstrating the actual use of these technologies by patients. In August 2016, the Norwegian Technology Council (Teknologirådet; NTC), which serves as an independent advisory board to the Norwegian government, contributed to this discourse on mobile health applications by publishing a list of the so-called "Mobile Health Solutions" [11]. This list was part of their ongoing "Mobile Health" project. 
The intention was to give an overview of existing mobile health applications that can perform health measurements and self-tests, as well as to provide information on whether these solutions were approved by the American Food and Drug Administration or are provided with the CE mark, meaning that they adhere to current EU regulations. In a supplementary leaflet, the NTC discusses medical diagnostics as expensive and dependent on laboratories, hospitals, and medical offices. An alternative to these costly examinations are mobile self-tests that can be purchased without health care services functioning as an intermediary, which, according to the NTC, may relieve the health care system of expenditures and contribute to the earlier discovery of diseases. In other words, the technologies were presented in a medical as well as economic discourse. The NTC's list of mobile health solutions has inspired several newspaper articles in Norway and, as a result of the NTC's status as a national advisory board, this list is used in governmental policy-making. Recent governmental proposals present the new health technologies as what Lupton [9] refers to as "magic bullets", by stating that "new technologies give better possibilities of mastering one's own life and health" (translated from Norwegian, [12]). The Norwegian media also frequently refer to mobile health applications as "solutions" [8] and thus confirm Lupton's [9] observation of largely uncritical accounts by the popular media and in public health publications. There is a growing critical discourse surrounding digital health technologies, which argues that simple accounts, such as the "Mobile Health Solutions" list presented by the NTC, strengthen the ongoing processes of increased surveillance and digitalization of health care. Furthermore, such accounts may support technological and social divides, thus promoting specific political agendas that are increasingly persuading patients into tracking and monitoring the body [13,14]. According to Hofmann [4], digital health technology alters the responsibility of humans and institutions. This shift in the locus of responsibility can be explained as part of the patient-centered health care movement and empowerment of patients. In a more critical perspective, this shift can be understood as efforts to promote a form of citizenship in which citizen behave productively and in the interest of the state by voluntarily engaging in self-surveillance to improve their health and reduce health care expenditure [15]. The differences in understanding digital health technologies, between actors such as the NTC and actors in the field of critical research, can be understood in terms of different visions of the potential of technology and, more importantly, what constitutes a disease and what is a meaningful way to track a body. The NTC's account contains several visions: the cost vision, which presents disease as a governmental and social cost and technology can mediate a cost-reduction; the commercial vision, which sees profit opportunities for Norwegian tech companies; and the medical vision, which presents self-tracking as an opportunity for bettering research and treatment of diseases. These visions also contribute to the creation of technology needs and design requirements that further reinforce the dominant discourse on digital health technologies. The "Mobile Health Solutions" list is a product of the vision and interests of the NTC. 
Although the technologies on the NTC list have not been developed or designed by the NTC, the list mediates visions and epistemologies of what it means to track the body, what constitutes a disease, and how a disease is enacted. Technology establishes how we act toward a disease by contributing to the discovery, diagnosis, and treatment of diseases [16], but tends to "tell partial stories of much larger lives" [5] (p. 80), meaning that the data collected by the technology might not take into the account the fluid lines dividing health and disease in a patient's life. Feeding this information back to the users constitutes moral actions and may have implications for the proclaimed empowerment of patients, who have an embodied and often conflicting vision of their body and disease. In this paper, we will focus on visions of health and disease mediated by technologies and ask: how can patients negotiate the visions mediated by mobile health technologies to become more empowered knowers of their body? Aim of the Article In this article, we will focus on the tension resulting from the different visions mediated, inscribed in, re-produced, and promoted by digital health technologies. We understand visions as generative, often future-oriented, constructs that inspire, shape, and mobilize a wide variety of activities, such as technology design, research, discourses, funding, policy, and perceptions. With our particular focus, we aim to contribute to the growing body of critical inquiries into digital health technologies by proposing an analytical classification system drawn from social studies and medical philosophy, which is based on three perspectives on disease, namely disease, illness, and sickness. The three perspectives represent often contested but interlinked visions and epistemologies or ways of knowing in health care policy and practice, including the issue of who can be a knower in health care. Patient empowerment and patient-centered health care envision a role for the patient as an active participant, which entails the role of a knower in his or her health and care. Exploring the visions shaping the design, promotion, and use of mobile health applications enables us to understand how patients negotiate these visions. We do this by applying our classification system to two data sets. The first set consists of the NTC's list of "Mobile Health Solutions". This dataset represents the vision of the NTC as well as the visions of the design and development team of each of the technologies on the list. The second dataset consists of empirical material on the use of mobile health applications. This data is extracted from interviews with 15 young patients diagnosed with Inflammatory Bowel Disease (IBD). The patients took part in a larger project on the transition from child-centered to adult-oriented health care services. We present the two sets of data to draw attention to the disconnections between the various visions. The remainder of this paper is structured as follows: Firstly, we will present the background concepts and literature that have informed the research approach of this article. In this section, we will focus on the concept of vision and the three perspectives central to our analysis-disease, illness, and sickness. In Section 3, we will explain our research approach -the classification system, and the datasets. Our data includes testimonies on the use of mobile health applications by young patients, highlighting the ways in which patients use and appropriate various applications. 
We have split our analysis and discussion into two separate sections. Section 4 presents the analysis and discussion of the overall functionality of the analyzed mobile health applications based on the list by the NTC and the applications reported by the young patients. Section 5 presents an analysis and discussion based on the use and appropriation of mobile health applications by the young patients, followed by a discussion on patient empowerment and the design of mobile health technologies. Lastly, we describe our suggestions for future work on the design of mobile health applications and present our concluding remarks in Section 6. Background Central to mobile health applications is the tracking of bodily functions and physical activity, commonly referred to as self-tracking or quantified self. Self-tracking activities are linked to the empowerment of patients through the promise of offering new knowledge about the body, which might lead to new practices, bodily changes, and better health outcomes [3,14,17,18]. In the context of institutionalized health care, knowledge of the body and the disease has always been fundamental to diagnosis, treatment, and care. To prescribe the appropriate treatment or care, a disease must be localized and quantified [19]. Inherent in the individual meetings between patients and health care institutions, or the process of diagnosing taking place between a doctor or a patient, is the distinction between knowing subjects, the medical practitioners, and objects known, the patients [19]. Mol [19] (p. 27) describes this meeting as an enactment of the medical gaze: "[w]hen doctor and patient act together in the consultation room, they jointly give shape to the reality of the patient's hurting legs". This enactment is by no means homogeneous, as this enactment depends on, and is entwined with, various technologies to "render the body more visible" [10,19,20]. Due to the entwinement of medical practice with medical technology, e.g., stethoscopes or x-ray machines, medicine can be understood as technoscience [21,22]. Medical technologies extend the practitioner's vision beyond the skin of the patient to quantify and to allow for the localization of pathologies. This technoscientific process, performed by the medical practitioners, is negotiated with the patients and their experiences, and results in knowledge of the patient's body that can be treated and cared for with current medical and pharmaceutical knowledge. Due to the diffusion of digital health technologies, the medical gaze is now re-negotiated, explored, and shared by other actors, and makes aspects of the patients' lives that were, until now, out of reach of the health care institutions, accessible to them [18]. As a result, both health care professionals and patients participate in producing descriptions of the body that can be redistributed, technologized, and capitalized [9]. The data doubles, the digital data collected about the individual and portraying her in a certain way [5,23], unfold in the relationship between users and their technologies and add value to aspects of the body and to activities that before were deemed without value [14]. Much like the medical instruments, self-tracking and self-measurement technologies are perceived as offering insights that are objective and factual, rather than embodied and situated [13,15]. 
Perhaps this reflects the dominant understanding of science among the general population in Western societies, which is based on "disembodied scientific objectivity", which Donna Haraway describes as "visions from nowhere" [21]. Haraway [24] argues that a vision is always from somewhere; it is situated in epistemologies, knowledge, policies, practices, and discourses. A vision also requires instruments of vision [21]. Thus, digital health technologies mediate the vision of medical practitioners as well as the situated visions of people who designed the technology. One needs to proceed cautiously within the terminology of vision and mediation to not confuse technologies with tools to visualize or to extend our bodies. Verbeek [25] (p. 393) explains that "human beings and their world are products of mediation, not a starting point". Although technologies can be understood as tools for "recrafting our bodies", they should also be seen as means to enforce meanings [24] (p. 164). It is the designers who translate "the world into a problem of coding" [24] and inscribe a specific vision of the world in new technological artifacts [26]. Consequently, whose vision is inscribed into digital health technologies becomes a question of who can be a knower of a body and how this knowledge can be obtained. Can one be both an object known and a knowing subject within the technology-mediated visions of health and disease? We looked for a place to start addressing this question in the writings of Annemarie Mol and Donna Haraway. Mol [19] introduced the concept of enactment of the medical gaze, which enables the understanding that the medical gaze is as much a situated vision as the vision of the patient. The patient and the doctor can be understood as having their particular visions, epistemologies, and interests and enacted them in their meetings. This coalition between Mol's enactment and Haraway's situated visions allows us to illuminate the various visions and assumptions about patient empowerment: who can be the knower of a body that is mediated in and through technologies. Visions of Disease There are varying visions on disease, and this has significant implications, not only for medical science and practice, but also for the way in which we structure our societies, what research we fund, and who can be called a patient. Because the productivity of societies is argued to be structured around health and disease, what constitutes a disease is highly political. Among others, Mol [19] explains how medical practitioners contribute to the maintenance of the social order in modern societies. If a person is ill, she must seek medical assistance. The doctor then either sanctions the patient's behavior or "sends [her] back to work" [19] (p. 57) and hence exert social control. Yet, the precise definition of the concept of disease is lacking and has been the subject of discussion in medical philosophy. In a discussion of the slipperiness of the disease concept, Hofmann [27] explains that there are three relating and overlapping perspectives for analyzing disease: illness, disease, and sickness ( Figure 1). The terms are defined by Hofmann [27] as follows: Illness, or "being sick", is a term meant to describe the (negative and) subjective experience characterized by pain, suffering, symptoms, and syndromes. Disease, or "having a disease", implies findings and classifications executed by medical professionals and is characterized by signs and markers. 
Sickness, or the "sick role", refers to being perceived as sick in the social context and is characterized by social behavior. The sick role has been discussed extensively in the context of functionalist theory, e.g., [28,29], and in terms of the implications for the patient-doctor relationship [19,30].
Figure 1. The three related and overlapping perspectives on disease: illness, disease, and sickness.
Although the three perspectives differ epistemologically and each has its knowers and ways of knowing the body and its disease, the perspectives are related and dependent on each other. Räikkä [31] argues that "[a]lthough the medical concept and the social concept may be closely related in many ways, neither is reducible to the other. The fact that a condition qualifies as a disease in the medical sense does not by itself entail that it qualifies as a disease in the social sense, or vice-versa" [31] (p. 359). Nevertheless, each perspective is favored by the group whose practice and understanding they represent and support. Health care professionals favor the disease perspective as it is most instrumental to them and is grounded in their discipline, epistemology, and practice. In Foucault's words, knowledge of disease is "the doctor's compass" [20], and the signs and markers that constitute this perspective allow for correct diagnosis and administration of treatment. The illness perspective positions patients as the knowers of their pain and suffering, but is also gaining importance in health care due to its link to well-being [32,33]. The renegotiation of the illness and disease perspectives in patient-doctor relationships aims toward the legitimization of the sickness of the patients, which may be important for the patient's identity. The importance of sickness for the identity of patients has been explored, by among others Pols [34], in a study of people with Chronic Obstructive Pulmonary Disease (COPD). Pols showed that patients needed a form of presence or visibility of their disease to create a social position within which they could have productive lives. In the study, the technologies aiding the patients, mobility scooters, helped to make the patient's invisible disease visible to others and validated their sickness role.
Research Approach
Our focus on visions and the enactment of these visions guided our research approach. To elucidate the various visions inscribed in and mediated by mobile health applications, we developed a classification system based on Hofmann's three perspectives, disease, illness, and sickness [27], to visualize with which of these visions the various mobile health applications were affiliated (Table 1). According to Bowker and Star [35] (p. 10), a classification system "is a set of boxes (metaphorical or literal) into which things can be put to some kind of work-bureaucratic or knowledge production". Our system is based on our analytical framework, which maps the affinity of each mobile application with the three perspectives on disease. Affinity is a concept drawn from Haraway, who suggests the term affinity instead of identity [24]. Affinity may be helpful for understanding the relation between the disease perspectives and the data set because the technologies should not be categorized based on being identified as representing certain perspectives, but rather by having an affinity with these perspectives.
As presented in the previous section, each of the disease-perspectives represents different phenomena and units that are measured and analyzed to establish the disease, illness or sickness. These units can be both qualitative (e.g., embodied experiences of pain) and quantitative (e.g., glucose levels or body temperature). Our classification system is based on the understanding that the collection and representations of different types of data in the various functions in mobile technologies support different phenomena; e.g., a sign, such as a fever, can be measured using a thermometer and represented by a number (quantitative), or appear as a symptom, a feeling of being warmer than usual and be represented as a qualitative account of this specific experience. We divided the three perspectives further into subcategories that would aid us in analyzing and structuring the various visions inscribed in mobile health technologies. The subcategories are (i) the type of data; (ii) unit of analysis for the technology; and (iii) means for analysis that best support the applicable perspective. To exemplify, if a mobile application collects numerical values on body temperature and analyzes the data for the user to determine the right course of action, the application is categorized as a disease-affiliated technology in our classification system. Our analysis of mobile health applications was based on a deductive content analysis [36]. It is important to stress that there is space within this classification system for overlapping perspectives within each of the analyzed technologies. One perspective does not exclude another, but it may have a stronger affinity. For instance, although an application might measure markers, such as blood sugar, and analyze this data through the use of an algorithm, the data might still encourage "a relation to life events" and allow for "establishment of patient's own causalities". As such, the classification system provides a first ordering of the data, which will be presented in the following section, followed by a discussion and a second ordering of the affiliations based on the use of health applications by the young patients. Empirical Data The data for our analysis consist of two separate sets. The first set consists of the 18 applications and wearables suggested by the NTC [11]. The NTC's list is continuously updated with new technologies and consisted of 83 "mobile health solutions" at the time of data collection (02.05.2017). In our analysis, we only included the applications and wearables that are available in Norway. The second set of data consists of six mobile applications, whose use and non-use was reported by 15 young patients aged 13-25 (six male and nine female patients), diagnosed with IBD. They were interviewed as part of a larger project regarding the transition from pediatrics to adult health care services. The participants were interviewed using the Transition Cards method [37], a qualitative card sorting method that we specifically designed to address the various aspects surrounding young patients in transitioning from pediatrics to adult medicine. In addition to asking the participants to sort cards representing important people, things, skills, and feelings into categories representing the various stages of transition, we were also interested in their use of digital health technologies. The overall goal was to understand the potential of mobile health technologies in supporting them in the process of transition. 
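The classification rule described above can be read as a simple mapping from an application's type of data, unit of analysis, and means of analysis to one or more perspective affinities. The following Python sketch is purely illustrative and rests on our own assumptions: the study's analysis was a manual deductive content analysis, not an automated procedure, and every name, category label, and example application below is hypothetical rather than drawn from the data.

from dataclasses import dataclass

# Hypothetical encoding of the classification rule sketched above; the study's
# own analysis was manual deductive content analysis, not an automated one.
@dataclass
class AppProfile:
    name: str               # hypothetical application name
    data_type: str          # "quantitative" or "qualitative"
    unit_of_analysis: str   # "signs_markers", "symptoms_experience", or "social_behavior"
    means_of_analysis: str  # "algorithmic", "self_interpretation", or "social_validation"

def classify_affinity(app):
    """Return the set of perspectives the application has an affinity with.
    Affinities may overlap, mirroring the overlapping circles of Figure 1."""
    affinities = set()
    # Disease: quantifiable signs and markers, analyzed for the user (e.g., algorithmically).
    if app.unit_of_analysis == "signs_markers" and app.means_of_analysis == "algorithmic":
        affinities.add("disease")
    # Illness: subjective experience, interpreted by the patient herself.
    if app.unit_of_analysis == "symptoms_experience" or app.means_of_analysis == "self_interpretation":
        affinities.add("illness")
    # Sickness: social behavior and validation of the sick role.
    if app.unit_of_analysis == "social_behavior" or app.means_of_analysis == "social_validation":
        affinities.add("sickness")
    return affinities or {"unclassified"}

if __name__ == "__main__":
    # A thermometer-style app logging numeric values and telling the user when to
    # contact a doctor would be disease-affiliated under this rule.
    fever_app = AppProfile("FeverLog", "quantitative", "signs_markers", "algorithmic")
    # A diary app in which the user records how she feels and draws her own
    # conclusions would be illness-affiliated.
    diary_app = AppProfile("SymptomDiary", "qualitative", "symptoms_experience", "self_interpretation")
    print(fever_app.name, classify_affinity(fever_app))
    print(diary_app.name, classify_affinity(diary_app))

In this toy encoding an application can carry more than one affinity, which corresponds to the overlapping affiliations (e.g., disease/illness or disease/sickness) reported for some of the analyzed technologies.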
The patients were recruited while receiving treatment at two hospitals, the Akershus University Hospital and the Central Hospital in Vestfold. The medical staff decided whether the patients were well enough to participate and introduced them to the study. The patients received an information leaflet and a consent form, which they signed upon meeting the researcher. The research was registered and approved by the ethical board at both hospitals and by the Norwegian Social Science Data Services (NSD). The overall data from the interviews were analyzed using deductive thematic analysis [38], but the mobile health applications that the patients reported on during the interview were analyzed according to the classification system in this article. We have discussed the findings regarding the participant's technology needs and current technology use in a previous article [39]. Visions of Disease Our first analysis using our classification system is based on the list of 18 applications and wearables suggested by the NTC and six mobile applications whose use and non-use was reported by the participants in the study on health care transitions (see Tables 2 and 3). We found that 14 of the 18 mobile technologies on the NTC list have an affinity with the disease perspective. Two of the technologies can be placed within the overlap between illness and disease, and two in the overlap between the disease and sickness perspective. The list presented a strong presence of digital health tests and a focus on diagnostics, which supports the NTC's note to the ministry, in which they presents mobile technologies as means to achieve efficiency and cost-reduction in the Norwegian health care system. Three of the six health-related mobile applications selected and used by the participants (Table 3), have an affinity with the illness perspective. The IBD app was the only disease-specific application within that data set. All interviewees reported on their non-use of this app, even though the IBD app was recommended by their doctors [39]. Two of the apps had a double affinity, disease/sickness, and disease/illness. Interestingly, none of the 24 technologies could be analyzed as representing only the sickness perspective, which was perhaps due to the focus on self-management both in the list provided by the NTC and in the accounts given by the young patients. The findings from the analysis of the mobile applications used by the patients indicated that young patients had a different set of "mobile health solutions", which represents contesting visions when compared with the applications presented by the NTC. Our analysis of the data indicated that the patients valued health applications that facilitated them to become the knowers of their bodies and illness. We will discuss this in the following sections. Knowing Devices Ten out of the 18 mobile health solutions listed by the NTC required additional measuring devices. Positioned within the disease-vision, they required additional instruments in contrast to the rest of the analyzed technologies, which only used the mobile phone. These additional instruments, such as lenses, blood sugar meters, blood pressure meter, wristband, and stethoscope, could send data to the user's phone. They provide data of higher accuracy and detail by coming closer to the body, enhancing the view of the body, and even getting inside the body through breaching the skin. 
The medical practitioners, the receivers of the data sent by several of these applications, were not the only knowers of the body. This is illustrated by the example of the skin anomaly technologies: three out of the six technologies for diagnosing and tracking skin anomalies required additional lenses to provide high-detail images of the patient's skin. The technologies mediated an understanding that neither the patient's eyes nor the phone's camera was accurate enough to identify and analyze the skin anomalies correctly. The existence of additional devices might also lead to a situation in which patients view stand-alone mobile applications as limited in their ability to gain knowledge about their skin. The relatively high price of these additional lenses, (around $1400 for DermLite), indicate that the average patient will not be able to afford some of these devices. Another three devices (#3, #12, #18) were designed for patients who have diabetes. Here the novelty was the mobile application itself, which accompanied the measuring device and which could visualize the data produced by the device, as opposed to stand-alone glucose measuring devices. In her comparison between the logic of care and the logic of choice, Mol discussed blood sugar meters [40]. In the logic of choice, the patients become customers and are empowered to choose both their care and the technologies supporting this care. Similarly to other customers, patients are invited to enter the market to buy attractive products, which are marketed in positive terms and "buy as much kindness and attention as they can afford" [40]. The autonomy to choose, whether a patient wants to buy an additional device or not, is presented as a kind of empowerment. On the other hand, this empowerment is undermined when the choice includes expensive products, which are unaffordable to most patients. Since the purchase of expensive medical technologies is the task of the medical institutions, expensive technologies will reinforce the medical gaze, not only in terms of knowing the disease, but also in terms of who controls these instruments of vision. Knowledge Work The first ordering in our classification work showed that knowledge of the body and disease required work by the patient. Only two of the technologies in our sample can generate data without manual input or specific measuring actions performed by the users. These two technologies were the NTC-suggested Embrace watch, which is a wristband developed for people who have epilepsy, and the "Luftambulanse" (Air Ambulance) application discussed by one of our participants, which is a mobile application that generates user's GPS location in case of an emergency. All other mobile applications and devices required the user to insert the data through manually: (i) taking pictures of their skin, body, stool; (ii) inserting their symptoms and health data, such as weight, body temperature and also pain and its severity; (iii) puncturing their skin to insert test strips into the blood sugar meters, strapping on the blood pressure meter or placing the stethoscope against their chest; (iv) placing their fingers against the phone's camera and flashlight, and (v) writing down their symptoms. Moreover, several of the mobile technologies suggested by the NTC are dependent on another person assisting the user in producing data, e.g., in the case of UMSkinCheck, which requires someone else to take pictures of the user. 
Similarly, the functionality of PoopMD, which is aimed at diagnosing newborns and infants, depends on the parents taking pictures of their child's stool. The work of the patients and their helper-technologies, such as adjusting their bodies and translating their embodied symptoms into text or numbers, feeds data into the mobile applications for analysis and/or categorization. The majority of the results are, after algorithmic analysis, visualized in the form of graphs or marked as requiring medical consultation. In a few of the technologies, however, the patients can share their data with others as well as with their physicians.
Patient Empowerment
The majority of the analyzed applications, except for the illness-affiliated ones, view users as providers of data about their bodies, and not necessarily as knowers or decision-makers about their health. It is hard to know if the patients, and not just their bodies, are included in the vision of the technologies, as the data about their bodies are in fact disembodied. The technologies' vision is limited to the input they can process. The results of algorithmic analyses are often perceived as more "factual" and "credible" than the users' embodied and subjective experience, a perception that might be rooted in the cultural notion that "seeing" makes knowledge reliable [5]. In other words, algorithmic analyses are perceived as "better" knowers of the body than the patients themselves. These cultural beliefs, combined with techno-utopianism, result in a view of algorithms as offering a new form of logic and expertise, described by Lupton and Jutel [7] as "algorithmic authority". Regardless of the authority of algorithms and patients' pre-diagnostic work, the results produced by these mobile health applications are always presented in ways that portray qualified doctors as the final decision makers, advisors, and knowers [7]. Without facilitating a critical understanding of the results and possible methodological flaws of digital health technologies, patients may be left more anxious and with a worsened illness than prior to the information or information visualization provided by the app. Furthermore, diagnoses and results generated by the applications do not offer access to medical treatment or further laboratory tests, nor do they qualify the users for sick leave; the task of diagnosing and granting access to health care resources remains with the medical practitioners [7]. This challenges discourses that link digital health technologies to patient empowerment. In two of the mobile health applications used by the participants in our study, it was the patient, rather than the algorithm, who had the authority to decide whether she should contact a medical professional. Here, the patient can be viewed as the knower of the (her) body and the one who decides whether she would like to seek care and treatment. The role of the health professionals in these two applications is two-fold. In the case of the Air Ambulance app (#19), the user contacts the emergency department and forwards her coordinates in order to be transported to the hospital. In the case of the Emergency Medicine Handbook application (#23), which supports medical personnel in assessing the severity and treatment of patients at the emergency ward, the patient assesses whether the severity of her symptoms requires a visit to the emergency ward.
Such use of the application puts the patient in the position of the knower of her body, while the health care professionals assume the role of resource managers. These two mobile applications represent a traditional approach to health care; only the means of contacting the practitioner have changed. Apart from the three applications affiliated with the illness perspective (#19, #20, #24), all of the mobile health technologies in the sample relied on medical practitioners, either for diagnosing, treatment or evaluation. Also, six of the applications (#6, #11, #14, #15, #16, #23) were developed for health practitioners; their usage by patients could be compared with patients using analog stethoscopes without any knowledge to support the understanding of the data the technology produces. The analysis according to the classification system demonstrates a strong presence of disease-affiliated technologies on the NTC's list. Combined with the recommendations from the patients' physicians, the support surrounding the disease affiliated applications seems to lack attention to (i) the interests and epistemologies inscribed in mobile health applications (e.g., the "IBD app" was developed by a pharmaceutical company) and to (ii) the unforeseen uses of these technologies. Appropriating Medical Technology for Patient Empowerment The previous section presented a classification based on the functionality and features of the mobile health technologies. The analysis illustrates that the discourses surrounding health technologies that present these technologies as means for achieving patient empowerment, are questionable. Not only did the majority of the analyzed mobile applications lack any information that would educate their users; the patients were not acknowledged as knowers of their diseased bodies. Only the diabetes-management applications could be seen as illness-affiliated insofar that the users were already familiar with blood sugar meters. In this second ordering of the data, based on how mobile applications are used by the patients in our study, we will continue exploring the question of patient empowerment and who can be a knower. While the first ordering uncovered the affiliations with the disease perspective, we will argue in this second ordering that these affiliations are the effect of visions mediated by technology. Verbeek [41] (p. 99) explains that "technological mediations are not intrinsic qualities of technologies, but are brought about in complex interactions between designer, users, and technologies ( . . . ). [T]echnologies can be used in unforeseen ways, and therefore play unforeseen mediating roles". For example, technologies can be used to make disease invisible [42], technologies can be resisted, and technologies can be appropriated in different ways than envisioned by technology designers and health care providers. We will discuss the mediating power of technology in the following examples of the use and non-use of mobile applications. Resisting the Algorithm The participants in our study, all IBD patients, reported that their clinicians recommended them to use the "IBD app" (#21), which is an application that processes self-reported data on pain, symptoms, stool consistency, rectal bleeding, and drug administration. Based on the occurrence of the relevant symptoms, the app recommends patient/users to contact their specialist. The participants reported on having downloaded the prescribed application only to delete it after a short period. 
The young patients found the application's push notifications and reminders to log their symptoms to be annoying and intrusive during symptom-free periods. As explained by one of the participants: I guess that I forgot it because you are supposed to put in information about how you feel from day to day. And for example, if you have a period when you do not have any illness or symptoms, you will get a reminder a month later. Then it [the IBD app] asks the same questions: "How are you to day" . . . so no . . . It was too time-consuming. (boy 17a) Besides, the patients had regular appointments at the clinic. In case their condition worsened, they could contact the clinic or their general practitioner on their own: It was not something for me, it did not work for me, so I didn't bother to use it [ . . . ]. I don't know... Honestly, I get enough information about [my condition] here [at the clinic]. (boy 17b) None of the patients were still using the IBD app at the time of the interview. Within the first ordering in our classification system, the IBD app was categorized as affiliated with the disease perspective. Similar to Ruckenstein [18], we argue on the basis of our findings that applications that do not put the user in the knower-position are of no use to the patients, because, as is applicable to many self-tracking technologies [18], they fail to engage users. However, this interpretation is weakened by one of our participants' account regarding her use of the "Headache diary" (#22). The "Headache diary" application is aimed at logging the occurrence and severity of headaches and administration of painkillers to correlate her headaches to life events and activities. The patient explained that she did not use the results of the application in her consultations with her doctor, but instead was attempting to relate her headaches to life events and establish her own causalities: [I use it] only when I have migraine attacks, which is not that often. I am just trying to get an overview and create a picture, if there is any picture to create from using it. (girl 22) The "Headache diary" became an instrument to extend and structure her own understanding of her diagnosis. She could use her log to understand her headaches and adjust her lifestyle to decrease their occurrence. This case illustrates that although the description of the application clearly states that the results should be discussed with a medical professional, the user can choose to appropriate the purpose of the application to her own needs and knowledge. In this case, the patient can be argued to become empowered as she became more knowledgeable on her illness by connecting the occurrence of her symptoms with certain behaviors and events in her daily life. The patient's unforeseen use (at least to the developers) of the application allowed her to establish her own causalities and become more active in self-care of this specific area of her health. Lupton [43] argues that the lure of numbers, as indicators of health and representations of the body, is that they appear as scientifically neutral and thus invite people to think of their body through numbers [44]. Furthermore, Lupton [43] (p. 399) argues that "it becomes easier to trust the numbers over physical sensations". The findings in our study did not support this argument. 
However, the creative ways in which the participants negotiated their visions against the functionality and presented results in the mobile health applications raises concerns related to Ruckenstein's [5] argument that the data doubles generated by health technologies "can also profoundly change ways in which people reflect on themselves, others, and their daily lives" (p. 82). When used in unforeseen ways or to serve users' own agendas, especially when the goal is to show adherence to treatment, track the body for the medical practitioner, or to create causalities to change one's behavior, alternative use of mobile health applications might have unexpected and negative health outcomes for people with physical challenges. The inscribed disease vision assumes definitions of disease on the basis of signs and markers [27,45]. Consequently, the treatment will necessarily try to change the conditions until the signs and markers are back to "normal". Likewise, a visualization of patients' changing condition in disease-affiliated technologies might lead them to try to change their behavioral patterns to return to "normal". This behavior may be problematic if the technologies alter the patients' understanding of what "normal" is, or the patients alter the representations produced by the technologies until they present their values as "normal". Tracking Cycles The majority of the applications in our sample have a direct link to a diseased body. They aim to track anomalies, to diagnose, and to facilitate communication with health care practitioners. One of the mobile applications used by the participants in our study cannot be linked to disease as it tracks menstrual cycles, which indicate changes in the state of a healthy body. The inclusion of the "M. Cycles" application and its categorization as affiliated with the illness perspective might be viewed as inappropriate-after all, neither menstruation nor the menstruation cycle is a diagnosis or disease. However, many people with a female reproductive system experience a variety of symptoms and pain in relation to their monthly cycle and while not qualifying as a disease, these symptoms can constitute an illness, especially when combined with other conditions, diseases, or functions in the life of a patient. Several of the female participants reported on using the "M. Cycles" application. The persistent use of this app contrasts with the reported non-use of the "IBD app", despite both application's focus on tracking of symptoms and patient-reported signs. In addition to tracking the menstrual cycles, "M. Cycles" also allows for input on mood, cycle-related symptoms, weight and body temperature, life events, and sexual and physical activity. In the case of the "M. Cycles" application, push notifications concerning upcoming menstruation and ovulation were welcome by our participants: I love it [ . . . ] It's not certain that I will get it [menstruation] when it says I will, but then I know approximately when I will get it and can be prepared for it. (girl 17) The functioning of the M. Cycles application does not rely on the input of symptoms or data, other than marking the first day of the user's menstruation. The application's analysis does not propose contacting a physician, and the functionality could be appropriated to life phases during which users will not experience regular menstruations. 
Because menstruation cycles are rarely a concern for IBD specialists in their consultations with patients, the benefits of the persistent use of this application befalls only to the users. From a clinical perspective, logging events and tracking of symptoms related to one's menstruation, without the existence of an underlying disease affecting the reproductive system, does not make much sense, but within the patient's illness perspective, this activity might lead to knowledge about the body and aid in coping with the illness and caring for oneself. The reported use of the "M. Cycles" app by the participants indicates the importance of the illness perspective in health technologies. Re-Negotiating the Disease Vision The participants reported various apps for booking an appointment with a general practitioner or specialist, and related text-message based appointment reminders, as very useful. Such appointment managing applications build on the illness and sickness perspective and stand in contrast to applications such as the "IBD app" and self-tests in the NTC's list, which delegate the decision of seeing a doctor to algorithms and health professionals. If a person suffers from any symptoms or experiences her patientness, it is her judgment and knowledge of her body that makes her contact a professional for treatment or the legitimization of the symptoms as a disease. The disease-affiliated applications do not assess the qualitative experience of the person's illness before recommending her to contact medical professionals. In this case, it is possible that the applications interfere with the patient's need to contact health care providers, as they mark the symptoms as normal or not severe enough to see a doctor. Applications such as "Air Ambulance" do not make such calculations and facilitate immediate and eased access to help by providing the emergency workers with the patient's position. The participant using the "Air Ambulance" application shared a story of how this application helped her: It has saved me once, so I am very fond of it. I was hiking in the forest with my boyfriend and felt really sick. We went on a short hike so I did not bring anything with me but we had cell reception and used the app. They came after 20 min. It would have taken longer if it was not for the app [ . . . ]. I think it is great because it gives you safety. (girl 23) Our assumption about the limits of the disease-affiliated applications is also challenged by the account of one of our participants, who reported on using the "Emergency room handbook" application in deciding whether he should contact the hospital when experiencing symptoms. The handbook had initially been developed to aid emergency health workers in the assessment of patients' symptoms, signs, and markers and provided them with appropriate courses of action. The patient explained that he used the app to locate the combination of symptoms to see if the emergency room staff could do anything or if the combination of symptoms was not severe enough to be prioritized in the waiting line or to receive treatment. The examples of the reported use of the headache diary and the "Emergency room handbook" illustrate that although mobile health applications can be categorized as affiliated with the disease perspective, they can play unforeseen mediating roles, enabling users to re-negotiated their purpose. 
Understanding the Different Uses and Appropriations From a clinical perspective, tracking signs, markers, and symptoms that do not directly lead to a diagnosis or managing a condition is illogical. Lupton [10] argues that there exists a patronizing "we know better" attitude in representations of the relationship between doctors and patients and their mediating technologies. However, by promoting or even prescribing disease-oriented technologies, such as the "IBD app", the medical practitioners promote ways of knowing and caring for the body which might be oppressive and add additional stressors to the lives of the patients, who already need to juggle the responsibilities and demands of their multiple identities and roles. As Ballegaard et al. [44] (p. 1808) argue, "health and health care technologies are just small pieces that [the patients] try to fit into the larger puzzle of the everyday routines ( . . . ). What might make sense from a clinical perspective, might not make sense in the everyday life of a patient". Illness and patient-role are only some of the factors in the participants' lives, while disease, diagnosing, and treatment constitutes the discipline, practice, and professional identity of a medical practitioner. As a result, a tension exists between what the health care institutions, government, and companies perceive as valuable to track, measure, or log and what patients perceive as valuable. Thus, we can understand the use/non-use and appropriation of the mobile health technologies presented in this section as a result of the negotiation between the experience of disease and illness and the different roles and identities of the patients. It is often taken for granted that mobile health applications are desirable and useful to all, without acknowledging that much of the output of these technologies fail to engage people [18], much like in the case of the "IBD app". According to Lupton [10], not all patients wish to be empowered or reflexive. Furthermore, people might not want to engage in self-tracking, because this would make them even more aware of their bodies' limitations or their changing capacities, as one of the most powerful aspects of mobile health technologies is precisely to make visible something that is typically not subject to reflection. In our study, we found that young patients might also not wish to constantly be reminded of their diseases, e.g., during symptom-free periods, or simply refuse to take responsibility for their health. Patients might feel uncomfortable when technologies enter their personal space and may feel invaded or unable to switch off the devices to take time off from tracking their conditions [10]. Uncritical discourses surrounding mobile health applications do not account for how the impact of the longevity of a condition affects the actions undertaken by patient-a person who has never been ill will act differently than someone who has been suffering for years [46]. It is possible that due to the specific time in their life, the participants were entering adulthood, and the chronic nature of their condition, the interviewed patients were resisting the use of apps such as the "IBD app" more than a person who becomes diagnosed with IBD as an adult. However, even if a patient is in fact "hungry for information" about her body, she might lack access to such technologies or the cognitive or physical skills to use them, or find her condition outside of the available functionality of these apps. 
As illustrated by the accounts of participants in our study, mobile health technologies are also limited with respect to the aspects of life that are not connected to the illness or disease. As we learned through our work with young patients, patients may also feel the need to experience their quantified life as something more than just the ill body, as showcased by the M. Cycles application. The need for separating the patient's identity from the identity of a "normal" person, is especially evident among adolescent patients [47]. In the case of young patients, contrary to the patients in Pols' [34] study, who were trying to make their condition visible, adolescent patients might want to separate being a patient from being a teenager and make their illness and disease invisible [42]. To fulfill their purpose, health technologies depend on the input of relevant data, as frequently as needed to produce technologically and medically meaningful and accurate analyses and outputs. Thus, from a technological perspective, the "annoying" and intrusive reminders and notifications of the "IBD app", make sense and serve a specific purpose-to track a disease. On the other hand, Lupton [10] questions whether push notifications promote healthy behaviors or rather incite feelings of guilt or shame if the users perceive themselves as lacking self-control and self-discipline. In our case, the push-notifications from the "IBD app" reminded the patients of their diagnosis, even when they felt well and symptom-free. When their symptoms worsened, the patients might understand the visualization of their health status as the result of non-compliance with the prescribed treatment, which can have a major impact on their health and functioning. Moreover, when the condition is chronic or fatal, do patients need to see their disease worsen through quantifiable data? If changes in lifestyle cannot improve their condition, what benefits do they gain by inviting the vision of their practitioners into their private sphere? Self-Tests and Invisible Patients When the users notice and quantify every symptom and sign and somehow connect these with their diagnosis, without having this work acknowledged, they might feel neglected and overseen. This may further change the expectations and trust they might have towards health care institutions. As discussed above, not all symptoms and life events are of interest to the medical practitioners because they are not trained to approach health and well-being "in terms of everyday temporalities" [18], and because the vision and epistemology of medical practitioners is not based on holistic and individual observations, but on a multiplicity of similar cases [20]. The medical professionals are reluctant to surrender their authority to patients and people outside of the medical discipline which places the patients and their information in a difficult position [7]. The importance of receiving a diagnosis based on collected data or the embodied experience of an illness is apparent in both popular media and literature within medical philosophy that report on patients feeling relieved upon being diagnosed or having the illness legitimized. The available self-tests such as Skin Vision (#7) or the algorithmic analyses of data offered by the "IBD app" offer a confirmation and a legitimization of one's experiences, but may also lead to what Hung [48] defines as orphaned patients-a situation where there exists a misfit between the formal and medical understandings and experiences of illness. 
The patients interviewed in our study did not experience becoming orphaned patients both due to their non-use of self-diagnosing tools and due to their long disease history, which taught them to always contact their doctors as soon as they noticed anything unusual about their condition. The patient using the "Headache Diary" used it for her own purposes and did not wish to consult her doctor about the results, which also minimized the chances of feeling neglected and overseen. However, health-seekers and newly diagnosed patients might seek out mobile health applications to translate their illness into disease prior to their consultations and find the efforts to be unacknowledged and not leading to actions in their care, which again contests the proclaimed facilitation of patient empowerment by mobile health applications. Patient Empowerment and the Design of Mobile Health Applications Hofmann [4] argues that technology preserves and promotes paternalism and "[m]edical science and technology give the physician an objective account of disease, which makes it legitimate to ignore the perspective of the patient" [4] (p. 683). The large number of disease-affiliated technologies in our first ordering supports Hofmann's argument. However, the unforeseen uses of these applications show that users can become knowers when the mobile applications serve their specific needs or goals. Unfortunately, some of the mediations and appropriations may result in "risky" (according to the medical discourse) self-care practices [49]. Is it possible to include the patients' knowledge and needs and as well as to support the work of the medical practitioners? Frøisland et al. [50] suggest that it is possible to include the two perspectives productively when designing digital health technology. From a patient perspective, the technology then becomes a means to learn more about an illness and how to communicate and translate it into disease or even sickness, similar to Mol's [19] enactment of the disease perspective taking place in doctor-patient consultations. Frøisland et al. [50] argue that health care personnel "need to recognize patients' existing competencies, experiences, and preferences" to deliver health care that is better tailored to the needs of the public. It is also imperative that, in order to bring the illness perspective into the design of technology, patients become central actors in the design process of mobile health applications and are given opportunities to share their vision in the discourse around digital health technologies [51]. Although the importance of user inclusion in the design of digital health technologies gains increasing support in the design and research community, many technologies continue to be developed with minimal or no patient involvement. Patients "tend to be human factors rather than human actors in the design of digital technology for healthcare" [52] (p. 12), and their involvement is limited to participation in usability testing and as informants. Underestimating patients' ability to contribute to or participate in design results in a lack of patient visions, illness-oriented perspectives, and patients' needs in the final design. If digital health technologies, especially mobile health applications, are to provide direct means to patient empowerment, it is the patients' visions that should be prioritized and inscribed into these technologies. 
Concluding Remarks This article contributes toward the critical discourse surrounding digital health technologies by providing a classification system to identify various visions of disease inscribed in and mediated by these technologies. The proposed classification system can be helpful in structuring critical analyses of mobile health technologies, which in this study led to questioning of the proclaimed empowerment of patients through such technologies. The data from our interviews with young patients challenged the classification system by showing examples of how users resist and appropriate technologies, often in unforeseen ways that extend the purpose and vision inscribed in these technologies. Haraway [24] argues that we become empowered by figuring out, and learning to manipulate, the code that organizes society. In our examples, we illustrate this kind of empowerment by describing how patients appropriate mobile health technologies to fit their life and knowledge. For example, the patient using the "Headache diary" (#22) chose to apply her own knowledge to the data, instead of following the app's requirements that would allow her doctor to become the knower. Because of the unforeseen and alternative ways in which patients may appropriate mobile health applications, it is important to critically investigate and explore alternative use as well as non-use of health applications and self-tests; to use this knowledge to evaluate the emerging policies; and to design technologies based on the understanding that health application might have a profound impact on how people view and deal with their illness. We found in our analysis a confirmation of Verbeek's [41] argument that the mediating role of technology implies a "fundamental unpredictability". Through analyzing the visions inscribed and mediated by technologies, and juxtaposing these against the use context and reported use and non-use of mobile health technologies, we demonstrated the importance of critical investigations of technologies. We have used this insight to suggest that it is important, in the pursuit of technology-supported patient empowerment, to include patients' visions in the design of health care technologies and to enable patients to assume the position of expert knowers of their body. This suggestion is grounded in the insight that patients have many roles, contesting identities, and symptom-free periods, in which tracking of a disease might be undesirable and unwelcome. The NTC's understanding of mobile health applications as "solutions" is based on the perception that the diagnoses and treatment of diseases is still the task of medicine. This perspective challenges the patient empowerment discourse woven into the various technological and popular presentations of mobile health applications. The analysis and discussion presented in this paper suggests that empowerment may also mean that patients choose to remove or not to download mobile health applications or that they may alter the application's intended use or they may manipulate the data to present themselves as adhering patients. In the design of mobile health technologies, it is important to remember that they "create opportunities, not obligations" [40]. For health care practitioners, this would imply that they need to open up for the possibility that patients may manipulate their data and establish their own causalities. For the patients, non-use should not lead to a disappearance of patient rights and benefits.
Helicobacter pylori and epithelial mesenchymal transition in human gastric cancers: An update of the literature Gastric cancer, a multifactorial disease, is considered one of the most common malignancies worldwide. In addition to genetic and environmental risk factors, infectious agents, such as Epstein-Barr virus (EBV) and Helicobacter pylori (H.pylori) contribute to the onset and development of gastric cancer. H. pylori is a type I carcinogen that colonizes the gastric epithelium of approximately 50% of the world's population, thus increasing the risk of gastric cancer development. On the other hand, epithelial mesenchymal transition (EMT) is a fundamental process crucial to embryogenic growth, wound healing, organ fibrosis and cancer progression. Several studies associate gastric pathogen infection of the epithelium with EMT initiation, provoking cancer metastasis in the gastric mucosa through various molecular signaling pathways. Additionally, EMT is implicated in the progression and development of H. pylori-associated gastric cancer. In this review, we recapitulate recent findings elucidating the association between H. pylori infection in EMT promotion leading to gastric cancer progression and metastasis. Introduction Gastric cancer (GC) is a multifactorial disorder and the fourth most common malignancy worldwide; it is considered the second cause of mortality in cancer patients [1,2]. Previous investigations have shown that 50% of newly recognized cases are observed in developing countries [1]. East Asia (Japan and China), Central and South America, and Eastern Europe are considered high risk countries having a ratio between 10 and 30% [1]. In contrast, low risk areas (North and East Africa, Australia, North America, New Zealand, and Southern Japan) show 15-20-fold decrease rate of occurrence compared to high-risk countries [2,3]. However, variations are not limited to geographical locations but also include age and gender [1,3]. Men are two to three times more prone to developing gastric cancer than women [1,4,5]. Conventional gastric carcinoma is detected in the population aged 45 years, where 10% of the cases are considered early onset (45 years and below) [1]. According to the WHO, GC is categorised into adenocarcinoma, undifferentiated carcinoma, and signet ring-cell carcinoma [3]. On the other hand, Lauren's classification divides GC into two subtypes: diffused, and intestinal [1,3]. GC is generally attributed to either environmental or genetic factors. Environmental elements encompass dietary intake, smoking, processed meat, and alcohol consumption, accounting for ~50% of GC incidence. On the other hand, hereditary risk factors involve mostly alteration of the cadherin 1 gene (CDH1), which is associated with diffused gastric cancer cases; this genetic disorder is inherited in an autosomal dominant manner and represent 1-3% of GC cases [1,[4][5][6]. In addition to these environmental and genetic factors, infectious viral and bacterial agents are also considered as causative factors comprising 5-16% of gastric cancer cases [6]. However, since 1994 Helicobacter pylori remains a class I carcinogen of GC development according to the WHO, representing 5.5% of the global cancer burden [1,6]. Helicobacter pylori (H.pylori) is a gram-negative pathogenic bacterium that colonizes the gastric epithelium selectively. Statistically 50% of the population frequently encounters infection with this bacterium; however, most individuals remain asymptomatic [7,8]. 
Epidemiologically, higher frequencies of H. pylori infection are recorded in developing areas than in developed regions [7,8]. One Japanese study has shown evidence of a positive correlation between H. pylori and GC [7]. Moreover, several investigations illustrated that eradication of H. pylori notably decreases the occurrence of premalignant lesions, which confirms the association of H. pylori with early gastric cancer stages [7,9]. A remarkable feature of H. pylori is its ability to tolerate highly acidic environments, since it is urease positive (able to convert urea to ammonia), making the gastric mucosa a suitable environment for its colonization [7,10]. The mode of transmission of H. pylori is via oral-oral or faecal-oral routes, with an increased infection rate during childhood [7]. The variability of outcomes of this infection depends on several factors: host genetic diversity that shapes the inflammatory response, strain differences, and environmental factors that influence the interaction between host and pathogen [7,8]. Genetic studies revealed heterogeneity in H. pylori's genome, which gives rise to variant virulence factors such as CagA, cagPAI, VacA, and adhesion proteins that affect the severity of infection and the occurrence of GC [1,7,11]. In this regard, H. pylori can promote epithelial-mesenchymal transition (EMT), which is considered a hallmark of cancer invasion and metastasis in human cancers [12]. On the other hand, a recent report clearly shows that EBV can cooperate with H. pylori to enhance cancer progression via EMT [13]. EMT is a process in which epithelial cells undergo dramatic morphological changes, thereby losing their polarity and converting to fibre-like structures that resemble mesenchymal cells [12,14]. In addition, this action decreases cell-cell adhesion properties and stimulates cell motility, thus converting immobile epithelial cells into motile mesenchymal ones. This acts as a key event that promotes cell invasion and metastasis [12]. Additionally, EMT-driven de-differentiation increases mesenchymal features that initiate cancer invasion, stemness and metastasis in addition to chemoresistance [15]. It is interesting to note that such a process is reversible at the intermediate stage: cells can change phenotype either to mesenchymal via EMT or to epithelial via MET (mesenchymal-epithelial transition) [16]. The process of EMT is accompanied by loss of E-cadherin and upregulation of vimentin and N-cadherin, resulting in the loss of epithelial properties and attainment of mesenchymal ones [12,17]. EMT is subclassified into three distinctive types that will be covered in our review. Various factors such as stress, hypoxia, and pathogens like H. pylori and EBV infections can promote EMT and result in GC initiation and progression [15,16]. This review aims to understand the underlying mechanisms by which H. pylori infection induces EMT by altering EMT-associated transcription factors, adhesion molecules, extracellular matrix components, and growth factor signaling pathways in gastric epithelial cells, leading to the development of GC. H. pylori's pathogenicity and carcinogenicity The clustering of H. pylori infection increases the risk of developing GC [7]. There are two major pathways implicated in H. pylori infection leading to intestinal-type GC: indirect and direct. 
While the indirect effect is attributed to the inflammatory processes associated with the infection, the direct pathway affects the molecular make-up of stomach epithelial cells; this comprises the toxic effect of virulence factors, deregulation of cell-cycle-controlling genes, deficits in DNA repair systems, loss of a cell's adhesion capabilities, and epigenetic modifications [8]. The pathogenesis of H. pylori infection can be grouped into four stages [18]. During stage 1, the bacterium enters and survives within the host. H. pylori utilizes an acid acclimation mechanism that neutralizes the acidic pH of the stomach. The process is regulated by intrabacterial urease activity, which breaks down urea into carbon dioxide and ammonia, thus promoting acid resistance by H. pylori [19,20]. Following the first stage, the bacterium moves via its flagella towards epithelial cells. To colonize the gastric mucosa, H. pylori migrates from the epithelial surface towards the basal layer, driven by chemotaxis towards a pH closer to 7.0; this process is achieved by 4-7 polar sheathed flagella [21]. Once migration has occurred, adhesin-receptor interaction takes place in the next stage. Different bacterial strains express different adhesins, with the most common being blood-antigen binding protein A (BabA) and sialic acid-binding adhesin (SabA) [22,23]. Other adhesins responsible for adaptation include neutrophil-activating protein (NAP), adherence-associated proteins (AlpA and AlpB), heat shock protein 60 (Hsp60), lacdiNAc-binding adhesin (LabA), and H. pylori outer membrane protein (HopZ) [24]. These adhesins bind to cellular receptors, thereby strengthening the binding of the bacterium within the mucosal layer and preventing bacterial displacement from the stomach by forces such as peristalsis and gastric emptying. In the final stage, the bacterium secretes toxins to enhance its growth by damaging adjacent epithelial cells. The most common are cytotoxin-associated gene A (CagA), vacuolating cytotoxin A (VacA), and peptidoglycan, which are considered part of H. pylori's virulence factors [25]. H. pylori employs a diverse range of mechanisms to modulate host cellular responses and signaling pathways. CagA stimulates inflammation, provokes the release of pro-inflammatory cytokines, enhances bacterial motility, and promotes the acquisition of cancer stem cell-like properties [26]. Additionally, CagA stimulates host cell growth and proliferation while inhibiting important cellular proteins. On the other hand, VacA creates pores in host cell membranes, leading to apoptosis and necrosis, hampers immune cell activity, and stimulates cytokine release; VacA also disrupts specific signaling pathways and influences cellular differentiation [26]. Together, the concerted action of these virulence factors contributes to the development of chronic inflammation, disruption of cellular processes, and evasion of immune surveillance, which can potentially contribute to the progression of gastric cancer [26]. Table 1 lists a few in vivo and clinical studies conducted to understand the role of these virulence factors. The major genes and molecular pathways implicated in H. pylori-associated cancer initiation are described below. CagA The outcome of H. pylori infection is determined by the genetic heterogeneity present in its genome. CagA, discovered in the early 1990s, represents a crucial H. 
pylori protein, which is encoded by the Cag pathogenicity island (Cag PAI) and has a strong correlation with peptic ulceration [7]. The clinical disease is associated with Cag PAI, a virulence factor of H. pylori. The presence of this determinant is frequently indicated by CagA. However, not all Cag PAI strains express the terminal CagA gene product, which results in two classifications: CagA-positive (CagA+) and CagA-negative (CagA-) [30]. H. pylori strains differ in the presence of Cag PAI, and the severity of acute gastritis, gastric ulcers, and gastric cancer increases when the virulence factor Cag PAI (cag+) is present [29]. The prevalence of CagA+ H. pylori infection is 90% in Asian countries and 60% in Western countries [7]. Furthermore, the CagA+ category can be subclassified into East Asian-type and Western-type CagA depending on the Glu-Pro-Ile-Tyr-Ala (EPIYA) repeat motifs at the N terminus of CagA [31][32][33]. The affinity of CagA for SHP-2 (Src homology 2 domain-containing tyrosine phosphatase-2) is considerably higher for East Asian-type CagA, a mechanism considered CagA phosphorylation-dependent host cell signaling [7,34]. Consequently, East Asian-type CagA stimulates further cytoskeletal changes that increase the probability of developing gastric cancer [35]. CagA phosphorylation-independent host cell signaling involves the translocation of the bacterial protein CagA into the host gastric cell cytoplasm upon interaction with epithelial cells, whereby CagA can alter host cell signaling via phosphorylation or translocation, thereby playing a vital role in gastric carcinogenesis [36]. VacA toxin The VacA toxin induces intracellular vacuolation and suppresses the T-cell response to H. pylori [37]. The majority of H. pylori strains possess the VacA gene; however, significant variations in vacuolating activities were observed between different strains [38]. Variations in the VacA gene structure within the signal (s) region, middle (m) region, and intermediate (i) region code for these differences. The s and m regions are subclassified into s1, s2, and m1, m2, respectively [39]. The s region encodes the N terminus, while the m region encodes the C terminus. VacA s1/m1 chimeric strains (more prevalent in East Asian populations) trigger greater vacuolation than s1/m2 strains (highly prevalent in Western populations); no vacuolation is seen in s2/m2 strains [40]. VacA binds to gastric epithelial cells via several receptors, the most common being the receptor-type protein tyrosine phosphatase RPTP [41]. This toxin affects host cells in several ways, such as disrupting the gastric epithelial barrier, inducing an inflammatory response, triggering vacuolation by disruption of the late endosomal compartment, decreasing the mitochondrial transmembrane potential, and activating apoptosis [38,40]. Peptidoglycan H. pylori peptidoglycan can be introduced into host cells and sensed via Nod1, which results in activation of an NF-κB-dependent proinflammatory response [42]. Moreover, translocated H. pylori peptidoglycan stimulates other signaling pathways, such as PI3K-AKT and IFN signaling, that contribute to GC development [7]. In addition, adhesins and outer membrane proteins are considered important virulence factors, as mentioned in the section above. Finally, it is important to highlight that H. pylori plays an important role in GC via the initiation of epithelial-mesenchymal transition, which is a hallmark of cancer progression. This biological event and its relation to H. 
pylori will be discussed in the section below. Epithelial-mesenchymal transition Epithelial-mesenchymal transition (EMT) is a biological process in which epithelial cells go through several biochemical changes that result in transdifferentiation into motile mesenchymal cells [43], as seen in Fig. 1. (Table 1, excerpt of studies in human subjects: H. pylori-infected stomach adenocarcinoma (STAD) patients exhibited elevated expression of IRF3/7 [28]; genomic diversity of H. pylori gives rise to virulence factors such as CagA, cagPAI, VacA, and adhesion proteins that determine the severity of infection and the likelihood of developing GC [11]; H. pylori strains exhibit variability in the presence of Cag PAI, and Cag PAI-positive (cag+) strains correlate with more severe acute gastritis, gastric ulcers, and gastric cancer [29].) During EMT, the basement membrane undergoes alterations and remodeling with changes in its structure and organization due to polarity loss, invasiveness, high resistance to apoptosis, migratory capacity, and elevated assembly of extracellular matrix (ECM) components, facilitating cell migration and invasion [17]. Several molecular processes play a role in establishing EMT, including restructuring of cytoskeletal proteins, stimulation of transcription factors, secretion of ECM-degrading enzymes, and expression of specific cell-surface proteins and distinct microRNAs. EMT is classified into three distinct types, which share similar genetic and biochemical origins while having different biological processes and phenotypic programs [16]. The first type involves EMT during embryogenesis, where fibrosis and the invasive phenotype are not promoted; however, the embryogenic phase involves epithelial cell plasticity, which can promote reversibility between MET and EMT. Type 2 EMT correlates with tissue transformation and organ fibrosis. For instance, during the repair mechanism in wound healing, type 2 EMT is triggered by inflammation; however, in the case of a continuous response to inflammation, organ destruction occurs [44]. Type 3 EMT is responsible for cancer development and metastasis and specifically appears in neoplastic cells due to genetic and epigenetic alterations that deregulate the expression of oncogenes and tumor suppressor genes [16]. It is important to note that the degree of EMT can vary: some cells conserve epithelial characteristics while gaining some mesenchymal phenotypes, whereas others are completely transformed into mesenchymal cells. For cancer cells to acquire metastatic potential, the EMT process is controlled by epigenetic alterations of E-cadherin and β-catenin/LEF activity. This conversion provokes the systemic manifestation of cancer [45]. The loss of E-cadherin is one of the significant changes that occur during the EMT process. E-cadherin represses tumor progression by maintaining intact cell-cell contacts and preventing invasion and metastatic spread [46]. In most human carcinomas, expression of the E-cadherin gene is either low or absent; activation of E-cadherin is required to reduce tumor cell invasion and migration, and studies show that loss of E-cadherin acts as a trigger for EMT and tumor metastasis [47]. 
The regulation of cadherins at the mRNA and protein levels occurs through changes in transcriptional or translational events, protein degradation, and subcellular distribution [48]. Loss of E-cadherin in many human carcinomas is due to malfunctioning of protein production, which results from gene variation, atypical post-translational modification, or increased proteolysis [46,49]. Several in vivo and in vitro studies illustrate that neoplastic cells adopt a mesenchymal phenotype and express mesenchymal markers such as FSP1, α-SMA, desmin, and vimentin. These markers are involved in the invasion-metastasis cascade (intravasation, transit through the circulation, extravasation, emergence of micrometastases, and eventually colonization) [16,50]. Signals that induce EMT (EGF, HGF, TGF-β, and PDGF) are produced by the tumor-associated stroma and are responsible for the functional activation, in malignant cells, of EMT-inducing transcription factors (Snail, Slug, zinc finger E-box binding homeobox 1 (ZEB1), Twist, Goosecoid, and FOXC2) [51]. The activation of the EMT program is established through three components: an intracellular signaling network (ERK, MAPK, PI3K, Akt, Smads, RhoB, β-catenin, lymphoid enhancer binding factor (LEF), Ras, and c-Fos); cell-surface proteins such as β4 integrins, α5β1 integrin, and αVβ6 integrin; and disturbance of cell-cell adherens junctions and cell-ECM adhesions mediated by integrins [16,52]. Genetic changes, either irreversible or reversible, play a role in carcinogenesis. Epigenetic changes such as DNA and histone modifications and acetylation are examples of reversible modifications that trigger atypical gene expression in EMT during tumor progression. Hypermethylation of promoter CpG islands is the essential mechanism of tumor suppressor gene deregulation [46]. Methylation of the E-cadherin promoter has been observed in the majority of epithelial cancers [53]. The mechanism behind E-cadherin promoter silencing involves two models [53]. The first suggests that Snail expression is associated with E-cadherin silencing and promoter hypermethylation. The second implies that E-cadherin silencing does not necessitate hypermethylation of the promoter but requires further epigenetic changes, for instance histone deacetylation [46,54]. Transcriptional repressors (Snail-1, Snail-2, Zeb-1, and Zeb-2), along with histone deacetylases and DNA methyltransferases, form co-repressor complexes that inhibit E-cadherin expression [46]. On the other hand, the polycomb repressive complex 1 (PRC1) protein Bmi-1 is linked with carcinogenesis and EMT [46]. Bmi-1 hinders c-Myc-induced apoptosis by binding and blocking the Ink4a-Arf locus, leading to unregulated cellular proliferation. Several studies demonstrated that the upregulation of Bmi-1 triggers EMT by restricting PTEN expression, thus stimulating the PI3K/Akt pathway and downregulating E-cadherin expression [55,56]. Fig. 1. Phenotypic alterations involving loss of epithelial cell characteristics (orange to green color) and their transition to the mesenchymal cell phenotype (green color). The orange color symbolizes the characteristics of epithelial cells, which typically exhibit strong cell-cell adhesion, organized cellular morphology, and a polarized structure. These cells are associated with the maintenance of tissue integrity and specialized functions. As the color transitions from orange to green, it represents the phenotypic changes occurring during the epithelial-to-mesenchymal transition (EMT). 
This transition involves a loss of epithelial traits and the acquisition of mesenchymal ones, characterized by decreased cell-cell adhesion, a more elongated and spindle-shaped morphology, enhanced migratory capabilities, and increased production of extracellular matrix components. EMT pathways and H. pylori in human gastric cancer EMT is the process by which epithelial cells gain mesenchymal properties; in some cells it also induces a cancer stem cell (CSC) phenotype [57]. Recent studies suggest that EMT plays a substantial role in the generation of CSCs, and EMT-inducing signals can promote the acquisition of stem-like properties in cancer cells, including self-renewal, resistance to chemotherapy and immune disruption [58][59][60][61][62]. CSCs are a subpopulation of cells within a tumor that can self-renew and differentiate into multiple cell types. CSCs also undergo EMT, acquiring increased invasiveness and migratory capacity that enable in situ cancer cells to become highly invasive and disseminate to distant sites in the body, leading to metastasis [63]. The antigen CD44 is correlated with the induction of EMT-activating transcription factors (TFs), and since gastric CSCs are CD44-positive cells, they are capable of inducing EMT [64]. Studies have shown that proteins such as Snail-1, β-catenin, E-cadherin, vimentin, ZEB-1, and the CD44 marker are interrelated with EMT in gastric cancer [65]. The presence of CD44 highly correlates with Snail-1, E-cadherin, and ZEB-1 expression. Stem cells at the level of the pyloric gastric glands in the gastric epithelium can regulate the Wnt pathway, which is stimulated during the EMT process; however, for the signals to be amplified, Lgr5 must be present [57,66,67]. The main mechanisms underlying EMT regulation in GC involve both transcriptional and epigenetic regulation. The key TFs that repress the expression of E-cadherin and trigger EMT in GC include Snail, Twist, and ZEB [68]. Depending on the relation between the signals, different types of GC develop. In intestinal GC, Snail2 and ZEB2 act synergistically, whereas in diffuse carcinoma, Snail1 and Snail2 act complementarily [69]. Moreover, Twist expression is responsible for the degree of metastasis [70]. On the other hand, epigenetic mechanisms trigger EMT in GC via DNA methylation, histone modifications, and microRNAs. In GC, the promoter of CDH1 is frequently methylated, and CDH1 hypermethylation correlates with the degree of aggressiveness and metastasis of GC [71,72]. Histone modification (methylation or acetylation) also plays a role in GC during EMT. The transcriptional repressor enhancer of zeste homolog 2 (EZH2) is crucial for maintaining the homeostatic balance between gene expression and repression; an imbalance between the two promotes oncogenesis. In GC cells, EZH2 downregulates E-cadherin through histone H3 methylation [73,74]. On the other hand, acetylation of H3 and H4 results in an enhanced transcription rate due to a relaxed chromatin structure [75]. Moreover, microRNAs (miRNAs) act as oncogenes or tumor suppressors and function as post-transcriptional regulators of genes responsible for cell differentiation, cell proliferation, and tumor growth [76][77][78]. In GC, various miRNAs are deregulated, such as miR-200, miR-101, miR-107, miR-221 and miR-22 [79]. MiR-200 interacts with β-catenin to suppress Wnt/β-catenin signaling, thus inhibiting tumor invasion, migration, and proliferation [80]. 
In GC cells, overexpression of miR-27 stimulates the Wnt pathway, promoting EMT and resulting in GC cell metastasis [81]. The main site of H. pylori growth is the gastric epithelium; strains carrying cag-PAI induce a type IV secretion system that triggers the entry of bacterial factors into gastric epithelial cells, resulting in EMT progression due to phenotypic alterations in the cells [15,88]. While a positive correlation is reported between the presence of H. pylori and the expression of TGF-β1, Snail, Slug, Twist, and vimentin mRNA [89], a negative association is observed between H. pylori and E-cadherin. These correlations are linked to the TGF-β1-induced EMT pathway, which plays a crucial role in GC development [90]. Pathogenic H. pylori acts in multiple ways, for example by upregulating the shedding of soluble heparin-binding epidermal growth factor (HB-EGF); HB-EGF is vital for tumor progression and metastasis and is considered a crucial factor in EMT progression, especially in the gastric epithelium [91]. The overall development depends on the expression of the EMT-related proteins gastrin and matrix metalloproteinase-7 (MMP-7) [92]; MMP-7 is responsible for cleaving cell-surface proteins, promoting cancer cell linkage, and strengthening tumor metastasis. Additionally, siRNA, gastrin, and MMP-7 are involved in a feedback loop that regulates EMT. In the presence of siRNA, EMT proteins are neutralized, whereas H. pylori-infected cells promote EMT by upregulating the intact proteins and indirectly increasing the soluble HB-EGF level [92]. Moreover, CagA downregulates PDCD4, which results in an increase in the expression of TWIST1 and vimentin while inhibiting the expression of E-cadherin [93]. This process signals a new EMT pathway in gastric cancer. In terms of prevention, these two pathways can be inhibited by H. pylori eradication [15]. In addition to its role in inducing EMT in GC, H. pylori can deregulate several molecular pathways that are vital for the onset and development of cancer, as described in the sections below (Fig. 2) [94]. Janus kinase/Signal transducer and activator of transcription pathway Signal transducer and activator of transcription 3 (STAT3) is involved in angiogenesis, proliferation, apoptosis, and basal homeostasis, which are hallmarks of cancer development, including GC. STAT3 can be activated either by proinflammatory cytokines and growth factors induced by H. pylori or by phosphorylation by tyrosine kinases (JAK1, JAK2, and Src) [27,88]. Among the ~30 genes of the cag PAI, the CagA gene activates the STAT3 pathway and induces GC (Fig. 2) [95]. Studies in mouse models suggest that utilization of glycoprotein 130 (gp130) by CagA induces signal transduction by IL-6 and IL-11 [27]. IL-6 stimulates the recruitment and homodimerization of gp130, which results in a balance between the two signaling pathways (JAK/STAT and SHP2/Ras/ERK) [96]. On the other hand, IL-11 functions as an activator of gastric STAT3 in the early stages [97]. Interferon regulatory factor signaling pathway Type I interferon production increases during H. pylori infection of gastric epithelial cells, due to activation of TLR-7, -8, or -9. Several factors, such as NFκB, activator protein 1 (AP1), and IRF3/7, are responsible for type I interferon production. In H. pylori infection, DC-SIGN receptors account for IRF3/7 activation [98]. Moreover, the severity of H. 
pylori infection is determined by the presence of cytokines and chemokines stimulated by type I interferon produced through the nucleotide-binding oligomerization domain 1 (NOD1) signaling pathway [88,99]. A study of stomach adenocarcinoma (STAD) revealed that H. pylori-infected patients have increased IRF3/7 expression [28]. IRF7-induced IFN-β production activates type I IFN signaling and IFN-stimulated gene factor 3 (ISGF3), along with subsequent production of CXC motif chemokine ligand 10 (CXCL10) [100]. H. pylori infection also stimulates the IFN signaling pathway regulated by two elements (IRF1 and STAT1), resulting in activation of the NOD1 pathway, which is responsible for increasing STAT1 Tyr701/Ser727 phosphorylation levels and IRF1 expression in epithelial cells. This process results in elevated chemokine production coordinated by IFN-γ-induced protein 10, IFN-γ, IL-8 and NOD1 [101]. Nuclear factor kappa B pathway H. pylori activates NFκB through several factors, including LPS, peptidoglycan and virulence genes (CagA). Through TLR activation, H. pylori regulates the level of NFκB and alters the signaling of its canonical and non-canonical pathways [102]. Both of these pathways are activated in B-lymphocytes; however, only the canonical one is activated in epithelial cells [103]. The IκB kinase (IKK) complex is activated when the ligand binds to its receptor, promoting phosphorylation and degradation of the inhibitor IκB and thereby allowing translocation of the canonical NFκB heterodimer of RelA/p65 and p50 [103,104]. In B-lymphocytes, H. pylori LPS activates the pathway via NFκB-inducing kinase (NIK) and IKK, with downstream receptor signaling activating NIK and IKKα (Fig. 2). Activated IKKα phosphorylates its downstream target p100, leading to its partial proteasomal processing into p52 [105]. The two factors RelB and p52 act together to promote B cell survival, maturation, lymphoid organogenesis and bone metabolism. In gastric epithelial cells, CagA coordinates with the transmembrane hepatocyte growth factor receptor (HGFR)/MET, triggering the PI3K-Akt pathway, which turns on β-catenin and NFκB (Fig. 2). Moreover, the interaction of TRAF6 and TGF-β-activated kinase 1 (TAK1) leads to CagA-induced TAK1 expression, which causes upregulation of NFκB either through activation of the IKK complex due to TAK1 phosphorylation or by CagA multimerization via the Met-PI3K-Akt signaling pathway [103]. c-Jun proto-oncogene signaling pathway In gastric epithelial cells, H. pylori infection induces apoptotic cell death through the activation of the ROS/ASK1/JNK signaling pathway [106]. Apoptosis signal-regulating kinase 1 (ASK1) is an enzyme activated in the presence of H. pylori in a ROS- and cagPAI-dependent manner [107]. Intracellular ROS releases ASK1 from its binding protein thioredoxin (TRX) and activates it [108]. Moreover, ASK1 is responsible for H. pylori-mediated apoptosis and JNK activation. In a ROS-dependent manner, TAK1 regulates JNK activity positively and negatively [107]. A negative feedback loop between ASK1 and TAK1 is responsible for the equilibrium between ASK1-induced apoptosis and TAK1-induced anti-apoptotic responses, which determines the fate of epithelial cells. When TAK1 or its downstream p38 MAPK is suppressed, ASK1 is stimulated in a ROS-dependent manner, resulting in downstream NFκB activation in response to H. pylori. However, when TAK1 binds to TAB1, ASK1 is inhibited. Activation of the downstream JNK, MAPK, and p38 is regulated by the phosphorylation of MKK4, MAP2Ks, MKK3, and ASK1 [107]. 
TGF-β pathway Transforming growth factor beta (TGF-β), a primary factor in tumorigenesis, is a key suppressor of epithelial cell proliferation and promotes EMT through two signaling pathways. The first pathway involves Smad proteins, which mediate TGF-β-induced EMT through the ALK-5 receptor, facilitating motility and mediating the activity of LEF and β-catenin via interaction with Smad. On the other hand, the second TGF-β-induced pathway involves p38 MAPK, RhoA, integrin β1-mediated signaling and the activation of latent TGF-β by αVβ6 integrin [16,[109][110][111][112]. Mitogen-activated protein kinases (MAPK) RAS and RAF proteins activate MEK to directly upregulate ERK. Specific ERK proteins stimulated by MEK phosphorylate the c-Myc and Elk-1 transcription factors [115]. Upon IKK-β and cytosolic phospholipase A2 (cPLA2) phosphorylation, H. pylori LPS stimulates ERK, which in turn enhances the translocation of NFκB and encourages the production of COX-2 and iNOS. H. pylori neutrophil-activating protein (HP-NAP) stimulates cells of the immune system, acting as a virulence factor. In human neutrophils, HP-NAP provokes ERK and p38 MAPK activation [116]. Moreover, H. pylori prompts serum response element (SRE)-dependent gene transcription and increases c-Fos protein expression, revealing the signaling mechanism through which H. pylori activates ERK [116]. Other molecular pathways that have been reported to play a role in H. pylori infection include the HIF-1α [118], BCR [119], and TLR [120] signaling pathways. Conclusion and future perspective This review presents a summary viewpoint on the role of H. pylori and its oncoproteins CagA, CagPAI and VacA in the initiation and progression of EMT through alteration of its main biomarkers and deregulation of several signaling pathways, mainly interferon regulatory factor, NF-κB, PI3K, MAPK and Wnt/β-catenin. Although the role of H. pylori is well described in human gastric diseases, especially gastric ulcers, its function in the development and/or progression of human cancer via EMT is not fully understood. Thus, we believe that developing in vitro and in vivo experimental models to unravel the underlying complex mechanisms of H. pylori infection in EMT can elucidate its role in cancer progression. Such advancements can potentially help identify specific therapeutic targets and pave the way for new management approaches for metastatic GC, which is a major cause of cancer-related death. Author Contributions All authors listed have significantly contributed to the development and the writing of this article. Data availability statement No data was used for the research described in the article. Funding statement Open Access funding was provided by Qatar National Library. Institutional review board statement Not applicable. Informed consent statement Not applicable. Declaration of competing interest "The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results".
v3-fos-license
2020-02-13T09:20:39.609Z
2020-01-01T00:00:00.000
212850043
{ "extfieldsofstudy": [ "Computer Science", "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1449/1/012115/pdf", "pdf_hash": "8b6e010a31138ed9e57c67a213e1efd29a81f9bf", "pdf_src": "IOP", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41105", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "sha1": "f2db3bdb8fde57c2e4305c5771c73a8f6d73344f", "year": 2020 }
pes2o/s2orc
Design and implementation of a real-time robot operating system based on FreeRTOS With the advancement of artificial intelligence technology, the product of its integration with robots, the intelligent robot, is appearing more and more in people's lives and work. Compared to single-function industrial robots, intelligent robots have the ability to perceive, recognize and make decisions about the outside world. To achieve these functions, data needs to be collected by various types of sensors and then processed by various algorithms. The introduction of these modules increases the complexity of the system. Therefore, reducing the complexity and coupling between functional modules and improving the code reuse rate of the functional modules has become an urgent task. This paper proposes an improved FreeRTOS-based operating system with a driver layer, a middleware layer, and a functional module layer. The experimental results show that the scheme can support a maximum module response frequency of 2000 Hz and a Topic data throughput of 2 KB per second. Introduction The International Organization for Standardization defines a robot as: "The robot is an automatic, position-controllable, programmable multi-function manipulator with several axes that can handle various materials by means of programmable operation, parts, tools and special equipment to perform various tasks" [1]. Intelligent robots extend traditional robots with the ability to perceive, recognize and make decisions about the external environment [2]. These capabilities are achieved by installing different types of sensors on the robot. A large number of sensors increases the difficulty of robot design and development. Therefore, in order to improve the design and development efficiency of robot functional modules and coordinate their effective operation, effectively reducing the coupling between functional modules becomes an urgent problem to be solved. The world-renowned ROS (Robot Operating System) is a robotic operating system that allows users to quickly build their own robot platform [3,4]. The primary design goal of ROS is to increase the code reuse rate in the field of robot development and reduce the coupling and complexity between functional modules. Through the publisher/subscriber communication mechanism, ROS allows any functional module to be mounted as a node at any time, and the nodes are well connected, so that the coupling between functional modules is greatly reduced. However, ROS is based on the Linux operating system. Linux is a non-real-time operating system and cannot cope with scenarios with high real-time requirements. This article describes a robot operating system that combines a real-time operating system with the publisher/subscriber communication mechanism of ROS. It not only satisfies the real-time demands of intelligent robots, but also improves the code reuse rate and reduces the coupling between functional modules. Real-time operating system selection Most current robot hardware platforms are based on ARM Cortex-M series MCUs, which are characterized by the absence of an MMU (memory management unit). They provide fast response and strong real-time performance but are limited in Flash and SRAM capacity, so most of the operating systems used on them are lightweight RTOSes, focused mainly on multi-task scheduling and the optimization of CPU time and system resource allocation. 
Robotic systems typically need to process a variety of sensor data, requiring a high degree of real-time performance and hard interrupt handling, and any upper-level instructions and decisions require timely response and processing from the underlying layers. The RTOS selected for this solution is FreeRTOS. FreeRTOS is an open source project maintained by AWS and released under the MIT license. Its ecosystem is very stable and well supported, and whether it is used for scientific research or later productization, there are no licensing fees to consider. As an RTOS, FreeRTOS has low complexity and a small number of source files. At the same time, FreeRTOS has been ported to many different microprocessors, which makes it very convenient to use [5][6][7][8][9]. Overall system design The real-time robot operating system proposed in this paper adopts a distributed processing framework. Each functional module is designed as a node and runs under a loosely coupled model. The modules communicate using the PSP (publish-subscribe pattern) model. The whole system is divided into three layers: the device driver layer, the middleware layer, and the functional module layer, as shown in Figure 1. The functional module layer completes the data acquisition and distribution functions for the sensor devices. The middleware layer mainly provides two management services: the first implements the PSP communication mechanism of the whole system; the second establishes a unified sensor management model. The device driver layer defines a common device operation interface for each device. Analysis and design of the functional module layer The functional module layer is designed to effectively implement code reuse, so the general functions of the functional modules are abstracted into base classes. The functional module layer includes an algorithm module, a timer module, a frequency parameter setting module, etc. The timer can periodically schedule the processing functions in the module to acquire and process data. The frequency parameter setting module controls the operating frequency of each submodule. Communication mechanism design The real-time robot operating system proposed in this paper adopts the PSP (publish-subscribe pattern), i.e., the publisher/subscriber communication model. In essence, it is an event listening and broadcast model: a message subscriber registers a listener when needed and registers the topic it wants to subscribe to with the message dispatch center. When a publisher has a message to publish, the message is broadcast to the message dispatch center in the form of a topic, and an event notification is delivered to the subscriber through the listener. The dispatch center then uniformly schedules the subscribers' processing code. This scheme implements the PSP communication mechanism in the form of a circular message queue. The topic published by a publisher is defined as a circular message queue, which is read and written by means of read and write pointers. Since each subscriber has its own read pointer, subscribers can control the timing of reading data at their own discretion without affecting other subscribers. This enables a one-publisher, multiple-subscriber data communication model; a minimal sketch of such a topic ring buffer is given below. 
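The following is a minimal, single-threaded C sketch of the circular-queue Topic with per-subscriber read pointers described above. It is an illustration written for this paper's description, not the paper's actual implementation: the names (topic_t, topic_publish, topic_read), the fixed buffer sizes, and the absence of locking are all assumptions; a real FreeRTOS system would protect the ring with a critical section or a single producer task.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define TOPIC_DEPTH     8     /* number of buffered messages (assumed) */
#define TOPIC_MSG_SIZE  64    /* bytes per message (assumed)           */
#define MAX_SUBSCRIBERS 4

/* One Topic: a circular message queue with a single write pointer
 * and an independent read pointer per subscriber.                   */
typedef struct {
    uint8_t  buffer[TOPIC_DEPTH][TOPIC_MSG_SIZE];
    uint32_t write_idx;                    /* total messages published   */
    uint32_t read_idx[MAX_SUBSCRIBERS];    /* per-subscriber read counts */
    int      num_subscribers;
} topic_t;

/* Register a subscriber; returns its id, or -1 if the table is full. */
static int topic_subscribe(topic_t *t) {
    if (t->num_subscribers >= MAX_SUBSCRIBERS) return -1;
    int id = t->num_subscribers++;
    t->read_idx[id] = t->write_idx;        /* start from the newest data */
    return id;
}

/* Publisher side: copy the message into the next ring slot.
 * Old, unread data is silently overwritten.                          */
static void topic_publish(topic_t *t, const void *msg, size_t len) {
    if (len > TOPIC_MSG_SIZE) len = TOPIC_MSG_SIZE;
    memcpy(t->buffer[t->write_idx % TOPIC_DEPTH], msg, len);
    t->write_idx++;
}

/* Subscriber side: each subscriber advances only its own read pointer,
 * so slow subscribers never block fast ones (they may lose old data).
 * out must point to at least TOPIC_MSG_SIZE bytes.                    */
static bool topic_read(topic_t *t, int sub_id, void *out) {
    if (t->read_idx[sub_id] == t->write_idx) return false;   /* nothing new */
    if (t->write_idx - t->read_idx[sub_id] > TOPIC_DEPTH)     /* overrun:    */
        t->read_idx[sub_id] = t->write_idx - TOPIC_DEPTH;     /* skip ahead  */
    memcpy(out, t->buffer[t->read_idx[sub_id] % TOPIC_DEPTH], TOPIC_MSG_SIZE);
    t->read_idx[sub_id]++;
    return true;
}
```

The design choice mirrored here is that the publisher never waits for readers: each subscriber trades the risk of missing old samples for the guarantee that publishing stays constant-time and non-blocking, which matches the fixed-size buffer behaviour observed in the second test below.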
Sensor Management Module Design The robot system contains a variety of sensors, such as a motion sensor (IMU), air pressure sensor, GPS, camera, sound sensor, etc. These sensors are the data input sources of the robot. In order to effectively manage these sensors and their data, a sensor management module was designed: the SensorManager. The device module Dev is abstracted in the SensorManager, as shown in Figure 2. First, all the sensor devices are added to a global sensor device list. When the SensorManager starts and initializes, it traverses each sensor device in the list and calls the open function interface. Each device completes the initialization of the sensor hardware by implementing the open interface, and then the timer callback function is periodically scheduled in the SensorManager. In the callback function, the SensorManager iterates through each device registered in the device list, calling each sensor device's write function to send the device's timing control instructions and its read function to read the sensor data. Finally, the received sensor data is published to the system through the TopicManager. Driver layer analysis and design The driver layer is the software module responsible for communicating with external hardware devices and generally includes a hardware abstraction layer (HAL), a board support package (BSP), and device drivers. The driver layer is the main channel for communication between the system and external sensors and control devices. Its main function is to provide the upper modules with the operation interface and driver for each external device. Device drivers can generally be abstracted as open, close, read, and write operations. These basically cover all operations on an external device. A simplified sketch of this device interface and the SensorManager polling loop is given below. 
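The sketch below illustrates, in C, the uniform open/close/read/write device interface and the SensorManager traversal described above. It is an illustrative skeleton, not the paper's code: the structure and function names (dev_t, sensor_manager_poll, the trigger command value) are assumptions, and in the actual system the poll function would be driven by a periodic FreeRTOS software timer rather than called directly.

```c
#include <stddef.h>
#include <stdint.h>

/* Common operation table that every device driver implements (assumed names). */
typedef struct dev {
    const char *name;
    int  (*open)(struct dev *self);
    int  (*close)(struct dev *self);
    int  (*write)(struct dev *self, const void *cmd, size_t len); /* timing/control */
    int  (*read)(struct dev *self, void *buf, size_t len);        /* sensor data    */
} dev_t;

#define MAX_DEVICES 16
static dev_t *g_device_list[MAX_DEVICES];   /* global sensor device list */
static int    g_device_count;

/* Add a driver to the global list; the user only needs to fill in a dev_t. */
int sensor_manager_register(dev_t *dev) {
    if (g_device_count >= MAX_DEVICES) return -1;
    g_device_list[g_device_count++] = dev;
    return 0;
}

/* Called once at startup: open (initialize) every registered device. */
void sensor_manager_init(void) {
    for (int i = 0; i < g_device_count; i++)
        g_device_list[i]->open(g_device_list[i]);
}

/* Periodic timer callback: write the control command, read the data,
 * then hand the sample to the middleware for publication.            */
void sensor_manager_poll(void) {
    uint8_t sample[64];
    const uint8_t trigger_cmd = 0x01;       /* hypothetical trigger command */
    for (int i = 0; i < g_device_count; i++) {
        dev_t *d = g_device_list[i];
        d->write(d, &trigger_cmd, sizeof trigger_cmd);
        if (d->read(d, sample, sizeof sample) > 0) {
            /* e.g. topic_publish(&imu_topic, sample, sizeof sample);
             * (see the Topic sketch above; the topic name is illustrative) */
        }
    }
}
```

In the actual system, sensor_manager_poll could be registered with a FreeRTOS software timer (for example created with xTimerCreate) so that it runs at the configured acquisition frequency; that wiring is omitted here to keep the sketch self-contained.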
Test environment The real-time robot operating system proposed in this paper is verified on an STM32F4 [10][11][12]. The main technical indicators of the system include frequency stress, message delay response time, maximum load capacity and stability. The test cases and test results for each performance indicator are described in detail below. Test cases According to the scheduling frequency of the system, four test cases are designed. 1) Frequency stress test: Four modules are defined to run an addition operation at 2000 Hz, 1000 Hz, 500 Hz and 200 Hz, and the running frequency of each module is output every 3 s. At the same time, the error between the actual frequency and the expected frequency is recorded. 2) Message delay response test: Test Topic communication is added on top of test case one. Each message in the 500 Hz module is timestamped and broadcast. The 200 Hz module is then raised to 500 Hz, subscribes to the test Topic, and records the receiving timestamp. The errors between the sending and receiving timestamps are compared to obtain statistics on message delay. 3) Maximum load capacity test: Multiple submodules are created from a submodule template, each implementing the receiving operation of the test Topic. Each module operates at 200 Hz. The frequency error of each module and the average frequency error of all modules are counted, and the error statistics are output every 3 s. 4) 24-hour stability test: A 1000 Hz module is started for sensor data acquisition; a 400 Hz module is started to solve the pose and publish the attitude data Topic; a 200 Hz module is started for attitude data reception. Attitude information statistics are output every 3 s. These functions run for a 24-hour trial and the frequency errors, attitude errors and message delay errors are counted. 1) Results of the first test: In this test, the priority order of the modules is 500 Hz > 200 Hz > 1000 Hz > 2000 Hz (the highest-priority module is generally set at 500 Hz, which can meet the scheduling period of each module in the overall system). Sampling was performed every 10 minutes, six times in total; the results are shown in Table 1. It can be seen from the experimental data that the running frequency of the real-time operating system is stable between 200 and 1000 Hz. When the frequency reaches 2000 Hz, the scheduling performance loss begins to increase gradually, so 2000 Hz is essentially the maximum scheduling frequency of the real-time system. Higher-frequency modules need to be implemented with a hardware timer. 2) Results of the second test: The message sizes are 2048 B, 1024 B, 256 B and 64 B, with a total of six samples; the results are shown in Table 2. During the experiment, we found that the size of the Topic data affects the frequency at which it can be updated. Because a Topic is a circular queue buffer, its buffer size is fixed; therefore, if each Topic message is too large, the number of messages that can be buffered becomes too low. If the subscriber module does not read the data in time, data loss will result. This real-time operating system can support a Topic data cache of 2048 B per second. 3) Results of the third test: In this test, the load capacity was tested with 100, 80, 50 and 20 submodules respectively. All modules have normal priority. The average operating frequency of each configuration is sampled every 10 minutes, and a total of 6 records are taken. The results are shown in Table 3. It can be seen from the experimental data that message delay increases as the number of system modules increases: as the number of modules grows, the period over which a message must be distributed becomes longer. In robot systems, some messages require real-time broadcast and response, while the delay of other messages has little effect on the system. Therefore, when designing modules, it is necessary to classify the purpose of each Topic and set an upper limit on the number of subscribers according to the required responsiveness. 4) Results of the fourth test: In this test, the system status indicators are saved every minute. The indicators include the actual running frequency of each module and the average and maximum delay times of Topic data transmission and reception. The system accumulated 24 hours of operation (1440 data points). The results are shown in Table 4. It can be seen from the experimental data that the stability of the real-time operating system is good, with no large fluctuations over the 1440 minutes of data. Conclusion This paper proposes a real-time robot operating system based on FreeRTOS. The system combines the ROS publisher/subscriber communication mechanism with the real-time nature of FreeRTOS. The whole system is divided into three layers: the driver layer, the middleware layer and the functional module layer. The middleware layer is the core of the whole system and uniformly manages the sensors and the PSP communication mechanism. The sensor management module is responsible for uniformly scheduling the interface functions of each sensor and controlling the timing of sensor access. A user can integrate a new device into the system simply by inheriting from the driver device base class and adding it to the driver device list. 
The PSP communication module uses the Topic as the message hub of the entire middleware and is responsible for the establishment of Topics and the distribution of messages. The message data area is stored in a circular message queue, the subscription information is saved in a subscriber list, and message distribution is completed by notifying the subscribers in the list when a message is updated, thereby speeding up the message forwarding process. Theoretical analysis and experimental data show that the system can meet the user's requirements for a robot system in terms of real-time performance and load capacity. In subsequent research, some common modules will be developed in-house, such as a log module, an IMU module, a SLAM module, etc. These modules will further verify the scalability of the system.
v3-fos-license
2020-07-30T02:07:33.709Z
2020-07-25T00:00:00.000
264662503
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2076-3417/10/15/5123/pdf", "pdf_hash": "fccd1a48b302374835b9387336ae81e091300449", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41110", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "sha1": "3a80e319c973676e49786d59e9f08aa8c7bdea23", "year": 2020 }
pes2o/s2orc
Nondestructive Testing in Composite Materials A composite material is made of two or more constituents of different characteristics with the intent to compensate for the shortcomings of the individual components and to get a final product of specific characteristics and shape [...] Introduction A composite material is made of two or more constituents of different characteristics with the intent to compensate for the shortcomings of the individual components and to get a final product of specific characteristics and shape [1] to fulfil the user's demand. The most extraordinary example of a composite is found in nature: wood, which appears so strong and resistant, is composed of long cellulose fibers held together by lignin, a weaker substance. Human beings, observing and copying nature, have always strived to develop composite materials. An example of a composite material comes from afar: mud bricks, which were created when the ancients realized that mixing mud and straw gave them a resistant building material. Later on, concrete originated from the combination of cement, sand and gravel and came to be widely used in the construction sector. Many types of materials have been developed and continue to be developed to meet the different needs of the modern world. Different types of matrices and reinforcements are being used that are derived from petrochemical resources or extracted from the vegetable world [2], which also helps address concerns about safety at work and waste disposal. Indeed, for many composite materials the combination of two elements represents a strength and a weakness at the same time. In fact, several different types of defects [3] may occur during the fabrication of composites, with the most common being: fiber/ply misalignment, broken fibers, resin cracks or transversal ply cracks, voids, porosity, slag inclusions, non-uniform fiber/resin volume ratio, disbonded interlaminar regions, kissing bonds, incorrect cure and mechanical damage around machined holes and/or cuts. The presence of defects may result in a considerable drop in the composite's mechanical properties [4]. Therefore, effective non-destructive evaluation methods able to discover defects at an incipient stage are necessary to either assure the quality of a composite material prior to putting it into service, or to monitor a composite structure in service. Nondestructive Testing We would all like to live in a safe house that would not collapse on us. We would all like to walk on a safe road and never see a chasm open in front of us. We would all like to cross a bridge and reach the other side safely. We would all like to feel safe and secure when taking a plane, a ship or a train, or when using any equipment. All this may be possible with the adoption of adequate manufacturing processes, non-destructive inspection of final parts and monitoring during the in-service life. This requires effective non-destructive testing techniques and procedures. The intention of this special issue was to collect the latest research highlighting new ideas and ways to deal with challenging issues worldwide. There were 19 papers submitted, of which 12 were accepted and published. Across the special issue, different types of materials and structures were considered, and different non-destructive testing techniques were employed, with new approaches to data treatment as well as numerical simulation proposed. 
The degradation of concrete, the material of which many widely used structures are made, such as roads, bridges and the homes in which we live, is certainly a cause of anxiety and a source of demands for safety. Milad Mosharafi, SeyedBijan Mahbaz and Maurice B. Dusseault dealt with the problem of corrosion of steel in reinforced concrete [5]. The authors reviewed previous literature and focused on the self-magnetic behavior of ferromagnetic materials, which can be exploited for quantitative condition assessment. In particular, they performed numerical simulations to assess the possibility of detecting rebar degradation with the passive magnetic inspection method and to establish the detectability limits of such a method. Of great relevance for all of us is the safeguarding of cultural heritage, which represents our history; the paper by Grazzini [6] falls within this context. Grazzini describes a technique for detecting plaster detachments from historical wall surfaces that consists of small, localized impacts exerted with a specific hammer on the plastered surface. This technique was applied to the frescoed walls of Palazzo Birago in Turin (Italy). Most of the papers of this special issue involve fiber reinforced composites [7][8][9][10][11][12]. These include different types of matrices and fibers that are used for different applications, ranging from the transport industry (aircraft, trains, ships, etc.) to goods for daily life. The most popular are those based on an epoxy resin matrix reinforced with either carbon or glass fibers, named CFRP for carbon fiber reinforced polymer and GFRP for glass fiber reinforced polymer; these materials are also called carbon/epoxy and glass/epoxy. These materials can be non-destructively evaluated by using different techniques, amongst them ultrasonic testing (UT) and infrared thermography (IRT). Ultrasonic testing in reflection mode (pulse-echo) can be accomplished with a single-element probe (SEUT), which acts both to send and to receive sound waves, or with a phased array (PAUT). The superiority of PAUT over SEUT in terms of signal-to-noise ratio was assessed by Hossein Taheri and Ahmed Arabi Hassen through a comparative study on a GFRP sample [7]. The authors of Ref. [7] used the same PAUT for guided wave generation to detect flaws in a CFRP panel. In addition to the use of the direct wave, the diffuse wave can also be exploited for inspection purposes. The information contained in diffuse waves is mostly used in seismology and in civil engineering, but can also be used for health monitoring and the nondestructive evaluation of fiber reinforced composites. Zhu et al. [8] applied this method to the inspection of carbon/epoxy and found it promising for early crack detection. A critical aspect for defect localization is to distinguish signals from noise, and this requires more investigation. Carbon fiber reinforced polymer laminates are also considered by Toyama et al. [9]. The latter authors used a non-contact ultrasonic inspection technique based on the visualization of Lamb wave propagation to detect barely visible impact damage in CFRP laminates. Ultrasonic testing is generally a contact technique, but this poses problems in materials and structures in which the contact fluid (water, gel) may be harmful to the surface; thus, non-contact deployment is of great interest and increasingly investigated. The results reported in Ref. 
[9] are promising but, as also concluded by the authors, the method based on Lamb waves requires further investigation with particular regard to the signal-to-noise ratio improvement. Teng et al. [10] investigated the suitability of the recurrence quantification analysis in ultrasonic testing to characterize small size defects in a thick, multilayer, carbon fiber reinforced polymer. The authors conclude that their proposed method was able to detect artificial defects in the form of blind holes, but further research is necessary to improve and update the method to address real discrete defects. Niutta et al. [11] used the detecting damage index technique in combination with the finite element method to evaluate residual elastic properties of carbon/epoxy laminates damaged through repeated four-point bending tests. As a conclusion, the authors of Ref. [11] affirm that their methodology allows us to locally assess the residual elastic properties of damaged composite materials. By mapping the elastic properties on the component and considering the assessed values in a finite element model, a precise description of the mechanical behavior of the composite plate is obtained and, consequently, the health state of a damaged component can be quantitatively evaluated and decisions on its maintenance can be made by defining limits on the acceptable damage level. Infrared thermography is widely used in the inspection of materials and structures, amongst them composites, thanks to its remote deployment through the use of a non-contact imaging device. Lock-in thermography coupled with ultrasonic phased array was used by Boccardi et al. [12] to detect impact damage in basalt-based composites. In particular, two types of materials that include basalt fibers as reinforcement of two matrices were considered: polyamide and polypropylene. The obtained results show that both techniques can discover either impact damage or manufacturing defects. However, lock-in thermography, being non-contact, can be used with whatever surface while contact ultrasonic cannot be used on hydrophilic surfaces that get soaked with the coupling gel. Infrared thermography lends itself to being integrated with other techniques to allow the inspection of both thin and thick structures such as in Ref. [13], in which a joint use of infrared thermography with a ground penetrating radar (GPR) allowed us to assess the conditions of archaeological structures. In particular, IRT was able to detect shallow anomalies while the GPR followed their evolution in depth. The integration of infrared thermography with other techniques is also deployed with IRT for the detection of defects, and the other technique is exploited for thermal stimulation. An example of this deployment is ultrasound thermography [14], in which elastic waves are used for selective heating and infrared thermography detects buried cracks. An example of integration between infrared thermography and eddy current is given by Li et al. in Ref. [15] of this special issue, in which the pulsed eddy current is used for thermal stimulation to detect welding defects. The paper by Zhang et al. [16] is concerned with a technical solution that combines the adaptive threshold segmentation algorithm and the morphological reconstruction operation to extract the defects on wheel X-ray images. The obtained results show that this method is capable of accurate segmentation of wheel hub defects. 
The authors claim that the method may be suitable for other applications, but warn about the importance of using proper parameter settings. Na and Park [17] investigated the possibility of transforming the electromechanical impedance (EMI) technique into a portable system, with the piezoelectric (PZT) transducer temporarily attached and detached by means of double-sided tape. Despite the damping effect, which may make the impedance signatures less sensitive to damage, the results of this study demonstrate the feasibility of the approach. The authors are convinced that, by conducting simulation studies, the PZT size can be further reduced for successful debonding detection of composite structures. Finally, Zhou et al. [18] provide an overview of nondestructive methods for the inspection of steel wire ropes. The authors first analyzed the causes of damage and breakage, namely local flaws and loss of metallic cross-sectional area. Then, they reviewed several detection methods, including electromagnetic detection, optical detection, the ultrasonic guided wave method, acoustic emission detection, eddy current detection and ray detection, considering their advantages and disadvantages. They found that the electromagnetic detection method has gradually been applied in practice and the optical method has shown great potential for application, while the other methods are still at the laboratory stage.
v3-fos-license
2018-12-01T20:20:17.462Z
2016-01-24T00:00:00.000
53972671
{ "extfieldsofstudy": [ "Engineering" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.11121/ijocta.01.2016.00258", "pdf_hash": "712af885e024a7a1277d3a86ec2a34b5d429631a", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41113", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "sha1": "712af885e024a7a1277d3a86ec2a34b5d429631a", "year": 2016 }
pes2o/s2orc
Transit network design and scheduling using genetic algorithm – a review The aim of this paper is to summarize the findings of research concerning the application of genetic algorithms in transit network design and scheduling. Due to the involvement of several parameters, the design and scheduling of a transit network by means of traditional optimization techniques is very difficult. To overcome these problems, most researchers have applied genetic algorithms for the design and scheduling of transit networks. After reviewing various studies on the design and scheduling of transit networks using genetic algorithms, it was concluded that the genetic algorithm is an efficient optimization technique. Introduction In developing countries like India, traffic congestion, slow vehicle speeds and poor levels of service are the major problems encountered in daily life. These problems are due to the huge growth of the vehicular population, especially private and intermediate transport services [1][2][3]. In view of this rapid development, it is necessary to plan and design the public transport system in an efficient manner so that the use of private and intermediate transport services is reduced. The transport system is efficient if the design and schedule of the transit network are efficient. From the user point of view, the system is efficient if it meets the demand by providing cheap and direct service to the passengers, and from the operator point of view the system is efficient if it makes as much profit as possible. The main challenge in transit planning is to find a balance between these conflicting objectives, and this is where various optimization techniques come into play [4]. Among the various optimization techniques, the genetic algorithm offers a new strategy with enormous potential for many tasks in the planning and design of transit networks, and it is of interest to see how genetic algorithms address the shortcomings of conventional optimization techniques. In the present study an attempt has been made to explore the application of genetic algorithms in routing, scheduling, combined routing and scheduling, and the integration of mass transit planning. Genetic algorithm Genetic algorithms are search and optimization techniques based on the mechanics of natural selection; they belong to the class of evolutionary algorithms. The basic mechanics of a genetic algorithm are simple, involving copying strings and swapping partial strings. The major steps involved in a GA implementation are generation of the population, evaluation of the fitness function, application of the genetic operators and evaluation of the new population [5], as shown in Figure 1. [6] A GA starts with a population of randomly created string structures representing the decision variables. The size of the population depends upon the string length and the problem being optimized. These strings are called chromosomes in biological systems. The value associated with a chromosome is called its "fitness". The strings are built from a coding of the decision variables; binary coding, i.e. ones and zeros or "bits", is the most common coding method and the one with which GAs perform best [7]. Every position in the chromosome is a "gene" whose value is an "allele" (e.g., 0 or 1). Initially the alleles of a chromosome are generated by simple tosses of an unbiased coin: consecutive flips (head = 1 and tail = 0) decide the genes in the coding of the population strings.
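As a concrete, minimal sketch of the binary coding and coin-toss initialization just described (the variable bounds, string length and population size below are illustrative assumptions, not values from any reviewed study), the following Python snippet builds a random binary population and decodes each string into a real decision variable using the linear mapping rule discussed in the next paragraph.

```python
import random

def decode(substring, x_min, x_max):
    # Linear mapping rule: Xi = Ximin + (Ximax - Ximin) / (2^mi - 1) * DV(Si),
    # where DV(Si) is the decimal value of the binary substring Si.
    m = len(substring)
    dv = int("".join(str(bit) for bit in substring), 2)
    return x_min + (x_max - x_min) / (2**m - 1) * dv

def resolution(m, x_min, x_max):
    # Obtainable accuracy (step size) of an m-bit coding on [x_min, x_max].
    return (x_max - x_min) / (2**m - 1)

# Each allele is decided by an unbiased coin toss (head = 1, tail = 0).
pop_size, m_bits = 6, 8
population = [[random.randint(0, 1) for _ in range(m_bits)] for _ in range(pop_size)]

for chromosome in population:
    x = decode(chromosome, x_min=0.0, x_max=10.0)   # illustrative bounds on Xi
    print(chromosome, "->", round(x, 4))
print("8-bit resolution on [0, 10]:", resolution(m_bits, 0.0, 10.0))
```

Longer substrings give a finer resolution at the cost of a larger search space, which is the trade-off referred to when the string length is chosen according to the desired precision.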
Thus a population of individuals is generated by a pseudorandom generator, and each individual represents a feasible solution in the solution space; this is called the initial solution. After the encoding method is chosen as binary strings, the string length is determined according to the desired precision, and during decoding the corresponding point is found using a fixed mapping rule [7]. Suppose a function of n variables, f(x1, x2, …, xn): R^n -> R, is to be minimized. Each decision variable Xi is coded into a binary substring Si of length mi, and the string is represented as (Smi-1, Smi-2, …, S2, S1, S0); the corresponding real value is then recovered with the linear mapping rule Xi = Ximin + [(Ximax - Ximin)/(2^mi - 1)] DV(Si), where Ximin is the lower bound on decision variable Xi, Ximax is the upper bound on decision variable Xi, and DV(Si) is the decoded decimal value of the substring Si. After decoding all decision variables using the above mapping rule, the function value can be calculated by substituting the variables into the given objective function F(x). The objective function value is used as a measure of "goodness" of the string and is called the "fitness" in GA terminology. The obtainable accuracy (resolution) of a variable with an mi-bit coding is (Ximax - Ximin)/(2^mi - 1). In the next step, a fitness function f(x) is derived from the objective function. GAs are naturally suited to solving maximization problems, so minimization problems are transformed into maximization problems using a suitable transformation. For maximization problems, the fitness function is the same as the objective function, i.e. f(x) = F(x). For minimization problems, the fitness function defines an equivalent maximization problem chosen such that the optimum point remains unchanged; the fitness function used is f(x) = 1/(1 + F(x)) [8]. With the fitness value of each string in a particular generation, the maximum, minimum and average fitness values of all strings in the population are calculated and checked against the termination criteria. If the termination criterion is not satisfied, a new population is created by applying the three main genetic operators: reproduction, crossover and mutation. Reproduction / Selection: [7,9] reproduction is a process in which individuals are copied according to their fitness values so as to make more copies of the better strings in a population. The fitness represents a measure of the utility or goodness related to what we want to maximize. Copying strings according to their fitness values means that strings with a high value have a higher probability of contributing one or more offspring to the next generation. In all selection schemes the essential idea is to pick strings with above-average fitness from the current population and to insert multiple copies of them into the mating pool in a probabilistic manner, as shown in Figure 2. The most common selection operators are uniform random selection, roulette selection and tournament selection. The first selects members of the pool at random, ignoring fitness and other factors, so that every chromosome is equally likely to be selected. The simplest way to implement the reproduction operator is to create a biased roulette wheel where each string in the current population has a slot sized proportionally to its fitness value. The slot size of the roulette wheel, corresponding to the reproduction probability pr(i) of string i, is pr(i) = fi / (f1 + f2 + … + fn), where n is the population size and fi is the fitness value of the i-th string (Figure 2, reproduction operator). Crossover: [7,9] after reproduction, crossover is applied to the strings of the mating pool. A crossover is used to combine two strings in the hope of creating a better string.
It is performed with a probability (Pc) in order to preserve some of the good strings found previously. Two strings are chosen at random for crossover. The most commonly used crossover operators are single-point crossover and double-point crossover. A crossing site (represented by a vertical line) is chosen at random, and the contents to the right of the crossing site are swapped between the strings. The essential idea of crossover is to exchange bits between two good strings to obtain a string that is generally better than its parents. For example, a single-point crossover on five-bit strings is shown in Figure 3. Mutation: [7,9] mutation adds new information in a random way to the genetic search process and prevents an irrecoverable loss of potentially useful information which reproduction and crossover can cause. It operates at the bit level, when bits are copied from the current string to the new string, and with a very small mutation probability (pm). It introduces diversity into the population whenever the population tends to become homogeneous due to the iterative use of reproduction and crossover. A coin-toss mechanism is used: if a random number between 0 and 1 is less than the mutation probability, the bit is inverted, i.e. 0 becomes 1 and 1 becomes 0. There are different types of mutation operators: flip-bit, boundary, uniform, non-uniform and Gaussian. The flip-bit operator is used for binary genes; the boundary, uniform, non-uniform and Gaussian operators are used for integer and float genes. In this paper the example using binary genes has been considered; to understand how to use integer genes, refer to [10]. The newly created strings are evaluated by decoding them and calculating their objective function values (fitness). This whole process completes one cycle of the GA, normally called a generation. The iterative process continues until the termination criterion is satisfied. Termination criteria: the population is said to have converged when the average fitness of all the strings in the population is equal to the best fitness; when the population has converged, the GA is terminated. Review of GAs in design and scheduling of transit network Various attempts have been made at the design and scheduling of transit networks by different researchers. The design and scheduling of transit networks using GAs has been classified into four broad categories: routing, scheduling, combined routing and scheduling, and integration of mass transit planning. Table 1 provides an overview of the approaches reported in the literature. Routing approaches The design of a route is an important step in planning the transit system. Bus/rail takes a major share of public transport demand. However, in most service areas the distribution of passenger travel is not homogeneous, and therefore a given route layout may not be cost effective from the operator or user point of view. For better passenger accessibility and cost savings, the bus routes and their associated frequencies must be reconstructed to suit the travel demand, which results in better passenger accessibility and savings in operating cost. Transit operators and commuters both give preference to short routes, so that the operator cost and the travel time can be decreased, respectively. Generally, passengers also prefer routes that can be easily accessed from their trip origin or destination. The route set is efficient if it satisfies the following rules: i. it serves the transit demand of all commuters; ii. it serves the transit demand of all commuters with zero transfers; iii. it requires less time to travel. With these building blocks in mind, a minimal code sketch of one complete GA generation is given below, before the individual studies are reviewed.
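The following self-contained Python sketch ties together the operators described above: fitness evaluation through f(x) = 1/(1 + F(x)), roulette-wheel reproduction, single-point crossover and flip-bit mutation, for one generation. The objective function, probabilities and sizes are illustrative assumptions and do not come from any of the reviewed papers.

```python
import random

def fitness(chromosome):
    # Fitness for a minimization problem, f(x) = 1/(1 + F(x)),
    # with F(x) = x^2 and x decoded from an 8-bit string on [0, 10] (illustrative objective).
    x = int("".join(map(str, chromosome)), 2) / 255 * 10.0
    return 1.0 / (1.0 + x**2)

def roulette_select(population, fits):
    # Pick one parent with probability pr(i) = f_i / sum_j f_j (biased roulette wheel).
    r = random.uniform(0, sum(fits))
    cumulative = 0.0
    for chrom, f in zip(population, fits):
        cumulative += f
        if r <= cumulative:
            return chrom
    return population[-1]

def crossover(parent1, parent2, pc=0.8):
    # Single-point crossover with probability pc: swap the bits to the right of a random site.
    if random.random() < pc:
        site = random.randint(1, len(parent1) - 1)
        return parent1[:site] + parent2[site:], parent2[:site] + parent1[site:]
    return parent1[:], parent2[:]

def mutate(chromosome, pm=0.02):
    # Flip-bit mutation: each bit is inverted with a small probability pm.
    return [1 - b if random.random() < pm else b for b in chromosome]

# One generation on a random initial population.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
fits = [fitness(c) for c in population]
next_generation = []
while len(next_generation) < len(population):
    p1, p2 = roulette_select(population, fits), roulette_select(population, fits)
    c1, c2 = crossover(p1, p2)
    next_generation += [mutate(c1), mutate(c2)]
print("best fitness in new generation:", max(fitness(c) for c in next_generation))
```

Iterating this generation loop until the average fitness of the population approaches the best fitness reproduces the convergence-based termination criterion described above.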
Pattnaik et al. [11] focused on route network design and calculated associated frequencies for a given set of routes using a genetic algorithm (GA). The design consisted of two phases: first a candidate route set was generated, and then the optimum route set was selected using the GA. The GA was first solved with a fixed string length coding scheme, assuming a given solution route set size, and tried to find the best routes from the candidate route set; a newly proposed variable string length method was then used to find both the solution route set size and the set of solution routes. Bielli et al. [17] focused on a new method of computing fitness function (ff) values in a genetic algorithm for bus network optimization. Each set was evaluated by computing a number of performance indicators obtained by analysis, with the aim of achieving the best bus network satisfying both the demand for and the supply of transport. The algorithm was used to iteratively generate new sets of bus networks. Ngamchai and Lovell [23] proposed a new model to optimize bus route design which incorporates a unique frequency setting for each route using GAs. The model designs the bus routes in three phases: firstly, an initial set of feasible, good routes is constructed; secondly, the service frequency on each route is assigned and headway coordination techniques are applied by ranking the transfer demand at the transfer terminals; lastly, the existing routes are modified to identify the shortest paths between origins and destinations. The efficiency of the model was tested by applying it to a benchmark network, and the performance results show that the proposed model is better than a binary-coded genetic algorithm. Chien et al. [21] developed a model using GAs to optimize a bus transit system. The total cost function, consisting of supplier and user costs, was minimized subject to a realistic demand distribution and an irregular street pattern. The quality and quantity of the data can be improved by incorporating boarding demand data at the census block level and information from a GIS (Geographical Information Systems) street network. To attain a better search space for the initial feasible solution, the algorithm was formulated with a smart generating methodology. The computation time of an efficient network model can be minimized using a redundancy checking mechanism and a gene repairing strategy, and the solution quality is improved by embedding a passenger assignment model and an improved fitness function. Based on the comparison between performance measures, namely two initial solution generating methods (1-car and minimum-car) and three operators (1-point crossover, 1-point mutation and 2-point crossover), it was found that for the MTRND problem the proper combination was the minimum-car method with the 2-point crossover operator. The results indicate that the developed model and algorithm are effective for solving the MTRND problem. Chew and Lee [32] developed a framework using GAs to solve the urban transit routing problem (UTRP). In this study, infeasible solutions were converted into feasible solutions using an adding-node procedure. Minimizing the passenger cost was the main objective of the study. To perform the genetic operations, route crossover and identical-point mutation were proposed. Mandl's benchmark data set was used to carry out the computational experiments. The results show that the proposed algorithm performs more efficiently when compared with those of other researchers, as shown in Table 2.
Scheduling approaches Careful and detailed scheduling computation and precise presentation of schedules are extremely important aspects of transit system operation. They affect the efficiency and economy of operation, the regularity and reliability of service and the facility with which the public can use the system. Good scheduling means spacing transit vehicles at appropriate intervals throughout the day and evening to maintain an adequate level of service. It therefore minimizes both the waiting time for passengers and the transfer time from one route to another. The total waiting time of passengers is the sum of the total initial waiting time (IWT) and the total transfer time (TT) of the passengers. In effect, good scheduling provides a better level of service to passengers at no extra cost, subject to a number of resource and service related constraints. The authors of Ref. [15] presented a two-step heuristic technique for the distribution of buses on urban bus routes. In the first step, the bus frequencies required to manage the peak demand on each route were worked out; bus overcrowding at certain locations was also included in computing the base frequencies. In the second step, additional frequencies were allocated in order to minimize the level of overcrowding in the network. Commuters' discomfort because of overcrowding was formulated as a non-linear objective function, and the allocation of the surplus buses was solved using GAs. Route overlapping was considered and the bus frequencies were set allowing for one transfer. The model concentrates on overcrowding as the measure of user cost; other factors, like waiting time and vehicle operating cost, which play a major role in the allocation of buses, are not taken into account. At intermediate nodes a maximum of one transfer was considered, which is generally not the actual case. They formulated a frequency-setting (FRESET) model and minimized the objective function subject to constraints on loading feasibility, assignment of commuter flow and fleet size. The loading feasibility constraint ensured that the passenger demand on each route was fulfilled by the frequency of buses allocated to that route. The two stages of the FRESET model were base frequency allocation and surplus allocation. Chakroborty et al. [16] presented a genetic algorithm based approach for optimizing the allocation of fleet size and the development of schedules with transfer consideration while minimizing the passenger waiting time. Past experience has shown that it is impossible to get the optimal result even for simple problems using conventional optimization methods, but it is possible to get the optimal result with minimum computational effort using GAs. A limitation of the developed method is that the string length becomes very large for larger networks. Two points that need attention are (i) developing a real-coded GA-based approach and (ii) extending the proposed procedure to include transfer stops. Shrivastava and Dhingra [18] developed a Schedule Optimization Model (SOM) for coordinating the schedules of BEST buses on existing feeder routes. Minimizing the transfer time between buses and trains and the operator cost are the main aims of the proposed model. The problem becomes nonlinear and non-convex due to the large number of variables and constraints in the objective function, making it difficult to solve with traditional approaches. Therefore, a genetic algorithm, which is a robust optimization technique, was used for the coordination of suburban trains and buses.
The proposed model provides a better level of service to the commuters because it considers the load factor and the transfer time from train to bus rather than the fleet size. It was found that fewer buses are required on the existing feeder routes, which is a specific contribution towards the integration of public transport modes. Kidwai et al. [25] presented a bi-level optimization model for the bus scheduling problem. In the first level, load feasibility was determined for each route individually, and the fleet size was determined by adding up the number of buses across the routes. In the second level, the fleet size obtained from the first level was minimized again using GAs. The model was applied to a real-life network and, based on the results, it was concluded that the proposed algorithm yields significant savings for transit networks with overlapping routes. Routing and scheduling approaches Problems related to vehicle routing and scheduling (VRS) involve four decisions: (a) the vehicle fleet size, (b) the assignment of customers to vehicles, (c) the sequence in which each vehicle travels to its assigned customers, and (d) the actual times at which each vehicle travels and completes its route. Various techniques have been used to solve the problem, but no technique has included all the practical options or restrictions confronting a company. Unfortunately, analysts have had to be satisfied with existing methods, or develop custom solution techniques through slight modifications. Routing and scheduling also denotes an approach which deals with the route configuration and the respective frequencies simultaneously; this combined process involves two decision parameters, the number of routes and the associated frequencies. Gundaliya et al. [13] proposed a GA-based model for routing and scheduling. The objective function is the minimization of the total cost, that is, user and operator costs, and the related constraints are the load factor, fleet size and overloading of links. The user cost is a combination of in-vehicle time, waiting time and transfer time, and the operator cost is the vehicle operating cost of the buses. Mandl's Swiss network of fifteen nodes was used to test the model, and the model gives better optimized results than those found by other researchers on the same network and demand matrix. Chakroborty and Dwivedi [20] proposed an algorithm using GAs that provides efficient transit route networks. In this paper, four cases with different numbers of routes in the route set were used. A comparison of the proposed algorithm with algorithms developed by other authors was done using various measures of effectiveness, such as (i) the percentage of demand satisfied directly, (ii) the proportion of demand satisfied with one transfer, (iii) the proportion of demand satisfied with two transfers, (iv) the proportion of demand unsatisfied, (v) the average travel time per user in minutes and (vi) the total man-hours saved per day. They also state that the developed procedure uses non-heuristic techniques in the optimization process, and they mention that the proposed procedure is useful for transit planners and designers. Tom and Mohan [22] designed a route network for a transit system involving the selection of a route set and its related frequencies. The problem was formulated as an optimization function that minimized the overall cost of the system (operating cost of the buses + travel time of the passengers). The design of the route network was done in two stages: in the first stage, a large set of candidate routes was generated.
In the second step, a solution route set with associated frequencies was chosen using GAs from the large set of candidate routes generated during the first step. The proposed model considered the route frequency as the variable, thus making it different from existing models in terms of the adopted coding method. The model was studied on a small-size network, and it was found that the performance of the model can be evaluated using the adopted coding method for the design of a transit network. The SRFC model provides a solution with minimum operational cost, minimum fleet size and maximum allocation of directly satisfied demand. The study can be extended by using an asymmetric demand matrix and demand sensitivity to the service quality. Chakroborty [36] discussed the optimal routing and optimal scheduling problems and described that the routing problem can be classified into vehicle routing (the TSP (travelling salesman problem), SVPDP (single vehicle pick-up and delivery problem) and SVPDPTW (single vehicle pick-up and delivery problem with time windows)) and the transit routing problem. The optimal scheduling problem is to develop schedules for bus arrivals and departures at all the stops of the network for a given set of routes. The genetic algorithm was used to solve optimization problems which were difficult to solve using conventional optimization tools, and results for various routing and scheduling problems were obtained by applying the GA technique. Zhao and Zeng [26] demonstrated a mathematically based stochastic methodology for optimizing transit route networks using an integrated Simulated Annealing (SA) and GA search method. A computer program was developed to implement the methodology, and previously available results were used to test the feasibility of the proposed methodology. The study can be further enhanced by developing time-dependent transit network optimization methods to optimize a transit network for both peak and off-peak periods while also taking into account waiting time and transfer penalties. It is also necessary to analyze the objective functions defined in terms of commuter and operator costs. To correctly identify the difference between two lines (i.e., a bus line and a rapid transit line), it is necessary to use the travel time instead of the travel distance. Integration of mass transit planning Integration of mass transit planning means the simultaneous development of feeder routes and schedule coordination. In an integrated system, all the trips involve more than one mode and passengers are therefore subjected to transfers. The transfer is one of the negative aspects of any trip but cannot be neglected; it is essential because it makes the integrated system quick and convenient. Dhingra and Shrivastava [37] described a methodology for the coordination of the suburban railway and BEST buses in Mumbai. The aim of this study was to achieve optimal coordinated schedules for an optimally designed feeder route network with due consideration of user and operator costs and a better level of service. To meet these objectives, network optimisation and transfer optimisation models were proposed. The problem was of a multi-objective nature, therefore GAs, a strong optimization technique, was proposed for the optimisation. The objective function contains the minimisation of in-vehicle travel time, standing passengers and vehicle operating cost. Kaun et al. [27] presented a methodology for solving the feeder bus network design problem using a meta-heuristic (a combination of GAs and Ant Colony Optimization (ACO)).
To compare the performance of the meta-heuristics in terms of solution quality and computational efficiency, a comparison was carried out on 20 randomly generated test solutions. In this study, each route was developed sequentially as follows: firstly a station was randomly selected, secondly the selected stop was added to the route linking to the selected station, and lastly the route length was checked. The results also indicate that the procedure can be further improved by using a 3-opt procedure (used to optimize each route) combined with GAs and ACO, so that the performance could be as good as that of tabu search with intensification. Shrivastava and O'Mahony [29,30] built a model using GAs for generating optimized feeder routes and identifying the associated frequencies for coordinating the schedules of feeder buses. Thus, instead of decomposing the problem into two steps, (i) feeder route development and (ii) schedule coordination of feeder buses, the two complementary steps were optimized together. In this study, the authors determined the schedule coordination of feeder buses for the existing schedules of the main transit. The model produced better results in terms of improved load factor as compared with the previous technique adopted by the authors for the same study area. The proposed model involves real-life objectives and constraints and is therefore a specific contribution towards realistic modeling of an integrated public transport system. Shrivastava and Dhingra [19] developed a methodology that determines the feeder routes and coordinated schedules for coordinating feeder buses with suburban trains. Feeder routes were developed using a heuristic feeder route generation algorithm, and GAs was used for optimizing the coordinated schedules. The schedules were decided based on the load factor and bus waiting time. To maintain a better level of service and keep the waiting time within a satisfactory limit, the load factor was kept between minimum and maximum values. From the results, it was found that optimal values can be obtained in less time using the genetic algorithm. Verma and Dhingra [28] described a model for building optimal coordinated schedules for urban rail and feeder bus operation, using GAs as the optimization technique. In this study, the optimal coordination of urban trains and feeder buses was done in two parts: (i) a sub-model was developed for train scheduling, and (ii) a sub-model was developed for schedule coordination. The train scheduling objective function was taken as the minimization of the train operating cost and the passenger waiting time cost (boarding the train), subject to load factor and waiting time constraints. The schedule coordination objective function was taken as the minimization of the sum of the bus operating cost, the passenger transfer time cost (transferring from train to feeder buses) and the passenger waiting time cost (boarding along the feeder route), subject to load factor and transfer time constraints. Two cases were considered for coordination: in the first case buses of different types were considered, and in the second case a single-decker fleet was considered. A comparison between the two cases was made to choose the best strategy, and on this basis it was found that a mixed fleet of buses produces the optimum result for coordinated feeder bus schedules. After examining and interpreting the various studies, it is concluded that the work carried out until recent times is limited to the operational integration of mass transit planning.
The efficiency of the different mass transit modes and the main transit facility can be enhanced by overall system integration. Overall system integration includes operational integration, institutional integration and physical integration, as shown in Figure 4. Conclusion In this review paper, we have presented a classification and analysis of studies on the design and scheduling of transit networks using GAs. It was found that problems related to the design and scheduling of transit networks are highly complex and non-linear in terms of the decision variables and are difficult to solve using classical programming. GAs, as an optimization technique, are computationally more efficient for solving problems with a large number of resource and service related constraints, such as the design and scheduling of transit networks. Based on the review, it is concluded that GAs have an advantage over traditional optimization techniques, as they work with clusters of points rather than a single point. Because more than one string is processed simultaneously, the possibility of reaching the global optimum solution increases. But some limitations still exist: the solution of complex problems is efficient only if the evaluation of the fitness function is good, and formulating a good fitness function is often the hardest part. Future scope For future work the following points are recommended: 1. The efficiency of the integrated transport system can be enhanced by overall system integration. 2. The developed integrated transport system can be extended by developing an integrated fare system between the integrated modes, which is a part of operational integration. 3. Until now, the developed integrated systems have considered the train as the main mode and the bus and the intermediate transport services (auto-rickshaw and taxi) as feeder modes. In order to develop a fully integrated transport system, personalized modes (car and two-wheeler) and non-motorized modes (cycle-rickshaw and cycle) should also be considered.
v3-fos-license
2019-01-03T12:56:28.539Z
2013-09-29T00:00:00.000
76654955
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=37357", "pdf_hash": "1a0801df8726c92dbde16c29904252bece9a4310", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41114", "s2fieldsofstudy": [ "Physics" ], "sha1": "1a0801df8726c92dbde16c29904252bece9a4310", "year": 2013 }
pes2o/s2orc
Impact of the light intensity variation on the performance of solar cell constructed from (Muscovite/TiO2/Dye/Al) In this work, the influence of the light intensity as one of the parameters that control the solar cell is studied. The effect of the other main variables, such as temperature, rotations per minute of the spin coating instrument, and the sample concentration, was found to be in conformity with other results, but unfortunately the intensity of light does not increase the solar cell efficiency and fill factor; in other words it was found to play only a secondary role. INTRODUCTION Man has needed and used energy at an increasing rate for his sustenance and well-being ever since he came on the earth a few million years ago. Primitive man required energy primarily in the form of food. He derived this by eating plants or animals which he hunted. Subsequently he discovered fire and his energy needs increased as he started to make use of wood and other biomass to supply the energy needs for cooking as well as for keeping himself warm. With the passage of time, man started to cultivate land for agriculture. He added a new dimension to the use of energy by domesticating and training animals to work for him. With further demand for energy, man began to use the wind for sailing ships and for driving windmills, and the force of falling water to turn water wheels. Until this time, it would not be wrong to say that the sun was supplying all the energy needs of man either directly or indirectly and that man was using only renewable sources of energy [1]. The Industrial Revolution, which began with the discovery of the steam engine, brought about a great many changes. A new source of energy, nuclear energy, came on the scene after the Second World War. Thus today every country draws its energy needs from a variety of sources. We can broadly categorize these sources as commercial and noncommercial, the latter including, for instance, solar energy. The solar energy option has been identified as one of the promising alternative energy sources for the future. The nature of this source, its magnitude and its characteristics have been described, and a classification of the various methods, direct and indirect, for utilizing solar energy has been given [2]. Due to the nature of solar energy, two components are required to have a functional solar energy generator: a collector and a storage unit. The collector simply collects the radiation that falls on it and converts a fraction of it to other forms of energy (both electricity and heat). Solar energy can be directly converted into electrical energy. Most tools are designed to be driven by electricity, so if you can create electricity through solar power, you can run almost anything with solar power. The collectors that convert radiation into electricity can be either flat-plate or focusing collectors, and the silicon components of these collectors are photovoltaic cells. Photovoltaic cells, by their very nature, convert radiation to electricity. This phenomenon has been known for well over half a century, but until recently the amount of electricity generated was good for little more than measuring radiation intensity. The modern age of solar power technology began when researchers experimenting with semiconductors accidentally found that silicon doped with certain impurities was very sensitive to light.
This resulted in the production of the first practical solar cell, with a sunlight energy conversion efficiency of around 6 percent. Solar cells can be made from a number of materials other than silicon and formed in a variety of designs; cells are classified both by material and by the type of junction. For instance, the materials used in solar cells today include cadmium sulfide, gallium arsenide, titanium dioxide and dyes. However, with the current technology, the cost per watt is rather high due to the high cost of manufacturing silicon-based solar cells. The cost per watt can be lowered in two ways: lowering the manufacturing cost, or increasing the power output for the same cost. The latter is related to the efficiency of the device. Clearly, a more efficient way of converting sunlight into energy needs to be researched in order to make solar cell technology economically viable. Most traditional solar cells rely on a semiconductor (typically silicon) for both light absorption and charge transport. A fairly new, promising approach separates these functions. Organic dyes (dye sensitizers), which are sensitive to light, can absorb a broader range of the sun's spectrum [1,2]. The dye-sensitized solar cell is considered a low-cost solar cell. This cell is extremely promising because it is made of low-cost materials and does not need elaborate apparatus to manufacture, compared with other types of solar cell. This led me to use this type of material, which has attracted academic and industrial research groups not only because of its theoretically interesting properties but also because of its technologically promising future. Many researchers have used this material to make solar cells with different technologies. PROBLEMS WITH CURRENT SOLAR CELL TECHNOLOGY When it comes to solar technologies, one of the biggest issues is that of photovoltaic panels or solar cells. These panels are made in various ways, and their production determines their efficiency and cost. By far the cheapest solar cell is the thin-film type, which is used to make flexible panels. This process is also cheaper, but likewise its efficiency leaves much to be desired. There is also the casting of existing silicon crystals into a panel, which is the most predominant form of solar cell manufacture, but this is more expensive and, while the efficiency may increase, the costs of these panels may be too great for the average person. There is also the creation of silicon crystals, a manmade process of dissolving and reforming them only to be thinly sliced and turned into a PV panel. These are the second most efficient solar cells, with efficiencies closing in on the 15% mark, but they are also demanding to produce, which also makes them very expensive. The final type of solar cell is also the newest available to the general market. While concentrated PV panels are not new, as they have been utilized in space for well over a decade, they are new to the general market. These PV panels are by far the most expensive and consist of multiple layers of solar cells stacked on top of each other. The concentrated PV panel can reach efficiencies upwards of 40%-50%, which is getting closer to what the world needs, but their cost is something else altogether; some of them cost well over $10,000 for a single cell that can fit in the palm of your hand.
In most cases the cost of solar cells can be so great that it will not even be recovered within the life expectancy of the cell, which means it would cost more than using your utility feeds. Solar power created by solar cells is also considered by far the most expensive means of producing electricity. You cannot use a solar cell at night, so you either have to store the energy at higher cost or turn to fossil fuels for night-time power production. Nor should we forget that solar cells only produce direct current, and when they are coupled to the necessary inverter you end up losing another 15% of that power as heat. While there is much promise in solar cells, delivery of an efficient and inexpensive PV panel in this lifetime may be asking too much. However, it is important to remember that solar cells are not the only way of producing electricity from the sun. While solar cells use the sun's light to create electricity, solar thermal systems use the sun's heat to produce it and are by far more efficient. While of no use in low-light conditions, the heat created by the sun can be stored in a number of media, from graphite to molten salt, which means that within no time the ability to store this energy all night long will be upon us. Furthermore, solar thermal is much cheaper than solar cell energy production, and this is why the vast majority of solar power plants around the world use this form over any other [3]. SOLAR PHOTOVOLTAIC Solar cells convert solar radiation directly into electricity; these photovoltaic cells, or so-called solar cells, generate an electromotive force as a result of the absorption of ionizing radiation. The major advantages of solar cells over conventional power systems are: they exploit the photovoltaic effect without going through a thermal process; they are reliable, modular, durable and generally maintenance free, and therefore suitable even in isolated and remote areas; they are quiet, benign and compatible with almost all environments, respond instantaneously to solar radiation, and have an expected lifetime of 20 or more years; and they can be located at the place of use, so that no distribution network is required. Like other solar devices, solar cells also suffer from some disadvantages: the conversion efficiency of solar cells is limited to about 30%; since the solar intensity is generally low, large areas of solar cell modules are required to generate sufficient useful power; the present costs of solar cells are comparatively high, making them economically uncompetitive with conventional power generation methods for terrestrial applications, particularly where the demand for power is very large; and solar energy is intermittent, so solar cells produce electricity only when the sun shines and in proportion to the solar intensity, and hence some kind of electric storage is required, making the whole system more costly. However, in large installations, the electricity generated by solar cells can be fed directly into the electric grid system [4].
Photovoltaic Conversion The devices used in photovoltaic conversion are called solar cells. When solar radiation falls on these devices, it is converted directly into dc electricity. The principal advantages associated with solar cells are that they have no moving parts, require little maintenance, and work quite satisfactorily with beam or diffuse radiation. Also, they are readily adapted to varying power requirements because a cell is like a "building block". The main factors limiting their use are that they are still rather costly and that there is very little economy of scale associated with the magnitude of power generated in an installation. However, significant developments have taken place in the last few years. New types of cells have been developed, innovative manufacturing processes introduced, conversion efficiencies of existing types increased, costs reduced and the volume of production steadily increased. The present annual world production of photovoltaic devices is already about 60 MW. As a result of these developments, solar cells are now being used extensively in many consumer products and appliances [4]. Application In spite of the high initial cost, photovoltaic systems are being used increasingly to supply electricity for many applications requiring small amounts of power. Their cost-effectiveness increases with the distance of the installation location from the main power grid lines. Some applications for which PV systems have been developed are: 1) pumping water for irrigation and drinking; 2) electrification of remote villages for street lighting and other community services; 3) telecommunication for the post and telegraph and railway communication networks. In addition, in developed countries solar cells are being used extensively in consumer products and appliances [4]. PRINCIPLE OF SOLAR CELL WORKING Two important steps are involved in the principle of a solar cell: the creation of pairs of positive and negative charges (called electron-hole pairs) in the solar cell by the absorbed solar radiation, and the separation of the positive and negative charges by a potential gradient within the cell. For the first step to occur, the cell must be made of a material which can absorb the energy associated with the photons of sunlight. The energy (E) of a photon is related to the wavelength (λ) by the equation E = hc/λ [5], where h is Planck's constant = 6.62 × 10−34 J·s and c = velocity of light = 3 × 108 m/s. Substituting these values, we get E = 1.24/λ, where E is in electron-volts (eV) and λ is in µm. The only materials suitable for absorbing the energy of the photons of sunlight are semiconductors like silicon, cadmium sulfide, gallium arsenide, titanium oxide, etc. In a semiconductor, the electrons occupy one of two energy bands, the valence band and the conduction band. The valence band has electrons at a lower energy level and is fully occupied, while the conduction band has electrons at a higher energy level and is not fully occupied. The difference between the energy levels of the electrons in the two bands is called the band gap energy [6]. Photons of sunlight having energy E greater than the band gap energy Eg are absorbed in the cell material and excite some of the electrons. These electrons jump across the band gap from the valence band to the conduction band, leaving behind holes in the valence band. Thus electron-hole pairs are created.
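As a numerical check of the photon-energy relation above, the short Python sketch below converts wavelength to photon energy using E = hc/λ (equivalently E [eV] ≈ 1.24/λ [µm]) and flags which photons can create electron-hole pairs; the band gap value used for TiO2 is a typical literature figure quoted here only for illustration.

```python
# Photon energy from wavelength: E = h*c/lambda, or E [eV] ~ 1.24 / lambda [um].
H = 6.626e-34      # Planck's constant, J.s
C = 3.0e8          # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt

def photon_energy_ev(wavelength_um):
    """Photon energy in eV for a wavelength given in micrometres."""
    return H * C / (wavelength_um * 1e-6) / EV

# Illustrative band gap for anatase TiO2 (commonly quoted near 3.2 eV).
E_GAP_TIO2 = 3.2

for lam in (0.35, 0.40, 0.55, 0.70):   # UV to red, in micrometres
    e = photon_energy_ev(lam)
    absorbed = "absorbed (E > Eg)" if e > E_GAP_TIO2 else "not absorbed"
    print(f"lambda = {lam:.2f} um -> E = {e:.2f} eV, {absorbed}")
```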
The electrons in the conduction band and holes in the valence band are mobile. They can be separated and made to flow through an external circuit (thereby executing the second step of the photovoltaic effect) if a potential gradient exists within the cell. In the case of semiconductors, the potential gradient is obtained by making the cell as a sandwich of two types, p-type and n-type. The energy levels of the conduction and valence bands in the p-type material are slightly higher than the corresponding levels in the n-type. Thus, when a composite of the two types of material is formed, a jump in energy levels occurs at the junction interface, as in Figure 1. This potential gradient is adequate to separate the electrons and holes and cause a direct electric current to flow in the external load [7,8]. In a semiconductor cell, the junction is the region separating the n-type and p-type portions. Solar cells can be made from many semiconductor materials, for example titanium oxide (TiO2) [9]. Current-Voltage Characteristics The behavior of a solar cell is displayed by plotting its current-voltage characteristic. A typical characteristic curve is shown in Figure 2. The intercepts of the curve on the x-axis and y-axis are called the short-circuit current Isc and the open-circuit voltage Voc, and the ratio between them is called the characteristic resistance. The curve follows the relation I = IL − Io[exp(qV/kT) − 1], where kT/q is a constant at a given temperature, IL is the light-generated current and Io is the saturation current. The maximum useful power corresponds to the point on the curve which yields the rectangle with the largest area. We denote the values of current and voltage yielding the maximum power by the symbols Im and Vm [10]. MATERIALS USED IN THIS WORK The element titanium was discovered in 1791 by William Gregor, in England. Gregor spent much of his time studying mineralogy, which led him to his discovery. Titanium dioxide is even used in foodstuffs [11,12]. Titanium dioxide is a fascinating low-cost material exhibiting unique properties of stability and photoactivity, leading to clean technologies in environmental remediation and energy conversion of sunlight. However, conventional high-temperature processing of titania is a technological limitation beyond energy consumption, being a handicap for practical applications in areas such as the preparation of hybrid organic/TiO2 materials or devices on thermolabile substrates. Titanium oxide is also used as a semiconductor [13]. When titanium dioxide (TiO2) is exposed to near ultraviolet light, it exhibits strong bactericidal activity (Figure 3). An anatase-phase TiO2 film was prepared by anodizing pure titanium coupons (substrate). We observed a wide distribution of TiO2 particles ranging from submicron to nanometer sizes, fully covering the substrate and forming a thin film of TiO2 on anodizing [14].
The dye-sensitized solar cell is considered a low-cost solar cell. This cell is extremely promising because it is made of low-cost materials and does not need elaborate apparatus to manufacture, so it can be made in different ways, allowing more players to produce it than any other type of solar cell. In bulk it should be significantly less expensive than older solid-state cell designs. A dye-sensitized solar cell (here sensitized with rhodamine B) belongs to a relatively new class of low-cost solar cells within the group of thin-film solar cells [15]. It is based on a semiconductor formed between a photo-sensitized anode and an electrolyte, a photoelectrochemical system. Rhodamine B is an amphoteric dye, although usually listed as basic as it has an overall positive charge. It is most commonly used as a fluorochrome, an example being its mixture with auramine O to demonstrate acid-fast organisms. The chemical structure of rhodamine B is as shown below [16]. Physical properties The mica group is a complex of aluminosilicates containing potassium, magnesium and iron (Fe). Mostly, mica is found in granite, rhyolite, phyllite or mica schist. There are two types in the mica group: 1) muscovite, or white mica, with formula KAl2(AlSi3O10)(OH)2, and 2) phlogopite. Muscovite and phlogopite are used in sheet and ground forms (Figure 4) [18]. THE EXPERIMENTAL WORK A 0.35 g portion of the rhodamine B dye was accurately weighed and dissolved in 25 mL ethanol; a 0.1 g portion of TiO2 was accurately weighed together with a fusion mixture composed of 1 g of sodium carbonate and 0.8 g of ammonium sulphate. The sample and the fusion mixture were well mixed by means of a glass rod in a porcelain crucible and put into an electric furnace for 2.5 hours at 1200 °C. After cooling, the contents of the crucible were dissolved with dilute sulphuric acid, heating to boiling till the contents were completely dissolved; the solution was then cooled and the volume was completed to 100 mL with distilled water in a volumetric flask. SAMPLE PREPARATION Our samples were prepared from 0.05 g of the TiO2 solution, which was spin coated on the mica substrate for 60 s, and from 0.35 g of the rhodamine B dye solution. The dye solution was divided into three portions doped with iodine at three concentrations: 0.05 g, 0.1 g and 0.2 g. The solutions of the four resulting concentrations were spin coated on the mica substrate at about 600 rpm for 60 s in order to yield a thin uniform film. After spin coating, the sample was removed from the device, residues were cleaned off using acetone, and the surface was washed with distilled water and methanol, then rinsed with acetone and dried. Finally, aluminum strips were evaporated on top of the (Muscovite/TiO2/Dye) sensitized structure at a pressure of 1 × 10−6 mbar, using a coating unit (Edwards High Vacuum Evaporation Unit model 19E6/196). These samples (9 samples) were connected in a series circuit with a nano-ammeter, a voltmeter and a rheostat. The measurements were taken at room temperature (300 K). I-V measurements in the dark and under illumination were carried out by mounting the sample in a metal sample holder with an area of about 1 cm2. The current was measured using a nano-ampere meter (model 841B) and the voltage was measured using a multimeter. The intensity of the incident light was measured (0.2 and 0.4 W/m2) using a light meter (model D-10559). A Michelson interferometer was used for the thickness measurement of the (Muscovite/TiO2/Dye/Al) layer coated on the mica substrate.
RESULTS Figures 5-16 show the results obtained in the experiments discussed in the present work. The main parameter considered in these experiments is the light intensity I (two values of I were chosen, 0.2 W/m2 and 0.4 W/m2), besides all the other parameters used in the work. The current-voltage plots also give an exponential relation, which leads directly to the evaluation of the physical characteristics of the solar cell, such as the efficiency percentage (η %) and the current density (J). DISCUSSION The results shown in Figures 5-16, besides the values of the solar cell efficiency and fill factor given in Tables 1-3, show that there is an inverse proportionality between the light intensity I and the solar cell efficiency and fill factor. In detail, the results of this work can be summarized as follows. At constant temperature and rotations per minute (of the spin coating instrument), the solar cell efficiency and fill factor increase with the sample concentration at the same light intensity of 0.2 W/m2, and decrease when the intensity of the source is raised; intensity values greater than 0.2 W/m2 negatively affected the solar cell efficiency and fill factor for the samples studied in this work. Also, one of the most important results observed in this paper is that the increase in rotations per minute had an influence on the solar cell efficiency and fill factor. In a previous work [19] the same result was obtained, but the intensity was not taken into account; the present work confirms that the intensity of light plays only a secondary role as one of the parameters affecting the solar cell efficiency. The ratio ImVm/(IscVoc) (Eq. 7) is called the fill factor (FF) of the cell; its value obviously ranges between 0 and 1. The maximum conversion efficiency of a solar cell is given by the ratio of the maximum useful power to the incident solar radiation, thus ηm = ImVm/(ItAc), where It = incident solar flux and Ac = area of the cell. For an efficient cell, it is desirable to have high values of fill factor, short-circuit current and open-circuit voltage. From solid state physics theory, expressions can be derived for each of these quantities. The expressions show that high values of Isc are obtained with low band gap materials, while high values of Voc and FF are possible with high band gap materials. Thus, if theoretical values of ηm are calculated for different values of Eg, it is obvious that a maximum value would be obtained at some value of Eg [10]. Table 2. Cell efficiency and fill factor results for different light intensity for (Muscovite/TiO2/dye/Al). Table 3. Cell efficiency and fill factor results for different light intensity for (Muscovite/TiO2/dye/Al).
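To make the fill factor and efficiency definitions above concrete, the following minimal Python sketch computes FF = ImVm/(IscVoc) and ηm = ImVm/(ItAc) from a sampled current-voltage curve; the numerical I-V values are invented for illustration and are not the measured data reported in Tables 1-3.

```python
def cell_metrics(voltages, currents, irradiance, area):
    """Short-circuit current, open-circuit voltage, fill factor and efficiency
    from sampled I-V points; irradiance in W/m^2, area in m^2."""
    i_sc = max(currents)                      # approximated by the current at V = 0
    v_oc = max(voltages)                      # voltage at which the current drops to 0
    powers = [v * i for v, i in zip(voltages, currents)]
    p_max = max(powers)                       # Im * Vm, the largest-area rectangle
    ff = p_max / (i_sc * v_oc)
    efficiency = p_max / (irradiance * area)
    return i_sc, v_oc, ff, efficiency

# Invented sample I-V points (amperes vs volts) for a 1 cm^2 cell under 0.2 W/m^2.
v = [0.00, 0.10, 0.20, 0.30, 0.40, 0.45]
i = [2.0e-8, 1.9e-8, 1.7e-8, 1.3e-8, 6.0e-9, 0.0]
i_sc, v_oc, ff, eta = cell_metrics(v, i, irradiance=0.2, area=1e-4)
print(f"Isc = {i_sc:.2e} A, Voc = {v_oc:.2f} V, FF = {ff:.2f}, efficiency = {eta*100:.4f} %")
```

In practice the maximum-power point would be interpolated between samples, but the largest measured V·I product is sufficient for this illustration.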
v3-fos-license
2020-08-26T14:42:17.936Z
2020-08-24T00:00:00.000
221310300
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.cjps.org/article/doi/10.1007/s10118-020-2473-z", "pdf_hash": "5e9a302b2e3e9ebb5548f3087cc0c19728e91e35", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41115", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "sha1": "5e9a302b2e3e9ebb5548f3087cc0c19728e91e35", "year": 2020 }
pes2o/s2orc
Polymerization Kinetics of Propylene with the MgCl2-Supported Ziegler-Natta Catalysts—Active Centers with Different Tacticity and Fragmentation of the Catalyst The catalytic activity and stereospecificity of olefin polymerization using heterogeneous TiCl4/MgCl2 Ziegler-Natta (Z-N) catalysts are determined by the structure and nature of the active centers, which remain mysterious and fairly controversial. In this work, the propylene polymerization kinetics under different polymerization temperatures using Z-N catalysts were investigated by monitoring the concentration of active centers [C*] with different tacticity. SEM was applied to characterize the catalyst morphologies and the growing polypropylene (PP) particles. The lamellar thickness and crystallizability of PP obtained under different polymerization conditions were analyzed by DSC and SAXS. The PP fractions and active centers with different tacticity were obtained with a solvent extraction fractionation method. The catalytic activity, the active centers with different tacticity and the propagation rate constant kp, the fragmentation of the catalyst and the crystalline structure of PP are correlated with the temperature and time of the propylene polymerizations. The polymerization temperature and time show complex influences on the propylene polymerization. The higher polymerization temperature (60 °C) resulted in higher activity and kp and lower [C*], and the isotactic active centers Ci*, the majority species producing the highest isotactic polypropylene (iPP) components, showed much higher kp when compared with the active centers with lower stereoselectivity. An appropriate polymerization time provided full fragmentation of the catalyst and minimum diffusion limitation. This work aims to elucidate the formation and evolution of active centers with different tacticity under different polymerization temperatures and times and their relations with the fragmentation of the PP/catalyst particles, and to provide solutions for improving the catalyst activity and the isotacticity of PP. INTRODUCTION Since the discovery of Z-N catalysts in the 1950s, iPP catalyzed with the MgCl2-supported Z-N catalyst has become the second largest volume commercial polymer, with a yearly output of more than 50 million tons. [1,2] Although fundamental studies on the catalytic behaviors of propylene polymerizations, including catalytic activity and stereoregularity, have attracted much attention, [3,4] the catalytic mechanism for propylene polymerization using Z-N catalysts is still mysterious and controversial. [5−7] Many efforts have been made to give insight into the fragmentation of the catalyst and the growth of polymer particles in propylene polymerization. [8−16] Theoretical studies proved that the structures of the catalysts affect the catalyst fragmentation and thus the propylene polymerization activity. [8−14] In particular, the porosity of the catalyst greatly influenced the way the catalyst fragmented in the early stage of polymerization. [12,13] In addition, factors such as the size of the catalytic nucleus, the catalyst support and the spatial distribution of macropores in the catalyst were recognized to influence the catalyst fragmentation. [9,11,14] Chiovetta et al. reported that the crystallinity of the produced polymers affected the rate of catalyst fragmentation. [15] Fan et al.
pointed out that the comparable lamellar thickness (<20 nm) of iPP with the nanopores (15−25 nm) in the catalyst was essential in propylene polymerization, the iPP lamellae growing inside the nanopores of the catalysts and breaking up the catalyst particles. [16] The polymerization rate Rp is commonly expressed as Rp = kp[C*][M], where kp is the propagation rate constant, [C*] is the concentration of active centers and [M] is the monomer concentration at the active sites. However, monomer diffusion is hindered in the polymer/catalyst particles, [18−21] and the [M] at active sites cannot be accurately measured. Therefore, the [M] is frequently assumed to be the same as in the whole polymerization medium even though its real value is a little smaller, while the kp is strongly dependent on the stereospecificity of active centers. [22,23] Based on determinations of the [C*] with the kinetic method, [24−26] the radioisotope calibration method, [27−34] and the chemical quenching method, [35−43] the catalytic mechanism and catalytic behaviors of the Z-N catalysts have been discussed. Terano et al. proposed the modified three-sites model mechanism to explain the formation of active centers with different stereoselectivity. [26] Chadwick et al. found that an increased [C*], instead of kp, could lead to the high activity of the Z-N catalysts in propylene polymerization. [28] Bukatov et al. pointed out that the diffusion limitations could be neglected by decreasing [C*] to improve the catalytic activity of the Z-N catalyst. [33] Fan et al. found that introducing an internal electron donor in the Z-N catalyst increased both [C*] and kp in the polymerization of 1-hexene, [39] while introducing an external electron donor reversibly inactivated some active centers and evidently increased the kp of the Ci*. [42] The proportion of Ci* and the catalytic activity increased effectively by adding an appropriate amount of diethylaluminum chloride (DEAC) in the Z-N catalyst system with triethylaluminum (TEA) as cocatalyst in propylene polymerization. [43] Polymerization temperature affects not only the polymerization activity, but also the formation and distribution of highly stereoregular active centers in propylene polymerization. [44−47] Rishina et al. and Chadwick et al. [44,45] found that the polymerization activity and stereoregularity of PP increased with the increase in temperature in the range of 30−120 °C, and that the polymerization temperature affected the formation of isotactic and syndiotactic centers, and thus the polymerization rate constant. Bresadola et al. pointed out that the increase in polymerization temperature affected the oxidation-reduction degree of TiCl4 and then the valence state distribution of the active centers. [46] In this work, the propylene polymerization kinetics at different polymerization temperatures with a MgCl2-supported Z-N catalyst were studied. The 2-thiophenecarbonyl chloride (TPCC) quenching method, solvent extraction, DSC, SAXS, and SEM were used to analyze the number and distribution of active centers with different tacticity, the crystalline structure of PP, and the morphology of the PP/catalyst particles under different polymerization conditions. The formation and evolution of active centers with different tacticity and their relations with the fragmentation of the PP/catalyst particles under different polymerization temperatures and times are elucidated for a deeper understanding of the factors beneficial to the improvement of catalyst activity and isotacticity of PP. EXPERIMENTAL Chemical The TiCl4/MgCl2 Z-N catalyst (Ti content = 2.1 wt%, provided by Shandong Orient Hongye Chemical Co., Ltd.) was used as the catalyst in the polymerization. TEA (provided by Yanshan Petrochemical Co.) diluted in n-hexane was used as co-catalyst.
Propylene (purity ≥99.6%, supplied by Shandong Chambroad Petrochemical Co., Ltd.) was used as the polymerization monomer. TPCC (purity ≥98%, supplied by Alfa Aesar Co.) was used to quench the active propagation chains. The n-heptane (C7) was purified through distillation over Na for 24 h. All other solvents, including n-octane (C8), were received and used without further treatment. Polymerization and Sample Purification Firstly, 100 mL of purified n-heptane, precise amounts of TEA and electron donor, an appropriate amount of hydrogen, and continuous propylene gas (0.4 MPa) controlled by a solenoid valve were put into the reactor, successively. Subsequently, the catalyst powders were introduced to initiate the polymerization of propylene. After a certain polymerization time, TPCC in an amount three times the content of TEA was quickly injected into the reactor to quench the growing PP chains, with stirring at constant temperature for 5 min to ensure completion of the quenching reactions with the lowest interference from side reactions. [41] The quenched PP was then purified with excess alcohol, filtered, and dried overnight in a vacuum drying oven at 40 °C. According to the references, [41,42] the purification of the TPCC-quenched PP samples was carried out as follows. Firstly, 200 mg of polymer and 150 mL of ethanol were added into a 250 mL flask with stirring at 80 °C for 60 min. The suspension was then separated by filtering, and the isolated PP was washed with ethanol three times. Secondly, the dried polymer was dissolved in 100 mL of n-octane by stirring at 100 °C for 30 min. The solution was then introduced into a beaker filled with sufficient ethanol, precipitated, and filtered. Finally, the isolated polymers were eluted with boiling ethanol for 24 h and dried overnight in a vacuum drying oven at 40 °C. Polymer Fractionation Firstly, about 100 mL of n-octane was heated to 100 °C to dissolve 1 g of TPCC-quenched PP, and then the mixture was cooled to 20 °C. Subsequently, the PP was allowed to crystallize and precipitate completely from the solution after sufficient standing time. The suspension was separated into an n-octane insoluble fraction (C8-ins) and a soluble fraction (C8-sol) by centrifuging and filtering. After drying in vacuum at 40 °C for 12 h, the C8-ins was extracted with n-heptane at 100 °C for 24 h in a Soxhlet extractor, and the n-heptane soluble fraction (C7-sol) was collected by rotary evaporation and then precipitation with plenty of ethanol. Finally, the C8-sol, C7-sol, and C7-ins (n-heptane insoluble fraction) were obtained after drying at 40 °C for 12 h in a vacuum drying oven. Characterization The sulfur content in the TPCC-quenched PP was determined by an ultraviolet fluorescence detector (REK-20S, Taizhou Ruike Instrument Co.). Three parallel measurements of samples weighing 3−4 mg were carried out and the average value was taken as the final result. The [C*] was determined from the molar concentration of sulfur. [41] The morphologies of the catalyst particles and the PP particles synthesized at different polymerization conditions were characterized by scanning electron microscopy (SEM) on a JEOL-7500F instrument with an acceleration voltage of 3 kV. Differential scanning calorimetry (DSC) thermograms were recorded using a PerkinElmer DSC8500. 4−6 mg of PP was first heated from 25 °C to 200 °C at a rate of 10 °C/min and kept at constant temperature for 5 min to eliminate thermal history.
Then, the sample was cooled to 25 °C at a rate of −10 °C/min. Finally, the sample was reheated to 200 °C at a rate of 10 °C/min. The first heating step was used to study the lamellar structures of PP synthesized in the polymerization reactor. The second heating step was recorded for the melting point of the PP fractions without residual thermal history. The crystallinity (Xc) of the PP was determined according to Eq. (2): Xc = (ΔHm/ΔHm0) × 100%, where ΔHm is the melting enthalpy measured from the second DSC heating scan and ΔHm0 is the melting enthalpy of completely crystalline PP (165.5 J/g). [48] The molecular weight of PP was investigated by a PL 220 gel permeation chromatography (GPC) analyzer at 150 °C with 1,2,4-trichlorobenzene containing 5‰ antioxidant as the eluent, and calibrated with polystyrene standards. The long spacing (dac) and the thicknesses of the amorphous layer (da) and crystalline lamellae (dc) of the original PP particle crystals without any thermal treatment were measured by small-angle X-ray scattering (SAXS) on a Xenocs 2.0 instrument with an X-ray wavelength of 0.15418 nm. The distance between the detector and the samples was 2.470 m, and each pattern was collected within 1800 s. The electron density correlation function K(z) was obtained from the experimental intensity distribution I(q) by inverse Fourier transformation according to Eq. (3): [49,50] K(z) = ∫ I(q) q^2 cos(qz) dq / ∫ I(q) q^2 dq (both integrals taken over q from 0 to ∞), where z is the position along the normal to the lamellar surface. All the structural parameters were calculated through the K(z) function. The pore size distribution of the catalyst was characterized by an AUTOSORB-1 gas adsorption apparatus (Quantachrome Ins.). The sample was degassed in vacuum at 80 °C for 12 h, nitrogen was chosen as the adsorption gas, and the balance time was 10 min. The static adsorption isotherm of the sample was measured at the liquid nitrogen saturation temperature (77 K). The pore size distributions were obtained from the desorption isotherms. Polymerization Kinetics of Propylene A series of propylene polymerizations were conducted at different polymerization temperatures with gradually extended polymerization times by using the TiCl4/MgCl2 Z-N catalyst, and all the polymerization runs were quenched by adding TPCC at the end of the polymerizations. The concentration of active centers ([C*]) was determined based on the sulfur content, [41] and kp could be calculated according to Eq. (1). The relevant synthesis and characterization results are shown in Table 1 (kinetic parameters and polymer properties of the propylene polymerizations), and the catalytic activity, the number of active centers ([C*]/[Ti]) and kp under different polymerization conditions are shown in Fig. 1. Firstly, the activity of the propylene polymerization at all polymerization temperatures increased rapidly during the first 3 min to reach a maximum value, and then decreased obviously at high temperature, but gently at low temperature (Fig. 1a). The rapid growth of the activity at the initial stage of polymerization was due to the production of a large number of active centers by the fragmentation of the catalyst (Fig. 1b) as well as the highest kp (Fig. 1c). The decline of the activity after 3 min was attributed to the decrease in the chain propagation rate constant, although [C*]/[Ti] still showed a trend of growth (Figs. 1b and 1c). As shown in Fig. 1(b), the value of [C*]/[Ti] had an increasing trend with the increase in polymerization time, whereas the higher polymerization temperature gave the higher kp in propylene polymerization with the Z-N catalyst.
It was obvious that kp was the key factor affecting Rp and increased with the increase in temperature according to the Arrhenius equation. However, kp undergoes a decrease with the increase in polymerization time due to the gradual increase in the monomer's diffusion resistance in the growing particles (Fig. 1c). The relationships between [C*]/[Ti], the polymer yield per unit mass of catalyst (mPP/mcat), and the polymerization temperature are summarized in Fig. 2. These results reflected that [C*]/[Ti] was influenced not only by the catalyst fragmentation, reflected by mPP/mcat, but also by the polymerization temperature. As Fig. 2(b) shows, at the same polymerization time, the [C*]/[Ti] increased first and then decreased with increasing polymerization temperature. When comparing Fig. 2(a) with Fig. 2(b), a conclusion could be drawn that the polymerization temperature was the key factor determining the [C*]/[Ti]. Morphology of PP The catalyst in this study presented a spherical particle morphology with a diameter of about 70 μm (Fig. 3a), with openings of hundreds of nanometers on its rough surface, which could ensure that propylene diffuses easily into the inner sections of the catalyst for further polymerization (Fig. 3b). Importantly, lots of primary particles with globular morphology about 50 nm in size could be found on the surface of the catalyst (Fig. 3c). These primary particles initiate propylene polymerization to form the microglobule particles in a spherical PP granule, as observed in Figs. 4(i)−4(l) and Figs. 5(g) and 5(h). [51−60] The particle morphologies and the surface morphologies of the growing PP particles obtained at 30 °C with different polymerization times are shown in Fig. 4. PP particles with sizes in the range of 100−800 μm completely replicated the morphology of the catalyst. The PP particles grew gradually as the time was prolonged, as shown in Figs. 4(a)−4(d). It is assumed that in the initial stage of polymerization (Figs. 4a−4c), catalyst particles were broken into subparticles, which can be further separated into primary particles. The primary catalyst particles initiated propylene polymerization to form PP primary particles with sizes in the range of 0.1−0.5 μm (Figs. 4i−4l). With the extension of polymerization time (30 min), PP subglobule particles composed of the primary PP particles are observed (Fig. 4l). It could be inferred that the fragmentation of catalyst and PP occurs mainly in the first 10 min of the polymerization, which contributes to the increased [C*]/[Ti] shown in Fig. 1(b). With the extension of polymerization time, the packing of the PP primary particles becomes denser and denser (Figs. 4i−4l), which can lead to an increase in the diffusion resistance to propylene, and eventually the decrease in kp (Fig. 1c). After n-heptane extraction, the PP particles composed of the PP subglobule particles are shown in Figs. 5(a) and 5(b). The surface of PP7 (Table 1) with 10 min polymerization time (Fig. 5g) was more porous than that of PP8 with 30 min polymerization time (Fig. 5h), which can explain the higher [C*]/[Ti] in PP7 than in PP8 (Fig. 1b). In addition, in the enlarged SEM images of PP8, the primary PP particles on the surface (about 500 nm) are larger than those in the inner section (about 100 nm) (Figs. 5h and 5i). When the polymerization time was long enough, the PP primary particles were piled up very closely, and little space was left for the monomer to diffuse into the inner sections of the particles, resulting in the embedding of the internal active centers and smaller primary PP particles (Fig. 5i).
Aggregation Structure of the PP In order to study the relations between the fragmentation of the catalyst and the matching requirements between the lamellar thickness of PP and the pore size in the catalyst, the obtained PP without any other thermal pretreatment was analyzed by DSC and SAXS. As shown in Fig. 6(a), the melting temperature of PP obtained at higher temperature is higher than that of PP synthesized at lower temperature. The lamellar thickness in the PP was calculated according to the Thomson-Gibbs equation, Eq. (4): Tm = Tm0 [1 − 2σe/(ΔHm0 L)], where Tm0, σe, L, and ΔHm0 are the equilibrium melting temperature, the free surface energy of the end faces at which the chains fold, the lamellar thickness, and the melting enthalpy of the perfect crystal, respectively. For the calculation of L in the PP samples, the following parameters were adopted: Tm0 = 208 °C, [61] σe = 70×10−7 J/cm2, [62] and ΔHm0 = 165.5 J/cm3. [48] (A brief numerical check of Eqs. (2) and (4) is sketched below, after the fractionation results.) According to Eq. (4), the lamellar thickness dc of PP calculated from the DSC data is in the range of 9.1−9.4 nm (Fig. 6b), which is similar to that calculated from the SAXS data (8.94−9.39 nm) (Fig. 7). The polymerization temperature had little effect on the lamellar thickness of nascent PP (Figs. 6 and 7). The pore size distribution of the catalyst, as shown in Fig. 8, is mainly in the range of 1.4−57.8 nm, indicating that the PP crystals can grow inside most pores of the Z-N catalyst and thereby fragment the catalyst. The Fractionation of PP To better understand the evolution of active centers with different tacticity under different polymerization conditions, solvent fractionation of the PP was conducted. The PP was fractionated into the C8-sol, C7-sol, and C7-ins fractions, which were assigned to atactic PP (aPP) with no obvious melting temperature (Tm), medium-isotactic PP (miPP) with Tm at 126.3−143.8 °C, and iPP with Tm at 161.6−167.1 °C (Fig. 9), respectively. [42,63] As shown in Table 2 and Fig. 10, the iPP component increased gradually while the miPP and aPP components decreased with the increase in polymerization temperature (Fig. 10a). The isotactic active centers (Ci*) producing iPP, as the majority ones, reached their maximum at 30 °C and then declined with a further increase in the polymerization temperature; the active centers producing miPP (Cm*) and aPP (Ca*) accounted for less than 3.4% of the total titanium, reached their maximum at 45 °C, and then declined with a further increase in the polymerization temperature (Fig. 10b). Comparing the results from Figs. 1 and 10, it can be deduced that the Ci*, as the majority ones, were sensitive to the temperature, and a higher temperature (more than 30 °C) resulted in the inactivation of some Ci*. Meanwhile, Fig. 10(c) shows that the extremely high kp at 60 °C contributed to the highest iPP component at 60 °C. Considering the effect of polymerization time, the mass percentages of the iPP, miPP, and aPP fractions showed little change in the initial 10 min, while the iPP component increased from about 87 wt% to 93.7 wt% when the polymerization time reached 30 min (Fig. 10d). The [C*]/[Ti] producing the iPP, miPP and aPP fractions increased while kp decreased with increasing polymerization time in the initial 10 min (Figs. 10e and 10f). The above results showed that a large number of active centers, especially ones producing iPP fractions, were released through the fragmentation of the catalyst during the first 10 min of the polymerization process in the studied time scope.
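As a brief numerical check of Eqs. (2) and (4), the short Python sketch below evaluates both relations with the parameter values quoted in the text (Tm0 = 208 °C, σe = 70×10−7 J/cm2, ΔHm0 = 165.5 J/g for Eq. (2) and 165.5 J/cm3 for Eq. (4)). The measured melting enthalpy and melting temperature passed to the two functions are illustrative placeholders, not values read from Table 1 or Figs. 6-7.

# Numerical check of Eq. (2) (DSC crystallinity) and Eq. (4) (Thomson-Gibbs lamellar thickness).
# The constants are those quoted in the text; the function inputs below are illustrative
# placeholders, not data taken from this work.

TM0_K = 208.0 + 273.15   # equilibrium melting temperature of iPP, K [61]
SIGMA_E = 70e-7          # fold-surface free energy, J/cm^2 [62]
DH0 = 165.5              # melting enthalpy of perfect PP crystal: J/g in Eq. (2), J/cm^3 in Eq. (4) [48]

def crystallinity_percent(dHm_measured_J_per_g):
    # Eq. (2): Xc = dHm / dHm0 * 100
    return dHm_measured_J_per_g / DH0 * 100.0

def lamellar_thickness_nm(tm_celsius):
    # Eq. (4) rearranged for L: L = 2*sigma_e*Tm0 / (dHm0 * (Tm0 - Tm)), temperatures in K
    tm_K = tm_celsius + 273.15
    L_cm = 2.0 * SIGMA_E * TM0_K / (DH0 * (TM0_K - tm_K))
    return L_cm * 1e7    # convert cm to nm

if __name__ == "__main__":
    print(crystallinity_percent(90.0))   # ~54 % for an assumed dHm of 90 J/g
    print(lamellar_thickness_nm(164.0))  # ~9 nm for an assumed Tm of 164 degC, in line with the reported 9.1-9.4 nm

In this sketch the only sample-dependent input to Eq. (4) is Tm, which is why the calculated lamellar thickness tracks the melting temperature so closely and changes little across samples whose Tm values differ by only a few degrees.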
It is generally accepted that the molecular weight of PP from Ci* was significantly higher than that from active centers with lower stereoregularity. [64,65] The increase in the Mw of PP with the increase in polymerization time further proved the increase in the Ci*, as shown in Table 1. The isotactic active centers showed higher kp than the active centers with lower stereoselectivity (Fig. 10c). [42,63,66] CONCLUSIONS The polymerization temperature and time have complex influences on the propylene polymerization. The lamellar thickness of the PP crystals, in the range of 8.9−9.4 nm, was almost unaffected by the polymerization temperature and time and was smaller than most of the nanopore sizes in the catalyst. The activity of the propylene polymerizations with the MgCl2-supported Z-N catalyst increased rapidly during the first 3 min to reach a maximum value and then decreased gradually with the increase in the polymerization time. The largest activity, observed in the first 3 min, was attributed to the large number of active centers produced by the fragmentation of the catalyst and to the highest kp. However, Ci*, as the majority ones, increased gradually with prolonged polymerization time and occupied 91% of the total C* at 30 °C for 30 min. The active centers producing iPP, miPP, and aPP showed decreased kp with the extension of polymerization time, which was attributed to the increased diffusion limitation and explains the decaying activity of the propylene polymerization with increasing time. The polymerization activity increased obviously with increasing temperature, which can be attributed to the much higher kp at higher temperature. However, the total [C*] as well as [Ci*] increased to their highest values when the temperature increased to 30 °C, and then decreased greatly with a further increase in temperature. The Ci*, as the majority ones, showed much higher kp and higher temperature sensitivity than the active centers with lower stereoselectivity and produced iPP with higher Mw. Therefore, appropriate polymerization temperature (30 °C) and time (10 min
v3-fos-license
2021-07-27T00:05:42.723Z
2021-05-25T00:00:00.000
236363326
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1016/j.jtumed.2021.04.005", "pdf_hash": "1e7c0907759d9989b738a524a051627955c1239c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41116", "s2fieldsofstudy": [ "Medicine" ], "sha1": "1e7e8b576a07467769f484a8e6efe2d38a7e7fc8", "year": 2021 }
pes2o/s2orc
Evaluating the understanding about kidney stones among adults in the United Arab Emirates Objectives The prevalence of kidney stones is increasing worldwide. Multiple risk factors are believed to contribute to the development of kidney stones such as lifestyle, diet, and global warming. In the United Arab Emirates (UAE), there has been limited research exploring the prevalence and risk factors of kidney stones. This study attempts to assess the understanding and prevalence of kidney stones among adults in the UAE. Methods In this cross-sectional study, data were collected using a self-administered questionnaire, distributed among 515 participants (20–49 years old) from Abu Dhabi, Dubai, Ajman, and Sharjah states. IBM SPSS version 25 was used for data analysis. Results The mean of knowledge score was 56.4% (n = 500). There was no correlation between the knowledge of those who had experienced kidney stones and those who did not. Furthermore, a family history of kidney stones increased the risk of developing stones by 2.27 times. Among participants reporting signs, symptoms, diagnosis, and the management of kidney stones, the knowledge and understanding about kidney stones was high. However, the perceptions of the same cohort about dietary precautions were limited. While analysing the sources of knowledge, the Internet and mass media were twice as important as physicians in educating the population. Conclusion This study shows that the study cohort from the UAE population was aware of certain aspects of kidney stones but was quite naïve about its consequential risk factors. This highlights the importance of promoting education about kidney stones through health campaigns. Introduction Up to 12% of the world's population will develop kidney stones at some stage in their lives. 1 It is one of the most common urinary tract disorders.
There are no symptoms when the stone initially forms, but it can later present as severe flank pain, haematuria (blood in urine), urinary tract infection, blockage of urine flow, and hydronephrosis (dilation of kidney). 2 Kidney stones increase the risk of developing chronic kidney disease by 60% and end-stage renal disease by 40%; 3 they have also been associated with the development of papillary renal cell carcinoma (RCC). 4 Kidney stones are among the most painful urological disorders. Recently, they have become highly prevalent worldwide (7–13% in North America, 5–9% in Europe, 1–5% in Asia); their high occurrence rate and the expensive nature of disease management necessitate increasing awareness in the population. 4–6 Studies have confirmed the multi-factorial nature of the disease, and one of the most important determinants increasing the risk of developing stones is exposure to high temperatures. 7 The hot climate in the United Arab Emirates (UAE), especially in the summer, and inadequate compensatory water intake or re-hydration can dramatically influence the possibility of disease occurrence regardless of age or gender. Moreover, there is evidence that diet and food choice can seriously influence stone formation; however, the study revealed confusion in the choice of certain diets among the public. 8 According to recent studies, the prevalence of obesity and associated non-communicable diseases like diabetes are 32.3% and 15.5%, respectively, among expatriates in the UAE (which is the predominant population); these diseases may continue to rise with a lack of interventions, 9 and subsequently increase the prevalence of kidney stones. The importance of preventative strategies in managing the rising prevalence of kidney stone disease is severely underestimated. Guidelines and studies advocate the importance of dietary evaluation and counselling as part of the management of kidney stones, and adherence to these can drastically decrease the morbidity. However, such counselling is complex and the clinician is usually responsible for implementing it effectively. 10–12 The objective of this study is to determine the basic level of understanding about kidney stones and their risk factors, to investigate misconceptions, as well as to assess the potential sources of knowledge. This study also seeks to highlight the role of physicians in educating patients and spreading awareness in the population. 13 Materials and Methods This is a quantitative cross-sectional study that focuses on determining public knowledge and awareness regarding renal stones in the UAE. Data were collected using self-administered questionnaires comprising 15 questions, from adults aged between 20 and 49 years, from Abu-Dhabi, Dubai, Sharjah, and Ajman during October 2018. Sampling method The target population consists of adults between the ages of 20 and 49 years in the UAE. The inclusion criteria included any adult resident of the UAE. The exclusion criteria were all non-Arabic and non-English speakers. Participants were consecutively approached and asked to fill out the questionnaire. Tools of data collection A self-administered questionnaire was given to the participants, along with a consent form. Face validity of the questionnaire was reviewed by one of the supervisors, who is a biostatistical expert, and a pilot study was done on a subset of the sample along with the analysis. The questionnaire included closed-ended multiple choice questions categorised into the following sections:
1. Demographic data of the participant
2. History of the disease
3. Knowledge of kidney stones
4. Practices
5. Knowledge regarding renal stones prevention
6. Knowledge regarding renal stones management
Statistical analysis The data were coded and analysed using SPSS 25 (Statistical Package for Social Sciences). Chi-square and t-tests were used to determine correlations. Risk estimates with odds ratios (OR) were used to estimate how strongly a predictor is associated with kidney stones (an illustrative odds-ratio calculation is sketched at the end of this article). Additionally, the Kruskal-Wallis test was used in the analysis. Bar and pie charts were created using MS Excel to aid in visualising the quantitative analysis of knowledge of risk factors, diagnosis methods, and complications. A p-value less than 0.05 was considered statistically significant. Results This study involved the collection of data from a total of 515 participants. The age groups and percentages of the participants are indicated in Figure 1a. The distribution of the participants across both genders was approximately equal (females, 51.7% and males, 48.2%). The educational status varied across the sample, with a bachelor's degree accounting for the highest percentage, as indicated in Figure 1b. There was a lack of information to identify bowel condition and family history as important risk factors, as only 24.3% and 29.7%, respectively, identified them correctly. The participants' responses were recorded and presented in Table 1 (table caption: the following statements were used to assess knowledge amongst participants (n = 515); percentages correspond to participants who deemed these statements correct). The majority of the sample (82.5%) identified the complications of kidney stones correctly. Only 6.2% incorrectly considered stool analysis as a method of investigation, a very small percentage indicating a good background on the methods of investigation of kidney stones. As shown in Table 2, participants exhibited a poor level of knowledge regarding preventative foods; 80% assumed that vegetables prevent kidney stones, which, in reality, is not true. The same applies for dark chocolate (27.2%) and spinach (56.5%). A family history of kidney stones increases the risk of future stone development by 2.27 times (P-value 0.008). After comparing and correlating the results, it was found that the presence of a family history of kidney stones, which was observed in 43.1% of the population, increases their knowledge regarding the risk factors. Moreover, a prior history of kidney stones does not affect the knowledge and awareness among these participants. The study participants lacked knowledge regarding appropriate water intake (46% had sufficient knowledge), followed by diet factors (only 50.1%) and risk factors of kidney stones (51.6%). All of these factors were combined to determine the total knowledge score, which represents the overall knowledge of the participants regarding each of the individual factors concerning kidney stones (signs and symptoms, diagnosis, complications, risk factors, appropriate water intake and dietary factors). The final mean of the total knowledge score was 56.4%, which surprisingly indicates a poor level of knowledge in the sample. In addition, 53.4% of the study population are unaware of the consequences of kidney stones. There was no difference in the knowledge scores between males and females. Nevertheless, results showed that there is no difference in the knowledge level between participants in the age groups of 20–29 and 30–39 years.
However, people in the age group of 40–49 years have a significantly higher knowledge level compared to the other groups. Similar tests were performed to determine the mean difference between knowledge scores according to different educational levels; the results indicate that respondents with higher educational levels do not necessarily have higher knowledge. Participants taking the survey were asked to identify the methods of management for kidney stones. The variables mentioned in Table 3 indicate the list of options people were presented with, which include surgery and medications used to dissolve the stone along with follow-up. The results indicate that the survey respondents are evidently knowledgeable about surgical interventions not being a direct method of management; meanwhile, 63.5% were not aware that stones smaller than 5 mm are not to be referred for invasive management. Additionally, 54.6% did not know that stones larger than 5 mm are eligible for management. Finally, only 34.6% of participants did not know that if kidney stones are left untreated, they cause renal failure. Discussion This study shows that our cohort of participants has an average level of knowledge about kidney stones, particularly regarding diet-related risk factors. However, the cohort shows sound knowledge and understanding about symptomatic and diagnostic factors of kidney stones. The history of kidney stones and higher educational status do not correlate with better knowledge. These findings reflect the importance of awareness of risk factors and symptomatic manifestations of kidney stones, which will enable the general population to modify lifestyles to mitigate the risk of such a common health care problem. Interestingly, kidney stones constitute a hallmark urological condition that affects both genders equally. The disease is regarded as a multifactorial problem, influenced by family history, age, sex, diet, weather, and lifestyle. 14,15 Metabolic syndrome, a common disorder in the UAE, affecting about 40.1% of the population, was found to be a key factor in the development of kidney stones. This disorder necessitates several lifestyle modifications, including attention to hydration and adequate intake of minerals and essential nutritional elements. 16–18 Determining the community's awareness of and disposition towards kidney stone disease, as well as its treatment and management, helps to resolve issues that might reduce its occurrence. Different environmental factors as well as dietary intake significantly contribute to the occurrence of the stones and the resultant pathological damage, which can be easily predicted by factors such as inadequate water intake and hot weather. 19,20 In addition, most people in the UAE reach out for chilled bottles of soft drinks or consume tea and coffee instead of drinking plain, hydrating water. An informal survey has revealed that more people have green tea or a cola when they should have had water instead. This obviously correlates with our topic, since the biggest risk factor for kidney stones is dehydration. 19 Historically, the Gulf countries display an increased incidence of kidney stones due to socio-economic, environmental, and nutritional factors such as a high intake of oxalate-containing foods, low intakes of calcium-containing foods, and the region's dry climate. These factors eventually lead to dehydration, which significantly increases the risk of development of renal stones.
This highlights the importance of assessing the population's knowledge levels, as an improved understanding about these factors could help reduce morbidity related to kidney stones, specifically through early detection and management. 21 In the study conducted by Baatiah et al. in KSA in 2017, the investigators reported the prevalence of urolithiasis in the community to be as high as 11.2%, 21 The study further states that 37.7% respondents had low levels of awareness; 35.3%, moderate; and around 0.6%, high levels of awareness. It also reiterates the significance of improving the knowledge base of the general population about kidney stones by conducting nationwide public awareness campaigns among all age groups of the community. Studies have shown that a great majority of primary care physicians were aware of suitable preventive measures against recurring kidney stones. However, this information does not appear to be effectively implemented in clinical practice. Very few articles have focused on the awareness and understanding of individuals in our cohort. Limited data about the knowledge and understanding of the general population regarding renal stones is available in the UAE. 22,23 It is worth noting that the presence of modifiable and unmodifiable factors can potentially lead to malignant transformations in different human organs, such as low vitamin D levels and breast cancer, 24 family history for malignant melanoma, 25 high animal fat consumption for colorectal cancer, 26 and cholesterolosis for gallbladder carcinoma. 27 Similarly, long-standing stones in the urinary tract have been shown to be potentially carcinogenic and can lead to renal cell carcinoma and transitional cell carcinoma of the kidneys. 28 It is important to raise people's awareness of whether they have any pain that may be linked to the development of kidney stones and to clarify how to distinguish between them and any other diseases. It is also important to guide people towards a better and healthier lifestyle, especially in terms of water consumption (because people in the UAE are in a hot climate region), eating a healthy diet, tracking weight, and physical activity that will help reduce certain cases of formation of kidney stones and prevent further complications. 29 The significance of this research is that it seeks to clarify the proportion of our study sample that is aware of the risk factors of kidney stones, so that, if necessary, campaigns can be carried out to raise awareness of prevention. Consequently, the knowledge levels among the population will grow to encounter these risk factors or even the disease directly, and the education and expertise of the current population will influence the understanding of the next generation. Limitations The study used a non-probability convenience sampling method which affects the generalisability of the results. The information was acquired through self-reporting which is vulnerable to recall bias. Some of the cited articles used in this study are outdated and were used due to the lack of recent papers in the region on this topic. Conclusion Participants in general had varied responses to the different aspects of knowledge regarding kidneys stones. The findings indicate that approximately half of the respondents are somewhat aware of certain aspects regarding kidney stones, including complications, diagnosis, and management. 
Individuals who have a family history of kidney stones had a compellingly higher level of knowledge about kidney stones compared to the other group. In addition, participants in the age group of 40–49 years had a significantly higher knowledge level compared to the other groups. However, participants with a higher education level did not necessarily have greater knowledge than the rest of the population, which was quite enlightening. Recommendation A public health intervention aimed at correcting misconceptions and boosting preventative measures is recommended. A detailed inquiry into how and why the community incorporates certain behaviours that prevent urinary stones is worth investigating. A large-scale study that incorporates these points with a larger number of participants is recommended. Moreover, this study revealed a lack of knowledge in some disease aspects, which highlights the need for public health awareness regarding this disease. Source of funding This study did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
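As a brief illustration of the risk estimate reported in the Results above (family history of kidney stones associated with roughly 2.27-fold higher odds), the minimal Python sketch below shows how an odds ratio and its 95% confidence interval are computed from a generic 2×2 table. The counts used here are invented placeholders chosen only so the example lands near the reported value; they are not the study's data, and the calculation is a generic one rather than the authors' exact SPSS procedure.

import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    # 2x2 table:            family history   no family history
    #   kidney stones             a                  b
    #   no kidney stones          c                  d
    # OR = (a*d)/(b*c); the interval uses the standard error of log(OR).
    or_ = (a * d) / (b * c)
    se = math.sqrt(1.0/a + 1.0/b + 1.0/c + 1.0/d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts (NOT from this survey): 40 of 100 stone formers and
# 90 of 400 participants without stones report a family history of kidney stones.
print(odds_ratio_with_ci(a=40, b=60, c=90, d=310))   # OR ~ 2.3 with its 95% CI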
v3-fos-license
2018-04-03T03:29:28.651Z
2017-10-01T00:00:00.000
6272062
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.cureus.com/articles/9402-duplication-of-the-sphenomandibular-ligament.pdf", "pdf_hash": "6f708f01046afd843c51b7ca314855a7df1b7cdb", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41117", "s2fieldsofstudy": [ "Medicine" ], "sha1": "4d682b240c482e4efa8df8fcf2b8f568711663ff", "year": 2017 }
pes2o/s2orc
Duplication of the Sphenomandibular Ligament The normal origin of attachment of the sphenomandibular ligament is from the spine of the sphenoid bone, and derailment of its course might interfere with mandibular nerve anesthetic blockade. During routine dissection of the skull base and mandibular region, a case of an anatomical variation of the sphenomandibular ligament was observed. The ligament was found to be composed of two parts: an anterior part with a wide origin from the spine of the sphenoid bone and a posterior part arising from the mandibular fossa of the temporal bone. This case and related literature were reviewed. To our knowledge, a split sphenomandibular ligament has not been previously reported. Such a variation should be kept in mind by oral surgeons and dentists during procedures in this area such as inferior alveolar nerve anesthetic blockade. Introduction The sphenomandibular ligament (SML) develops from Meckel's cartilage and is flat and thin [1]. The superior attachment site of the ligament is the spine of the sphenoid bone [2] and the inferior attachment is on and around the lingula of the mandible. However, variations of the SML have been reported. For example, Garg and Townsend discussed ligaments ranging in shape from short to broad and bi-concave [3]. In all cases, the ligament was surrounded by fascia and could be found in the pterygomandibular space. Some have also described the SML as a continuation of fibers entering either the petrotympanic or squamotympanic fissures to attach onto the malleus of the middle ear [4]. The function of the SML has been reported to prevent inferior distraction of the mandible [5], as when the temporomandibular joint (TMJ) is in a closed position, the SML is slack. However, the most important relationship of the SML is its relationship to the inferior alveolar nerve and nerve to the mylohyoid [4]. Herein, we report an unusual origin of the SML and discuss this in regard to other salient reports from the literature. Case Presentation The head of a fresh frozen cadaveric 91-year-old Caucasian female was dissected. During dissection, an unusual variant of the SML was identified. The SML in this specimen was found to have a dual origin and thus appeared to be duplicated. The origins were from the sphenoid and temporal bones. One part had a wide origin from the spine of the sphenoid bone and extended to the petrotympanic fissure (anterior SML). The spine of the sphenoid bone was not well-developed on this side. The other part originated from the mandibular fossa of the temporal bone (posterior SML) (Figure 1). [FIGURE 1 caption: Lateral view of the right SML (gray). The upper part of the ramus of the mandible has been removed. Note that the origin of the posterior SML (white arrow) was the temporal bone. A, anterior SML; P, posterior SML; SML, sphenomandibular ligament.] Both SMLs shared a common wide distal attachment onto the lingula of the mandible. The two ligaments also had a communicating band between them near their insertion (Figure 2). [FIGURE 2 caption: Note that the anterior and posterior SML have a common attachment (arrowheads) with a small communicating fiber (arrow).] Discussion It is important to note that the variation in the site of attachment of the SML is significant embryologically. The SML has three distinct regions. Embryologically, the anterior and posterior parts of Meckel's cartilage undergo ossification that later contributes to the malleus, incus, and mandible. The middle region does not ossify and becomes the SML [2].
The spine of the sphenoid bone is known as the origin of the SML. According to Burch, 88.2% (45/51) of SMLs originated from the petrotympanic fissure and 11.8% (6/51) originated from the spine of the sphenoid bone [4]. Ouchi investigated 98 sides from Japanese cadaveric heads and found that 20.4% (20/98) of ligaments originated from the spine of the sphenoid bone (type I), 34.7% (34/98) from the spine and petrotympanic fissure (type II), and 44.9% (44/98) from the spine, petrotympanic fissure, and retrodiscal tissues (type III) [6]. The spine of the sphenoid bone was the most developed in type I ligaments, and this author speculated that this was due to tension from the sphenomandibular ligament [6]. The present case demonstrated a posterior SML arising from the mandibular fossa of the temporal bone and an anterior SML arising from the spine of the sphenoid and the petrotympanic fissure. This wide attachment seems similar to Ouchi's type III classification, although the origin of the SML in our case had two heads. Interestingly, the older term for the SML was the tympanomandibular ligament or malleolomandibular ligament [2,7], as the ligament might have continuity with the malleus of the middle ear. In the present case, however, the posterior part of the SML was only attached to the surface of the mandibular fossa of the temporal bone. Such an irregular ligament might impact an inferior alveolar nerve blockade [8]. The SML variation identified here may act as a barrier to the injected anesthetic in a case where the injection is given too shallowly. Conclusions It is important for dentists, oral surgeons, and maxillofacial surgeons to note that such a variation of the SML, as reported herein, might exist. How such a variant affects mandibular function remains to be determined. A variation like this could complicate an inferior alveolar nerve blockade. We believe that our case is unique and, to our knowledge, has not been previously reported in the extant literature. Additional Information Disclosures Human subjects: Consent was obtained from all participants in this study. Human cadaveric tissue was used for dissection. The present study protocol did not require approval by the ethics committees in our institutions, and work was performed in accordance with the requirements of the Declaration of Helsinki (64th WMA General Assembly, Fortaleza, Brazil, October 2013). Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
v3-fos-license
2020-12-31T09:05:49.752Z
2020-12-24T00:00:00.000
234393001
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2079-6412/11/1/10/pdf", "pdf_hash": "4fb2dcbe6090df225fe7ad568165734ff274add2", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41119", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "sha1": "fd5ed7278de510a52a3154fc241e528d325c5d6e", "year": 2020 }
pes2o/s2orc
Improvement of the Tribological Properties and Corrosion Resistance of Epoxy–PTFE Composite Coating by Nanoparticle Modification In order to meet the requirements of high corrosion resistance, wear resistance, and selflubrication of composite coatings for marine applications, epoxy matrix composite coatings containing PTFE and TiO2 nanoparticles were prepared on the steel substrate. With silane coupling agent KH570 (CH2=C(CH3)COOC3H6Si(OCH3)3), titanium dioxide nanoparticles were modified, and organic functional groups were grafted on their surface to improve their dispersion and interface compatibility in the epoxy matrix. Then, the section morphology, tribological, and anticorrosion properties of prepared coatings, including pure epoxy, epoxy–PTFE, and the composite coating with unmodified and modified TiO2, respectively, were fully characterized by scanning electron microscopy, friction–abrasion testing machine, and an electrochemical workstation. The analytical results show that the modified TiO2 nanoparticles are able to improve the epoxy–PTFE composite coating’s mechanical properties of epoxy–PTFE composite coating including section toughness, hardness, and binding force. With the synergistic action of the friction reduction of PTFE and dispersion enhancement of TiO2 nanoparticles, the dry friction coefficient decreases by more than 73%. Simultaneously, modified titanium dioxide will not have much influence on the water contact angles of the coating. A larger water contact angle and uniform and compact microstructure make the composite coating incorporated modified TiO2 nanoparticles show excellent anti-corrosion ability, which has the minimum corrosion current density of 1.688 × 10−7 A·cm−2. Introduction For the advantages of excellent friction, stable chemical property, and low cost [1][2][3], epoxy resin is one of the excellent polymer coating materials, which is widely applied in the metal protection, electronics, and medical equipment [4,5]. Especially, epoxy-PTFE composite coatings have low friction coefficient and anticorrosion ability as well as high temperature resistance [6]. Such coatings could increase the anticorrosion capacity of metals and the self-lubricity of bearings as well as modify the hydrophobic and ice-phobic properties for wind turbine blades [7][8][9]. However, due to the curing shrinkage and warpage deformation of epoxy, there are always a lot of micro-pores and cracks in the composite coating. Moreover, the low hardness of PTFE will also lead to the poor wear resistance of the epoxy-PTFE composite coating [10,11]. A viable solution is to add hard or inorganic particles, such as TiO 2 , CuO, CuF 2 , CuS, and Al 2 O 3 , which will improve the tribological properties of the coating [12][13][14][15]. Larsen et al. [14] obtained a composite coating containing CuO and PTFE by mixing CuO and PTFE with epoxy solution. The evidence shows that both CuO and PTFE particles are well dispersed in the epoxy. The incorporation of PTFE and CuO has a positive synergistic effect on the friction and wear properties when the content of CuO is in the range of 0.1-0.4 vol.%. Hamad et al. [15] investigated the mechanical properties of toughened epoxy by utilizing two kinds of nanoparticles sizes of TiO 2 (17 and 50 nm) at different weight fractions (1%, 3%, 5%, and 10%). The results indicate that the addition of a small fraction of TiO 2 nanoparticles can bring an improvement in the mechanical properties of epoxy composite. 
Researchers also expect that the addition of some lubricating substances will have a good effect on the performance of the epoxy resin composite coating [16,17]. The tribological properties of epoxy composites were also studied by Chang et al. [18]. With different proportions of graphite, PTFE, short carbon fiber, and TiO 2 and their combinations as additions, the frictional coefficient, wear resistance, and contact temperature of composite coatings were tested in a dry sliding condition with different sliding velocities and contact pressures. Results show that TiO 2 works best in improving the wear resistance of epoxy. On the other hand, in order to meet the requirements of a corrosive environment, the corrosion resistance of polymer coatings has received much interest recently. Shi et al. [19] reported two methods to improve the dispersing of nanoparticles in epoxy coatings, including silane treatment and preparing nano-oxide paste. Compared with nano-TiO 2 paste and silane-treated nano-SiO 2 , the latter is better than the former in improving the corrosion resistance and hardness of epoxy. Fadl et al. [20] fabricated TiO 2 nanoparticles by a simple template-free sol-gel method and mixed them with poly-dimethylamino siloxane (PDMAS) to form a PDMAS/TiO 2 nanocomposite as a modifier for polyamine-cured epoxy coating. The corrosion resistance of PDMAS/TiO 2 epoxy coating versus unmodified epoxy was investigated by a salt spray accelerated corrosion test. The PDMAS/TiO 2 epoxy coating has better corrosion mitigation and a self-healing effect. Radoman et al. [21] obtained epoxy/TiO 2 nanocomposites by the incorporation of modified TiO 2 nanoparticles with gallic acid esters in epoxy. Due to the deoxidizing effect of modified TiO 2 , the nanocomposites have better corrosion resistance than that of the pure epoxy. Therefore, it is possible to obtain better corrosion resistance of epoxy composite coating by incorporating and modifying nanoparticles. From the perspective of comprehensive performance, TiO 2 nanoparticles were selected as the additive in this study. SiO 2 nanoparticles are mainly used to improve the mechanical properties and heat resistance of epoxy composites. Nano-zinc oxide has the characteristics of high activity, large specific surface area, easy agglomeration, and long dispersion time before preparation. Therefore, it is a difficult point to prepare nano-zinc oxide-modified epoxy resin. The surface of Al 2 O 3 contains a large number of hydroxyl groups, which makes it difficult to evenly disperse in epoxy materials. On the contrary, TiO 2 nanoparticles not only have good chemical stability but also have excellent heat resistance and UV protection, so it is widely used in the fields of UV-resistant materials, packaging materials, and coatings. At the same time as rigid nanoparticles and its strong adhesion, TiO 2 is often used as a modified filler to improve the bending strength, tensile strength, and impact strength of epoxy resin. In the wear resistance test, TiO 2 nanoparticles can significantly improve the wear resistance of epoxy resin, because TiO 2 nanoparticles have a large specific surface area and a large contact surface with the substrate, which requires more external energy when sliding [22,23]. Overall, the current research shows that pure epoxy resin coating has good corrosion resistance, but the high friction coefficient and poor wear resistance could not meet the requirements for use under friction conditions. 
Adding PTFE can reduce the friction coefficient, but the hardness and wear resistance are still low. Generally, incorporating soft phase PTFE and hard phase TiO 2 is an effective way to improve the tribological performance of a coating. Nevertheless, due to agglomeration and the poor compatibility of nanoparticles in the coating, composite coatings that involve nanoparticles often show low corrosion resistance. In the paper, the research focused on the improvements of both the tribological property and corrosion resistance of epoxy composite coating. TiO 2 nanoparticles were modified by a silane-coupling agent to reduce their surface tension, avoid agglomeration, and improve the interfacial compatibility with epoxy firstly. Then, the influence of modified TiO 2 on the tribological properties and corrosion resistance of an epoxy-PTFE composite coating were analyzed in detail. TiO 2 Modification In this study, titanium dioxide nanoparticles were selected as the strengthening particles, the average diameter of which are 40 nm. They were modified by the silane coupling agent KH570 (CH 2 =C(CH 3 ) COOC 3 H 6 Si(OCH 3 ) 3 ) to graft organic functional groups on the surface of TiO 2 nanoparticles. The unique method is described as follows. Ethanol was used as the dispersant, and the pH value was adjusted by acetic acid to 6. The silane coupling agent KH570 and TiO 2 nanoparticles were added into a certain volume of dispersed solutions at a mass ratio of 15:100. Firstly, the configured modified liquid was sonicated by ultrasonic liquid processors for 10 min. Then, it was disposed with a magnetic stirrer at 50 • C for 240 min with a speed of 300 r/min. Subsequently, the modified powder was washed with ethyl alcohol and deionized water three times to remove excess organosilane. Finally, collected precipitates were dried at 80 • C in a drying oven and stored in vials for further testing. The presence of organic phase on the modified TiO 2 nanoparticles surface was tested with the FT-IR spectrum (PerkinElmer Co., LTD., Shanghai, China). Coating Preparation Four kinds of coatings were prepared, including epoxy, epoxy-PTFE, epoxy-PTFE/TiO 2 (unmodified), and epoxy-PTFE/TiO 2 (modified). Carbon tool steel (SK85) and n-butanol are chosen for the matrix material and diluent, respectively, and phenolic amine resin (T13) was the curing agent for epoxy resin (E44). The mass ratio of T13/n-butanol/E44 was 1:2:4. The mass content of PTFE and TiO 2 was controlled at 15% and 2%, respectively. During the preparation process, after stirring the mixture thoroughly, it was ultrasonicated for 10 min to allow complete defoaming. Subsequently, the as-cleaned steel panels were immersed in the mixture solution adequately for 20 min, and we used a small mold to ensure that the thickness of the samples were uniform at 52 ± 3 µm. Then, the drying was carried out in an oven at 90 • C for 180 min. Finally, samples were successfully prepared for evaluating their surface morphology as well as mechanical and corrosion properties. Coating Performance Testing The performance test of coatings mainly includes the surface morphology, hardness, bonding force, tribological properties, and corrosion resistance. The section morphology of the coatings was characterized by scanning electron microscopy (SEM, FEI, Quanta200, OR, USA). Shore durometers (Petey Testing Instrument Co., LTD., Guangzhou, China) were used for testing the micro-hardness. The average micro-hardness value was acquired by averaging the results of six measurements. 
The binding force of the coatings (the average of two sets of measurements) was tested with a pull-off tester (DeFeLsko Co., LTD., Ogdensburg, NY, USA). The tribological properties of the coatings were examined by a friction-abrasion testing machine (SFT-2M, ZhongKe-kaihua, Lanzhou, China) at a load of 5.0 N and a stage rotation speed of 200 r/min. The friction counterparts were GCr15 steel balls with a diameter of 5.0 mm; the radius of the friction circle was set to 2 mm, and the duration of each wear test was 10 min. Subsequently, the friction-wear behavior of the coatings was estimated by SEM. The corrosion resistance of the composite coatings was measured by an electrochemical workstation (CHI660B, Chenhua, Shanghai, China) after being soaked in 3.5% NaCl solution for 72 h. The samples were used as the working electrode, while a platinum sheet and a saturated calomel electrode (SCE) were the counter and the reference electrodes, respectively. The test frequency of electrochemical impedance spectroscopy (EIS) ranged from 1000 Hz down to 10 Hz, and the scanning voltage was 10 mV. The corrosion potential (Ecorr) and corrosion current density (icorr) were obtained from the potentiodynamic polarization curves (an illustrative sketch of this kind of Tafel extraction is given after the nanoparticle modification results below). All corrosion tests were performed at room temperature. The Modification of TiO2 Nanoparticles The transmission FT-IR spectra of TiO2 nanoparticles with and without KH570 modification were compared, as shown in Figure 1. For unmodified TiO2 nanoparticles, the absorption band between 3400 and 3500 cm−1 corresponds to the hydroxyl group (-OH) due to the migration of partial electron-hole pairs on the surface of the TiO2 nanoparticles. The vibration absorption peak situated in the wavenumber interval of 500−750 cm−1 demonstrates the presence of Ti-O-Ti groups. However, new absorption bands appear in the FT-IR spectrum of the modified TiO2 nanoparticles in curve b, with characteristic absorption peaks of KH570 at 2917 cm−1 (-CH3 and -CH2), 1717 cm−1 (C=O), 1620 cm−1 (C=C), and 500−750 cm−1 (Ti-O-Si), so it could be inferred that the organic functional group has been grafted onto the surface of the TiO2 nanoparticles successfully. Figure 2 shows the energy-dispersive spectrometer (EDS) patterns of the TiO2 nanoparticles before and after modification. The content of Si and C elements in Figure 2b is much higher than that in Figure 2a. The silane coupling agent molecule can be adsorbed on the surface of TiO2 nanoparticles by its hydrophilic end and can react with the surface -OH groups on TiO2 nanoparticles; therefore, the modified TiO2 nanoparticles contain Si and C elements from KH570 (CH2=C(CH3)COOC3H6Si(OCH3)3). All these experimental results demonstrate that the organic functional group of KH570 was successfully grafted onto the surface of the TiO2 nanoparticles.
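Before turning to the coating morphology, a note on the electrochemical analysis described in the experimental section: Ecorr and icorr values of the kind reported for these coatings are normally extracted by fitting a Tafel (Butler-Volmer-type) expression to the measured polarization curve around the corrosion potential. The minimal Python sketch below illustrates one way to do such a fit with scipy; the polarization data it fits are synthetic, and the Tafel slopes, potentials, noise level, and starting guesses are assumptions for demonstration only, not the measurements reported in this work.

import numpy as np
from scipy.optimize import curve_fit

def tafel_current(E, i_corr, E_corr, beta_a, beta_c):
    # Butler-Volmer/Tafel form of the net current density (A/cm^2);
    # beta_a and beta_c are the anodic and cathodic Tafel slopes in V/decade.
    eta = E - E_corr
    return i_corr * (10.0 ** (eta / beta_a) - 10.0 ** (-eta / beta_c))

# Synthetic "measured" polarization curve, for illustration only (not data from this study).
E = np.linspace(-0.95, -0.65, 200)                       # potential, V vs. SCE
rng = np.random.default_rng(0)
i_meas = tafel_current(E, 1.7e-7, -0.80, 0.12, 0.10) * (1 + 0.05 * rng.standard_normal(E.size))

# Least-squares fit of i_corr, E_corr and the two Tafel slopes.
popt, _ = curve_fit(tafel_current, E, i_meas, p0=[1e-7, -0.8, 0.1, 0.1], maxfev=20000)
i_corr, E_corr, beta_a, beta_c = popt
print(f"E_corr = {E_corr:.3f} V vs. SCE, i_corr = {i_corr:.2e} A/cm^2")

In practice the fit is often performed on log10 of the absolute current rather than on the raw current, and only over a window of roughly 50-250 mV on either side of Ecorr, but the structure of the calculation is the same.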
The fractured surfaces of the four kinds of coatings were examined by scanning electron microscopy (SEM), as shown in Figure 3. It can be seen that the fractured surface of the epoxy coating is very smooth (Figure 3a), even at 5000× magnification. With the addition of PTFE and TiO2 nanoparticles, the fractured surfaces of the composite coatings become much rougher than that of the pure epoxy coating. Furthermore, by comparing Figure 3b,c, it can be observed that there are still some flat areas and aggregated TiO2 nanoparticles on the cross-section of the coating with unmodified TiO2 nanoparticles (Figure 3c). With modified TiO2 nanoparticles, however, the fracture surfaces of the composite coating are much rougher (Figure 3d), which can be attributed to the dispersive distribution of TiO2 preventing crack propagation and to the excellent interfacial compatibility reducing the number of crack sources. Generally, an increased section roughness means that the path at the crack tip is distorted, so the coating can absorb more energy during the fracture process and possess better fracture toughness. Therefore, owing to the organic functional groups grafted onto the surface of the modified TiO2 nanoparticles, the toughness of the composite coating is improved [24,25].
Figure 4 shows the hardness values of the different coatings. The hardness decreases in the order epoxy-PTFE-TiO2 (modified), epoxy, epoxy-PTFE-TiO2 (unmodified), and epoxy-PTFE. The soft PTFE particles improve the lubricity of the coating but also reduce its hardness. In contrast, modified hard TiO2 nanoparticles enhance the hardness of the epoxy-PTFE composite coating more effectively than unmodified TiO2 nanoparticles. This is because the modified TiO2 nanoparticles disperse more evenly in the coating and thus provide a dispersion-strengthening effect.

Interfacial Adhesion
The interfacial bonding strength of the different coatings is given in Figure 5. As Figure 5 shows, PTFE enhances the interfacial adhesion of the epoxy coating from 1.43 to 2.30 MPa. Moreover, with the addition of modified TiO2 nanoparticles, the bonding strength of the composite coating reaches 2.70 MPa, which is higher than that of the coating containing unmodified TiO2 nanoparticles. We interpret this as follows: the introduction of organic functional groups on the surface of the TiO2 nanoparticles results in the formation of hydrogen bonds and Si-O bonds between the TiO2 nanoparticles and the coating. The modified TiO2 nanoparticles have strong interfacial adhesion with the surrounding particles, and the van der Waals attraction becomes stronger; therefore, the epoxy-PTFE-TiO2 (modified) composite coating has the highest interfacial adhesion value. This corresponds to the improvements in section roughness and in the hardness of the coating.
Tribological Properties
Figure 6 shows the friction coefficient curves of the different coatings versus sliding time. As shown in Figure 6, with the addition of PTFE to the epoxy, the friction coefficient decreases from 0.6 to about 0.16. Furthermore, compared with curve C (unmodified TiO2 nanoparticles), the friction coefficient of the composite coating with modified TiO2 nanoparticles is less than 0.10, as shown in curve D.

The SEM images of the worn surfaces are shown in Figure 7. After adding PTFE, the wear track width of the epoxy coating decreases from 634.21 µm (Figure 7a) to 321.56 µm (Figure 7b). Furthermore, with the addition of unmodified TiO2 nanoparticles (Figure 7c), the width of the wear track narrows to 274.51 µm. With modified TiO2 nanoparticles, the wear track becomes smoother and narrower still (Figure 7d). On the other hand, there are many micropores on the worn surface of the epoxy coating, which indicates that the epoxy coating itself is not dense. When PTFE is incorporated, there are no obvious micropores. The self-lubricating PTFE spreads over the wear surface (as shown in curves A and B), reduces the friction coefficient, and produces a much smoother worn track (Figure 7b). When unmodified TiO2 nanoparticles were added to the coating, particle agglomeration resulted in some micropores on the worn surface (Figure 7c). In contrast, there are hardly any obvious micropores on the worn surface with modified TiO2 nanoparticles (Figure 7d). Clearly, when modified TiO2 nanoparticles are present in the composite coating, the uniformly dispersed nanoparticles and the excellent interfacial compatibility lead to the narrowest and smoothest wear track, and composite coating D shows the best wear resistance.
Summing up, the friction coefficient of the composite coatings containing PTFE is significantly improved because of the formation of a PTFE lubricating transfer film on the wear surface. However, the wear resistance of the soft epoxy-PTFE composite coating is still weak. With PTFE and TiO2 coexisting in the epoxy-PTFE/TiO2 (unmodified) composite coating, the combination of soft-phase and hard-phase particles leads to higher wear resistance. In contrast, the relatively good interfacial compatibility brought about by the modified TiO2 nanoparticles reduces the furrowing and adhesive effects, which leads to a lower friction coefficient, a much smoother worn surface, and much higher wear resistance. Meanwhile, the addition of PTFE and TiO2 nanoparticles may affect the mechanical properties, and the good interfacial compatibility of the modified TiO2 nanoparticles may improve the mechanical stability of the coating, which may also affect the tribological properties. As described by Homaeigohar et al., the uniform distribution of functionalized graphite nanofilaments can improve the mechanical stability of nanocomposite hydrogels, and the addition of tricalcium phosphate can affect the mechanical properties of polyethylene [26,27]. All in all, the synergistic action of the friction reduction provided by PTFE and the dispersion strengthening provided by the modified TiO2 nanoparticles gives the epoxy-PTFE/TiO2 (modified) composite coating its excellent friction-reduction and anti-wear performance.

Hydrophobic Performance
The water contact angle of the coatings was measured to evaluate their hydrophobic properties, as shown in Figure 8. The water contact angles of the pure epoxy coating and the epoxy-PTFE composite coating are 74.81° and 100.90°, respectively, which indicates that PTFE can improve the hydrophobicity of the coating. With the further addition of TiO2 nanoparticles to the epoxy-PTFE coating, the water contact angle of the composite coating is greatly reduced to 56.84° owing to the presence of hydroxyl groups on the surface of the TiO2 nanoparticles, so the coating becomes hydrophilic. The small contact angle of coating C will result in the worst corrosion resistance. In contrast, when modified TiO2 nanoparticles are added instead of unmodified TiO2, the water contact angle of the composite coating approaches that of the pure epoxy coating, and the hydrophilicity of the coating decreases greatly. However, because of the steric hindrance effect, the hydroxyl groups on the surface of TiO2 are not fully replaced by organic functional groups, so the coating still has a certain hydrophilicity.

When PTFE was added to the epoxy coating as a hydrophobic filler, the water contact angle rose to 100.90°, showing hydrophobicity. Due to the presence of hydrophilic hydroxyl groups on the surface of unmodified TiO2, the interfacial compatibility between TiO2 and the epoxy coating is extremely poor, resulting in easy agglomeration of TiO2.
Therefore, as shown in Figure 3c, there are still some flat areas on the cross-section of that coating, and the water contact angle of epoxy-PTFE/TiO2 (unmodified) drops to 56.84°, showing hydrophilicity. Owing to steric hindrance, part of the hydroxyl groups on the modified TiO2 surface is replaced with organic functional groups. The interfacial compatibility between the TiO2 nanoparticles and the epoxy coating is enhanced, and the coating fracture surface becomes rougher, as shown in Figure 3d. The water contact angle of this composite coating approaches that of the pure epoxy coating, while the hydrophilicity of the coating decreases greatly. All in all, the increase in contact angle is beneficial to improving the corrosion resistance of the coating.

Potentiodynamic Polarization
The potentiodynamic polarization curves of the different coatings are shown in Figure 9, and Table 1 lists the corrosion current density (icorr) and corrosion potential (Ecorr). The experimental results indicate that, with the addition of PTFE, the corrosion current density decreases and the corrosion potential of the coating increases. Due to the high hydrophilicity of TiO2 nanoparticles, water molecules carrying the corrosive medium are more likely to penetrate the coating, reach the substrate, and thereby reduce the corrosion resistance of the coating.
As shown in Figure 9, after adding unmodified TiO2 nanoparticles to the epoxy-PTFE coating, the corrosion current density of the coating increases and the corrosion potential decreases. In contrast, after adding the modified TiO2 nanoparticles, the epoxy-PTFE-TiO2 (modified) composite coating has the minimum corrosion current density of 1.688 × 10−7 A·cm−2, and the corrosion potential increases to some extent (−0.503 V). By comparison, the corrosion resistance of the coating containing TiO2 before modification is the worst. It can be inferred that the modified TiO2 nanoparticles have better interfacial compatibility with the epoxy, which prevents the ingress of corrosive media and thus improves the corrosion resistance of the epoxy-PTFE coating.

Electrochemical Impedance Spectroscopy
Figure 10 shows the electrochemical impedance spectroscopy (EIS) data. As can be seen from Figure 10a,b, when PTFE and unmodified TiO2 nanoparticles are incorporated into the coating, the arc radius in the Nyquist plot and the low-frequency impedance in the Bode plot are very small; therefore, the addition of unmodified TiO2 nanoparticles reduces the corrosion resistance of the epoxy-PTFE coating. On the contrary, after adding modified TiO2 nanoparticles to the epoxy-PTFE coating, both the arc radius in the Nyquist plot and the low-frequency impedance in the Bode plot are larger, and the corrosion resistance of the epoxy-PTFE coating is greatly improved. These results are consistent with the Tafel curve measurements. Evidently, the poor corrosion resistance of coating C has been effectively addressed by the modification treatment of the TiO2 nanoparticles.
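For readers who prefer a rate estimate, the measured icorr of the epoxy-PTFE-TiO2 (modified) coating can be converted into an approximate corrosion rate using the standard Faraday-law relation (as given, for example, in ASTM G102). The equivalent weight and density below are typical handbook values for carbon steel corroding as Fe2+ and are our assumptions, not values reported in this work:

CR (mm/year) ≈ 3.27 × 10−3 × icorr (µA/cm²) × EW / ρ
 ≈ 3.27 × 10−3 × 0.1688 µA/cm² × 27.9 g/eq / 7.87 g/cm³
 ≈ 2 × 10−3 mm/year (about 2 µm/year)

This is only an order-of-magnitude illustration of how low the measured current density is; the actual attack rate of the coated substrate also depends on coating defects and exposure time.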
Conclusions
In order to meet the requirements of high corrosion resistance, wear resistance, and self-lubrication for composite coatings in marine applications, epoxy composite coatings containing PTFE and TiO2 nanoparticles were prepared in this work. Through the modification treatment of TiO2, the compactness of the coating is increased and its hydrophilicity is decreased, which leads to excellent tribological properties and corrosion resistance. The specific conclusions are as follows:
• Modifying the surface of TiO2 nanoparticles by grafting organic functional groups can enhance their dispersion and further improve their interfacial compatibility with the epoxy matrix. At the same time, the bonding force between the coating and the substrate is increased. Compared with pure epoxy, epoxy-PTFE, and the composite coating with unmodified TiO2, the epoxy-PTFE-TiO2 composite coating with modified TiO2 has the lowest friction coefficient and the best wear resistance.
• With the incorporation of modified TiO2 nanoparticles, the hydrophilicity of the epoxy-PTFE composite coating decreases significantly, which is beneficial to improving the corrosion resistance of the composite coating. Simultaneously, the modified TiO2 nanoparticles improve the interfacial compatibility and densification of the composite coating. There are very few micropores in the coating, so it is difficult for water molecules carrying corrosive media to penetrate the coating and reach the substrate. The composite coating containing modified TiO2 nanoparticles therefore possesses better corrosion resistance than the coating with unmodified TiO2. Through the modification treatment of the TiO2 nanoparticles, the poor corrosion resistance of the epoxy-PTFE-TiO2 composite coating has been effectively addressed.

Funding: The authors acknowledge the financial support from the Natural Science Foundation of Heilongjiang Province (JJ2019LH1520) and the Fundamental Research Funds for the Central Universities (3072020CF0705).
Data Availability Statement: Data is contained within the article.
Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2021-08-02T00:06:08.624Z
2021-05-07T00:00:00.000
236580790
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.ajol.info/index.php/bcse/article/download/206841/195048", "pdf_hash": "4b4ec68d268eef06dfaa10b7b063b7a36600be5b", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41124", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "sha1": "e49d857c7b6f91ee729b5c5719fd566faa4ae3d7", "year": 2021 }
pes2o/s2orc
SYNTHESIS, SPECTRAL AND SOL-GEL BEHAVIOR OF MIXED LIGAND COMPLEXES OF TITANIUM(IV) WITH OXYGEN, NITROGEN AND SULFUR DONOR LIGANDS

A new route to synthesize nano-sized Ti(IV) mixed ligand complexes has been investigated by the reaction of titanium(IV) chloride with ammonium salts of dithiophosphate and 3-(2'-hydroxyphenyl)-5-(4-substituted phenyl)pyrazolines. The resultant complex is then treated with H2S gas to obtain a sulfur-bridged dimer of the Ti(IV) complex, a precursor of TiS2. The morphology of the complexes was studied by employing XRD, which shows that all the complexes are amorphous solids. Molecular weight measurements and elemental analysis, in conjunction with spectroscopic (IR, 1 H NMR, 13 C NMR and 31 P NMR) studies, revealed the dimeric nature of the complexes, in which pyrazoline and dithiophosphate are bidentate. Scanning electron microscopic images and XRD indicate that the particles are in the nano range (50 nm). Putting all the facts together, coordination number six is proposed for titanium with octahedral geometry.

INTRODUCTION
Titanium proves to be an excellent corrosion-resistant material in many environments as it forms a protective oxide layer on its surface [1,2]. The high tensile strength, light weight and excellent corrosion resistance make titanium a useful alloying agent for many parts of high-speed aircraft, motorbikes, ships and missiles [3][4][5]. Titanium, being a biocompatible material, has found application in prosthetic devices [6,7]. The Ti(IV) complexes with nitrogen, oxygen and sulfur donor ligands have received considerable attention due to their widespread utilization as active precursors for making TiO2 and TiS2 [8]. Owing to the hard acid character of titanium, the synthesis of its simple thiolates was not possible. Attempts have been made to reduce the acidic strength of the titanium metal centre by attaching electron-rich ligands such as dialkyl nitrogen and cyclopentadienyl, which then form stable complexes with soft bases [9]. The highly sensitive nature of titanium complexes towards hydrolysis reduces their activity towards different applications [10]. Available reports show that the addition of bulky electron-rich ligands to the Ti metal centre increases the resistance of metal complexes towards hydrolysis [11,12]. The excellent biological activity of sulfur-containing transition metal complexes makes them interesting [13]. Several reports are available on alkylene and O,O'-dialkyl dithiophosphate derivatives of Ag(I), Zr(IV), Fe(II) and Cu(II) [14,15]. Carmalt et al. [9] reported titanium pyridine and pyridine thiolates as precursors for the production of titanium disulfide. Ti(IV) has been extensively used for the polymerization of ethylene and propylene [16,17]. A salen-Ti(IV) complex has been effectively employed in the controlled polymerization of D,L-lactic acid [18]. Park et al. [19] designed and synthesized a new class of green colored titanium complexes with a dithiolate ligand for LCD and TFT panels. The first non-platinum anticancer drugs exhibiting excellent efficacy were the titanium-based titanocene dichloride and budotitane [20]. Later on,

The corresponding ammonium salts of the synthesized dithiophosphoric acids have been prepared by passing dry ammonia gas through their benzene solutions (Eq. 3-4). The structures of the ammonium salts of the substituted dithiophosphate ligands are shown in Figure 1.

Synthesis of substituted pyrazoline ligands
Substituted pyrazoline ligands were synthesized by a reported procedure [38]. (a) Synthesis of substituted 2'-hydroxychalcone.
A hot solution of sodium hydroxide was added to a mixture of o-hydroxyacetophenone and substituted benzaldehyde in ethanol. The mixture was stirred at room temperature for 6-8 hours. The sodium salt of the chalcone was obtained as dark yellow thick mass. It was cooled in ice and neutralized with aqueous acetic acid (50%). The yellow solid separated was filtered and washed with water before drying. Crystallization from ethanol yielded yellow needles. (b) Synthesis of substituted pyrazoline. A mixture of substituted 2'-hydroxychalcone and hydrazine hydrate in ethanol was refluxed for 3-4 hours. It was allowed to cool at room temperature. A white crystalline solid thus obtained was separated, washed with water and dried. Recrystallization with ethanol afforded white crystals of pyrazoline. The structure of substituted pyrazoline ligand is shown in Figure 2. Synthesis of TiCl 2 (C 15 H 12 N 2 OX)(RO) 2 PS 2 A benzene solution of pyrazoline (1.21 g, 5.10 mmol) was added dropwise with constant stirring to the titanium tetrachloride (0.96 g, 5.11 mmol) suspension at room temperature. To ensure the completion of reaction, the reaction mixture was stirred for 2-3 hours. To the above reaction mixture, the solution of ammonium salt of dithiophosphate in methanol was added dropwise under constant stirring for 3-4 hours. The by-product (NH 4 Cl) was filtered off using alkoxy funnel. A reddish-brown solid compound was obtained (1.76 g, 88%) after removal of the volatiles from the filtrate under reduced pressure. The same procedure was adopted for the synthesis of all the compounds . The two-step reaction scheme is proposed for the synthesis of mixed ligand titanium complexes of the general formula TiCl 2 (C 15 H 12 N 2 OX)(RO) 2 PS 2 ] (Eq. 5-6). RESULTS AND DISCUSSION All the synthesized compounds are non-hygroscopic orange-colored solid which are stable at room temperature. They are easily soluble in coordinating solvents (THF, DMSO and DMF) as well as in common organic solvents (benzene, chloroform and methanol). The proposed stoichiometries of the synthesized compounds are in good agreement with the elemental analysis (H, C, N, S, Cl, and Ti) data reported in Table 1. Spectral analysis of Ti 2 (C 15 H 12 N 2 OX) 2 [(RO) 4 P 2 S 6 ] Infrared spectral data analysis The medium intensity band observed at 3346-3325 cm -1 could be assigned to vibrations corresponding to [N-H] stretching [39] while the spectral bands in the region 1624-1604 cm -1 are due to the [C=N] stretching vibration [40]. As compared to free pyrazoline the v[C=N] stretching in all the synthesized compounds is observed to be shifted to the lower wavenumber. This suggests that the imino nitrogen of pyrazoline is coordinated to a metal centre. The complete absence of a signal at ~3080 cm -1 in synthesized metal complexes, which is due to (O-H) stretching originally present in pyrazoline ligands suggests that the oxygen is covalently bonded to Ti metal. This is further confirmed by the appearance of the band in the region 485-460 cm -1 corresponding to [Ti-O] stretching vibration. The bands present in 824-899 cm -1 and 1078-1050 cm -1 region has been assigned respectively to [P-O-(C)] [41,42] and [(P)-O-C] [43,44]. The new bands of medium intensity observed in the region 549-529 cm -1 may be assigned to [P-S] stretching modes [45]. In comparison to free ligands, the appearance of two new bands in 335-321 cm -1 and 302-290 cm -1 region corresponds to [Ti-S] stretching vibrations. 
Splitting of the bands into two regions indicates that two types of sulfur are present in the molecule: one is terminal sulfur and the other is bridging sulfur. The appearance of bands in the region 396-380 cm -1 has been ascribed to vibrations corresponding to [Ti-N] stretching [46]. The IR data of the synthesized complexes are compiled in Table 2.

1 H NMR spectra analysis
The 1 H NMR spectra of the synthesized mixed ligand complexes, recorded in CDCl 3 , exhibit characteristic signals (Table 3). In the region δ 7.42-6.39 ppm, a very complex pattern may be assigned to the aromatic protons of the pyrazoline ligand [47]. The pyrazoline ligand exhibits a characteristic peak at δ ~11.00 ppm due to hydroxyl protons; the absence of that particular peak in the 1 H NMR spectra of the metal complexes suggests that the hydroxyl oxygen atom is bonded to the Ti metal. A broad singlet observed at δ 5.37-4.86 ppm may be attributed to the N-H group (at δ 5.40-4.90 ppm in free pyrazoline), indicating that the -NH group is not involved in metal complex formation [47]. The bands at δ 3.82-3.07 and 2.25-2.02 ppm could be ascribed, respectively, to the -CH and -CH 2 groups. The band at δ 5.54-4.19 ppm corresponds to the -OCH 2 group and that at δ 4.94-4.21 ppm to the -OCH group, while the bands for the methyl groups are observed at δ 1.10-0.90 ppm. The complex pattern observed at δ 7.21-7.04 ppm may be due to the skeletal protons of the phenyl ring. The number of hydrogen atoms calculated from the integration ratios suggests that two dithiophosphate ligands and two pyrazoline ligands are present in the synthesized mixed ligand complexes.

31 P NMR spectra analysis
The synthesized compounds exhibit only one signal for the phosphorus atoms in the proton-decoupled 31 P NMR spectra. The 31 P NMR signals of the Ti dichlorodithio compounds are obtained at δ = 90.0 ppm, while those of the synthesized Ti mixed ligand complexes are observed at δ = 110.0-91.3 ppm. The downfield shift of the dithiophosphato phosphorus signal by about δ 15.0 ppm confirms the bidentate nature of the ligand [48]. Although two phosphorus atoms are present, only one signal is obtained, which indicates a similar environment for both phosphorus atoms (Table 3). The NMR ( 1 H, 13 C and 31 P) data are summarized in Table 3.

FAB Mass spectra analysis
The FAB mass spectra of the synthesized metal complexes have been recorded to determine the molecular weight. The molecular ion peak confirms that the metal complexes exist in dimeric form. The FAB mass spectra of compounds 6, 12, 18 and 24, with a different substituted pyrazoline ligand in each series, are reported in Table 4.

XRD and SEM studies
The crystalline/amorphous nature of the complexes was examined by XRD, which shows that all the complexes are amorphous solids. The average diameter of the complexes has been calculated using the "Debye-Scherrer" expression (Eq. 9).

Particle size D = 0.9 λ / (β cos θ B) (9)

where λ is the X-ray wavelength (1.5418 Å), β is the corrected band broadening (full width at half maximum), θ B is the diffraction angle, and D is the average nanocrystal domain diameter. The value of the full width at half maximum intensity (β) and the corresponding diffraction angle (θ B ) are obtained from the X-ray diffractogram. The average particle size thus obtained was found to be in the range of 41-62 nm, which is further confirmed by the SEM studies (Figure 3 and Figure 4, respectively).
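As an illustration of how Eq. 9 yields sizes in the reported 41-62 nm range, the following worked example uses the stated wavelength together with an assumed peak width and position; the inputs β ≈ 0.0028 rad (≈0.16°) at 2θ ≈ 25° are hypothetical values chosen for demonstration, not data taken from the diffractograms of this work:

D = 0.9 λ / (β cos θ B)
 = (0.9 × 0.15418 nm) / (0.0028 rad × cos 12.5°)
 ≈ 0.1388 nm / 0.00273
 ≈ 51 nm

Note that β must be expressed in radians and corrected for instrumental broadening before it is inserted into the expression.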
Molecular weight measurements, elemental analysis and spectral analysis confirm the dimeric nature of the synthesized metal complexes and support the proposed octahedral geometry (Figure 5).

CONCLUSION
The present study describes a new route for the synthesis of Ti(IV) mixed ligand complexes with dithiophosphate and substituted pyrazoline ligands. Molecular weight measurements and elemental analysis, in conjunction with spectroscopic (IR, 1 H NMR, 13 C NMR and 31 P NMR) studies, reveal the dimeric nature of the complexes, in which pyrazoline and dithiophosphate are bidentate. Scanning electron microscopic images and XRD indicate that the particles are in the nano range (50 nm). Coordination number six is proposed for titanium with octahedral geometry.
v3-fos-license
2024-03-17T17:21:10.599Z
2024-03-01T00:00:00.000
268448918
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1422-0067/25/6/3196/pdf?version=1710140600", "pdf_hash": "45c84cb9c3965592df44573533d5309ea3656759", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41125", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "sha1": "4cfd0478cb8f3400f74b17ab7ae76d2b3acbbfd8", "year": 2024 }
pes2o/s2orc
Mitochondria-Associated Membranes as Key Regulators in Cellular Homeostasis and the Potential Impact of Exercise on Insulin Resistance The communication between mitochondria and the endoplasmic reticulum (ER) is facilitated by a dynamic membrane structure formed by protein complexes known as mitochondria-associated membranes (MAMs). The structural and functional integrity of MAMs is crucial for insulin signal transduction, relying heavily on their regulation of intracellular calcium homeostasis, lipid homeostasis, mitochondrial quality control, and endoplasmic reticulum stress (ERS). This article reviews recent research findings, suggesting that exercise may promote the remodeling of MAMs structure and function by modulating the expression of molecules associated with their structure and function. This, in turn, restores cellular homeostasis and ultimately contributes to the amelioration of insulin resistance (IR). These insights provide additional possibilities for the study and treatment of insulin resistance-related metabolic disorders such as obesity, diabetes, fatty liver, and atherosclerosis. Introduction With the improvement in people's quality of life and changes in lifestyle, the incidence of metabolic diseases such as obesity, diabetes, fatty liver, and atherosclerosis has rapidly increased.Investigating their etiology reveals a common pathological mechanism-insulin resistance (IR).IR is characterized by decreased sensitivity of body tissues and cells to insulin, leading to compensatory insulin secretion by pancreatic β-cells and ultimately resulting in hyperinsulinemia.The blockade of the insulin signaling pathway is a major cause of IR, closely associated with disruptions in lipid homeostasis, calcium homeostasis, mitochondrial dysfunction, and endoplasmic reticulum stress (ERS) [1,2]. Recently, research has identified a liquid ordered structure with unique biophysical characteristics formed between mitochondria and the endoplasmic reticulum (ER), known as mitochondria-associated membranes (MAMs).MAMs are closely associated with the regulation of cellular homeostasis processes, including calcium homeostasis, lipid homeostasis, cell survival, inflammation, ERS, and mitochondrial quality control [2,3].Key proteins in the insulin signaling pathway, such as protein kinase B (PKB/AKT), mammalian target of rapamycin complex 2 (mTORC2) [4], and phosphatase and tensin homolog (PTEN) [5], not only localize to MAMs under specific physiological conditions but also interact with resident MAMs proteins.This suggests that MAMs may play a crucial role in regulating insulin signal transduction. Exercise, as a green and economic therapeutic approach, has been shown to have a definite beneficial effect on IR, although the specific molecular mechanisms remain unclear. However, studies indicate that exercise can regulate the expression of various cellular homeostasis regulatory factors associated with the structure and function of MAMs.It is speculated that there may be a close connection among MAMs, exercise, and IR.Based on this, this paper will review the relationship between MAM-related cellular homeostasis regulation and IR, as well as the potential mechanisms through which MAMs mediate exercise intervention in IR, aiming to provide new targets for the treatment of IR-related metabolic diseases. 
Overview of Mitochondria-Associated Membranes
The formation of intracellular membranes plays a crucial role in the evolution of species, providing a prerequisite guarantee for membrane-bound organelles to fulfill their specific functions. As a complex entity, the cell relies on the synergistic interactions among various organelles to achieve its intricate functions. Effective cellular communication is a fundamental prerequisite for the collaboration among organelles, and this relies on the involvement of organelle membranes. Cellular communication encompasses various mechanisms, one of which is achieved through membrane contact sites (MCSs). The most classic example of this is the membrane contact between the endoplasmic reticulum (ER) and mitochondria, initially referred to as "X components" and later named mitochondria-associated membranes (MAMs) [6]. Recent studies suggest that the minimum distance between MAMs interfaces may reach 10-25 nm [7], allowing for direct contact between ER proteins and the proteins and lipids on the outer membrane of mitochondria, providing various possibilities for inter-organelle information exchange. MAMs are widely distributed in different organisms. In yeast cells, the corresponding structure of MAMs is known as the ER-mitochondria encounter structure (ERMES). It is composed of four proteins, Mdm12, Mdm34, Mdm10, and Mmm1, participating in the mediation of lipid transfer facilitated by soluble lipid carrier proteins such as ceramide transfer protein (CERT) and oxysterol binding protein (OSBP) [8]. In contrast, mammalian MAMs have a more complex connecting structure formed by single or multiple tethering proteins (as shown in Figure 1). These tethering proteins not only participate in the formation of MAMs but are also closely associated with their functions. Tethering is just one condition for maintaining the structural and functional integrity of MAMs; many other proteins participate in the formation and influence the function of MAMs in a tether-independent manner, such as Phosphofurin acidic cluster sorting protein 2 (PACS-2), PDZ domain containing 8 (PDZD8), sigma-1 receptor (Sig-1R), FUN14 domain-containing 1 (FUNDC1), and others. Under normal circumstances, the structure and function of MAMs are maintained in a dynamic equilibrium. However, under specific physiological conditions, the molecules mentioned above, through an unknown mechanism, promote the remodeling of the MAMs network. This, in turn, regulates various life processes, such as intracellular calcium homeostasis, lipid metabolism and transport, autophagy, apoptosis, mitochondrial quality control, and endoplasmic reticulum stress (ERS). Subsequently, these processes further influence organismal metabolism and the occurrence and development of related diseases.
Mitochondria-Associated Membranes Intervene in Insulin Resistance through Cellular Homeostasis
Mitochondria-associated membranes (MAMs), as a type of lipid raft structure, possess various binding domains for proteins involved in signal transduction, including transport proteins, kinases, ion channels, or phosphatases. This provides MAMs with the potential to regulate numerous physiological and pathological processes within the cell, such as calcium homeostasis, lipid homeostasis, mitochondrial quality control, and ERS. Recent studies indicate that the above-mentioned processes mediated by MAMs appear to be associated with insulin resistance (IR).

Studies reveal that the regulation of intracellular Ca 2+ homeostasis by MAMs is a key factor influencing insulin resistance (IR). MAMs enrichment in the hepatocytes and oocytes of obese mice leads to mitochondrial Ca 2+ overload [13], resulting in excessive mitochondrial fission and IR [14]. Knocking out Phosphofurin acidic cluster sorting protein 2 (PACS-2), a protein associated with MAMs formation, reduces MAMs formation and decreases mitochondrial Ca 2+ levels [13]. However, structural disruption of MAMs leads to excessive Ca 2+ release into the cytoplasm, causing cytoplasmic Ca 2+ waves and inducing gluconeogenesis, followed by IR. This indicates that excessive formation or structural disruption of MAMs may lead to IR through different pathways. Additionally, MAMs-related Ca 2+ signal transduction is a critical event in insulin-dependent glucose uptake. Research suggests that the G protein/IP 3 /IP 3 R pathway regulates the fusion of glucose transporter 4 (GLUT4) with the plasma membrane and that Ca 2+ chelators inhibit drug-induced GLUT4-plasma membrane fusion and glucose uptake [15].

In conclusion, MAMs maintain appropriate Ca 2+ flux among the endoplasmic reticulum (ER), cytoplasm, and mitochondria under normal physiological conditions. When the body experiences nutritional or energy overload, MAMs undergo network remodeling, leading to MAMs enrichment causing mitochondrial Ca 2+ overload, promoting excessive mitochondrial fission, and inducing IR. Structural disruption of MAMs, on the one hand, blocks Ca 2+ signal transduction and GLUT4 translocation, inhibiting muscle glucose uptake; on the other hand, it leads to insufficient mitochondrial Ca 2+ absorption and cytoplasmic Ca 2+ overload, thereby inducing IR caused by gluconeogenesis.
Mitochondria-Associated Membranes Modulate Insulin Resistance through Lipid Homeostasis
Aberrant lipid metabolism is closely associated with insulin resistance (IR). Studies have demonstrated that inhibiting elevated levels of ceramides in obese rats can improve hypothalamic insulin sensitivity and prevent central IR [16]. Impairment in the insulin signaling pathway may further lead to elevated ceramide levels and ceramide-induced atypical protein kinase C (PKC) activation, exacerbating IR [17]. Mitochondria-associated membranes (MAMs) are lipid raft-like domains enriched with cholesterol and sphingolipids within cells. Numerous enzymes and proteins involved in lipid synthesis, transport, and metabolism are enriched in MAMs (Figure 2). It is suggested that MAMs are likely to be crucial for the regulation of intracellular lipid homeostasis. Indeed, MAMs are not only implicated in the metabolism and transport of phospholipids and cholesterol [18] but also serve as critical pathways for the transport of ceramides from the endoplasmic reticulum (ER) to mitochondria. Even in yeast cells, the MAMs equivalent, known as the ER-mitochondria encounter structure (ERMES), serves as a foundation for efficient lipid transport by soluble lipid carrier proteins like ceramide transfer protein (CERT) and oxysterol binding protein (OSBP) [8].

Recent research has unveiled a close interrelation among MAMs, lipid homeostasis, and IR. It has been reported that compromised MAMs integrity leads to reduced fatty acid oxidation and increased levels of fatty acyl-CoA and diacylglycerol (DAG), subsequently upregulating Ser/Thr kinase activity. This results in enhanced serine phosphorylation of Insulin Receptor Substrate 1 (IRS-1) and impedes tyrosine phosphorylation of IRS-1 by the insulin receptor, ultimately inhibiting insulin-induced glucose uptake [19]. Additionally, studies indicate that under normal conditions, ceramides can be transferred to mitochondria and converted into sphingosine-1-phosphate and hexadecenal. However, disruption in ER-mitochondrial coupling prevents ceramide transfer to mitochondria, leading to elevated cytosolic ceramide levels [20]. The positioning of ceramides near the plasma membrane is a critical factor inhibiting insulin signaling [21]. Thus, there is compelling evidence to suggest that MAMs may ensure appropriate insulin sensitivity by maintaining intracellular lipid homeostasis.
The Role and Mechanism of Mitochondrial Quality Control in Mitochondria-Associated Membranes Intervention in Insulin Resistance
Mitochondria serve as crucial sites for cellular metabolism, participating in the regulation of organismal nutrition and energy status. Mitochondrial dysfunction is a common pathogenic feature of insulin resistance (IR) [1], and the intracellular mitochondrial quality control system can jointly regulate mitochondrial function through physiological processes such as mitochondrial biogenesis, mitochondrial dynamic changes, and mitophagy. However, the specific connection between mitochondria and insulin sensitivity is not yet clear. Recent studies suggest that mitochondria-associated membranes (MAMs) may play a crucial role in this context.

Mitochondrial Biogenesis, Mitochondria-Associated Membranes, and Insulin Resistance
Mitochondrial biogenesis is the process of producing new, well-functioning mitochondria within cells and is a key step for mitochondria to function normally. Peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α) is a key transcription factor for mitochondrial biogenesis, primarily coordinating this process by promoting the transcription of nuclear genes and mitochondrial DNA (mtDNA) [22]. Research indicates that PGC-1α, located on chromosome 4p15.1-2, correlates with basal insulin levels in different populations [1]. Additionally, PGC-1α levels are reduced in the skeletal muscle of individuals with insulin resistance (IR) and type 2 diabetes mellitus (T2DM) [23]. Therefore, the downregulation of PGC-1α expression leading to decreased mitochondrial biogenesis could be one of the factors contributing to IR. Notably, PGC-1α expression increases with exercise, further promoting mitochondrial biogenesis [24], suggesting that exercise may intervene in IR through PGC-1α-mediated mitochondrial biogenesis. Interestingly, exposure to perfluorooctane sulfonic acid (PFOS) in mouse cardiomyocytes activates mammalian target of rapamycin complex 2 (mTORC2) by phosphorylating epidermal growth factor receptor (EGFR) (Tyr1086), weakening the inositol 1,4,5-triphosphate receptor-glucose-regulated protein 75-voltage-dependent anion channel (IP 3 R-Grp75-VDAC) tethering interaction on mitochondria-associated membranes (MAMs), leading to intracellular fatty acid accumulation and a subsequent reduction in PGC-1α expression, resulting in decreased mitochondrial biogenesis [25]. This suggests that the weakened interaction of MAMs tethering complex proteins is a potential key regulator causing the downregulation of PGC-1α, leading to reduced mitochondrial biogenesis and consequent IR.
In fact, researchers have identified the critical role of the endoplasmic reticulum (ER) in mitochondrial fission [28] and fusion [29] in recent years. Research indicates that FUN14 domain-containing 1 (FUNDC1) expression is essential for MAMs formation, and the loss of FUNDC1 in cardiac muscle downregulates Fis1 expression, inhibiting mitochondrial fission, while overexpression of FUNDC1 leads to excessive mitochondrial fission [14]. MAMs can also regulate the expression of genes related to mitochondrial dynamics at the transcriptional level. The increased formation of MAMs leads to an increase in cytoplasmic Ca 2+ concentration, thereby activating the calcium-sensitive transcription factor cAMP-response element binding protein (CREB). Activated CREB directly binds to the promoter of Fis1, promoting its transcription and thereby enhancing mitochondrial fission [30]. Moreover, the unique lipid environment of MAMs is believed to promote membrane curvature, facilitating membrane fission and fusion. Therefore, MAMs are crucial platforms for the regulation of mitochondrial dynamics.

Mfn2 is a classical functional tethering protein in MAMs, and its knockout or silencing leads to the dissociation of the endoplasmic reticulum (ER) and mitochondria [31]. Liver-specific knockout of Mfn2 causes excessive mitochondrial fission, reduces insulin signal transduction in muscle and liver tissues, and induces susceptibility to insulin resistance (IR) [32]. In myocardial cells, FUNDC1 binds to inositol 1,4,5-triphosphate receptor 2 (IP 3 R 2 ) to form MAMs tethers, facilitating the transfer of ER Ca 2+ to mitochondria and the cytoplasm [30]. Under normal physiological conditions, FUNDC1-knockout mice exhibit decreased Fis1 expression, leading to excessive mitochondrial fusion and heart dysfunction. However, under stimuli such as high fat and high sugar, adenosine 5'-monophosphate (AMP)-activated protein kinase (AMPK) activity is inhibited, resulting in abnormal elevation of FUNDC1 and FUNDC1-mediated formation of MAMs, leading to abnormal accumulation of mitochondrial Ca 2+ , which further affects mitochondrial function through excessive mitochondrial fission. In this scenario, downregulation of the FUNDC1 gene can prevent excessive mitochondrial fission by inhibiting the excessive formation of MAMs and the associated elevation of mitochondrial Ca 2+ , thus preventing and improving diabetic cardiomyopathy [14]. Studies also suggest that exercise activates AMPK, which can improve lipid levels and IR [33]. This implies that exercise may alleviate high-sugar- and high-fat-induced IR by activating AMPK, downregulating FUNDC1 expression, inhibiting MAMs enrichment, and preventing excessive mitochondrial fission.

In summary, under normal physiological conditions, MAMs maintain the dynamic balance of mitochondrial fission and fusion. When the body is in a state of energy or nutrient overload, the MAMs network may undergo remodeling. Enrichment of MAMs leads to mitochondrial Ca 2+ overload, which triggers excessive mitochondrial fission, thus resulting in IR. However, some studies also suggest that changes in mitochondrial dynamics during nutrient overload are associated with mitochondrial depolarization. Therefore, the exact reasons for mitochondrial morphological changes under conditions of nutrient overload require further experimental verification.
Mitophagy, Mitochondria-Associated Membranes, and Insulin Resistance
Mitochondria undergo frequent cycles of fusion and fission in a "kiss and run" pattern. The membrane potential of the daughter units generated during fission events typically undergoes distinct changes, determining their subsequent fate. Daughter units with increased membrane potential can be selectively fused by the mitochondrial network to repair damaged regions. On the other hand, daughter units with decreased membrane potential are unable to undergo fusion for repair and are more inclined to be eliminated through autophagy [34]. Mitophagy is a multi-step catabolic process that selectively targets damaged or dysfunctional mitochondria for lysosomal-dependent degradation, ensuring the stability of mitochondrial quality. Mitochondrial dysfunction has been implicated in the development of liver insulin resistance (IR) induced by the accumulation of fatty acids. Mitophagy selectively degrades damaged mitochondria to reverse mitochondrial dysfunction, inhibit hepatic fatty acid accumulation, and consequently improve liver IR. The mitophagy-related protein PTEN-induced putative kinase 1 (PINK1) was downregulated in high-fat-fed mice, while overexpression of PINK1 enhanced glucose uptake and downregulated gluconeogenic enzyme levels [35]. This suggests that impaired mitophagy function is a significant contributing factor to high-fat-induced IR.

It is noteworthy that damaged mitochondria can also generate damage-associated molecular patterns (DAMPs), such as reactive oxygen species (ROS), to activate the Nucleotide-binding oligomerization domain, leucine-rich repeat and pyrin domain-containing 3 (NLRP3) inflammasome. Activated NLRP3 recruits the adaptor protein apoptosis-associated speck-like protein (ASC) to specifically localize to mitochondria-associated membranes (MAMs). Under pro-inflammatory stimuli, NLRP3 oligomerizes and exposes its effector domain to interact with ASC. ASC, in turn, recruits pro-caspase-1 to form an active NLRP3 inflammasome complex. Finally, activated caspase-1 cleaves IL-1β to form mature IL-1β [36]. Studies have shown that inflammasomes can directly or indirectly affect insulin signaling pathways, contributing to the development of IR and type 2 diabetes mellitus (T2DM) [37]. This suggests that MAMs may play an important role in the inflammatory insulin resistance caused by mitochondrial damage.
In yeast cells, the disruption of the ER-mitochondria encounter structure (ERMES) leads to reduced mitochondrial division and mitophagy.However, artificial restoration of the ERMES enables the recovery of mitophagy [38,39], indicating the involvement of the ERMES in regulating mitophagy.Indeed, in both mammals and yeast, MAMs/ERMES control the selective degradation of unused or damaged mitochondria and mark sites for mitochondrial division.FUN14 domain-containing 1 (FUNDC1), a key molecule in regulating MAMs formation, also serves as a mitophagy receptor.Under low-oxygen conditions, FUNDC1 binds to calnexin at MAMs, promoting mitophagy.As mitophagy progresses, FUNDC1 dissociates from calnexin, and Dynamin-Related Protein 1 (Drp1) binds to the exposed site, thereby being recruited to MAMs, leading to mitochondrial fission.Downregulation of FUNDC1, Drp1, or calnexin can inhibit mitochondrial division and mitophagy [40].This allows mitochondrial fission and mitophagy to be integrated at the MAMs interface.In high-fat-fed mice, specific deletion of FUNDC1 in adipocytes impaired mitophagy, exacerbating obesity and IR [41].Additionally, MAMs formation significantly increased in the oocytes of obese mice subjected to a high-fat diet [13]. In summary, under normal conditions, MAMs likely maintain the necessary level of mitophagy.When the organism is exposed to high-fat stimuli, the remodeling of the MAMs network increases the number of MAMs, enhancing mitophagy to eliminate a large number of functionally impaired mitochondria induced by high-fat stimuli.Despite the compensatory increases in MAMs under high-fat conditions promoting mitophagy, this does not completely compensate for the mitochondrial dysfunction caused by high-fat intake, resulting in the persistence of IR. Endoplasmic Reticulum Stress, Mitochondria-Associated Membranes, and Insulin Resistance The endoplasmic reticulum (ER) is a crucial cellular organelle involved in maintaining calcium homeostasis, protein synthesis, post-translational modification, and transport.During insulin resistance (IR), the dissociation of glucose-regulated protein 78 (GRP78) from three ER membrane transport proteins-Protein kinase R-like endoplasmic reticulum kinase (PERK), Inositol-requiring enzyme-1α (IRE-1α), and Activating Transcription Factor 6 (ATF6)-occurs, leading to the unfolded protein response (UPR).The UPR can alleviate endoplasmic reticulum stress (ERS), but when ERS exceeds the reparative capacity of UPR, cells may undergo apoptosis [42]. 
ERS has been reported as a significant factor contributing to IR [43]. When ERS occurs, it activates the serine kinase c-Jun N-terminal kinase (JNK) and nuclear factor kappa B (NF-κB) signaling pathways, disrupting normal insulin signal transduction and causing IR [44]. Recently, studies have found that mitochondria-associated membranes (MAMs) appear to play a crucial role in the promotion of IR by ERS. Disruption of MAMs structural and functional integrity in liver cells exacerbates ERS, leading to IR [45]. Knocking out the MAMs tethering protein Mitofusin 2 (Mfn2) results in the generation of reactive oxygen species (ROS) and ERS in the liver and muscles. The interaction between the two activates JNK, which phosphorylates Insulin Receptor Substrate (IRS) proteins and inhibits insulin signal transduction [32]. Overexpressing the MAMs tethering proteins glucose-regulated protein 75 (GRP75) or Mfn2 in muscle cells suppresses palmitate-induced ERS, thereby improving IR [46]. Furthermore, the dynamics and regulation of MAMs contribute to the interaction between ERS and mitochondrial dysfunction in decreased insulin responsiveness [47]. This aligns with recent research indicating enhanced ER-mitochondria coupling in the early stages of ERS [48], whereas uncoupling of the ER and mitochondria interrupts calcium transfer [31], leading to subsequent ERS [49].

Collectively, the evidence suggests that maintaining a dynamic balance of MAMs within a certain range is crucial to preventing ERS and maintaining normal insulin sensitivity under normal physiological conditions. Additionally, compensatory increases in MAMs can improve ERS-induced IR when the organism experiences ERS under stress.

Role and Mechanism of Mitochondria-Associated Membranes-Mediated Cellular Homeostasis in Exercise Intervention in Insulin Resistance
Exercise serves as a green and economically sustainable approach to intervening in insulin resistance (IR). Studies indicate that various forms of physical activity can enhance insulin sensitivity to a certain extent [50]. However, the precise mechanisms underlying exercise interference with IR remain unclear. Presently, research on exercise intervention in IR primarily focuses on functional changes within individual cellular organelles, with limited attention being given to the interactions among organelles. In recent years, scholars have started to investigate the role of inter-organelle interactions in exercise-regulated IR, with a significant emphasis on mitochondria-associated membranes (MAMs). Studies suggest that the beneficial effects of exercise on IR may be attributed to its regulation of MAMs-associated calcium ion (Ca2+) signaling, lipid homeostasis, mitochondrial quality control, and endoplasmic reticulum stress (ERS).
Mitochondria-Associated Membranes-Associated Calcium Ion Signaling Mediates Exercise Intervention in Insulin Resistance
Intracellular mitochondrial or cytoplasmic calcium ion (Ca2+) overload is a crucial factor contributing to insulin resistance (IR). Mitochondria-associated membranes (MAMs), acting as "assembly points" for Ca2+ channels, finely regulate intracellular Ca2+ transport and Ca2+ signal transduction. Disruption of the MAMs structure leads to excessive Ca2+ release into the cytoplasm, generating cytoplasmic Ca2+ waves, thereby inducing gluconeogenesis and causing IR [2]. Muscle contraction during exercise is a concrete manifestation of the transmission of electrical signals from motor neurons to the muscle cell membrane, the coupling of electrical signals to mechanical signals, and the sliding of thick and thin myofilaments. Current research suggests that the activation of Ca2+ channels on MAMs may be closely related to changes in muscle electrical activity. When skeletal muscle cell membranes depolarize, MAMs-associated Ca2+ channels [inositol 1,4,5-triphosphate receptor (IP3R)/ryanodine receptor 1 (RyR1)] are activated, leading to increased Ca2+ absorption by mitochondria [51]. Neuronal action potentials can stimulate the expression of the MAMs-associated Ca2+ channel proteins IP3Rs and RyRs, promoting endoplasmic reticulum (ER) Ca2+ release and subsequently facilitating mitochondrial Ca2+ uptake induced by another Ca2+ channel on MAMs, the mitochondrial calcium uniporter (MCU) [52]. Interestingly, knockout of MCU was shown to impair mouse motor ability and decrease mitochondrial Ca2+ uptake [53]. Therefore, the use of exercise to interfere with IR through MAMs-related Ca2+ transport pathways may involve two potential mechanisms: (1) Exercise stimulation triggers a mechanism that repairs MAMs structure, leading to an increase in MAMs-associated Ca2+ channels within cells, promoting mitochondrial Ca2+ absorption, and inhibiting cytoplasmic Ca2+ deposition, thereby alleviating IR. This mechanism requires further exploration. (2) Exercise directly upregulates the expression of MAMs-associated Ca2+ channel proteins (IP3Rs/RyRs/MCU), promoting mitochondrial Ca2+ absorption, decreasing cytoplasmic Ca2+ deposition, and alleviating IR.

The Ca2+ signaling transduction associated with MAMs is a crucial event for insulin-dependent glucose uptake [15]. The G protein/IP3/IP3R pathway increases intracellular Ca2+ concentration and induces fusion of glucose transporter 4 (GLUT4) with the plasma membrane and subsequent glucose uptake in a Ca2+-dependent manner [15]. Impairment of this Ca2+ transduction pathway inhibits GLUT4 translocation and subsequent glucose uptake. Exercise, by inducing Ca2+ signaling transduction, promotes GLUT4 translocation, and acute regulation of muscle glucose uptake also relies on the translocation and expression of GLUT4. Exercise training can effectively stimulate the expression of GLUT4 in skeletal muscle. This effect helps to improve insulin action to a certain extent [54]. Therefore, exercise likely activates the MAMs-related Ca2+ signaling pathway (G protein/IP3/IP3R), induces Ca2+ signal transduction, promotes GLUT4 translocation to the plasma membrane, and enhances muscle glucose uptake, thus alleviating IR.
Mitochondria-Associated Membranes-Associated Lipid Metabolism Mediates Exercise Intervention in Insulin Resistance
Mitochondria-associated membranes (MAMs) serve as lipid raft structures closely associated with lipid homeostasis, with many enzymes and proteins involved in lipid synthesis, transport, and metabolism being enriched in MAMs. The accumulation of lipids, especially diacylglycerol (DAG) and sphingolipids, in skeletal muscles is related to decreased insulin sensitivity in humans [55]. Studies indicate that disruption of MAMs integrity results in increased DAG levels, inhibiting insulin-induced glucose uptake. Moderate- and high-intensity aerobic exercise can reduce DAG levels [56]. Both acute and chronic aerobic and resistance exercise enhance insulin sensitivity [55], suggesting that exercise likely reduces DAG levels by repairing the MAMs structure, thereby alleviating insulin resistance (IR).

Research also suggests that the regulatory effect of exercise on triglycerides (TGs) in cells is related to MAMs. Increased intramyocellular triglycerides (IMTGs) are closely associated with IR in obese and type 2 diabetes mellitus (T2DM) patients. In dietary obese mice and obese T2DM patients, disruption of MAMs was demonstrated to result in adverse effects, such as TG accumulation and IR [57]. However, chronic aerobic exercise training reduces IMTGs in obese individuals both with and without T2DM [56]. In addition, studies indicate that the enzyme acyl coenzyme A:diacylglycerol acyltransferase 2 (DGAT2), involved in triglyceride synthesis, is localized in MAMs. This suggests that exercise may alleviate IR by promoting the restoration of the MAMs structure in obesity or T2DM, inhibiting TG formation.

Moreover, ectopic deposition of ceramides also leads to IR. Structural disruption of MAMs leads to excessive accumulation of ceramide in the cytoplasm, resulting in IR [20]. We found that aerobic exercise training promotes the expression of MAMs-related proteins in the skeletal muscle of diabetic mice [58]. Moreover, research indicates that aerobic exercise or aerobic interval training reduces skeletal muscle ceramides in obese and T2DM populations [59,60]. This suggests that exercise may alleviate IR by repairing the MAMs structure and alleviating ectopic deposition of ceramides in obese and T2DM individuals.

The above research results suggest the critical importance of MAMs in lipid metabolism in tissues such as skeletal muscle and liver under basal conditions. Under physiological or pathological conditions such as obesity or T2DM, disruption of the MAMs structure induces intracellular lipid accumulation or lipid degeneration. Exercise training can promote the repair of the MAMs structure and improve lipid metabolism, thus alleviating IR. Although numerous studies have demonstrated that exercise can upregulate the expression of MAMs-forming proteins such as Mitofusin 2 (Mfn2), FUN14 domain-containing 1 (FUNDC1), and mammalian target of rapamycin complex 2 (mTORC2), the pathways through which exercise restores the MAMs structure are likely more complex and need further investigation, with direct evidence still somewhat lacking.
Mitochondria-Associated Membranes-Associated Mitochondrial Quality Control Mediates Exercise Intervention in Insulin Resistance
Disruption of the mitochondria-associated membranes (MAMs) structure in insulin resistance (IR) and type 2 diabetes mellitus (T2DM) patients results in downregulation of Peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α) expression, inhibiting mitochondrial biogenesis. Conversely, aerobic exercise can upregulate PGC-1α expression in IR patients, enhancing mitochondrial biogenesis and efficiency, thereby improving insulin sensitivity [61]. This suggests that the aerobic exercise-induced improvement in insulin sensitivity in IR patients may be related to mitochondrial biogenesis induced through MAMs formation. Mammalian target of rapamycin complex 2 (mTORC2) likely serves as a key switch in exercise regulation of MAMs-associated mitochondrial biogenesis. Studies indicate that mouse myocardial cells exposed to perfluorooctanesulfonic acid (PFOS) activated mTORC2 through phosphorylation of epidermal growth factor receptor (EGFR) (Tyr1086), reducing the interaction of the MAMs tether inositol 1,4,5-triphosphate receptor-glucose-regulated protein 75-voltage-dependent anion channel (IP3R-Grp75-VDAC), leading to intracellular fatty acid accumulation, subsequently decreasing PGC-1α expression, and inhibiting mitochondrial biogenesis [25]. However, under insulin stimulation, mTORC2 is located on MAMs and phosphorylates the MAMs-residing proteins Phosphofurin acidic cluster sorting protein 2 (PACS-2), IP3R, and hexokinase 2 (HK2) via Protein kinase B (Akt), regulating the structural and functional integrity of MAMs, calcium flux, and energy metabolism, respectively. Depletion of mTORC2 results in disruption of MAMs integrity and leads to increased gluconeogenesis, hyperinsulinemia, and impaired glucose tolerance [4]. This suggests that mTORC2 may serve as a hub receiving external signals and producing specific responses to different stimuli, thereby regulating MAMs structure and function. Additionally, decreased mTORC2 activity in mice during exercise led to reduced muscle glucose uptake, but exercise activated mTORC2 in mouse muscles and increased muscle PGC-1α expression [62,63]. This implies that exercise likely promotes MAMs structural or functional repair by activating mTORC2, thereby increasing mitochondrial biogenesis through upregulation of PGC-1α expression and ultimately alleviating IR.

MAMs serve as sites for mitochondrial fission, and exercise regulates the expression of mitochondrial fission-related factors [Dynamin-Related Protein 1 (Drp1) and mitochondrial fission protein 1 (Fis1)] associated with MAMs structure and function. Moore et al.
found that the Ser616 site of mouse Drp1 is transiently activated during acute endurance exercise [63]. Fis1 mRNA levels rapidly increase within 30 min of low-intensity continuous exercise. Studies also indicate that specific deletion of the mitochondrial outer membrane protein Fis1 in skeletal muscles under exhaustive exercise leads to impaired mitochondrial function and swelling of the sarcoplasmic reticulum (SR/ER) [64], suggesting a potential involvement of Fis1 in the regulation of MAMs by exercise. Unfortunately, current research has not definitively established an association between Fis1 and the formation of MAMs. However, studies have shown that the absence of FUN14 domain-containing 1 (FUNDC1) inhibits the expression of Fis1, resulting in excessive mitochondrial fusion. The expression of FUNDC1 is crucial for MAMs formation, and exercise has been demonstrated to upregulate FUNDC1 expression. Therefore, exercise may potentially increase MAMs structural repair by upregulating FUNDC1 expression, leading to increased Fis1 expression, inhibition of excessive mitochondrial fusion, and ultimately alleviation of insulin resistance.

Mitochondrial fusion-related molecules [Mitofusin 1 (Mfn1), Mitofusin 2 (Mfn2), and Optic atrophy 1 (OPA1)] localize to MAMs and participate in regulating MAMs structure and function. In particular, Mfn2 deficiency leads to structural disruption of MAMs, excessive mitochondrial fission, and inhibition of insulin sensitivity. Ding et al. found significant increases in Mfn1 and Mfn2 mRNA levels after 3 and 12 h, respectively, during low-intensity continuous exercise [65]. However, their levels remain unaffected during acute endurance exercise and post-exercise recovery [63]. This suggests that exercise may promote the structural repair of MAMs and prevent excessive mitochondrial fission-induced IR by upregulating the expression of Mfn2, and it indicates potential differences in the regulatory effects of different exercise intensities and frequencies on MAMs. Research also suggests that OPA1 expression decreases in skeletal muscles of T2DM patients, while high-intensity high-volume training (HIHVT) can upregulate OPA1 expression, promoting mitochondrial fusion [66]. The possible mechanism by which MAMs intervene in IR through dynamic regulation of mitochondrial fission-fusion cycles has been explained in the preceding section (Section 3.3.2). Moreover, when MAMs integrity is compromised, the balance of mitochondrial fission-fusion cycles is disrupted, suggesting that changes in MAMs structure may occur before mitochondrial dynamic alterations. In summary, it can be inferred that the enrichment of MAMs during nutritional or energy overload, as well as the structural disruption of MAMs in conditions such as IR or T2DM, can lead to the breakdown of mitochondrial fission-fusion mechanisms, resulting in IR. Exercise may promote the remodeling of the MAMs network by regulating the expression of MAMs-related molecules such as Adenosine 5'-monophosphate (AMP)-activated protein kinase (AMPK), FUNDC1, and Mfn2, thereby restoring the balance of mitochondrial fission and fusion and ultimately alleviating IR.
During IR, mitochondrial function is impaired, accompanied by decreased mitophagy activity, while reactive oxygen species (ROS) generated during exercise can activate mitophagy to clear damaged mitochondria, enhancing mitochondrial function and alleviating IR [67]. This suggests that exercise may alleviate IR by increasing mitophagy and enhancing mitochondrial function. The structural and functional integrity of MAMs is crucial to maintaining appropriate mitophagy under insulin-resistant conditions. Therefore, exercise-mediated alleviation of IR through increased mitophagy likely involves MAMs. Studies have found that the mitophagy-related protein Parkin not only participates in regulating mitophagy but also contributes to MAMs formation. In fibroblasts with Parkin mutations, the structural and functional integrity of MAMs is disrupted. Overexpression of Parkin enhances MAMs structure and function, promoting Ca2+ transfer from the endoplasmic reticulum (ER) to the mitochondria and increasing adenosine triphosphate (ATP) production in mitochondria [68]. In previous research, we found that endurance exercise increased the adaptive expression of mitochondrial autophagy-related molecules, namely PTEN-induced putative kinase 1 (PINK1), Parkin, Nix, and Bcl-2/adenovirus E1B 19 kDa-interacting protein 3 (BNIP3), at the mRNA level in nutritionally obese mice [69]. This suggests that exercise may promote MAMs formation by upregulating Parkin expression, thereby enhancing mitophagy and alleviating IR.

Under high-fat conditions, downregulation of the FUNDC1 gene inhibits the formation of MAMs, impairs the mitophagy mechanism, and leads to more severe obesity and IR [14,41]. Although research indicates a significant increase in FUNDC1 and FUNDC1-related MAMs formation under high-glucose stimulation, this may only partially compensate for the deficiency in mitophagy. However, at this time, the number of functionally damaged mitochondria far exceeds normal conditions. Therefore, a higher level of mitophagy is required to eliminate damaged mitochondria and restore mitochondrial function. It was shown that exercise, on the other hand, upregulated FUNDC1 expression, with the changes being more pronounced in the 4-week high-intensity exercise group compared with the 4-week moderate-intensity exercise group [70]. This suggests that exercise may also cause a compensatory increase in the formation of MAMs by upregulating the expression of FUNDC1 to obtain higher levels of mitophagy, thereby alleviating high-fat diet-induced IR. Different exercise intensities may produce varying effects.

Numerous studies have demonstrated that the dysregulation of dynamic flux in mitochondrial remodeling is associated with metabolic disorders and insulin sensitivity. As mitochondrial fission-fusion, biogenesis, and mitophagy processes are interdependent and MAMs are involved in these processes, MAMs are likely a key hub linking mitochondrial fission, fusion, biogenesis, and mitophagy. While the specific mechanisms by which exercise regulates MAMs need further investigation, strategies aimed at enhancing the capacity of the mitochondrial lifecycle through exercise-induced MAMs regulation may effectively counteract diseases related to metabolic dysfunction.
Mitochondria-Associated Membranes-Associated Endoplasmic Reticulum Stress Mediates Exercise Intervention in Insulin Resistance
Exercise alleviates insulin resistance (IR) by inhibiting endoplasmic reticulum stress (ERS). Aerobic exercise may increase the activity of the Chaperonin Containing TCP1, Subunit 2 (CCT2) protein through the mammalian target of rapamycin (mTOR)/Ribosomal protein S6 kinase beta-1 (S6K1) signaling pathway, enhancing protein-folding efficiency, thereby reducing the unfolded protein response (UPR) and ultimately alleviating IR [71,72]. Swimming training reduced the phosphorylation levels of Protein kinase R-like endoplasmic reticulum kinase (PERK) and the alpha subunit of eukaryotic initiation factor 2 (eIF2α) in adipocytes and liver cells of high-fat diet-fed rats, thereby alleviating ERS [73]. Consistent with this, phosphorylation levels of Inositol-requiring enzyme-1α (IRE-1α) and c-Jun N-terminal kinase (JNK) in the liver of high-fat-fed mice decreased after 16 weeks of treadmill training. Resistance exercise also reduced the phosphorylation of JNK in cells of middle-aged and elderly men and alleviated IR [74].

The disruption of mitochondria-associated membranes (MAMs) integrity is closely related to type 2 diabetes mellitus (T2DM)-induced glucose intolerance, mitochondrial dysfunction, and intense ERS. Disruption of MAMs integrity aggravates ERS, leading to IR [45]. Swimming exercise can significantly improve glucose intolerance induced by T2DM and alleviate the ERS response [58], indicating that exercise may alleviate IR by repairing the MAMs structure and inhibiting ERS. Research has found that Mfn2 expression was upregulated after two weeks of low-load high-intensity exercise training in diabetic patients [75]. Cartoni's study also confirmed that exercise training can upregulate Mfn2 in human skeletal muscle cells [76]. Mfn2 is an important tethering protein of MAMs, and its upregulated expression promotes the formation of MAMs. It is speculated that exercise may alleviate insulin resistance by upregulating the expression of Mfn2 and promoting the repair of MAMs structure to improve ER stress. However, ERS induced by skeletal muscle exercise can increase the release of muscle factors such as Fibroblast growth factor 21 (FGF21) and IL-6, ultimately reducing liver IR in non-alcoholic fatty liver disease [77,78]. This suggests that ERS has a dual role in exercise intervention in IR, and further exploration is needed to determine the specific conditions and degrees of ERS that are beneficial to the body. However, it is clear that ERS caused by the disruption of the MAMs structure is not conducive to the maintenance of or enhancement in insulin sensitivity. Figure 3 shows the potential mechanism by which exercise intervenes in IR through MAMs.
Conclusions
Insulin resistance (IR) is a shared pathogenic mechanism implicated in various metabolism-related disorders. Extensive research has confirmed the crucial role of mitochondria-associated membranes (MAMs) in insulin signal transduction. However, the precise molecular mechanisms underlying MAMs-mediated regulation of IR remain poorly understood. Based on a comprehensive analysis of the existing literature, it is evident that MAMs play a pivotal role in maintaining cellular homeostasis by modulating calcium homeostasis, lipid metabolism, mitochondrial quality control, and endoplasmic reticulum stress (ERS), all of which substantially impact IR.

Under normal physiological conditions, the intracellular MAMs network maintains a stable dynamic balance. However, MAMs remodeling by energetic stimuli, including structural alterations (alterations in MAMs quantity) or functional impairments (changes in MAMs-associated tethering proteins), can disrupt calcium homeostasis, lipid metabolism, and mitochondrial quality control or induce ERS, thereby promoting the development or exacerbation of IR. Exercise, by regulating MAMs formation and/or the expression of associated molecules, has the potential to restore MAMs structure or function, thereby ameliorating dysregulation of cellular homeostasis and ultimately mitigating IR. Potential mechanisms through which exercise intervenes in IR via MAMs-mediated cellular homeostasis include those reported below.

MAMs-ERS pathway: Exercise repairs the structure of MAMs by upregulating Mfn2 expression, thereby alleviating ERS and ultimately alleviating IR.

However, there is relatively little direct evidence that MAMs mediate exercise-regulated IR. Mechanisms governing exercise-mediated modulation of the MAMs network balance require further investigation, with many questions remaining to be addressed. For example, areas requiring further clarification include quantifying the relationship between MAMs and cellular life activities; understanding how exercise promotes or inhibits MAMs formation based on the state of stimulation experienced by the body; and elucidating the impact of exercise load, intensity, and duration on MAMs regulation.
As research progresses and advanced experimental techniques develop, the true nature of MAMs hidden behind the veil is gradually being unveiled. The intricate mechanism of MAMs-mediated exercise intervention in IR will continue to be refined, offering additional possibilities for research on and treatment of metabolic diseases.

Figure 3. The potential mechanisms by which exercise intervenes in insulin resistance through mitochondria-associated membranes. Mechanisms of exercise intervention on insulin resistance (IR) via the mitochondria-associated membranes (MAMs)-Ca2+ pathway: (1) Exercise stimulation triggers a mechanism that repairs MAMs structure, leading to an increase in MAMs-associated Ca2+ channels within cells, promoting mitochondrial Ca2+ absorption, and inhibiting cytoplasmic Ca2+ deposition, thereby alleviating IR. (2) Exercise directly upregulates the expression of MAMs-associated Ca2+ channel proteins (IP3Rs/RyRs/MCU), promoting mitochondrial Ca2+ absorption, decreasing cytoplasmic Ca2+ deposition, and alleviating IR. (3) Exercise activates the MAMs-related Ca2+ signaling pathway (G protein/IP3/IP3R), induces Ca2+ signal transduction, promotes GLUT4 translocation to the plasma membrane, and enhances muscle glucose uptake, thus alleviating IR. Mechanisms of exercise intervention on IR via the MAMs-lipid homeostasis pathway: (1) Exercise reduces DAG levels by repairing the MAMs structure, thereby alleviating IR. (2) Exercise alleviates IR by promoting the restoration of the MAMs structure in obesity or T2DM, inhibiting TG formation. (3) Exercise alleviates IR by repairing the MAMs structure, alleviating ectopic deposition of ceramides in obese and T2DM. Mechanisms of exercise intervention on IR via the MAMs-mitochondrial biogenesis pathway: Exercise promotes MAMs structural or functional repair by activating mTORC2, thereby increasing mitochondrial biogenesis caused by upregulation of PGC-1α expression, ultimately alleviating IR. Mechanisms of exercise intervention on IR via the MAMs-mitochondrial dynamics pathway: (1) Exercise promotes MAMs structural repair by upregulating FUNDC1 expression, leading to increased Fis1
Metabarcoding using multiplexed markers increases species detection in complex zooplankton communities

Abstract: Metabarcoding combines DNA barcoding with high-throughput sequencing, often using one genetic marker to understand complex and taxonomically diverse samples. However, species-level identification depends heavily on the choice of marker and the selected primer pair, often with a trade-off between successful species amplification and taxonomic resolution. We present a versatile metabarcoding protocol for biomonitoring that involves the use of two barcode markers (COI and 18S) and four primer pairs in a single high-throughput sequencing run, via sample multiplexing. We validate the protocol using a series of 24 mock zooplanktonic communities incorporating various levels of genetic variation. With the use of a single marker and single primer pair, the highest species recovery was 77%. With all three COI fragments, we detected 62%-83% of species across the mock communities, while the use of the 18S fragment alone resulted in the detection of 73%-75% of species. The species detection level was significantly improved to 89%-93% when both markers were used. Furthermore, multiplexing did not have a negative impact on the proportion of reads assigned to each species, and the total number of species detected was similar to when markers were sequenced alone. Overall, our metabarcoding approach utilizing two barcode markers and multiple primer pairs per barcode improved species detection rates over a single marker/primer pair by 14% to 35%, making it an attractive and relatively cost-effective method for biomonitoring natural zooplankton communities. We strongly recommend combining evolutionarily independent markers and, when necessary, multiple primer pairs per marker to increase species detection (i.e., reduce false negatives) in metabarcoding studies.
KEYWORDS: 18S, cytochrome c oxidase subunit I, metabarcoding, multigene, multiple primer pairs, zooplankton

2003; Ratnasingham & Hebert, 2007), with high-throughput sequencing (HTS) technologies to reveal species composition in "bulk" samples or environmental DNA (eDNA) samples (i.e., DNA that leaks into the environment; reviewed in Taberlet et al., 2012). Although metabarcoding is a very promising method, its efficient application is still hindered by several technical limitations, which are often responsible for generating both false negatives (species being present in a sample but not detected) and false positives (species being detected but not present). This method relies on well-designed primers to amplify a homologous marker gene from a taxonomically complex sample (Creer et al., 2016). Thus, challenges often include finding a suitable DNA region to amplify across target taxa, dealing with PCR amplification errors and sequencing artifacts, developing high-quality reference sequence databases, and choosing the appropriate bioinformatic steps to accommodate variable sequence divergence thresholds among species (Cristescu, 2014; Taberlet et al., 2012; Yoccoz, 2012).

Choosing one or more appropriate genetic markers for metabarcoding is considered essential for accurate molecular species identification (Bucklin, Lindeque, Rodriguez-Ezpeleta, Albaina, & Lehtiniemi, 2016; Clarke, Beard, Swadling, & Deagle, 2017), as it affects both PCR amplification success and species-level resolution. To allow efficient species identification, the genetic marker used must show high interspecific variation and low intraspecific variation. However, it is often difficult to strike a balance between high amplification success across diverse taxon groups and species-level resolution (Bohle & Gabaldón, 2012). Markers that undergo fast rates of evolution have discriminative taxonomic power for resolving closely related species but often lack conserved primer binding sites appropriate for amplifying broad taxonomic groups. Degenerate primers are often designed when conserved primer binding sites are not available. However, primer-template mismatches can generate imperfect primer match with some DNA templates (Pinol, Mir, Gomez-Polo, & Agust, 2015). This primer bias often distorts the biotic composition. Most current metabarcoding projects use a single-locus approach, and the most common markers are the cytochrome c oxidase subunit I (COI) for animals (Hebert et al., 2003; Leray et al., 2013), internal transcribed spacer (ITS) for fungi (Horton & Bruns, 2001; Schmidt et al., 2013), and plastid DNA (matK and rbcL) for land plants (Chase & Fay, 2009; Yoccoz, 2012). Alternative single markers are standardly used for particular taxa. For example, 12S is the most commonly used metabarcoding marker for fish (Miya et al., 2015; Valentini et al., 2016). Using a single organelle marker can occasionally cause erroneous species identification due to interspecific mitochondrial introgressions (Funk & Omland, 2003; Meyer & Paulay, 2005); therefore, the use of both uniparentally inherited organelle DNA and biparentally inherited DNA has been recommended (Taberlet et al., 2012). The mitochondrial COI gene has high resolution for species identification and relatively extensive reference sequence libraries (Ratnasingham & Hebert, 2007), but it is often difficult to amplify consistently across diverse taxonomic groups due to the lack of conserved primer binding sites (Deagle, Jarman, Coissac, Pompanon, & Taberlet, 2014).
It was suggested that using well-designed degenerate COI primers could reduce the COI primer bias (Elbrecht & Leese, 2017). An alternative approach, tested here, is the use of multiple primer pairs per marker. In contrast to the mitochondrial COI gene, the nuclear 18S gene provides more conserved priming sites for greater amplification success across broad taxonomic groups, but often provides lower resolution for species identification (Bucklin et al., 2016; Hebert et al., 2003; Saccone, Giorgi, Gissi, Pesole, & Reyes, 1999). Another major disadvantage of using 18S as a metabarcoding marker is that it varies in length at the V4 region across diverse species, causing sequence alignment uncertainties across broad taxa and consequently difficulties in estimating divergence thresholds and implementing clustering approaches for species identification (Flynn, Brown, Chain, MacIsaac, & Cristescu, 2015; Hebert et al., 2003). These marker-related problems led many researchers to propose the need to use multiple markers in metabarcoding studies (Bucklin et al., 2016; Chase & Fay, 2009; Drummond et al., 2015). The multimarker approach has the potential to reduce rates of false negatives and false positives. Despite these promises, a multigene approach has rarely been applied in metabarcoding studies, and comparisons of biodiversity estimates across the different markers are usually not reported (e.g., COI for metazoans and RuBisCO for diatoms, Zaiko et al., 2015; species-specific primer pairs of COI and cytochrome b markers, Thomsen et al., 2012; chloroplast trnL and rbcL for surveying different terrestrial habitats, Yoccoz, 2012). In addition, many multimarker metabarcoding studies use a single primer pair per marker (Clarke et al., 2017; Drummond et al., 2015; Kermarrec et al., 2013). Using multiple primer pairs is expected to reduce amplification biases and increase the chances of detecting all targeted taxonomic groups. To fully understand the performance of a multigene metabarcoding approach, mock communities are ideal because the expected number of species is known a priori (Clarke, Soubrier, Weyrich, & Cooper, 2014; Elbrecht et al., 2016; Kermarrec et al., 2013). Nonetheless, there are few studies that have taken this approach (but see Clarke et al., 2014). As such, there is an urgent need for experiments that test species detection rates and taxonomic identification accuracy in mock communities using multimarker and multiprimer pair metabarcoding to test the validity of this method for biomonitoring.

In this study, we use a combination of mitochondrial (COI) and nuclear (18S) markers and multiple COI primer pairs in a single Illumina run for recovering species by metabarcoding mock communities of zooplankton. Species detection is assessed among markers and primer pairs to evaluate the benefit of multiple markers and multiple primer pairs per marker. We also compare species detection rates and detection accuracies with a single-marker metabarcoding experiment in which 18S was used alone. We calibrate the multiplexing multigene approach using a series of mock communities containing single individuals per species (SIS), multiple individuals per species (MIS), and populations of single species (PSS). The resulting calibrated workflow performs better than a single-marker or single-primer-pair approach and can be applied to assess zooplankton biodiversity in natural aquatic habitats.
Primer testing
Preliminary primer amplification tests were conducted qualitatively on a total of 103 species using 13 COI primer pairs (COI-5P region) and one 18S primer pair (V4 region; see Supporting information Table S1 for the complete list of primers). We selected primer pairs known to provide amplification success across a wide range of taxa as well as good discriminatory power for species identification. The only 18S primer pair tested is known for its successful amplification across a broad range of zooplankton groups (Brown, Chain, Zhan, MacIsaac, & Cristescu, 2016; Zhan et al., 2013). Specimens used for primer testing were sampled from 16 major Canadian ports across four geographic regions (Atlantic coast, Pacific coast, Arctic, Great Lakes; Chain et al., 2016; Brown et al., 2016) and were identified morphologically by taxonomists. A total of 103 species belonging to the phyla Rotifera, Crustacea, Mollusca, and the Subphylum Tunicata were selected and tested (see Supporting information Table S2). A subset of those species was used to assemble mock communities for metabarcoding validation (see Supporting information Table S2). PCR amplification was performed in a total volume of 12.5 μl: 0.2 μM of each forward and reverse primer, 1.25 U taq DNA polymerase (GeneScript, VWR), 2 mM Mg2+, 0.2 μM dNTP, and 2 μl of genomic DNA. The PCR conditions for each primer pair were based on their sources in the literature (Supporting information Table S1). After the broad primer testing, four primer pairs were selected for metabarcoding several mock communities: one targeting the nuclear 18S V4 region (Zhan et al., 2013) and three COI primer pairs producing three different (partially overlapping) fragments within the COI-5P region (Figure 1, Supporting information Figure S1, Table 1).

Assemblage of mock communities
Mock communities were constructed with the aim of incorporating various levels of genetic variation (intragenomic, intraspecific, interspecific) and representing natural zooplankton communities, including species from broad taxonomic groups: Mollusca, Rotifera, Tunicata, and Crustacea (Amphipoda, Anostraca, Cladocera, Cirripedia, Copepoda, Decapoda) (see Supporting information Table S3 for the species list). Morphologically identified specimens are at the species or genus level, with a few exceptions that were identified only to the family level. DNA was extracted from each specimen using Qiagen DNeasy Blood & Tissue kits and stored in ultrapure water in the freezer at −20°C, as described in Brown, Chain, Crease, MacIsaac, and Cristescu (2015). The DNA was eventually combined into several different mock community assemblies. Laboratory blanks were used consistently during DNA extractions to assure no interference from contamination. The single individuals per species (SIS), multiple individuals per species (MIS), and populations of single species (PSS) communities correspond to the library series listed in Supporting information Table S3 (the PSS communities are libraries 3a1-d3). The inclusion of single individuals in the SIS communities allowed examination of species detection with only interspecific variation. The MIS communities, which most closely resembled natural communities, allowed the examination of species detection with both intraspecific and interspecific variation. The PSS communities allowed the examination of intraspecific variation and the relationship between read abundance and the number of individuals of the same species.
FIGURE 1. The amplified fragments used for metabarcoding. The 5′ end fragment of 325 bp refers to the FC fragment matching the COI-5P gene before nucleotide position 400. The 3′ end fragment of 313 bp refers to the Leray fragment matching the COI-5P gene after nucleotide position 300, and the whole COI-5P gene of 658 bp refers to the Folmer fragment, with forward reads R1 matching before nucleotide position 300 and reverse reads R2 matching after nucleotide position 400. The primers are not included in the fragment lengths, and the gray lines refer to the forward and reverse reads from the paired-end 300 bp Illumina MiSeq next-generation sequencing. *The 18S fragment sizes vary between species, resulting in some forward and reverse reads that do not overlap.

Library preparation and next-generation sequencing (NGS)
DNA extractions were first quantified using PicoGreen (Quant-iT Picogreen dsDNA Assay Kit, Thermo Fisher Scientific Inc.), then diluted to 5 ng/μl. The protocol "16S Metagenomic Sequencing Library Preparation" (Illumina Inc.) was used with small modifications to prepare the sequencing-ready libraries. Library preparation involved a first PCR, followed by a first cleaning with Agencourt AMPure beads (Beckman Coulter Life Sciences Inc.), a second PCR with the Nextera Index kit (V3), and a second clean-up prior to next-generation sequencing (NGS). The first PCR was conducted in two replicates for each library and each of the four DNA fragments. Negative controls were included in each round of PCRs. PCR amplification was performed in a total volume of 12.5 μl: 0.2 μM of each forward and reverse primer, 6 μl of 2x KAPA HiFi HotStart ReadyMix (KAPA Biosystems Inc., USA), and 1.5 μl of diluted genomic DNA. Due to the incompatibility of the KAPA kit with primers involving inosine ("I") in the COI primer Ill_C_R (Shokralla et al., 2015), all the FC fragments were amplified using a standard PCR gradient with taq DNA polymerase (GeneScript, VWR) as in the original paper (Shokralla et al., 2015). The PCR thermocycler regimes were the same as in the original papers: 18S V4 (Zhan et al., 2013), FC (Shokralla et al., 2015), Leray (Leray et al., 2013), and Folmer (Folmer, Black, Hoeh, Lutz, & Vrijenhoek, 1994) (see Figure 1 for details). The two replicates of each PCR reaction for each fragment were pooled together after visualization on a 1% electrophoresis gel. The PCR products were quantified and pooled (equal volumes from each fragment) so that each library contained all four fragments. After this step, there was a total of 24 libraries, each with four different PCR amplicons (Supporting information Table S3): six SIS libraries (simple communities), six MIS libraries (complex communities), and 12 PSS libraries (single-species communities). The 24 libraries obtained were cleaned using ultrapure beads at a ratio of 0.875 (28 μl beads in 32 μl solution), indexed using the Nextera XT Index Kit (24-index, V3), and given a final clean-up using ultrapure beads to become sequencing-ready. All libraries were submitted to Genome Quebec for final quantification, normalization, and pooling and were sequenced using paired-end 300-bp reads on an Illumina MiSeq sequencer in one run. Note that the four single individuals per species (SIS) libraries (1a, 1b, 1c, 1d) and the four multiple individuals per species (MIS) libraries (2a, 2b, 2c, 2d) were quantified and pooled in equimolar amounts for next-generation sequencing. The PSS libraries (3a1-d3) were quantified and pooled at different molarities relative to the number of individuals of each species.
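The molarity-based pooling described above can be illustrated with a short calculation. The sketch below is purely illustrative: the library names, concentrations, amplicon lengths, and target amounts are invented, and the exact normalization procedure applied before sequencing is not described in the text. It assumes the standard approximation of 660 g/mol per base pair for double-stranded DNA.

```python
# Hypothetical illustration of molarity-based amplicon pooling (values are invented).
AVG_BP_MASS = 660.0  # g/mol per base pair, standard approximation for dsDNA

def amplicon_nM(conc_ng_per_ul: float, length_bp: int) -> float:
    """Convert a dsDNA concentration (ng/ul) to nM for a given amplicon length."""
    return conc_ng_per_ul * 1e6 / (AVG_BP_MASS * length_bp)

def pooling_volumes(libraries: dict, target_fmol: float) -> dict:
    """Volume (ul) of each library needed to contribute `weight * target_fmol` of amplicon.

    `libraries` maps a library name to (concentration in ng/ul, amplicon length in bp,
    relative weight). PSS libraries would receive weights proportional to the number of
    individuals pooled; SIS/MIS libraries a weight of 1 (equimolar).
    """
    volumes = {}
    for name, (conc, length, weight) in libraries.items():
        conc_nM = amplicon_nM(conc, length)       # 1 nM == 1 fmol/ul
        volumes[name] = weight * target_fmol / conc_nM
    return volumes

if __name__ == "__main__":
    libs = {
        "SIS_1a_18S": (12.0, 400, 1.0),   # example values only
        "MIS_2a_COI": (8.5, 313, 1.0),
        "PSS_3a1":    (10.0, 325, 3.0),   # e.g., three individuals of one species
    }
    for lib, vol in pooling_volumes(libs, target_fmol=10.0).items():
        print(f"{lib}: {vol:.2f} ul")
```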
It is worth noting that the pooling of PCR amplicons prior to indexing makes this a more cost-effective approach than methods that separately index each PCR amplicon prior to pooling.

TABLE 1. The four primer pairs used in this metabarcoding study: the 18S primer pair amplifying the V4 region and three COI primer pairs amplifying different fragments of the COI-5P gene. See Supporting information Table S1 for the complete list of primers used for the preliminary primer testing step.

The same genomic DNA of the four SIS (libraries 1a, 1b, 1c, 1d) and the four MIS (libraries 2a, 2b, 2c, 2d) communities was sequenced using only the 18S primers in a separate run. The library preparation and sequencing were performed in the same proportions of 5% of one flowcell using the same Illumina MiSeq paired-end 300-bp platform. This experiment was conducted to compare sequencing depth and species detection rates between a metabarcoding run with a single marker versus a multiplexed metabarcoding run with other markers/fragments (more than one marker/fragment per run for the same sample/library).

Building a local reference database
We created a local database composed of 149 total sequences used for taxonomic assignment of reads (see Supporting information Figure 1 for the detailed fragment positions). The 18S reference sequences contained the V4 region without trimming. The best BLAST hit against our local reference database was used to classify each sequence read, with a minimum of 95% identity and an alignment length of at least 150 bp in forward and reverse reads. These relatively relaxed thresholds were used to accommodate the species with congeneric or confamilial reference sequences. In the case of multiple best hits, if the correct species assignments could not be confirmed manually based on reads blasting against the online database on GenBank, they were excluded from further analysis.

Bioinformatics analyses
The bioinformatic pipeline in this study consisted of demultiplexing, quality filtering, trimming raw reads, and assigning taxonomy via BLASTN (Altschul, Gish, Miller, Myers, & Lipman, 1990) against our local reference database. Taxonomic assignment was conducted at a minimum of 95% identity. We first performed a local BLAST to find unique best hits. When multiple species had equal identity, a second BLAST search in GenBank was performed to find unique best hits. If multiple species still appeared as having equal identity, the read was excluded from further analysis (Supporting information Figure S2). Each mock community was processed as a separate "library" and could be demultiplexed via its unique combination of indices. Raw reads were assigned to their corresponding libraries, generating paired forward R1 and reverse R2 files for each library (raw read pairs in Table 2). The raw reads were then quality filtered and trimmed via "Quality Trimmer" from the FASTX-Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/), with a minimum Phred quality score of 20 and a minimum length of 150 bp after trimming (see trimmed-R1 and trimmed-R2 in Table 2). After quality trimming, the R1 and R2 reads were separately used as queries in BLAST against the local database. The BLAST results of R1 and R2 were concatenated together (see paired reads after trimming in Table 2), and only the sequences with both R1 and R2 returning an accepted BLAST match to a reference sequence (>95% identity and >150 bp) were kept for further analysis (see filter-blasting step in Table 2).
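As a concrete illustration of the per-read classification rule just described, the following minimal Python sketch filters BLAST tabular output, keeping only reads with an unambiguous best hit that passes the identity and alignment-length thresholds. It assumes BLASTN was run with the standard tabular format (-outfmt 6) and that subject identifiers encode species names; both are assumptions made for illustration, and the authors' own scripts (referenced in the Data Accessibility section) may differ.

```python
# Minimal sketch of the acceptance rule described in the text, assuming BLASTN
# tabular output (-outfmt 6): qseqid sseqid pident length mismatch gapopen
# qstart qend sstart send evalue bitscore.
from collections import defaultdict

MIN_IDENTITY = 95.0   # minimum percent identity
MIN_ALN_LEN = 150     # minimum alignment length (bp)

def best_hits(blast_tab_path: str) -> dict:
    """Return {read_id: species} for reads with a single, unambiguous best hit."""
    hits = defaultdict(list)            # read_id -> [(bitscore, species), ...]
    with open(blast_tab_path) as fh:
        for line in fh:
            f = line.rstrip("\n").split("\t")
            read_id, subject = f[0], f[1]
            pident, length, bitscore = float(f[2]), int(f[3]), float(f[11])
            if pident < MIN_IDENTITY or length < MIN_ALN_LEN:
                continue                # reject weak matches
            species = subject.split("|")[0]   # assumes reference IDs encode the species
            hits[read_id].append((bitscore, species))

    assignments = {}
    for read_id, scored in hits.items():
        top = max(score for score, _ in scored)
        top_species = {sp for score, sp in scored if score == top}
        if len(top_species) == 1:       # ambiguous best hits are excluded (or re-checked online)
            assignments[read_id] = top_species.pop()
    return assignments

# Usage sketch: r1_assign = best_hits("library1a_R1_vs_localdb.tab")
```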
The BLAST results were then further filtered based on whether both R1 and R2 reads were assigned to the same species (see filter-blasting same species in Table 2 and Supporting information Figure S2).

Primer testing
The preliminary amplification success of 14 primer pairs was tested on 103 species (Supporting information Table S2). The highest success of 76% was observed for the 18S fragment (Zhan et al., 2013), followed by the COI_Radulovici fragment (58%) (Radulovici, Bernard, & Dufresne, 2009) and then the COI_FC fragment (50%) (Shokralla et al., 2015). The overall amplification success rate of the COI_Leray fragment (38.5%) (Leray et al., 2013) was similar to that of the COI_Folmer fragment (37.5%) (Folmer et al., 1994). Although the three COI fragments COI_FC, COI_Leray, and COI_Folmer were designed to target a wide range of phyla, amplification success was found to be dependent on the species group. When selecting the primers for the metabarcoding study, we considered not only the overall performance of the primers under our specific conditions but also the size of the amplicons, as well as the general use of the primer pair in other barcoding-related studies.

Read abundance comparison
A total of 20.73 million raw read pairs were sequenced from the mock communities, of which 16.72 million paired reads remained after quality filtering (Table 2).

The performance of the 18S marker when used alone vs. with other markers
The method tested here is a multimarker approach with more than one marker sequenced in one run rather than requiring multiple runs, making it versatile and cost-effective. However, the impact of this "multiplex" approach on species detection rates and sequencing depth (number of reads per individual/species) needs to be examined. We compared results from the 18S marker in our multiplexed multimarker approach to those in a single-marker approach using both the SIS communities (Supporting information Table S5) and MIS communities (Table 3). In both cases, the sequencing depth (number of reads), on average and per individual or per species, was consistent between the single-marker and multimarker datasets (Table 3 and Supporting information Table S5). In the SIS communities of 56 species, discrepancies were only found when read counts were very low. For example, three species were detected exclusively in the single-marker dataset, while three species were detected exclusively in the multimarker datasets, but the number of reads in all six of these instances was low (≤11 reads), representing about 0.003% of the total taxonomically assigned reads. In the MIS communities of 14 species, only two species had different detection between the single-marker and multimarker datasets: Leptodora kindtii was only detected in the single-marker experiment with 47 reads, and Corbicula fluminea was only detected in the multimarker datasets with nine reads (see Table 3). This demonstrates that the majority of species were consistently detected in both single-marker and multimarker metabarcoding runs.

Primer pair choice and species detection
The three different COI fragments selected correspond to the COI-5P region (Figure 1), encompass regions with different levels of genetic variation across the species included in the mock communities, and show variation in the amplification success of various taxonomic groups. The number of species detected among the three COI fragments was compared in both SIS communities and MIS communities (Figure 4).
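The fragment-by-fragment comparison whose results follow (species uniquely recovered by one fragment, species shared by all three, and the gain from combining fragments) amounts to simple set arithmetic on the lists of detected species. The sketch below illustrates that calculation with invented species names and detection sets; the percentages it prints are toy values, not the study's results.

```python
# Illustrative fragment-overlap comparison with placeholder species names.
detected = {
    "FC":     {"sp1", "sp2", "sp3", "sp5"},
    "Leray":  {"sp1", "sp2", "sp4"},
    "Folmer": {"sp1", "sp3", "sp4", "sp6"},
}
expected = {f"sp{i}" for i in range(1, 11)}   # species known to be in the mock community

union = set().union(*detected.values())        # detected by the combined fragments
shared = set.intersection(*detected.values())  # detected by all three fragments

print(f"detected by all three fragments: {len(shared) / len(expected):.0%}")
print(f"detected by the combined fragments: {len(union) / len(expected):.0%}")
for frag, spp in detected.items():
    others = set().union(*(v for k, v in detected.items() if k != frag))
    unique = spp - others                      # uniquely recovered by this fragment
    print(f"{frag}: {len(spp) / len(expected):.0%} detected, uniquely recovered: {sorted(unique)}")
```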
We found that few species (2%-3%) were uniquely recovered by a single COI fragment, and most of the species (45%-60%) were consistently detected by all three COI fragments. The three COI fragments together detected 59% of the species in SIS and 80% of the species in MIS communities (Figure 4). The use of three COI primer pairs improved species detection rates by 3.8%-7.5% in SIS and 3%-17% in MIS communities compared to using a single COI primer pair.

Marker choice and species detection
Species detection rates in the SIS communities and MIS communities were also compared between the 18S V4 marker and the three COI fragments considered together (Figure 5). Combining both markers (Figure 6) improved species detection rates by 11.3%-16.6% compared to using the 18S marker alone and by 13.3%-30.2% compared to using the three fragments of the COI marker (see Supporting information Table S6 for detailed species detection).

False positives: detection of species not intentionally included in mock communities
The incidence of false positives (species detected but not intentionally included) in the three main communities was compared between the 18S V4 marker and the three COI fragments (Table 4). In general, a low number of reads (sometimes single reads) were assigned as contaminants (Supporting information Tables S7 and S8).

Notes (Table 3): "n" refers to the number of individuals per species. The two instances in which species detection differed between the 18S marker alone and the 18S marker multiplexed are marked with an asterisk. Pearson's correlation coefficient r = 0.873, R2 = 0.763.

The use of multiple group-specific COI primer pairs has been suggested as an efficient method for obtaining higher amplification success when studying broad taxonomic groups (Bucklin et al., 2016; Cristescu, 2014). Moreover, the use of both uniparentally inherited markers such as COI and biparentally inherited markers such as 18S has been suggested as an efficient method for increasing the accuracy of species identification (Taberlet et al., 2012). Through the use of mock communities with known taxonomic composition, we demonstrate that a multigene (COI and 18S) and multiprimer pair (three COI primer pairs) metabarcoding approach can improve species detection and provides the built-in ability to cross-validate results.

Multiple primer pairs
The mitochondrial COI marker has been reported to be technically challenging for amplification of broad taxonomic groups due to the lack of conserved priming sites (Bucklin et al., 2016; Deagle et al., 2014). Both group-specific (Bucklin et al., 2010) and species-specific (Thomsen et al., 2012) primer pairs have been used in COI barcoding and metabarcoding. The 18S primer pair used in this study targets the V4 region of zooplankton and was successful in previous metabarcoding studies (Chain et al., 2016; Zhan et al., 2013). The 13 COI primer pairs tested here showed major differences in overall amplification success depending on the group of species. Overall, amplification success of the 13 COI primer pairs followed species-specific rather than group-specific patterns in the majority of taxa tested here (Supporting information Table S2). In addition to amplification success across taxa of interest, amplicon length is also an important consideration for studies using degraded environmental DNA, which require short amplicons (Meusnier et al., 2008), and is upwardly limited by the capacity of NGS technology to obtain accurate long reads (Shaw, Weyrich, & Cooper, 2017).
For example, primer pairs used here that amplified more than 600 bp (Tables S1 and S2)

Most metabarcoding studies use a single primer pair, but the use of multiple primer pairs (species-specific or not) has been suggested and shown to improve amplification success from community samples (Bucklin et al., 2010, 2016; Clarke et al., 2014; Elbrecht & Leese, 2017; Thomsen et al., 2012). Species detection rates of the three COI fragments in our metabarcoded mock communities were expected to be higher than species amplification success during the qualitative primer testing due to massive parallel sequencing and a high level of sensitivity. This was generally true, but species detection varied across the three COI fragments, presumably due to primer biases (see Supporting information Figure S1 for comparison) (Clarke et al., 2014; Letendu et al., 2014; Thomsen et al., 2012). The multiple COI primer pairs covering different regions of the same marker in this study were found to improve species detection rates in both SIS (3.8%-7.5%) and MIS (3.3%-16.7%) mock communities. However, degenerate COI primer pairs have been shown to have better species detection rates than nondegenerate primers (Elbrecht & Leese, 2017) when very broad taxonomic groups are investigated. Therefore, the use of a degenerate reverse primer for the Leray and Folmer fragments may further improve the species recovery rates. The use of multiple primer pairs can be applied as an alternative approach for markers without such fully degenerate primers available.

Marker choice
It is generally accepted that the choice of metabarcoding marker can greatly affect species estimates (Bucklin et al., 2016; Cristescu, 2014; Tang et al., 2012). Nevertheless, only a limited number of metabarcoding studies have used a multigene approach, and multiple evolutionarily independent markers have even more rarely been sequenced in a single NGS run. A few metabarcoding biodiversity studies have compared 18S and COI markers, with results varying across different taxonomic groups. Drummond et al. (2015) reported both COI and 18S markers providing good proxies to a traditional biodiversity survey dataset for soil eDNA. Tang et al. (2012) reported that COI in eDNA surveys of meiofauna estimated more species than morphospecies (species identified by morphology), related species.

Figure legend: species recovery rates for single individuals per species (n = 53) and multiple individuals per species (n = 30).

Overall, the combination of 18S and COI improved species detection rates by 11%-30% compared to using a single 18S or COI marker with the tested primers. Sequencing depth is often of major concern for fully describing community members from a complex sample. The number of libraries pooled in one sequencing run affects the number of reads per species (Letendu et al., 2014; Shaw et al., 2017). As expected, we found that the number of reads per individual or species varied significantly across markers and fragments. We consider that efficient equimolar quantification prior to pooling, the inclusion of amplicons of similar length, and adjusted bioinformatics pipelines could potentially counter this variation. On a more positive note, the number of reads assigned to each species and overall species detection rates were consistent whether using a single-marker or multimarker metabarcoding approach.
Therefore, the sequencing depth and species detection rates were not affected using multiple markers in one sequencing run, indicating that multiplexing several primer pairs and markers can provide a robust method to characterize samples without appreciably sacrificing read depth or species detection. Our study compares species detection success in zooplankton metabarcoding using two evolutionarily independent markers combined with different primer pairs of the same marker. It is important to recognize that the relatively high species recovery we report might not be achieved in studies applying different bioinformatics steps such as implementing OTU clustering methods or using online reference databases which are likely to increase both false positives and false negatives. With the increasing data output from NGS technologies and the ability to pool libraries for sequencing, our results support the use of multiple genetic markers as a cost-effective approach to assessing biodiversity in a broad range of taxa within the same run. This approach also provides a built-in means to cross-validate species detection among the markers. PCR-free methods have been developed to avoid PCR bias and to enable use of more markers (Zhou et al., 2013). Through this study, the use of two evolutionarily independent markers significantly improved species detection rates, and the use of three COI primer pairs improved species detection rates for particular taxa.

[Table 4: Rates of false negatives (species not detected) and false positives (species detected but not included in the mock communities) for the four fragments in all libraries.]

| CONCLUSIONS

Most metabarcoding studies to date have sequenced single markers, but the choice of marker is known to greatly affect species estimates and detection accuracy. Our results suggest that a multiplexed metabarcoding approach using multiple markers and multiple primer pairs can ultimately achieve more accurate biodiversity estimates by reducing both false positives and negatives. Furthermore, the sequencing depth (number of reads per species) and species detection rates remained consistent whether multiplexing multiple fragments or using a single marker. Overall, our metabarcoding approach utilizing multiple markers and multiple primer pairs improved the species detection rates compared to using a single primer pair and/or marker. Thus, metabarcoding based on multiplexed fragments can be cost-effective and useful for biomonitoring zooplankton in natural communities.

ACKNOWLEDGEMENTS

We thank R. Young and S. Adamowicz for providing the reference sequences for many of the zooplankton species included in this study. We also thank E. Brown

CONFLICT OF INTEREST

None declared.

DATA ACCESSIBILITY

The scripts used for bioinformatics analysis are available in the
v3-fos-license
2023-04-08T06:17:44.451Z
2023-04-07T00:00:00.000
258009084
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10388-023-01000-4.pdf", "pdf_hash": "a46080625ad758b11aa6dbd38ceac2b114a4d060", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41127", "s2fieldsofstudy": [ "Medicine" ], "sha1": "0d3a2f9ec11de3d6d87c105652ece3d3b7341b7e", "year": 2023 }
pes2o/s2orc
Outcomes of solitary postoperative recurrence of esophageal squamous cell carcinoma diagnosed with FDG-PET/CT and treated with definitive radiation therapy Background Surgical resection of esophageal cancer is frequently performed to achieve a complete cure. However, the postoperative recurrence rate is 36.8–42.5%, leading to poor prognosis. Radiation therapy has been used to treat recurrences; solitary recurrence has been proposed as a prognostic factor for radiation therapy, though its significance is unclear. 18F-fluorodeoxyglucose positron emission tomography is a highly accurate diagnostic modality for esophageal cancer. This retrospective study aimed to analyze the outcomes of solitary postoperative recurrences of esophageal squamous cell carcinoma diagnosed with 18F-fluorodeoxyglucose positron emission tomography and treated with definitive radiation therapy. Methods We examined 27 patients who underwent definitive radiation therapy for single or multiple postoperative recurrences of esophageal squamous cell carcinoma between May 2015 and April 2021. 18F-fluorodeoxyglucose positron emission tomography/computed tomography was performed within 3 months before the commencement of radiation therapy. Kaplan–Meier, univariate, and multivariate analyses were performed to examine the overall survival and identify potential prognostic factors. Results The 1-, 2-, and 3-year overall survival rates were 85.2%, 62.6%, and 47.3%, respectively, and solitary recurrence was the only significant factor associated with overall survival (P = 0.003). The 1-, 2-, and 3-year overall survival rates in patients with solitary recurrence were 91.7%, 80.2%, and 80.2%, respectively, and in patients with multiple recurrences they were 80.0%, 50.3%, and 25.1%, respectively. Multivariate analysis also showed solitary recurrence as a significant factor for overall survival. Conclusions When diagnosed with 18F-fluorodeoxyglucose positron emission tomography/computed tomography, solitary recurrence appears to have a more favorable prognosis than multiple recurrences. We hypothesized that solitary recurrence, accurately diagnosed with FDG-PET before the radiation therapy, is an important factor for OS in esophageal cancer patients. This study aimed to analyze the outcomes of solitary recurrence of esophageal squamous cell carcinoma diagnosed using FDG-PET/computed tomography (CT) and treated with radiation therapy. To the best of our knowledge, no studies have evaluated the effect on survival after radiation therapy of solitary postoperative recurrence of esophageal cancer, specifically diagnosed with FDG-PET/CT, compared to multiple recurrences. Methods This study was approved by the Institutional Review Board (Approval No. K2206-001) and followed the Declaration of Helsinki. The need for informed consent was waived because of the retrospective nature of the study. Data were collected from medical records and radiation therapy plans. Study population We retrospectively analyzed patients treated with definitive radiation therapy for localized postoperative recurrent esophageal cancer between May 2015 and April 2021 at Okayama University Hospital. The initial surgery was basically radical subtotal esophagectomy with 2-or 3-field lymph node dissection. 
The inclusion criteria were: (1) primary tumor pathology confirmed as esophageal squamous cell carcinoma; (2) postoperative recurrent sites excluding the mucosa; (3) no history of radiation therapy for recurrent tumors; (4) no disseminated and/or hematogenous metastases, such as in the liver or lungs; (5) no other active cancers; (6) FDG-PET/CT performed within 3 months before initiation of radiation therapy; (7) all recurrent tumors scheduled for radiation of at least 50 Gy; and (8) at least one follow-up visit after radiation therapy completion. Patients who participated in esophageal cancer clinical trials were excluded. Clinical stages were determined based on the 8 th Edition of the Union for International Cancer Control TNM classification. Recurrence was diagnosed comprehensively by surgeons and radiation oncologists using physical findings, tumor markers, endoscopy, CT, and FDG-PET/CT findings. Treatment Radiation therapy was performed using X-ray beams 5 days/ week. The prescribed dose was 50-66 Gy in fractions of 1.8 or 2.0 Gy. The typical radiation dose was 60 Gy in fractions of 2.0 Gy. When the reconstructed intestinal tract and/or small bowel was irradiated, a dose of 50-54 Gy in fractions of 1.8 or 2.0 Gy was selected. CT simulation was used. Gross tumor volume (GTV) was determined using CT and FDG-PET/CT. The clinical target volume (CTV) was defined as GTV plus a 0.5-1 cm margin. The planning target volume was defined as CTV plus a margin of 0.5 cm. An internal margin was also considered if the tumor moved with the respiration. Three-dimensional conformal radiation therapy (3D-CRT) was used in most cases. Intensity-modulated radiation therapy (IMRT) was considered if the irradiated area included the neck. Surgeons and radiation oncologists determined whether prophylactic irradiation was required. Systemic chemotherapy was administered concurrently if possible. Surgeons used performance status, treatment history, and other factors to determine chemotherapy regimens. Follow-up Follow-up was conducted approximately at 3-month intervals for the first 2 years after radiation therapy and every 6 months thereafter. Recurrence was evaluated using physical findings, tumor markers, endoscopy, CT, magnetic resonance imaging, and FDG-PET/CT findings. Late adverse events were defined as those that emerged ≥ 91 days after initiation of radiation therapy and were evaluated using the Common Terminology Criteria for Adverse Events, version 5.0. Only non-hematologic adverse events of grade > 2 were investigated. Prognostic factors Several possible prognostic factors were evaluated, including sex, age, performance status, initial clinical stage, number of tumors, tumor diameter, location of recurrence, interval from final surgery to diagnosis of recurrence, radiation dose, prophylactic irradiation, and concurrent chemotherapy. Age and performance status were assessed at the beginning of the radiation therapy. Performance status was evaluated using the Eastern Cooperative Oncology Group (ECOG) scale. Based on the tumor number, patients were categorized into solitary or multiple tumor groups. Tumor diameter was defined as the long-axis diameter; if multiple tumors were present, the diameter of the largest tumor was considered as the tumor diameter. The minimum prescribed dose was selected if multiple tumors were treated with different doses. 
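For readers who want a concrete picture of the target-volume expansion described in the Treatment subsection above (CTV = GTV plus a 0.5-1 cm margin, PTV = CTV plus a 0.5 cm margin), the snippet below grows a synthetic binary mask by a fixed margin using a Euclidean distance transform. It is a toy illustration under assumed voxel spacing and margin values, not the clinical treatment-planning workflow used in the study.

```python
# Toy GTV -> CTV -> PTV margin expansion on a synthetic mask.
# Spacing, margins, and geometry are assumptions for illustration only.
import numpy as np
from scipy.ndimage import distance_transform_edt

def expand_mask(mask, margin_mm, spacing_mm):
    """Grow a boolean mask isotropically by margin_mm, honoring voxel spacing."""
    # Distance (mm) from each background voxel to the nearest mask voxel.
    dist = distance_transform_edt(~mask, sampling=spacing_mm)
    return mask | (dist <= margin_mm)

spacing = (2.0, 2.0, 2.0)                      # mm per voxel (assumed)
shape = (60, 60, 60)
zz, yy, xx = np.indices(shape)
center = np.array(shape) / 2.0
# Spherical "GTV" of 10 mm radius at the grid center.
gtv = ((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2) <= (10.0 / spacing[0])**2

ctv = expand_mask(gtv, margin_mm=7.5, spacing_mm=spacing)   # midpoint of the 0.5-1 cm range
ptv = expand_mask(ctv, margin_mm=5.0, spacing_mm=spacing)   # 0.5 cm setup margin

voxel_cc = np.prod(spacing) / 1000.0            # mm^3 -> cm^3
for name, m in (("GTV", gtv), ("CTV", ctv), ("PTV", ptv)):
    print(f"{name}: {m.sum() * voxel_cc:.1f} cm^3")
```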
Statistical analyses Kaplan-Meier analyses were performed to determine OS, progression-free survival (PFS), and local control (LC) rates starting from initiation of radiation therapy until death from cancer or other causes, disease progression or death from cancer or other causes, and tumor progression within the irradiation field, respectively. Cutoff values to categorize factors such as age, recurrent tumor diameter, interval from final surgery to diagnosis of recurrence, and radiation dose were determined according to the median values of the study population. Univariate analyses of various potential prognostic factors for OS, PFS, and LC were performed using log-rank tests. If the number of tumors was revealed as a significant prognostic factor in univariate analysis, the solitary and multiple recurrence groups were compared using two-sided Fisher's exact test for categorical variables. Furthermore, we estimated hazard ratios (HRs) and 95% confidence intervals (CIs) for OS using the Cox proportional hazard model to adjust for potential confounders in the multivariate analysis. Various factors were sequentially included in three models: we started by analyzing a crude model (model 1, including only the number of tumors), then we adjusted for sex and age (model 2). Subsequently, we adjusted the analysis for factors that were significantly different between the solitary and multiple recurrence groups and for significant risk factors in the log-rank test (model 3). P values < 0.05 indicated statistically significant differences. In the crude and multivariate analyses of OS, HRs > 1.00 indicated an increased risk of death. Kaplan-Meier and Cox proportional hazard model-based analyses were performed using the IBM Statistical Package for the Social Sciences for Windows, version 26 (IBM, Armonk, NY, USA). Fisher's exact test was performed using Stata 17 software (StataCorp LP, College Station, TX, USA). Patient and treatment characteristics Patient and treatment characteristics are presented in Table 1. In total, 27 patients were included in the analysis (24 males and 3 females; median age: 70 years, range 49-86). A solitary lesion was diagnosed in 12 patients, whereas 15 presented ≥ 2. The median tumor diameter was 29 mm (range 12-49). The median follow-up time was 24 months (range 5-71). Regarding the initial clinical stage, 4, 10, 4, and 9 patients were in stages I, II, III, and IV, respectively. Patients with stage IV disease had no distant metastases other than supraclavicular lymph nodes. Before the initial surgery, 22 patients underwent chemotherapy. Seven patients underwent surgery for the first postoperative recurrence. One patient underwent additional surgery for the second postoperative recurrence. Therefore, they underwent radiation therapy for the second and third recurrence after surgery. All patients completed the planned radiation therapy. Moreover, systemic chemotherapy was administered simultaneously in 25 patients; the remaining two were treated solely with radiation therapy because of advanced age or renal failure. Concurrently used regimens included tegafur/gimeracil/oteracil potassium in 18 patients; cetuximab in 2; cisplatin, docetaxel, and 5-fluorouracil in 2; cisplatin and 5-fluorouracil in 1; cisplatin in 1; and docetaxel in 1. The median radiation dose in 27 patients was 60 Gy (50-66 Gy). Furthermore, 22 patients received a dose of ≥ 60 Gy. Two patients were treated with IMRT, and the other 25 with 3D-CRT. 
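As a rough sketch of the survival workflow described in the Statistical analyses subsection (Kaplan-Meier estimation, log-rank comparison by number of tumors, and a Cox proportional hazards model), the example below uses the Python lifelines package on fabricated data; the study itself used SPSS and Stata, and the column names and values here are placeholders only.

```python
# Sketch of the survival analyses described in the Methods, on synthetic data.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Placeholder data: follow-up time (months), death event flag, and covariates.
df = pd.DataFrame({
    "time_months": [33, 12, 24, 40, 8, 60, 18, 29, 45, 15],
    "death":       [0,  1,  1,  0,  1,  0,  1,  0,  0,  1],
    "solitary":    [1,  0,  0,  1,  0,  1,  0,  1,  1,  0],  # 1 = solitary recurrence
    "age":         [70, 65, 72, 58, 80, 66, 74, 69, 61, 77],
    "male":        [1,  1,  0,  1,  1,  1,  0,  1,  1,  1],
})

# Kaplan-Meier curves by number of tumors (solitary vs multiple recurrences).
km = KaplanMeierFitter()
for label, grp in df.groupby("solitary"):
    km.fit(grp["time_months"], grp["death"], label=f"solitary={label}")
    print(km.survival_function_.tail(1))

# Log-rank test between the two groups (univariate analysis).
sol, mult = df[df.solitary == 1], df[df.solitary == 0]
lr = logrank_test(sol.time_months, mult.time_months, sol.death, mult.death)
print("log-rank p =", lr.p_value)

# Cox proportional hazards model adjusting for sex and age ("model 2"-style).
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="death")
cph.print_summary()  # hazard ratios with 95% CIs
```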
All 12 patients with solitary recurrence were treated with concurrent chemoradiotherapy and received a radiation dose of 60 Gy, without prophylactic irradiation. The initial clinical stage and radiation dose were significantly different between the solitary and multiple recurrence groups (P = 0.006 and P = 0.047, respectively).

Treatment outcome

The 1-, 2-, and 3-year OS rates were 85.2%, 62.6%, and 47.3%, respectively (median OS: 33 months); PFS rates were 51.9%, 38.9%, and 38.9%, respectively (median PFS: 15 months); and LC rates were 77.8%, 68.6%, and 68.6%, respectively (Fig. 1). In-field and out-of-field recurrences occurred in 8 and 14 patients, respectively. Seven patients had both in-field and out-of-field recurrences, and one patient had only in-field recurrence. In-field recurrences occurred between 2 and 19 months (median 5.5) after initiation of radiation therapy. Six patients underwent additional radical treatment for localized recurrences: three underwent surgery and three underwent radiation therapy. Unfortunately, 15/27 patients died. No late nonhematologic adverse events of grade > 2 were observed during the follow-up period. In the univariate analysis of all variables considered, only solitary recurrence was a significant prognostic factor for OS and PFS (P = 0.003 and P = 0.015, respectively; Fig. 2). The 1-, 2-, and 3-year OS rates of patients with solitary recurrence were 91.7%, 80.2%, and 80.2%, respectively, whereas those of patients with multiple recurrences were 80.0%, 50.3%, and 25.1%, respectively. In contrast, only the interval from final surgery to diagnosis of recurrence was a significant prognostic factor for LC (P = 0.036). Results of the univariate analysis for each variable considered are presented in Table 2. In the multivariate analysis, the number of tumors, sex, age, initial clinical stage, radiation dose, and interval from final surgery to diagnosis of recurrence were entered into the Cox proportional hazards model (model 3); the results are presented in Table 3.

Discussion

Radiation therapy and surgery are the main treatment options for postoperative recurrence of esophageal squamous cell carcinoma if the sites of recurrence are limited. Nakamura et al. reported that patients with postoperative recurrent lymph node metastasis who underwent lymphadenectomy and chemoradiotherapy showed significantly higher survival rates than patients who received only chemotherapy or best supportive care [16]. Multimodal treatments, including lymphadenectomy and chemoradiotherapy, could improve survival in patients with esophageal carcinoma lymph node recurrence after curative resection. Several studies have examined the effectiveness of radiation therapy for postoperative recurrent esophageal carcinoma [4-20], and the 2-year OS rates of radiation therapy vary between 15 and 78% [5, 6, 8, 10, 11, 13, 17, 20]. Numerous prognostic factors for the outcome after radiation therapy have been reported, including age [6, 20], performance status [9, 13, 17], tumor size [8-10, 12, 17], number of recurrences [4, 7, 10, 12], disease-free interval [4, 9, 20], total dose [4, 17], and concurrent chemotherapy [9, 12]. In one report [21], stereotactic ablative radiotherapy (SABR) was associated with improved OS. Regarding esophageal cancer, solitary recurrence after curative treatment was investigated in several studies including not only radiation therapy, but also other treatment options [4, 5, 7-10, 12, 14, 16-19].
Moreover, some studies showed that solitary recurrence was a favorable prognostic factor for OS [4, 7, 10, 12, 14, 16, 18, 19]. In our analyses, solitary recurrence was the only significant positive prognostic factor for OS. Chu et al. analyzed radiation therapy for cervical lymph node recurrence in thoracic esophageal squamous cell carcinoma after curative resection [4]. Univariate and multivariate analyses showed single lymph node recurrence as a favorable prognostic factor. Kawamoto et al. investigated the prognostic factors regarding chemoradiotherapy for postoperative lymph node recurrences of esophageal squamous cell carcinoma [7]. Univariate analysis showed that single recurrence was associated with significantly better prognosis. Ma et al. analyzed the effect of radiation therapy on recurrent mediastinal lymph node metastases and reported that the number of locoregional recurrences of these metastases (= 1 vs. > 1) was a prognostic factor in multivariate analysis [12]. However, other studies showed that solitary recurrence was not a significant prognostic factor [5, 9, 17]; we hypothesized that it was not significant because FDG-PET was not performed consistently before radiation therapy. Furthermore, none of these studies on solitary recurrence described the frequency of FDG-PET use before radiation therapy. With other imaging modalities, the conclusive diagnosis of solitary recurrence may have been less accurate. In this study, we might have been able to evaluate true solitary recurrences because all patients were evaluated using FDG-PET/CT. The usefulness of FDG-PET/CT in the initial diagnosis of esophageal cancer has been reported [22, 23]. Additionally, several studies have also suggested its efficacy for follow-up and monitoring after surgery [24-26]. Kudou et al. reported that FDG-PET/CT has a high capability to detect single small recurrent tumors even outside the chest and abdomen and proposed a follow-up method using FDG-PET/CT after esophageal cancer surgery [24]. Pande et al. evaluated the diagnostic performance of FDG-PET/CT in the suspected recurrence of esophageal carcinoma after surgical resection with curative intent [25]. The sensitivity, specificity, and positive and negative predictive values of FDG-PET/CT were 98.4%, 80%, 98%, and 80%, respectively, with an accuracy of 97%. Based on the evidence of distant metastases identified on FDG-PET/CT, a change in management (from radiation therapy/surgery to palliative chemotherapy/best supportive care) was adopted in 41% (28/68) of patients. Furthermore, Goense et al. reported that FDG-PET and FDG-PET/CT, in particular, show a minimal false-negative rate [26]. Pooled estimates of sensitivity and specificity for FDG-PET and FDG-PET/CT in diagnosing recurrent esophageal cancer were 96% and 78%, respectively. In our analyses, the 1-, 2-, and 3-year OS rates overall were 85.2%, 62.6%, and 47.3%, respectively. Kawamoto et al. retrospectively evaluated 21 patients with postoperative solitary lymph node recurrence of esophageal squamous cell carcinoma [8]. Solitary lymph node recurrence was defined as follows: (1) ultrasonography, CT, and physical findings showed a single lymph node, and (2) PET showed focal uptake at the same lymph node. The median follow-up period was 32 months. The 2-year OS rate was 78%.
The OS rates in our study and Kawamoto's study are high compared to those reported in previous studies [4][5][6][7][9][10][11][12][13][14][15][16][17][18][19][20], possibly due to FDG-PET/CT aiding in appropriate patient selection and GTV description. However, possible false-positive cases must be carefully considered. Goense et al. emphasized that histopathological confirmation of a lesion suspected with FDG-PET or FDG-PET/CT is required owing to a considerable false-positive rate [26]. We also agree that a histopathological diagnosis should be performed if the imaging diagnosis is unclear. Nonetheless, FDG-PET is a crucial modality for judging the extent of the tumor; therefore, FDG-PET should be conducted before radiation therapy for postoperative recurrent esophageal squamous cell carcinoma. The 1-, 2-, and 3-year LC rates were 77.8%, 68.6%, and 68.6%, respectively, in our study. In-field recurrence occurred in eight patients with an unsatisfactory LC rate. When the LC rates improve, the OS rates may also improve. [27]. Currently, the optimal technique for radiation therapy has yet to be established. Moreover, the efficacy of the combination of radio-and chemotherapy is yet to be clarified. In our study, the interval from final surgery to diagnosis of recurrence was the only significant prognostic factor for LC. When the interval is longer, the recurrent tumor is likely to be growing slowly. We hypothesized that this aspect could explain why the interval was a significant prognostic factor. Although the disease-free interval has been reported as a prognostic factor for OS [4,9,20], its impact on the LC has yet to be investigated. This study has limitations. First, it was a retrospective study conducted at a single institution; therefore, the relatively short median follow-up period of 24 months may have been insufficient to evaluate the impact of the factors considered on long-term survival and late adverse events. Second, the sample size was limited, and no recurrent lesions were pathologically confirmed. Furthermore, not all patients received concurrent chemotherapy, and the regimens were inhomogeneous; the treatment strategy after radiation therapy varied with patient situations. Currently, immunotherapy is considered as a promising strategy, including nivolumab [28]; in our study, this medication was administered to four patients after failed radiation therapy, possibly contributing to their survival. In conclusion, solitary recurrence appears to have a more favorable prognosis than multiple recurrences, when diagnosed using FDG-PET/CT. Further prospective multicenter studies are required to validate our findings and determine the optimal treatment strategy for postoperative localized recurrence in esophageal cancer. Additionally, more intensive radiation therapy and combination therapy might need to be considered in cases of solitary recurrence to improve prognosis. Author contributions HI contributed to the study design, data collection, analysis and interpretation, and manuscript writing. HI, KY, and SS were responsible for the radiation therapy. ST, MH, NM, and KN performed the patient follow-ups and other treatments. SA and ST contributed to the study design, data analysis, and interpretation. TH contributed to the study design. Funding Open access funding provided by Okayama University. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. 
Declarations

Ethics statement All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1964 and later versions. The requirement for written informed consent was waived owing to the retrospective nature of this study.

Conflict of interest The authors report no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
v3-fos-license
2017-04-13T22:58:37.197Z
2016-01-01T00:00:00.000
13083392
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0004285&type=printable", "pdf_hash": "999b0d85a641d8ac94d06b2a6318d74e8f6df1a8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41128", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "999b0d85a641d8ac94d06b2a6318d74e8f6df1a8", "year": 2016 }
pes2o/s2orc
Trypanosoma cruzi Experimental Infection Impacts on the Thymic Regulatory T Cell Compartment The dynamics of regulatory T cells in the course of Trypanosoma cruzi infection is still debated. We previously demonstrated that acute murine T. cruzi infection results in an impaired peripheral CD4+Foxp3+ T cell differentiation due to the acquisition of an abnormal Th1-like phenotype and altered functional features, negatively impacting on the course of infection. Moreover, T. cruzi infection induces an intense thymic atrophy. As known, the thymus is the primary lymphoid organ in which thymic-derived regulatory T cells, known as tTregs, differentiate. Considering the lack of available data about the effect of T. cruzi infection upon tTregs, we examined tTreg dynamics during the course of disease. We confirmed that T. cruzi infection induces a marked loss of tTreg cell number associated to cell precursor exhaustion, partially avoided by glucocorticoid ablation- and IL-2 survival factor depletion. At the same time, tTregs accumulate within the CD4 single-positive compartment, exhibiting an increased Ki-67/Annexin V ratio compared to controls. Moreover, tTregs enhance after the infection the expression of signature markers (CD25, CD62L and GITR) and they also display alterations in the expression of migration-associated molecules (α chains of VLAs and chemokine receptors) such as functional fibronectin-driven migratory disturbance. Taken together, we provide data demonstrating profound alterations in tTreg compartment during acute murine T. cruzi infection, denoting that their homeostasis is significantly affected. The evident loss of tTreg cell number may compromise the composition of tTreg peripheral pool, and such sustained alteration over time may be partially related to the immune dysregulation observed in the chronic phase of the disease. Introduction Regulatory T cells (Tregs) herein defined as CD4 + Foxp3 + T cells represent a population that plays an essential role in the maintenance of self-tolerance and in the shutdown of inflammatory response. According to their origin, two major classes of Tregs have been described: thymus-derived Tregs (tTregs) and peripherally-derived Tregs (pTregs). The tTreg population is differentiated in the thymus and populates the periphery early, around day 3 of life; whereas in periphery, environmental antigens or other signals can up-regulate Foxp3 in conventional CD4 + T cells, converting them into pTregs [1]. Tregs are also characterized by the expression of certain surface markers, mainly the IL-2 receptor α chain or CD25, which is expressed constitutively in this population. IL-2, together with other cytokines from the same family like IL-15, favours Tregs expansion, maturation and survival [2][3][4]. In the context of infections, Tregs play a special role in controlling the magnitude of immune activation and are also involved in the restoration of the homeostatic environment [5]. However, the studies of Treg dynamics in infectious settings have been mainly focused on the involvement of pTreg population, and little is still known the potential impact of infections upon the thymic compartment of Tregs. Given that tTreg homeostasis is likely to be altered during infectious processes, abnormalities at thymic level may have harmful consequences for host immunocompetence. Chagas disease (also known as American trypanosomiasis) is caused by the protozoan parasite Trypanosoma cruzi (T. 
cruzi) and represents one of the most frequent endemic parasitic diseases in Latin America. Nowadays, Chagas disease has acquired global relevance because is spreading to non-endemic countries. The disease has a wide spectrum of symptoms and outcome, ranging from an asymptomatic infection, to an acute illness or a chronic gastrointestinal or cardiac disease. The role played by Tregs during T. cruzi infection is still controversial and diverse hypotheses have been proposed. An unfavorable role of Tregs during the infection seems plausible either because a defective or excessive function in the partially autoimmune basis for chronic chagasic cardiomyopathy, or parasite persistence, respectively [6][7][8]. In this regard, it is noteworthy that studies conducted in both humans or experimental settings have evaluated Treg dynamics in blood, secondary lymphoid organs or heart [8][9][10][11][12], but there is no available information regarding the potential impact of T. cruzi infection on the tTreg population. It is well established that severe T. cruzi murine infection induces a strong Th-1 response accompanied by a marked thymic atrophy [13]. In this respect, we previously demonstrated that atrophy is mainly caused by massive apoptosis of cortical CD4 + CD8 + double positive (DP) thymocytes induced by raised glucocorticoid levels [13][14][15]. Interestingly, other non-mutually exclusive mechanisms appear to be involved in the evolution of this atrophy, such a decrease in cell proliferation, and an increase in the thymic output of mature and immature thymocytes [16], while is suspected a low income of T-cell progenitors from the bone marrow [16]. Moreover, a variety of alterations in the migration pattern of thymocytes were observed in association with the anomalous expression of receptors and ligands for extracellular matrix proteins, cytokines or chemokines [17][18][19]. Furthermore, our recent results clearly show important phenotypic and functional changes in the pool of pTregs during infection [12], suggesting that tTregs are also affected. Herein, we investigated whether experimental acute T. cruzi infection impacts upon the dynamics of tTregs. Our results provide a clear demonstration that T. cruzi infection caused profound abnormalities in the tTreg compartment within the thymus. Materials and Methods Mice and experimental infection C57BL/6 and BALB/c male mice, aged 6-8 weeks were obtained from the animal facilities at Rosario Medical School and Oswaldo Cruz Foundation. Trypomastigotes of the Tulahuen or Y strain of T. cruzi, corresponding to T. cruzi lineage VI and II respectively [20], were used. Mice were infected subcutaneously with 100 or 1,000 viable trypomastigotes. To monitor the systemic repercussion of the acute disease, parasitemia and the survival time was recorded following infection. Ethics statement Experiments with mice were performed in strict accordance with the recommendations in the Guide for Care and Use of Laboratory Animals of the National Institute of Health and were approved by each Institutional Ethical Committee (School of Medical Sciences from National University of Rosario, Bioethics and Biosecurity Committees, Resolution N°3740/2009, and Oswaldo Cruz Foundation Ethics Committee on Animal Use, Resolution P-0145-02). Determination of cell death and proliferation Assessment of thymocyte death by annexin V labeling (BD Pharmingen) was carried out according to the manufacturer's instructions. 
Briefly, thymocytes were washed in annexin V binding buffer, and 1 × 10⁶ cells were stained with FITC-labeled annexin V, followed by permeabilization and PE-Foxp3 staining, as described previously [12]. In all cases, flow cytometry was performed immediately after staining. Dead cells were gated on the basis of forward and side scatter parameters. For proliferation studies, FITC- or PE-coupled antibodies against Ki-67 were added together with the anti-Foxp3 antibody and subjected to fixation/permeabilization procedures, as described above. For each sample, at least 100,000 events were collected in a FACSAria II flow cytometer. Data were analyzed using DiVa or FlowJo software.

Adrenalectomy

Mice were anesthetized with 100 mg/kg ketamine and 2 mg/kg xylazine, and afterward bilateral adrenalectomy was performed via a dorsal approach, as previously published [21]. Briefly, two small incisions were made on each side of the mouse back, just below the rib cage, and the adrenal glands were removed with curved forceps. Sham operation involved similar procedures, but without removing the adrenals. Following the operation, adrenalectomized mice were supplemented with 0.9% (wt/vol) sodium chloride in drinking water. Animals were infected one week after the surgery.

Cell migration assay

For evaluation of the in vitro thymocyte migratory response towards fibronectin, 5-μm-pore transwell inserts (Corning Costar, Cambridge, USA) were coated with 10 μg/mL of human fibronectin (Sigma Aldrich, USA) for 45 min at 37°C, and non-specific binding sites were blocked with PBS-diluted 1% BSA. Thymocytes were obtained at day 17 post-infection or from non-infected counterparts. Cells (2.5 × 10⁶ thymocytes in 100 μL of RPMI / 1% BSA) were then added to the upper chambers. Migration medium was serum free, to avoid serum-derived fibronectin or other soluble chemotactic stimuli. After 12 hours of incubation at 37°C in a 5% CO₂ humidified atmosphere, the migration rate was defined by counting the cells that migrated to the lower chambers containing migration medium alone (RPMI / 1% BSA). Migrating cells were ultimately counted, stained with appropriate antibodies and analyzed by flow cytometry. The percentage of each subset was used along with the total cell count to calculate absolute numbers of each lymphocyte subset. The results are represented in terms of input seen in each subpopulation, using the following formula, as previously reported [17]:

Input (%) = (absolute number of migrating cells with a given phenotype / total number of starting cells with a given phenotype) × 100

Immunofluorescence studies

Thymuses were removed at different days post-infection (p.i.), embedded in Tissue-Tek (Miles Inc., Elkhart, USA), frozen in dry ice and stored at -80°C. Cryostat sections of 3-4 μm thickness were placed on glass slides, acetone fixed and blocked with PBS-diluted 1% BSA. Specific antibodies used were: PE/anti-Foxp3 (eBioscience) and FITC/anti-IL-2 (eBioscience) monoclonal antibodies; anti-cytokeratin (Dako, Glostrup, Denmark), anti-laminin and anti-fibronectin (Novotech, Pyrmont, Australia) polyclonal antibodies; and, as secondary antibodies, anti-goat Alexa 488 and anti-rabbit Alexa 546 (Molecular Probes, Eugene, USA). Sections were incubated with the appropriate antibody for 1 h at 4°C, washed, and, in the case of staining for cytokeratin, laminin and fibronectin, subsequently subjected to secondary antibody labeling. Background staining values were obtained by staining without the primary antibody.
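The "input" metric defined in the Cell migration assay subsection above is a simple per-phenotype percentage. A minimal sketch of that calculation is shown below; the subset names and counts are invented for illustration only.

```python
# Illustrative calculation of the migration "input" metric:
# input (%) = migrating cells of a phenotype / starting cells of that phenotype * 100
def migration_input(migrated: dict, starting: dict) -> dict:
    """Return input (%) per phenotype; counts are absolute cell numbers."""
    return {
        subset: 100.0 * migrated[subset] / starting[subset]
        for subset in starting
        if starting[subset] > 0
    }

# Made-up example counts derived from total migrated cells and cytometry fractions.
starting = {"CD4SP_Foxp3neg": 240_000, "CD4SP_Foxp3pos": 8_000}
migrated = {"CD4SP_Foxp3neg": 12_500, "CD4SP_Foxp3pos": 150}

for subset, pct in migration_input(migrated, starting).items():
    print(f"{subset}: input = {pct:.2f}%")
```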
Samples were analyzed by confocal microscopy using a Nikon Eclipse TE-2000-E2 device (Germany) and the images obtained were subsequently analyzed using the Image J software (Bethesda, Maryland, USA). Immunofluorescence labeling and quantitative confocal microscopy were used to investigate the distribution and/or quantification of Foxp3 and IL-2. Optimal confocal settings (aperture, gain, and laser power) were determined at the beginning of each imaging session and then held constant during the analysis of all samples. The number of cells expressing Foxp3 was evaluated as Foxp3 + cells/mm 2 . IL-2 fluorescence intensity was measured as an average of each area and the values were recorded as arbitrary units (pixel/μm 2 ). RNA isolation, cDNA synthesis and qPCR Thymi were obtained at different days p.i. Total RNA was isolated from cells using TRIzol (Invitrogen, USA) according to the manufacture's recommendations. mRNA levels were determined by RT-qPCR. cDNA was synthesized from 1 μg of total RNA using Superscript III reverse transcriptase (Invitrogen, USA) and specific primers for murine IL-2 and IL-15. PCR reactions were performed in a StepOne/Plus Real-Time PCR System (Life Technologies, USA) using SYBR Green I (Roche) to monitor dsDNA synthesis. Data were normalized using GAPDH cDNA quantification. Primers sequences were: Statistical analyses Depending on the characteristics of the variable, differences in quantitative measurements were assessed by ANOVA followed by Bonferroni test or by the Kruskall-Wallis followed by post-hoc comparisons when applicable. Results were expressed as mean ± standard error of the mean (s.e.m) unless otherwise indicated. The Graph-Pad Instat 5.0 software (Graph-Pad, California, USA) was applied for statistical analyses, and differences were considered significant when p value was 0.05. T. cruzi infection induces changes in the frequency and number of tTregs cells First, to evaluate whether thymic content of tTregs (detected as Foxp3 + cells inside CD4 + single-positive -CD4 SP-compartment) were affected, we monitored by cytofluorometry their frequency and cell number during infection in C57BL/6 mice. A progressively increased tTreg frequency was observed within the overall CD4 SP population, raising the highest values by day 21p.i. (Fig 1A and 1B). Nevertheless, there was a dramatic decrease in their absolute numbers ( Fig 1C). To evaluate whether these findings were mouse-or parasite-strain dependent, we carried out similar studies in parallel well-established murine models for T. cruzi infection. Thus, BALB/c and C57BL/6 mice were both infected with either the Tulahuen or Y strains of T. cruzi [13,22,23]. As shown in S1 Fig, we observed similar results. It follows that T. cruzi acute infection induces an increase in the proportion of Foxp3 + cells within the CD4 SP compartment, while reducing their absolute number independently of the mouse or parasite genetic background. In addition, chronically infected mice tended to restore thymic architecture and tTregs numbers (S2 Fig). These findings suggest that specific mechanisms, such as cell death or altered proliferation, may be influencing the homeostasis of Foxp3 + CD4 + SP and also Foxp3 − CD4 + SP thymocytes during the acute phase of infection. T. cruzi infection induces changes in tTreg death and proliferation As previously reported, the thymic atrophy is mainly the consequence of an extensive apoptosis of DP cells, although CD4 SP thymocytes are also affected. 
Thus, the increase in Foxp3 + cell proportion within the CD4 SP compartment might be due to a diminished frequency of Foxp3 + cell death compared to the corresponding Foxp3 − thymocytes. To test this hypothesis, C57BL/6 mice challenged with the Tulahuen strain of T. cruzi were sacrificed at day 17 p.i. for an ex vivo staining of thymocytes with annexin V. The CD4 SP compartment showed an enhanced proportion of cell death (CD4 SP annexin V + cells (%) = day 0: 3±0.7 versus day 17 p.i.: 5.7±2.1; p<0.05). Within the CD4 SP population, and beyond the increased basal death levels of Foxp3 + compared to Foxp3 − cells, the latter population showed a significant increase in annexin V staining after infection; this not being the case among Foxp3 + cells in which such staining was visibly diminished (Fig 2A). These results suggest that the enlargement of Foxp3 + cell proportions within the CD4 SP compartment during infection may be related to the induction of survival signals in the remaining Tregs, whereas conventional Foxp3 − CD4 SP thymocytes became more susceptible to death. An alternative but not excluding explanation for thymic relative accumulation of tTregs within CD4 SP compartment is that Foxp3 + cells proliferate more than their Foxp3 − counterparts. To check this, we evaluated in both populations the expression of the nuclear antigen Ki-67, a marker cycling cells, as an estimation of proliferation. Strikingly, Foxp3 − and Foxp3 + cell proliferations were diminished after infection ( Fig 2B). Nevertheless, only Foxp3 + cells showed an enhanced Ki-67/Annexin V ratio following infection ( Fig 2C). Hence, the relative accumulation of tTregs within the CD4 SP compartment may be explained by a better balance of Foxp3 + cell cycling versus cell death, as compared to Foxp3 − cells. Decreased numbers of tTregs are partially caused by diminution in DP cell precursors Despite the enhanced proportion of Foxp3 + cells among CD4 SP thymocytes, tTreg numbers progressively fell in the course of infection, as shown previously in Fig 1. The intrathymic loss of tTregs may be also linked to the depletion of DP cells, which are the most important tTreg cell precursors, at least in numbers. We previously showed that Adx prevented the loss of DP cells induced by glucocorticoids during T. cruzi infection, although at the same time shortened mouse survival [21]. To test tTreg dynamics in the absence of DP loss driven by glucocorticoids, we performed a bilateral Adx one week after infection. Thymuses were obtained after 14 days p.i. (survival -in days-: Infected = 24.4±1.5; Adx+Infected = 16±2.0) and the absolute numbers of tTregs were evaluated. Although the premature death of Adx+Infected animals is a technical obstacle to carry out studies after 17 or 21 days p.i., the Foxp3 + cell reduction after 14 days p.i. was partially prevented by Adx, suggesting that other mechanisms are also operating in the loss of tTregs (Fig 3). Decreased absolute numbers of tTregs are linked to a diminution in thymic IL-2 contents The numerical loss of tTregs during infection may be also due to a lack of survival factors, such as IL-2. To test IL-2mRNA and protein levels in the thymus during infection, real time PCR and immunofluorescence experiments were carried out. Expression profiles of IL-2 mRNA showed an increase at day 17 p.i., followed by an evident collapse at day 21 p.i (Fig 4A). 
The same was true when analyzing mRNA for IL-15 ( Fig 4B), a related IL-2 cytokine that also is involved in the transmission signalling through common pathways. In the thymus of noninfected mice IL-2 immunoreactivity was evident in the entire organ; whereas in the infected individuals, IL-2 immunoreactivity progressively diminished (Fig 4C). In addition, in the thymus from 21 days-infected animals IL-2 protein expression was diminished in both medulla and cortex, as seen by fluorescence quantification (Fig 4D). Interestingly, the enhancement of IL-2 mRNA at day 17 p.i. did not correlate with the diminished thymic contents of this cytokine, suggesting that IL-2 transcripts are regulated post-transcriptionally. T. cruzi infection induces enhancement of CD25, GITR and CD62L expression intTregs To further evaluate tTregs during experimental T. cruzi infection, we analyzed the expression of signature molecules, such as CD25, GITR and CD62L. The expression of CD25 and CD62L increased along tTreg maturation. Foxp3 + cells expressing CD25 increased~15% at days 17 and 21 p.i.; with a~10% augment in cells expressing GITR and CD62L being found by day 21p. i. (Fig 5A). An enhancement in the co-expression of CD25 + GITR + or CD25 + CD62L + was also observed after infection among CD4 + Foxp3 + cells (S3 Fig). Additionally, the mean fluorescence intensity of the three surface markers was significantly augmented among Foxp3 + cells after infection, as shown in Fig 5B Simultaneously, the proportion of Foxp3 − cells within the CD4 +-CD25 + compartment decreased, suggesting a loss of the most closely tTreg precursors (CD4 +-CD25 + Foxp3 − cells) during infection (Fig 5C). These results indicate that T. cruzi infection induces a loss of tTreg cell precursors, while the remaining Foxp3 + cells display a more differentiated phenotype. T. cruzi infection induces changes in the localization of both cortical and medullary Foxp3 expressing cells We next carried out double-labeling immunofluorescence studies using pan-cytokeratin along with Foxp3 staining, to analyze the location of Foxp3 expressing cells within the thymic epithelial meshwork. This approach enables an adequate interpretation of Foxp3 + cells location in cortical versus medullary regions, mainly in infected mouse-derived thymuses given the shrinkage in their cortical area secondary to the DP cell loss. Analyses performed in normal thymuses revealed regions with high immunoreactivity for Foxp3 in the medulla and in a much lesser amount in the cortex (Fig 6A, left panel). The localization of Foxp3 + cells in the thymus of infected mice was less restricted to the medulla, since cortical areas with well-defined immunoreactivity began to be detected from day 14 onwards. (Fig 6A, right panel). After 17 days p.i., the shrinkage of cortex reflected the massive loss of DP cells (Fig 6B). In the thymus of control mice, Foxp3 + cells in the medulla were on average 2.3 times more frequent than in the cortex (32 vs 14 Foxp3 + /cells mm 2 , Fig 6C and 6D respectively), whereas in the thymus of 17-day infected mice, this relation was~1.9 (39 vs 20 Foxp3 + /cells mm 2 , Fig 6C and 6D respectively). Next and for comparative purposes, we selected by cytometry all thymocytes expressing Foxp3 (gate "total Foxp3"), for evaluating Foxp3 expression among thymic subpopulations, as described in Fig 6E. Total Foxp3 expressing thymocytes showed a six-fold increase in the infected thymus compared to controls, after 21 days p.i. (Fig 6F). 
As expected from earlier results, Foxp3 + proportion inside the CD4 SP compartment cells increased significantly at day 21 p.i. (Fig 6G). Unlike to what was expected by microscopy, the fraction and the number of Foxp3 + cells inside the DP compartment from infected animals are declined compared to control ones (S4 Fig). In addition, only minor changes were observed in Foxp3 + within the CD8 compartment cells after infection (S4 Fig). These results suggest that Foxp3 + cortical cells were not necessarily DP cells. It follows that, besides morphological changes noticed in T. cruzi infected thymus, there is a clearly ectopic location of Foxp3 + cells outside the medulla. T. cruzi infection induces variations in the migratory response of tTreg cells Partly because Foxp3 + cell frequency was increased inside the CD4 SP compartment of infected animals, and that Foxp3 + cells were found in the thymic cortex, we hypothesized that tTregs might have an altered migratory behavior, known to be controlled by distinct signals triggered by integrins and chemokine receptors. Since the thymocyte migratory capacity is linked to their possibility to bind components of the extracellular matrix through receptors belonging to the very late antigen (VLA) family of integrins [19], we first evaluated their expression in both Foxp3 + CD4 SP and Foxp3 − CD4 SP cells. In both normal and infected thymuses, Foxp3 + cells were located in close contact with the epithelial microenvironmental cells and with the extracellular matrix network, as seen in Fig 7A. Additionally, a decreased proportion of CD49d and CD49e (α chains of fibronectin receptors VLA-4 and VLA-5 respectively) in Foxp3 + cells was observed by flow cytometry after infection, while the frequency of CD49f (the α chain of the laminin receptor VLA-6) did not differ (Fig 7B). Moreover, all integrin α chains strongly decreased their mean fluorescence intensity (MFI) after infection (Fig 7C). Conversely, the Foxp3 − CD4 SP thymocyte subset showed an increased frequency of cells expressing CD49d, CD49e and CD49f (Fig 7B), accompanied by increased cellular expression levels (as defined by MFI values), except for CD49f (Fig 7C). We also evaluated in tTregs the expression of two typical tTreg chemokine receptors: CD197/CCR7 and CD184/CXCR4. Notoriously, in normal thymus, the Foxp3 + population expressed a higher proportion of both receptors compared to Foxp3 − cells (~94 vs 51% on average, respectively) ( Fig 7C). Moreover, while infection induced an enhancement in their frequency within the Foxp3 − population, no differences were found within the Foxp3 + subset, which was maintained around 90% ( Fig 7C). However, the membrane level of both receptors (ascertained by MFI) was diminished in Foxp3 + cells (Fig 7D). Taken together, these findings support the notion that tTreg cells may undergo alterations in their trafficking capacity during T. cruzi infection. The findings presented here, together our previous reports showing migratory alterations in fibronectin-driven migration of SP or DP subpopulations after infection, led us to hypothesize that the disease can also modify the fibronectin-mediated migratory response of tTreg cells. Using transwell assays for evaluating the ex vivo ability of Foxp3 + cells to transmigrate, we confirmed that the whole CD4 SP population from infected mice exhibited increased migratory responses through fibronectin-coated surfaces (Fig 8A). Strikingly however, within the CD4 compartment, Foxp3 + cells showed the opposite trend. 
In fact, while Foxp3− cells from uninfected animals exhibited an enhanced migration through fibronectin, similarly to the whole CD4 SP population (Fig 8B), the fibronectin-driven migration of Foxp3+ thymocytes from infected animals was clearly diminished (Fig 8C), thus revealing that T. cruzi infection induces functional changes in the fibronectin-driven migratory behaviour of tTregs compared to other thymocyte subpopulations.

[Fig 7 legend: (a) Medullary Foxp3+ cells in close contact with the medullary epithelial stroma (cytokeratin, red staining); panels B and C show Foxp3+ cells located in regions with a dense extracellular matrix network, as seen by staining with anti-laminin (red, panel B) and anti-fibronectin (red, panel C). (b) Frequency of integrin α chains CD49d, CD49e and CD49f within CD4+Foxp3+ and CD4+Foxp3− cells. (c) Mean fluorescence intensity (MFI) of each integrin α chain within CD4+Foxp3+ and CD4+Foxp3− cells. (d) Frequency of chemokine receptors CD184/CXCR4 and CD197/CCR7 within CD4+Foxp3+ and CD4+Foxp3− cells. (e) MFI of each chemokine receptor within CD4+Foxp3+ and CD4+Foxp3− cells. Data are expressed as mean ± s.e.m. of 3-6 mice/day, at 17 days p.i., and are representative of three independent experiments in BALB/c mice infected with the Y strain. *p < 0.05; **p < 0.01; ***p < 0.001.]

Discussion

Thymic atrophy and DP cell loss during T. cruzi experimental infection are well documented [13, 24-26]. However, no data were available on the potential effect that T. cruzi infection has on the compartment of tTregs. Here we show that T. cruzi infection induces a marked loss of tTreg cells, while at the same time this cell population exhibited locational, phenotypic and functional changes.
Consequently, the failure of medullary CD4 + CD25 + Foxp3 − cells to differentiate into tTreg in T. cruzi infected animals may be the result of reduced thymic contents of IL-2 and IL-15. Accordingly, we propose that the T. cruzi induce the depletion of cytokines that trigger γc-chain family of receptors or may alter the expression of these receptors, impairing the normal tTreg development [33,34]. Interestingly, during Th-1 polarized responses induced by parasites such as T. cruzi or Toxoplasma gondii there is also a limitation of systemic IL-2 availability [12,35]. Therefore, IL-2 withdrawal favours the collapse of regulatory response and the development of pathology, while the treatment of mice with IL-2 plus glucocorticoids or an IL-2-anti-IL-2 complex can improve Treg response following parasite infection [12,35]. Overall, such findings point out to a relevant role of IL-2 upon thymic and peripheral regulatory response during T. cruzi infection. Under physiologic conditions, only a minor proportion of cells differentiate into tTregs. The overrepresentation of Foxp3 + cells within the CD4 SP compartment seen during infection, conjointly with a diminished cell death, reveal that this population is more resistant to apoptosis compared to the Foxp3 − CD4 thymocytes, but not enough to prevent the tTreg loss in absolute numbers. Reinforcing this view, there is an increase in CD25, CD62L and GITR expression among the remaining tTregs. The constitutive expression of CD25 is related to the ability of Foxp3 + cells to respond to IL-2, and their expression level has been strongly linked to their survival, cell number, thymic maturational stages and suppressive function [31]. However, in normal animals there is a proportion of Foxp3 + CD4 SP cells that does not express CD25, and that reaches around 20%; similar to values previously described in thymus and periphery [1,36,37]. It is not clear whether CD4 + CD25 − Foxp3 + cells develop from a distinct precursor than their CD4 + CD25 + Foxp3 − counterparts or whether this population represents unstable tTregs [38]. Nevertheless, since not all Foxp3 + tTregs express CD25, is possible that Foxp3 expression may proceed through an IL-2/STAT5-independent pathway [39]. Thus, the exact mechanism involved in the generation of thymic CD4 + CD25 − Foxp3 + remains to be determined. Intriguingly, the proportion of Foxp3 + cells expressing CD25 increased as a result of infection, mainly after 21 days of infection, coinciding with the minimal IL-2 contents. Interestingly, Tai and colleagues proposed that Foxp3 expression in thymocytes induced pro-apoptotic signals resulting in cell death, provided there is no counterbalance of signals triggered by γc-mediated cytokine receptors [40]. In the context of T. cruzi infection, CD4 + Foxp3 + CD25 − cells might be those most prone to Foxp3 induced apoptosis since they do not require γc-mediated signals to survive and complete their differentiation into mature tTreg cells, resulting in a raised proportion of CD4 + Foxp3 + CD25 + cells. Moreover, the increase of CD25 membrane expression seen after infection may imply that cells with higher CD25 levels, in an environment poor in cytokines critical for their differentiation, can keep a relative efficiency for the IL-2-or IL-15-dependent signaling pathways [6,31,41]. Furthermore, the increased GITR expression may favor anti-apoptotic pathways in CD4 + CD25 + Foxp3 + cells, reinforcing results about the increased Ki-67/ Annexin V ratio [42,43]. 
Another plausible explanation is that local or environmental inflammatory signals may influence the phenotype of tTregs, as seen in the periphery with T effector and extrathymic DP cells during murine T. cruzi infection [44]. Confocal microscopy studies performed in normal thymuses showed that Foxp3 + cells are located predominantly in the medullary region, in light with the studies by Sakaguchi and Fontenot [45,46]. In the case of infected animals, although positive immunostaining was concentrated mainly on the medulla, the cortex revealed an increase in the presence of Foxp3 + cells. This abnormal localization may affect the type of antigens recognized by the tTregs and their selection process, taking into account that medullary and cortical machinery of antigen presentation differs considerably [47]. Undoubtedly, this point deserves to be studied more deeply considering the suspected autoimmune basis of Chagas disease. Besides this, cortical localization of tTregs strongly suggests that suffers intrathymic migratory abnormalities. The fact that infected mice present decreased proportions of α subunits of fibronectin receptors (and also for VCAM-1 in the case of VLA-4) together with the diminished ex vivo fibronectin-driven migration of Foxp3 + cells point out that tTreg cells may have an abnormal decrease of intrathymic migratory activity, in opposition to other thymic subpopulations that exhibited increased migratory response during T. cruzi infection [2]. Abnormalities in the expression of migratoryrelated molecules may affect maturation, stability and selection process of thymocytes [19,48,49], including tTregs, but also could influence the migration of pTregs to target organs during infectious process [50]. The intensity of CD62L expression is closely related to the maturational stage of thymocytes, with Foxp3 + CD62Lor Foxp3 + CD62L low cells representing a more immature lineage, whereas Foxp3 + CD62L + or CD62L high cells being considered more advanced in their development [3]. Increased CD62L expression in Tregs indicates that the intrathymic migration process is modulated by the infection, accelerating the maturation development or favoring their suppressive capacity [51]. A further aspect to be discussed is that a proportion of Foxp3 + cells with a marked maturational profile may correspond to peripheral cells that have re-entered the thymus. Under physiological conditions, the re-entry of mature T cells is restricted to activated or memory cells [3,52]. Because there is no marker accurately distinguishing tTregs from pTregs [6], we cannot establish in which proportion these cells come from periphery. However, if this is the case, peripherally activated Foxp3 + cells should rather correspond to an effector/memory profile with CD62L low or CD62L − expression, even though central memory populations express CD62L high [48]. Furthermore, we recently reported that during infection, pTregs exhibited a weak capacity to turn into an effector/memory phenotype, as they remain mainly in a naïve state loosing Foxp3 expression. This may be at the origin of impaired pTreg cell function [12], suggesting that the number of pTregs with an activated profile entering the thymus is very low. In any case, the physiological relevance of the re-entry into the thymus of Treg cells is still debatable. Recent data suggest that the re-entry of activated pTregs cells (expressing CD62L − CXCR4 + ) into the thymus may inhibit IL-2-dependent differentiation of tTreg cells [47,51]. 
On the other hand, some authors speculate that the re-entry of T effector cells may contribute to the induction of tolerance by promoting the tTreg development by acting as a source of IL-2 [53]. Considering that the thymus can be infected by T. cruzi [26], pTregs and T effector cells within the gland may also regulate tTreg differentiation. Whatever the case, remaining tTregs observed during T. cruzi infection increased and diminished the respective levels of CD62L and CD184/CXCR4 expression, weakening the possibility that the proportional tTreg increase may respond to an enhanced return of activated pTregs. There is now accumulating evidence in favor that normal thymus continuously produces Treg cells recognizing a broad repertoire of self-and non-self antigens [52]. The clear-cut in trathymic numerical loss of tTregs during the acute phase of T. cruzi infection may compromise the composition and replenishment of the peripheral tTreg cell pool. If sustained over time, this may have implications in the dysregulated immune responses seen during the chronic phase of this disease. Our previous results also showed that acute infection also induces the collapse of pTregs due to the acquisition of an abnormal Th1-like phenotype and altered functional features [12], reinforcing the idea that T. cruzi disrupts the Treg response via multiple pathways including central and peripheral mechanisms. This hypothesis is further strengthened by the fact that individuals in the asymptomatic phase of human Chagas disease exhibit a higher proportion of Tregs compared to those who have developed an overt cardiac pathology [7,8]. Our work provides a clear demonstration that T. cruzi infection impact upon the tTreg cell compartment inducing a noticeable loss of tTreg cells, while the remaining population of tTregs showed an abnormal localization, phenotypic and functional changes. These data, together with the alterations previously described in the peripheral compartment of Tregs [12], suggest that tTreg abnormalities may have harmful consequences for the immunocompetence of the host, further influencing the development of chronic disease. In essence our results provide a stimulating background for further elucidation of important aspects of thymic function in the context of T. cruzi infection. Future studies performed both in chronic experimental models and humans with Chagas disease will clarify whether tTregs dysfunction may be involved in the suspected autoimmune component of Chagas disease.
v3-fos-license
2019-04-06T13:11:11.976Z
2007-04-01T00:00:00.000
96906736
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.scielo.br/j/jbchs/a/b5P4MCCxZ8rzNRHhHrGy7vB/?format=pdf&lang=en", "pdf_hash": "e67ffeaaf48da970a8a7e1c8664fdc740f387c24", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41129", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "sha1": "a88739fd9ae7b18d81bd03ce02ee01270c6e6961", "year": 2007 }
pes2o/s2orc
Mesophase Formation Investigation in Pitches by NMR Relaxometry
Pitches are used as precursors of several advanced carbon materials. The aim of this work was to combine solvent-extraction methodologies with low-field nuclear magnetic resonance, through relaxation measurements, to characterize heat-treated petroleum pitches. The relaxation time T1 showed two domains: one in the aromatic region and the other attributed to the mesophase. The results showed that 1H NMR relaxometry can be employed as a new tool for the characterization of this kind of system.
Introduction
Carbonaceous pitches are used as raw materials in advanced carbon products.1 The growth of the mesophase affects the physical properties of the pitch, such as softening point and viscosity, and also affects the final properties of the resultant carbon products.3-7 Polarized optical microscopy (POM) and solvent-insoluble fractions are conventional tools for the study and measurement of the amount of mesophase formation.2,6,8 Although POM is a standard identification tool, extensively used by liquid-crystal researchers, Li et al.9 concluded that conventional POM observation cannot be regarded as a good method to analyze the size and size distribution of the mesophase spheres in the isotropic matrix of heat-treated pitches, because of their random distribution. Even the statistical assumptions used in some works could not yield precise sizes, since the different apparent sizes could be caused by the random positions at which spheres were cut during sample preparation, as well as by the size distribution of mesophase spheres in the pitches. Another analytical method frequently used to follow the growth of the mesophase is solvent extraction. The literature reports a wide range of solvents, e.g. heptane, toluene, tetrahydrofuran, pyridine, quinoline and N-methyl pyrrolidinone. However, in different systems, each extracted material and its extract behave differently. The mesophase spheres in heat-treated coal tar or petroleum pitches are extracted at a very low yield. Besides, extraction and filtration are very tedious procedures.2,3,5 The aim of this work is to combine solvent-extraction methodology, using quinoline, N-methyl pyrrolidinone and toluene, with the low-field nuclear magnetic resonance relaxometry technique in order to characterize heat-treated samples of petroleum pitches. Longitudinal proton relaxation time data, conventionally characterized by the relaxation time T1, were used to investigate the presence of domains in the samples studied.11-13 Torregrosa-Rodriguez et al.14 and Evdokimov et al.15 have also studied the formation of asphaltene dispersions in oil/toluene solutions by low-field NMR relaxation, measuring spin-spin relaxation times (T2) and identifying monomers below 10 mg L-1. In this investigation, low-field NMR relaxometry and insoluble-fraction techniques were used to characterize different domains in the samples studied. 
Experimental
The pitch precursor (sample A) comes from a petroleum cracking residue submitted to heat treatment; it was heated at 430 °C for 4 hours in a N2 atmosphere, and five different samples were obtained (samples B-E) as specified in Table 1. Two other pitch samples were obtained from the precursor by density difference with hot-stage centrifugation, the upper (isotropic, sample F) and lower (anisotropic, sample G).16 The low-field 1H NMR relaxation measurements were done on a Resonance Instruments Maran Ultra 23 NMR analyzer, operating at 23.4 MHz (for protons) and equipped with an 18 mm variable-temperature probe operating at 300 K. Proton spin-lattice relaxation times (T1H) were measured with the inversion-recovery pulse sequence (D1 - π - τ - π/2 - acq.), using a recycle delay greater than 5T1 (e.g. D1 of 10 s) and a π/2 pulse of 4.5 µs calibrated automatically by the instrument software. The amplitude of the FID was sampled for twenty τ data points, ranging from 0.1 to 5000 ms, with 4 scans per point. The T1 values and relative intensities were obtained with the aid of the program WINFIT by fitting the exponential data. Distributed exponential fittings, as a plot of relaxation amplitude versus relaxation time, were performed using the software WINDXP.
Results and Discussion
The T1H relaxation time data obtained at 300 K for the samples studied are shown in Table 2. The mesophase formation can be followed through the distributed exponential fittings as a plot of relaxation amplitude versus relaxation time; this was performed using the WINDXP software (Figure 1). In our studies of T1 relaxation times, two distinct domains were observed: one referring to the aromatic region and the other attributed to the mesophase. Jurkiewicz et al.10 used the spin-lattice characteristics of coal 1H NMR signals and suggested that two phases, one molecular and the other macromolecular, could be distinguished in the coal structure. In the present work, the T1H longitudinal relaxation time and the insoluble-fraction data were correlated to the presence of different domains in the samples studied (Figure 2). In Figure 2, the highest T1 value belongs to the domain controlling the relaxation process, which is a rigid one. The results from the insoluble-fraction determinations were lower than those obtained with 1H NMR relaxometry,10 showing that the mesophase formation was underestimated, probably due to the scale of the measurement. A good correlation was also observed between the lower insoluble-fraction concentrations and the NMR relaxation data.
Conclusions
The NMR relaxation results showed that the system under investigation presented more than one domain, according to their molecular mobility, as a function of phase interaction and dispersion. These results also support the use of NMR relaxometry as a new tool for the characterization of this kind of system.
Figure 2. T1H longitudinal relaxation time data and insoluble fraction of the samples.
Table 1. Characteristics of samples studied.
Table 2. Proton spin-lattice relaxation times of the samples determined by low-field NMR using WINFIT software. * These values result from T1 curve adjustment with three exponentials.
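The T1 determination described above (inversion-recovery acquisition followed by exponential fitting) can be illustrated with a short, hedged sketch. The Python code below is not the WINFIT/WINDXP procedure used in the paper; it only shows, under stated assumptions, how a single-component inversion-recovery curve M(τ) = M0·(1 − 2·exp(−τ/T1)) could be fitted with standard tools, and the tau grid, noise level and variable names are hypothetical:

    # Hypothetical illustration of fitting a single-exponential inversion-recovery
    # curve to estimate T1; not the WINFIT/WINDXP procedure used in the paper.
    import numpy as np
    from scipy.optimize import curve_fit

    def inversion_recovery(tau_ms, m0, t1_ms):
        # Ideal inversion-recovery signal: M(tau) = M0 * (1 - 2 * exp(-tau/T1))
        return m0 * (1.0 - 2.0 * np.exp(-tau_ms / t1_ms))

    # Twenty tau points spanning 0.1-5000 ms, mirroring the acquisition described above
    tau = np.logspace(-1, np.log10(5000.0), 20)
    # Hypothetical measured FID amplitudes for a sample with M0 = 100 and T1 = 350 ms
    signal = inversion_recovery(tau, 100.0, 350.0) + np.random.normal(0.0, 1.0, tau.size)

    popt, pcov = curve_fit(inversion_recovery, tau, signal, p0=[signal.max(), 100.0])
    m0_fit, t1_fit = popt
    print(f"Fitted M0 = {m0_fit:.1f}, T1 = {t1_fit:.1f} ms")

For the multi-domain samples discussed above, a sum of two or three such exponential terms (or a distributed-exponential inversion, as WINDXP provides) would be fitted instead of a single component.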
v3-fos-license
2017-01-31T08:35:28.556Z
2017-01-17T00:00:00.000
3469375
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/s12936-017-1686-2", "pdf_hash": "f3c90b0fd3db5622736cbcd180a84738aa95dd40", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41131", "s2fieldsofstudy": [ "Medicine" ], "sha1": "f3c90b0fd3db5622736cbcd180a84738aa95dd40", "year": 2017 }
pes2o/s2orc
Insecticide-treated net effectiveness at preventing Plasmodium falciparum infection varies by age and season Background After increasing coverage of malaria interventions, malaria prevalence remains high in Malawi. Previous studies focus on the impact of malaria interventions among children under 5 years old. However, in Malawi, the prevalence of infection is highest in school-aged children (SAC), ages 5 to 15 years. This study examined the interaction between age group and insecticide-treated net (ITN) use for preventing individual and community-level infection in Malawi. Methods Six cross-sectional surveys were conducted in the rainy and dry seasons in southern Malawi from 2012 to 2014. Data were collected on household ITN usage and demographics. Blood samples for detection of Plasmodium falciparum infection were obtained from all household members present and over 6 months of age. Generalized linear mixed models were used to account for clustering at the household and community level. Results There were 17,538 observations from six surveys. The association between ITN use and infection varied by season in SAC, but not in other age groups. The adjusted odds ratio (OR) for infection comparing ITN users to non-users among SAC in the rainy season and dry season was 0.78 (95% CI 0.56, 1.10) and 0.51 (0.35, 0.74), respectively. The effect of ITN use did not differ between children under five and adults. Among all non-SACs the OR for infection was 0.78 (0.64, 0.95) in those who used ITNs compared to those that did not. Community net use did not protect against infection. Conclusions Protection against infection with ITN use varies by age group and season. Individual estimates of protection are moderate and a community-level effect was not detected. Additional interventions to decrease malaria prevalence are needed in Malawi. Electronic supplementary material The online version of this article (doi:10.1186/s12936-017-1686-2) contains supplementary material, which is available to authorized users. on infection prevalence is vital to determining appropriate measures to bring about reduction of the malaria burden in Malawi. School-aged children (SAC), aged 5-15 years, make up ~30% of the population, and recent data from Malawi show that they have the highest infection prevalence, at 4.8 times the odds of infection compared to other ages [14]. Little is known about the effectiveness of ITNs at preventing infection in SAC and how it compares to the effectiveness in other age groups [15]. With the exception of the universal ITN distribution campaign in 2012, most ITN distribution programmes focus on pregnant women and children under 5 years old. Studies on the impact of ITN distribution and use generally study parasite prevalence or disease rates in children under 5 years of age [16][17][18]. Previous studies have found that even after universal net distribution, SAC were significantly less likely to use ITNs compared to other age groups [19]. Low ITN use among high-prevalence populations such as SAC, even while ITN use increases among other demographics, may contribute to the lack of decline in malaria transmission in Malawi. When previous studies have included wider age ranges, with children both under and over five, the results for impact of community ITN ownership and household ITN ownership on population-wide infection prevalence have been non-significant [20][21][22]. This may be related to behavioural differences in net use between age groups or persistence of prevalent infections among SAC. 
Given the contribution of SAC to overall Plasmodium prevalence and the age-based differences in behaviour and exposure there is a need for research into age-related variations in the impact of ITN use on infection prevalence. This analysis used data from six cross-sectional studies conducted over 3 years to evaluate factors that may impact the effectiveness of ITN use for preventing infection. Mixed models were developed that explore the interaction between age and ITN use for prevention of infection, adjusting for season and transmission intensity. An additional model was developed to evaluate the age-specific impact of community ITN use on parasite prevalence in the community. The potential for age-based effect modification of intervention efficacy is important to consider. If there are age differences in the impact of ITN use on infection, this may explain the relative failure of ITN distribution to reduce malaria prevalence in Malawi and have significant impact on the design of future interventions. Survey Data come from six cross-sectional household surveys conducted in southern Malawi. Two surveys, the first at the end of the rainy, high transmission season (April-May), and the second at the end of the dry, low transmission season (September-October), were conducted each year from 2012 to 2014 [14]. Three-hundred households were selected from each of three districts using two-stage cluster sampling [23]. Districts included in the study were Blantyre (an urban, low transmission setting), Chikhwawa (a rural, low altitude, high transmission setting), and Thyolo (a rural, high altitude, low transmission setting). Using probability proportionate to size, 10 enumeration areas (EA) were randomly selected within each district. Households were selected from each EA by compact segment sampling [23]. Each selected EA was divided into segments each containing approximately 30 households. One segment from each EA was randomly selected and all households within that segment were visited (referred to as a community in the analysis). The same selected communities in each district were surveyed each season, and all households within a given community were visited on a single day. Households were excluded if there were no adults over 18 years old present to provide consent. If excluded, a household was replaced with the nearest household within the compact segment, this household was selected by convenience. A universal net distribution campaign occurred in Malawi in 2012 after the first survey. The goal of the distribution campaign was to distribute nets to all households at a ratio of one net for every two people in the household. Ethical treatment of human subjects Prior to study initiation, permission to survey each village was provided by the village leaders. Written informed consent was obtained for all adults and children and assent was obtained for children age 13-17 years. All questionnaires were administered in the local language. The study received ethical approval from both the University of Maryland Baltimore and Michigan State University Institutional Review Boards and the University of Malawi College of Medicine Research and Ethics Committee. Study participants Members of a household were defined as individuals who slept in the house for at least two weeks of the previous month. Data about all household members were collected from adult household members; blood specimens were collected from household members over 6 months of age who were present at the time of survey. 
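As a rough illustration of the probability-proportionate-to-size step in the survey design described above, the Python sketch below draws 10 enumeration areas per district with selection probabilities proportional to their household counts. It is only an approximation of the study's actual two-stage procedure, and the sampling frame, its column names (district, ea_id, n_households) and the counts are hypothetical:

    # Hypothetical sketch of PPS selection of enumeration areas (EAs) within districts.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=1)

    # Hypothetical sampling frame: one row per EA with its household count
    frame = pd.DataFrame({
        "district": ["Blantyre"] * 40 + ["Chikhwawa"] * 40 + ["Thyolo"] * 40,
        "ea_id": range(120),
        "n_households": rng.integers(100, 800, size=120),
    })

    selected = []
    for district, eas in frame.groupby("district"):
        probs = eas["n_households"] / eas["n_households"].sum()
        # A sequential weighted draw without replacement approximates PPS selection
        chosen = rng.choice(eas["ea_id"].to_numpy(), size=10, replace=False, p=probs.to_numpy())
        selected.append(pd.DataFrame({"district": district, "ea_id": chosen}))

    selected_eas = pd.concat(selected, ignore_index=True)
    print(selected_eas.groupby("district").size())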
Data was not collected on individuals who did not qualify as members of a household. Households in the same geographic area were selected in each survey, and the same households may have been visited repeatedly, however, it was not possible to collect identifying data to track individuals or households between surveys. The specific individuals who answered survey questions varied from survey to survey depending on those present on the day of the survey. Data collection and variable definitions Survey questionnaires were adapted from the standardized Malaria Indicator Survey tools [11]. Data were collected on android-based tablets using OpenDataKit and managed using Research Electronic Data Capture (RED-Cap) tools [24]. Data collected included household characteristics and socio-economic indicators, individual demographics, household net ownership and individual net use. Participants were initially classified into three age groups, defined a priori, for preliminary analysis: children under five, SAC (children 5-15 years old), and adults (over 15 years). SAC were later further divided into two categories for final analysis: young SAC (ages 5-9) and older SAC (ages 10-15). The primary outcomes of interest were the odds of individual Plasmodium falciparum infection and community infection prevalence. Real-time PCR (rtPCR) performed on dried blood spots collected on 3M Whatman filter paper and microscopic examination of blood smears collected at the time of survey were used to determine presence or absence of Plasmodium infection. Individuals were considered parasite positive by rtPCR if blood samples were positive for the P. falciparum lactate dehydrogenase gene [25]. A secondary analysis used blood smear results to determine presence of microscopically detectable infection. Thick blood smears were stained with Field's or Giemsa stain and read by at least two independent readers. Individuals were considered parasite positive by microscopy only if presence was confirmed by at least two readers. Community infection prevalence was defined as the per cent of specimens positive for P. falciparum in a given community at a given survey. Site transmission intensity ('transmission setting') was defined using the average community prevalence by rtPCR from all six surveys. Prevalence among children under five was tested as an alternate method for defining transmission setting but the variable was zero-inflated. Tertiles of low, moderate and high transmission communities were defined as having less than 7% prevalence, from 7 to 11% prevalence and over 11% prevalence, respectively. The primary exposures of interest were individual, household and community net use. Survey staff asked to observe all bed nets in the household at the time of survey. An individual was categorized as using a particular bed net if they were identified in the question "Which members of the household slept under this bed net last night?" Household net use was defined as the proportion of household members reported to have slept under a net the night before the survey and was categorized as high (80% or greater), low (<80%) or none based on the Malawi Malaria Strategic Plan recommendations [26]. Household net use was alternately examined as a linear variable, but did not change the results. The categorical variable was chosen for final models as it is more easily interpretable for public health purposes. 
Community prevalence of net use was defined as the proportion of individuals in the community reported to have slept under a net the night before the survey and was categorized as high (80% or higher), or low (<80% net use). A household wealth index, broken down into tertiles, was developed using principal components analysis, including ownership of common economic indicators, presence of electricity in the household, income category, and food security; wealth index was calculated for each of the households in the study population. Head of household education level was classified into two categories: having some secondary education or more, and no secondary education. Statistical analysis All statistical analyses were run twice, defining the outcome of interest alternately as P. falciparum infection detected (1) by rtPCR and, (2) microscopically. Two sets of analyses were run. The first examined the association between net use and individual infection, and the second examined the association between community net use and prevalence of infection. Association between individual net use and individual infection Multivariable mixed effect logistic regression (SAS PROC GLIMMIX) was used to model the association between infection with malaria parasites and individual ITN use (predictor variable of interest), adjusted for potential confounders. The analyses accounted for clustering at the household and community level using nested random intercepts. Potential confounders were tested for inclusion in the model if in bivariate analysis they were associated with individual infection and were known from previous work [19] to be associated with individual ITN use (transmission setting, age category, ratio of nets to sleeping spaces, proportion of nets hanging in a household, head of household education level, household wealth index, and person to net ratio). The following additional variables were included in the multivariable analyses due to association with both infection and ITN use: season (rainy vs dry season), household net use and community net use. A variable for community infection prevalence at the time of each survey was included in multivariable statistical models to adjust for local prevalence differences within transmission settings and between seasons. All final models included individual net use, community net use, household net use, wealth index, age group, and community infection prevalence. Interaction terms were included to test for modification of the association between net use and infection by age, by presence of a school-age child in the household, and by season. If presence of interaction was suspected, stratified models were constructed. Final analysis was done in the following three strata: SAC in the rainy season, SAC in the dry season, and non-SAC. In analyses among non-SAC, SAC were excluded from the pool of study subjects and variables for the presence of a SAC in the household, and SAC ITN use were included. Association between community net use and prevalence of infection Linear mixed models (PROC MIXED) were constructed to examine the crude association between community ITN use and community prevalence of infection. These models were used to assess the bivariate association between all independent variables and community infection prevalence, as well as the bivariate association between covariates and community ITN use. One-way ANOVAs were used to compare infection prevalence within levels of categorical variables. 
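The individual-level analysis above (a PCA-based wealth index plus a mixed-effect logistic model with nested household and community clustering) was run in SAS PROC GLIMMIX. The Python sketch below is only a simplified approximation under stated assumptions: it builds a wealth index from the first principal component of asset indicators and fits an ordinary logistic regression with community-clustered standard errors instead of nested random intercepts. The column names (infected, net_use, age_group, season, community, community_prevalence and the asset indicators) are hypothetical:

    # Hypothetical, simplified approximation of the individual-level analysis:
    # PCA-based wealth index + logistic regression with cluster-robust errors.
    import pandas as pd
    import statsmodels.formula.api as smf
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def add_wealth_tertile(df, asset_cols):
        # First principal component of household asset indicators as a wealth score
        scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(df[asset_cols]))
        df["wealth_score"] = scores[:, 0]
        df["wealth_tertile"] = pd.qcut(df["wealth_score"], 3, labels=["low", "middle", "high"])
        return df

    def fit_infection_model(df):
        # Cluster-robust logit; the published model instead used nested random
        # intercepts for household within community (SAS PROC GLIMMIX).
        model = smf.logit(
            "infected ~ net_use * C(age_group) + net_use * C(season)"
            " + C(wealth_tertile) + community_prevalence",
            data=df,
        )
        return model.fit(cov_type="cluster", cov_kwds={"groups": df["community"]})

    # Usage (assuming a suitably coded survey data frame `survey` with 0/1 outcomes):
    # survey = add_wealth_tertile(survey, ["owns_radio", "owns_bike", "electricity", "food_secure"])
    # result = fit_infection_model(survey)
    # print(result.summary())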
Variables were retained in the final model if they had p value <0.05 after all other variables were added, or if addition of the variable qualitatively changed the estimate for ITN use and if there was not excessive collinearity with any other variable in the model (identified when high variance inflation was present). Community prevalence of infection has a right skewed distribution so community parasite prevalence was log transformed. Various covariance structures were tested to account for the serial autocorrelation of repeated measures obtained from the same community. Criteria for selection of covariance structure were parsimony (models with fewer covariance parameters were preferred) and decreasing Akaike's information criteria. Variables chosen a priori to test for inclusion in this analysis based off causal assumptions were community prevalence of net use, proportion of children under five who used nets, proportion of SAC who used nets, average community wealth index, proportion of community population comprised of children under five (to account for population structure), and transmission setting. Mixed models included random effects for variation in baseline community prevalence (random intercept) and for community-level variation in slope over time to account for repeated measurement of community level variables over time. All a priori selected variables were tested for inclusion in the final model. All adjusted models included community net use, season and transmission setting. All analyses were conducted with SAS 9.4. Results There were 22,132 individuals from six surveys with ITN use data collected (Table 1). Population characteristics did not vary significantly between surveys; average household size was 4.1 ± 1.7 (mean ± SD); 55% of the surveyed population was female; in approximately 25% of households, the head of household had attended some secondary school or more, and in 60% of households, the head of household had not completed primary school. Fig. 1); 47.3% of all PCR-identified infections were undetected by microscopy with 33.5, 37.9 and 57.0% of infections being sub-microscopic among children under age five, SAC, and adults, (p < 0.0001), respectively. ITN ownership increased significantly (p < 0.0001) after universal net distribution in 2012, from 51% in the first survey to 83% in the second survey. ITN use also increased significantly after universal net distribution between the first and the third surveys in all age categories. Association between net use and PCR-detected infection in individuals In bivariate analysis (Additional file 1: Table S1), covariates which were associated with infection were net use, season, age category, gender, head of household education level, presence of a child under the age of five in the household, presence of a SAC in the household, household wealth index, household net use, proportion of nets to people in a house, proportion of EA using nets, and transmission setting. The crude odds ratio (OR) for infection comparing individuals who used nets to those who did not was 0.75 (95% CI 0.68, 0.83). There was evidence of interaction with ITN use by season and by age category in adjusted models (p for interaction = 0.04 and 0.02, respectively). The association between net use and infection did not differ for children under five and adults over 15 years of age (p for interaction = 0.81) so they were analysed together and defined as non-SAC (Table 2). 
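A minimal Python sketch of the community-level model described above (log-transformed community prevalence with a community random intercept and a random slope over survey rounds) is given below. The published analysis used SAS PROC MIXED with explicitly selected covariance structures, so this statsmodels version is only an approximation, and the column names (prevalence, community_net_use, season, setting, survey_round, community) are hypothetical:

    # Hypothetical approximation of the community-level model (the paper used SAS PROC MIXED).
    import numpy as np
    import statsmodels.formula.api as smf

    def fit_community_model(communities):
        # The right-skewed prevalence is log-transformed before modelling (assumes prevalence > 0)
        communities = communities.assign(log_prevalence=np.log(communities["prevalence"]))
        model = smf.mixedlm(
            "log_prevalence ~ community_net_use + C(season) + C(setting)",
            data=communities,
            groups="community",
            re_formula="~survey_round",  # random intercept + random slope over time
        )
        return model.fit(reml=True)

    # Usage (assuming one row per community per survey round):
    # result = fit_community_model(community_df)
    # print(result.summary())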
In age-category stratified models, there was evidence of seasonal effect modification only among SAC (p for interaction = 0.03). All subsequent analysis was done in the following three strata: SAC in the rainy season, SAC in the dry season, and non-SAC. In adjusted models, among SAC in the rainy season, the OR of infection for net users compared to those who did not use nets was 0.78 (95% CI 0.56, 1.10) ( Table 2; Fig. 2). SAC net use in the dry season was associated with a 49% decrease in the odds of infection (OR = 0.51 (95% CI 0.35, 0.74), p < 0.001). Among non-SAC, net use was associated with a 22% decrease in odds of infection in all seasons (OR = 0.78 (95% CI 0.64, 0.95), p = 0.01). Variables associated with increased odds of infection in at least one of the stratified adjusted models were high community net use, decreasing household wealth index, male gender, decreasing education level of household head, lack of SAC in the household, and older SAC age category (aged 10-15) ( Table 2). Neither per cent of household using nets nor dichotomous household net ownership were strongly associated with decreased odds of infection in any stratified models. Association between community net use and prevalence of PCR-detected infection There were 30 communities sampled over six time points. Comparing results from the rainy season surveys from before (2012) and after (2013) the universal net distribution campaign, despite increases in net use (Chi square p value <0.0001), there was no decrease in infection prevalence (Table 1). In crude analysis (Table 3), a 10% increase in community net use was associated with a 6% (SE = 2%, p = 0.02) increase in infection prevalence. Other variables strongly associated with increasing prevalence in preliminary analysis were increasing proportion of SAC net use, decreasing wealth index and higher transmission setting. After adjustment for season, transmission setting, community wealth index, and community net use, only community net use remained strongly associated with infection prevalence (Table 3). Individual and community-level association with net use and microscopically detected infection Results from all analyses using microscopically detected infections were similar to results using rtPCR-detected infections (Additional file 1: Tables S2, S3). However, results using microscopy varied from rtPCR results in that ITN use was more closely associated with protection (significant at a p < 0.05 level) against individual level infection in all three strata in adjusted models (Additional file 1: Discussion This study found evidence of effect measure modification in the relationship between ITN use and P. falciparum infection by both age and by season. Using repeated cross-sections of randomly selected communities, nets were highly protective among SAC during the dry season, yet provided weaker protection from infection during the rainy seasons in this age group. This relationship in SAC differed from that seen in children under five and adults over 15, among whom ITN use was associated with a statistically significant 22% decrease in infection year round, confirming previous findings among children under five in Malawi [8]. Previously published results from these cross-sectional surveys [19] have found that SAC consistently used nets less frequently than children under five and adults, even after the universal distribution campaign. 
As patterns of net use between age groups remain the same over time and by season, other hypotheses must be explorde to explain the results. The underlying mechanisms driving this effect modification are unknown but may be related to multiple factors including mosquito density, behaviours prior to net use, acquired immunity, and the high prevalence of prolonged asymptomatic infections among SAC [14,27]. Transmission intensity Mosquito density is notably lower during the dry season than the rainy season, and ITN use at night in the dry season may protect against a larger relative proportion of mosquitoes than during the rainy season. Transmission intensity has previously been demonstrated to modify intervention efficacy [4]. The seasonal difference in ITN effectiveness among SAC could be related to wide variation in mosquito density (and thus transmission intensity) between the rainy and dry seasons. With increasing insecticide resistance in Malawi [28], ITNs may only be able to protect against a small proportion of mosquitoes. As mosquito density increases, ITNs may fail more frequently. Additionally, vectors may have more restricted biting behaviours in the dry season and individuals may be less likely to be bitten outside of sleeping hours. Age, behaviour and acquired immunity The differing effectiveness of ITNs by age group may have to do with a combination of behavioural differences and acquired immunity. Behavioural differences may play a role in the age-based effect modification this study found. In the rainy season, when a greater proportion of individuals overall use ITNs [19], SAC may be more likely to use lower quality, older ITNs which are less effective [29]. Additionally, due to a development of immunity to disease but not infection, a high proportion of infections detected in SAC may be persistent, asymptomatic infections. There is evidence that SAC may carry infections for a longer duration than other ages [27]. A high prevalence of persistent asymptomatic infection could contribute to the apparent lack of effectiveness when measuring net use using cross-sectional surveys. If SAC carry a high prevalence of persistent infections, ITN use would appear less effective in comparison to a population where a larger proportion of infections detected are incident. Due to the high risk of malaria mortality at young ages, the majority of previous studies on efficacy of ITNs focus on children under five [7][8][9][10]13]. Findings from this study are in agreement with one previous study from Malawi, in which no statistically significant protective effect of ITNs was detected when including children of all ages. However, the study did not address effect modification related to age [22]. Studies in other settings that have included SAC found either varying efficacy of ITNs among SAC [30,31] or varying efficacy by age [32]. By evaluating effect modification by age, this study was able to identify heterogeneity in the efficaciousness of ITNs by age group and season, perhaps explaining some of the inconsistency in magnitude of previous results. Microscopy versus molecular detection of infection Previous studies examining the effect of ITN use on infection prevalence have relied on microscopy to measure P. falciparum prevalence [10,13,[30][31][32][33], however 47% of all infections detected in these surveys were sub-microscopic and SAC were more likely to have sub-microscopic infections than younger children [14]. Unlike microscopy, rtPCR can detect prevalent lowdensity asymptomatic infection. 
In this study ITNs protected against microscopically detectable infections but had weaker associations with PCR-detected infections. Individuals with sub-microscopic infections may still transmit parasites to mosquitoes [34,35] suggesting microscopy may not be sufficiently sensitive to detect the full effect of ITN use on transmission. Unique aspects of this study design may have biased the results. Despite having multiple time points, individual-level data are cross-sectional and, although study staff returned to the same location for each survey, analysis was unable to account for the possibility of surveying different individuals over time. Additionally, these data were collected as part of a surveillance programme without control arms and with only one season of preintervention data. A likely explanation for a lack of community effect in this data is residual confounding due to unmeasured environmental variables and mosquito density fluctuation. This study did not collect data on environmental and climatic factors influencing mosquito density and parasite prevalence. Lack of change in community prevalence over time may be confounded by unmeasured environmental factors. Increasing mosquito density leads to increased net use independent of malaria prevalence. In individual-level models, community ITN use may have functioned as a proxy for mosquito density and allowed statistical models to detect a protective effect of individual ITN use on malaria infection prevalence. However, for community-level models, the data did not have an appropriate measure of mosquito density to detect the true protective efficacy of community net use. Increasing community net use was consistently associated with an increase in infection prevalence, however this association is likely not causal as increasing mosquito densities lead both to increasing net use and increasing prevalence. A long-term decline in P. falciparum prevalence in Malawi may provide an additional explanation for why it was not possible to find a community-level impact of ITN distribution. Looking at children under five alone, baseline prevalence in this study was significantly less than that seen in previous studies in Malawi [36,37]. Individuals excluded from the study for not meeting the definition of "member of household" were most likely to be adult males. There was no data collected data on net use for individuals sleeping away from home, only data on those that were sleeping regularly in the household at the time of survey. While this could potentially effect the generalizability of the study results as these results could not be applied to individuals who move around frequently and have no stable household, this would be unlikely to influence the results among the population studied. Conclusion In this study, ITN use did not have a uniform impact in all ages, with seasonal variation among SAC. This study found an association between ITN use and infection, however the association was modest. Among SAC, a population carrying the majority of P. falciparum infections in Malawi, the association between ITN use and infection was diminished during the rainy season using molecular detection methods. This study adds evidence to a growing body of literature suggesting that SAC may represent an important reservoir population. Unique qualities of both intervention use and the nature of infection in SAC may allow them to serve as a reservoir of infection in malaria-endemic regions. 
Future studies should include data from SAC and consider the possibility of effect modification by age when examining intervention effectiveness. Despite the possibility of residual confounding due to changing mosquito densities and other ecological factors, there was no decrease in malaria prevalence after universal net distribution and no decrease in malaria associated with increasing community net use. Current surveillance methods using single time-point cross-sectional surveillance of children under five using microscopy alone may fail to capture the impact of ITN use on malaria prevalence in a community. The findings from this study imply that ITN distribution alone may not be sufficient to decrease P. falciparum transmission in Malawi and other methods of prevention are required.
v3-fos-license
2019-11-07T14:15:34.641Z
2019-10-15T00:00:00.000
208193286
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ss.20190805.20.pdf", "pdf_hash": "072017a628e39dffe688232dedc6898e0193c7eb", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41133", "s2fieldsofstudy": [ "Medicine" ], "sha1": "e5b55d77a8009da4dbae81274cbf092a59e36da7", "year": 2019 }
pes2o/s2orc
Influence of the Internet on Health Seeking Behaviors of Youths in Ekiti State, Nigeria
The use of online resources to locate health-related information is known to be increasing among Nigerian youths; however, few studies have investigated the influence of the internet on the health-seeking behaviors of Nigerian youths. This study therefore investigates the influence of the internet on the health-seeking behaviors of youths in Ekiti State, Nigeria, examines the extent to which the internet provides answers to their health-related questions, determines the perception of Nigerian youths of the internet's influence on their health seeking, and, ultimately, finds out whether internet use increases or decreases self-medication among Nigerian youths. A standardized nine-question survey on internet use and health-seeking behavior was given to 300 youths. A review of the literature is also included. Of the 300 responses received, 203 youths (67.7%) reported ever consulting the internet to find health information. 194 (64.6%) youths consult the internet for answers to health problems before thinking of consulting a doctor or a caregiver. A large proportion of the youths (93.1%) follow the advice found online by practicing self-medication. A total of 191 (94%) youths submitted that the internet influences their health-seeking behavior. In conclusion, the hypothesis tests show a significant relationship between internet use and the health-seeking behaviors of youths, and also between internet use and self-medication, among youths in Ekiti State, Nigeria.
Introduction
The world has shifted drastically to digital forms of information seeking, and this has undoubtedly altered how people, especially youths, view health care, pursue information and look for paths to good health. The Internet is a global network that is helping to revolutionize interpersonal relations through the accessibility and growing recognition and acceptance of virtual social networks [1]. It provides sophisticated and resourceful platforms for diverse communication and for the direct and transparent broadcast of information regardless of geographical location [2][3]. The Internet and World Wide Web have also helped individuals obtain necessary and crucial information about their health and wellbeing. This worldwide intercommunication serves as a way of gathering information before, during and after an engagement with a physician, covering matters such as prescriptions, self-diagnosis, causes, care and side effects [4]. A similar study [5] stated that use of the internet is particularly noteworthy because access to information is an essential approach to taking charge of one's health, and youths are not left out. Health-information-seeking behavior is principally concerned with making pragmatic decisions to deal with health issues and to seek medical care and attention, within the available resources, from an individual viewpoint. Health information seeking is a familiar phenomenon; hence, efforts to persuade individuals to seek preventive care require an understanding of their motivation for such behavior. Youths from diverse tribes, cultures and countries accept and access all sorts of information from the internet. 
There are also diverse patterns of the access and usage of the internet by the diverse groups of this populace depending on their biological developments that involves physical, emotional, social, and pubertal maturation, socialization as well as peer groups [6][7]. Similarly, recent exploration has piloted that the differences in the usage of internet amongst gender has become inconclusive as some stage in teenage years [8]. Some research has established boys (58%) to be more recurrent users of the internet compared with girls (44%), while other research observed no considerable sexual category dissimilarity in internet usage [9]. Data have also revealed that only 20% of African students reported staying an average of over 3 hours per day online compared with 42% and 40% of Chinese and US students correspondingly [10]. The rate at which information spread and the potential implications for humanity, individuals and the public are overwhelming. As such, some countries have argued in support of censoring and controlled access, and some countries have argued in support of uncontrolled and uncensored access [11]. Nevertheless, in spite of the fact that right to use of the internet is much more restricted than in either the United States or China, addiction to the use of internet is in fact more prevailing in Africa [12]. Consequently, studies on where youths acquire information pertaining to their healthiness and if they use resources and information gotten online have become the main study spotlight in recent epoch [13]. Furthermore, on all the online platforms, studies have shown that of all who access the internet daily, youths have a greater number of those seeking information and Nigerian youths are not left out [14]. Nigeria is the most populous African country and has a crawling populace of young people who have access and use mobile phones, with everincreasing access to the internet [15]. Access to the Internet and its use in Nigeria are on the increase with the introduction of telecoms companies providing expanding and wide access through mobile phones [16]. This expansion comes with advanced coverage to diverse kinds of information which includes those involving to wellbeing, ailment and disease conditions [17]. The behavior of seeking information concerning health online comes with its downsides as well as advantages. A bit of its advantage include aptness and an extensive array of information on precise and diverse aliment, wellbeing and disease conditions. With this process, health information becomes readily accessible and obtainable in a manner that patients' understanding becomes broadened and pertinent for more involvement in therapeutic and restorative relationships. It could also advance the creation of more knowledgeable decisions and conformity with medications. Nonetheless, access to online health information also raises debates regarding the superiority, dependability, and applicability of the vast capacity of health information amid unlike social groups [18]. This study therefore investigates the use of the internet among youths in Ekiti state. Particular focus is on the use of the internet as a source of health information, the extent to which the internet provides answers to health related questions among the youths, determines the perception of Nigeria youths on internet's influence on health information seeking among them. This study also finds out whether the use of internet for health information seeking increases or reduces self-medication among Nigerian youths. 
Objectives The general objective of this study is to investigate the influence of the internet on health seeking behaviors of youths while the specific objectives are: To investigate the impact of internet on youth's decision to consult a physician when ill. To find out whether the use of internet increases or decreases self-medication among youths in Ekiti -State. HYPOTHESIS 1. Ho: there is no significant relationship between internet consultation and health seeking behaviors of youths in Ekiti-State, Nigeria. H1: there is a significant relationship between internet consultation and health seeking behaviors of youths in Ekiti -State, Nigeria. HYPOTHESIS 2. Ho: there is no significant relationship between internet consultation and self-medication among youths in Ekiti-State, Nigeria. H1: there is a significant relationship between internet consultation and self-medication among youths in Ekiti -State, Nigeria. Method The study describes the relationship between internet use and health seeking behavior of youth (aged 14-24 years) in Ekiti-state. The study population of this study consists of 300 youths in Ekiti State, who fall between 14 and 24 years of age. Purposive sampling technique was utilized in this study because only the concerned age group were involved. Structured questionnaire was used for the study and consisted of both open and closed ended questions; with carefully constructed research questions titled 'the influence of internet on health seeking behavior of youths in Ekiti-State Nigeria'. The data generated from the respondents was utilised such that the research objectives for the study were addressed. The statistical method employed is descriptive in nature using simple percentage. The informed consent of the youths was obtained in writing. Completed Questionnaires were collected at the spot by the researchers. Correlation coefficient model was used to identify significant predictors with Level of significance taken at 0.01. Table 1 illustrates the percentage sex distribution of respondents. For this study, 50% of respondents were males while 50% were females. have consulted the internet to find answers to medical problems or health related issues at one point or the other. 97 respondents (32.3%) have never consulted the internet to find answers to health related issues. This finding is similar to the study on youth's participation and active use of online health information reflected in [19] in Ireland where about 66% of the youths use the internet to explore health information on a particular ailment, social health fitness, and nutrition information. A study from [20] also indicated that 43.4% of the students especially youths in Islamabad used the Internet in search for health information. On the contrary, a study in India by [21] points a low (14%) usage of the internet for health information seeking, in spite of the number of students especially youths using the platform for other things. Infrastructural developments, accessibility of excellent Internet service, and rights of the mobile as well as Computers with the Internet access accounted for momentous instability in terms of access. This is fundamental to improving Internet access, particularly amid rural dwellers. Beyond the structural barriers and advantages, the superiority of health information, competence, and dependability remains vital to the individual and the society. This study reveals in table 2 that more females (71.3%) consult the internet for health issues than males (64%). 
This implies that females are more likely to practice selfmedication than males. Figure 2 illustrates that 86.7% of respondents found diagnosis of ailment through the internet while 5.9% of respondents did not find the correct diagnosis of their ailments. 7.4% of respondent didn't get the clear diagnosis of their ailment via the internet because they found multiple possible ailments associated with the symptoms they had and they were unable to treat themselves until they later consulted a physician. Result and Discussion From the research findings in table 3, a total of 173 (85.2%) of respondents who have ever consulted the internet for health related issues found prescriptions and treatment of ailments on the internet. Only 30 (14.8%) respondents were unable to get prescriptions and treatment via the internet. Table 4 reveals that 64.6% of respondents prefer to consult the internet for health issues than consult a doctor or caregiver. 25.1% of respondents prefer to consult a doctor or caregiver than to consult the internet, while 10.3% are not sure of what they would do when they have health related issues. Table 6 reveals that 93.1% of respondents who have ever consulted the internet for health related issues, practice selfmedication. 2.5% of respondents do no practice selfmedication at all while 4.4% of respondents sometimes practice self-medication. This is similar to a study by [22] which proved that the use of the internet has increased Self-Medication amongst youths especially in Lebanon. 3 reveals that (45%) of respondent who have never consulted the internet for health information did so because they could not afford it, 3.3% get health related information through sources, 30.8% do not have access to the internet, 5.5% are not comfortable searching the internet for information about their health, while 15.4% do not trust the information on the internet. On the contrary, health-careseeking behavior among young people in Lebanon is partially affected by their socioeconomic status fee of health services, accessibility to health services which include concerns about privacy, humiliation in disclosing health issues, nonexistence of medical cover, low understanding of accessible services as well as lack of confidence in health practitioners [22]. Table 7 indicates that only 11% of respondents feel that the health centers should be the first point of call whenever they need medical attention, 5.7% of respondents feel the hospital should be the first point of call whenever they need medical attention, 17.7% feel that pharmacy stores should be the first point of call whenever they need medical attention, while majority (65.6%) of the respondents feel that the internet should be the first point of call whenever they need medical attention. Table 8 reveals that almost all the respondents (94.0%) admitted that the internet has influenced their health seeking behaviors. This may indicate great dependence on internet for diagnosis and prescriptions among the youths. This calls for urgent attention in order to reduce the prevalence of selfmedication among youths. Table 9 illustrates that there is a significant relationship between internet consultation and self-medication among youths in Ekiti-State, Nigeria. We therefore reject null hypothesis. Table 10 illustrates that there is a significant relationship between internet consultation and health seeking behaviors of youths in Ekiti-state, Nigeria. We therefore reject null hypothesis. 
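The hypothesis tests summarized in Tables 9 and 10 relate two categorical variables (internet consultation versus self-medication or health-seeking behaviour) at a 0.01 significance level. Purely for illustration, the Python sketch below shows how such an association could be tested with a chi-square test and a phi-style correlation on a 2 x 2 table; the counts used are made up and are not the study's data:

    # Hypothetical illustration of testing the association between internet consultation
    # and self-medication on a 2x2 table; the counts below are invented.
    import numpy as np
    from scipy.stats import chi2_contingency, pearsonr

    # Rows: consulted internet (yes/no); columns: practice self-medication (yes/no)
    table = np.array([[189, 14],
                      [22, 75]])

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")

    # Equivalent correlation view: expand to individual-level 0/1 codes and correlate
    internet = np.repeat([1, 1, 0, 0], table.flatten())
    selfmed = np.repeat([1, 0, 1, 0], table.flatten())
    r, p_corr = pearsonr(internet, selfmed)
    print(f"phi (Pearson on binary codes) = {r:.2f}, p = {p_corr:.4f}")

    alpha = 0.01
    print("Reject H0" if p_value < alpha else "Fail to reject H0")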
Conclusion

This study found that access to online health information is prevalent among youths in Ekiti State. Even though getting medical information from the internet may offer some benefits for youths, internet use among them evidently results in an alarming increase in self-medication, which could in turn put a burden on physicians and other caregivers or increase youth mortality when the situation gets out of hand. Studies have shown that self-medication is a global phenomenon and a common behavior among youths. One major risk of self-medication is the emergence of antimicrobial resistance in human pathogens in developing countries all over the world, where antibiotics are frequently used and accessible without proper and adequate recommendation. Another risk of self-medication is serious adverse reactions, which are coupled with irrational and unreasonable use of medication, because the human system may develop resistance to specific medicines. Other risks attached to self-medication include drug hypersensitivity, withdrawal symptoms, reactions to some components, and temporary masking of an ailment, which can interfere with correct findings and diagnosis. In addition, self-medication can also result in death from medication overdose, as well as impending health challenges [23].

Recommendation

It is recommended that more studies should be carried out to further investigate the influence of the internet on the health seeking behaviors of youths in Nigeria. An urgent sensitization/orientation programme, which would serve as a positive intervention, should be put in place in schools, youths' recreation centers, and public places in order to improve the understanding of the impact of this mode of information-seeking behavior on the health outcomes of young people.
v3-fos-license
2023-08-25T15:23:34.002Z
2023-08-01T00:00:00.000
261113141
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-6643/15/16/3644/pdf?version=1692436925", "pdf_hash": "c1d25af9f3f28ebaef3391ef4f52c58168ec9e9d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41134", "s2fieldsofstudy": [ "Medicine" ], "sha1": "066d0712c510e39d8a39c60aee2277be37175d1f", "year": 2023 }
pes2o/s2orc
Differences in Nutrient Intake and Diet Quality among Non-Hispanic Black Adults by Place of Birth and Length of Time in the United States Prior research suggests that migrating to the United States (US) can negatively affect the diets and health of immigrants. There is limited information on how relocating to the US affects the diets of Black-identifying immigrants. To address this gap, this study examined differences in nutrient intake and diet quality among non-Hispanic Black adults by place of birth and length of time in the US. Cross-sectional data from the National Health and Nutrition Examination Survey (2005–2016) were analyzed. Approximately 6508 non-Hispanic Black adults were categorized into three groups: foreign-born (FB) living in the US <10 years (n = 167), FB living in the US ≥ 10 years (n = 493), and US-born (n = 5848). Multivariable-adjusted logistic and linear regression models were evaluated to identify differences in nutrient intake and diet quality (as measured by the Healthy Eating Index (HEI) of 2015) across the three groups when controlling for socio-demographics. Compared to US-born adults, both FB groups had significantly higher HEI-2015 scores and higher odds of meeting dietary recommendations for several nutrients: saturated fat, sodium, and cholesterol. There were no differences in nutrient intake between the two FB groups; however, FB (<10 years) adults had better diet quality than FB (≥10 years) ones. Place of birth and length of time in the US were associated with dietary intake among non-Hispanic Black adults. More research is needed to improve understanding of dietary acculturation among Black-identifying immigrants in the US. Introduction A healthy diet and lifestyle are essential to chronic disease prevention [1]. While dietary preferences and habits can vary substantially between people with different cultural backgrounds, most Americans' diets exceed the recommended intake for saturated fats, sodium, added sugars, and refined grains [2]. Poor diet is a known risk factor for several chronic diseases, including obesity, cardiovascular disease (CVD), strokes, cancer, and type 2 diabetes [3]. Recognizing how poor diet quality and nutrient intake affect the health status of racial/ethnic minorities is an important public health priority in the United States (US) [4]. Despite recently documented improvements to the quality of Americans' diets, not every subpopulation has benefitted [5]. Non-Hispanic Black adults have experienced the least improvement among all racial/ethnic groups [5]. Furthermore, previous studies found that Black adults have less-favorable nutrient intakes, lower adherence to dietary guidelines, and poorer dietary quality compared to their White counterparts [5,6]. These nutritional inequities have great potential to further exacerbate disparities in chronic disease risk by race/ethnicity in the US [4]. Unfortunately, the field's understanding of the diets of Black-identifying populations in the US is limited in scope. Currently, there is limited understanding of differences in dietary practices and preferences among Black adults given their culture and lived experiences. The recent immigration wave of people who self-identify as non-Hispanic Black (i.e., people from African, Caribbean, Central American, or South American nations) underscores the need to expand understanding of the diets of Black adults and children in the US [7,8]. 
Between 2000 and 2013, the number of Black immigrants in the US increased by 56%, with migration from Africa increasing by 137% [7]. In 2017, there were an estimated four million Caribbean immigrants living in the US [8]. Moving to the US can result in dietary acculturation, which entails changes to an individual's traditional diet that result in alignment with the typical American diet [9]. In general, dietary acculturation has been found to have detrimental effects on the diets of immigrants, which consequently can increase the risk of diet-related chronic diseases among immigrant populations [9]. For example, a prior study found that adapting to the US lifestyle was associated with the loss of cultural culinary preferences and increased the consumption of unhealthy foods among immigrants despite improvements in their socioeconomic status [10]. Several studies have linked acculturation measures to changes in dietary intake in several immigrant populations, including Puerto Rican, South Asian, and Filipino adults [11][12][13]. Overall, findings from the literature support the hypothesis that relocating to the US can result in significant declines in diet quality. Given the scarcity of scientific research on the diets of Black-identifying immigrant populations and the growing number of Black immigrants in the US, there is a need to study the differences in nutrient intake and diet quality by place of birth and length of time in the US among Black adults. A prior study reported disparities in diet quality between US-born and foreign-born Black adults, with the former having poorer diet quality [14]. However, the study did not examine diet in relation to the 2015-2020 Dietary Guidelines for Americans (DGAs). Thus, this study aimed to examine differences in nutrient intake and diet quality between US-born and foreign-born (henceforth, FB) non-Hispanic Black adults who participated in the National Health and Nutrition Examination Survey (NHANES). In addition, this study evaluated the role of length of time in the US by examining differences between FB Black adults who migrated to the US fewer than 10 years ago and those who migrated more than 10 years ago. It was hypothesized that FB Black adults (<10 years) would have better diet quality than US-born Black adults; however, FB Black adults (≥10 years) would have diet profiles similar to US-born Black adults. When comparing the FB groups, FB Black adults (<10 years) were expected to have better diet quality than FB Black adults (≥10 years). Data Source Cross-sectional data collected from participants of the Centers for Disease Control and Prevention's National Health and Nutrition Examination Survey (NHANES) cycles in 2005-2016 were obtained and analyzed. NHANES collects data from a multistage, stratified probability-cluster sample of the non-institutionalized U.S. population [15]. Data on nutrition and health are collected from participants by conducting a series of intervieweradministered questionnaires and physical examinations [15]. A total of 60,936 adults and children participated in the six selected cycles. Individuals who did not self-identify as non-Hispanic Black and were less than <20 years of age (n = 53,863) were excluded from this study, which left 7073 non-Hispanic Black adults. Participants with missing day one 24 h recall data were also excluded (n = 565). Thus, the analytical sample for this study comprised 6508 non-Hispanic Black adults. Measures representing place of birth (US vs. other) and length of time in the U.S. 
were used to categorize participants into three distinct groups: FB Black adults who migrated fewer than 10 years ago (n = 167; 3.0%), FB Black adults who migrated more than 10 years ago (n = 493; 7.2%), and US-born Black adults (n = 5848; 89.8%). NHANES collects self-reported information about place of birth and length of time in the US [15]. These two measures are often used as proxy measures of acculturation in studies on the health and health behaviors of immigrant populations in the US [11][12][13][14]. Since NHANES does not provide separate race and ethnicity data, Black-identifying Hispanics could not be separated from other Hispanic adults. Therefore, the current study only included non-Hispanic Black adults. The National Center for Health Statistics Institutional Review Board (IRB) approved NHANES, and all participants provided written informed consent [15]. The IRB at the University of Illinois at Urbana-Champaign deemed this research exempt.

Nutrient Intake

Nutrient intake data were examined for all participants included in the analytical sample. The dietary intake interview of NHANES, titled "What We Eat in America", was conducted in partnership with the U.S. Department of Agriculture using a computerized data collection instrument [16]. Each participant was eligible for two days of 24 h recall; the first day was conducted in person during the initial NHANES interview, while the second day was conducted over the telephone approximately 3-10 days later [16]. As stated above, 6508 non-Hispanic Black adults participating in NHANES 2005-2016 had complete dietary data for the first day; 4867 (69%) had complete data for the second day. Only data from the first day were analyzed since 31% of the sample did not complete the second 24 h dietary recall. Measures examined included total energy (kcal per day), protein (grams per day), carbohydrates (grams per day), total sugar (grams per day), dietary fiber (grams per day), total fat (grams per day), saturated fat (grams per day), cholesterol (milligrams per day), and sodium (milligrams per day). To identify participants who met recommendations for nutrient intake, participants' consumption levels for each nutrient were compared to the recommended level of intake mentioned in the 2015-2020 Dietary Guidelines for Americans (DGAs) [17]. According to the DGAs, the recommended intake ranges are: 20-35% of energy from total fat, 10-35% of energy from protein, 45-65% of energy from carbohydrates, 14 g/1000 kcal/day of fiber, <10% of energy from saturated fat, and <2300 mg/day of sodium [17]. The 2015-2020 DGAs do not have a recommended consumption amount for total sugars and cholesterol. However, the World Health Organization (WHO) recommends that <5% of energy intake should come from added sugar. Thus, total sugar intake was compared to this recommendation to identify the proportion of FB Black adults and US-born Black adults who had a total sugar intake of <5% of energy intake [18]. As for the daily recommendation for cholesterol intake, the goal of <300 mg/day was utilized. This goal was included in the prior iteration of the DGAs (2010-2015) [17].

Diet Quality

The Healthy Eating Index (HEI) is a diet quality index that measures an individual or population's dietary alignment with the DGAs [19,20]. It can be used to assess the conformance of any meal or group of foods to the diet recommendations outlined in the DGAs [19,20].
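As a hedged, illustrative sketch of the nutrient-recommendation checks described above (not the authors' SAS code), the following Python function compares a single day-one intake record against the 2015-2020 DGA ranges, the WHO added-sugar guideline, and the <300 mg/day cholesterol goal; the field names and example values are hypothetical.

```python
# Hedged sketch of the nutrient-recommendation checks described above.
# Field names and example values are hypothetical; the study itself used
# SAS and NHANES day-one 24 h recall data.

def pct_energy(grams, kcal_per_gram, total_kcal):
    """Share of total energy contributed by a nutrient (standard Atwater factors)."""
    return 100.0 * grams * kcal_per_gram / total_kcal

def meets_recommendations(day1):
    kcal = day1["energy_kcal"]
    return {
        "total_fat_20_35pct":   20 <= pct_energy(day1["fat_g"], 9, kcal) <= 35,
        "protein_10_35pct":     10 <= pct_energy(day1["protein_g"], 4, kcal) <= 35,
        "carbs_45_65pct":       45 <= pct_energy(day1["carb_g"], 4, kcal) <= 65,
        "fiber_14g_per_1000":   day1["fiber_g"] >= 14.0 * kcal / 1000.0,
        "sat_fat_lt_10pct":     pct_energy(day1["sat_fat_g"], 9, kcal) < 10,
        "sodium_lt_2300mg":     day1["sodium_mg"] < 2300,
        "sugar_lt_5pct":        pct_energy(day1["sugar_g"], 4, kcal) < 5,   # WHO added-sugar guideline
        "cholesterol_lt_300mg": day1["cholesterol_mg"] < 300,               # 2010-2015 DGA goal
    }

# Example participant (values invented for illustration)
example = {"energy_kcal": 2100, "fat_g": 80, "protein_g": 90, "carb_g": 250,
           "sugar_g": 95, "fiber_g": 18, "sat_fat_g": 25, "sodium_mg": 3400,
           "cholesterol_mg": 280}
print(meets_recommendations(example))
```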
For the current study, study participants' (n = 6508) day one 24 h recall data were analyzed using the simple HEI scoring algorithm to generate HEI-2015 total and component scores [21]. Since the simple HEI scoring algorithm was used, the HEI-2015 scores presented in this study do not represent usual intake (i.e., long-term intake). Rather, they represent an estimation of how each participant's consumption on day one of the dietary interview aligned with the DGAs. The HEI-2015 total score ranges from 0 to 100, with 100 indicating perfect alignment. It consists of 13 components, including total fruits, whole fruits, total vegetables, greens and beans, total protein foods, seafood and plant proteins, whole grains, dairy, fatty acids, refined grains, sodium, added sugars, and saturated fats. Total fruits, whole fruits, total protein foods, total vegetables, seafood and plant proteins, and greens and beans all contribute five points each to the total score. The other dietary components all contribute ten points to the total score [19]. Refined grains, sodium, added sugars, and saturated fats are considered measures of moderation; higher consumption of these foods will lower the HEI total score. All others are considered measures of adequacy, so higher consumption of these items will increase HEI total score. Other Measures In addition to nutrient intake and diet quality, the following measures were examined: current age (years), sex (male vs. female), education level (<high school diploma, high school diploma or equivalent, some college, or ≥college degree), marital status (married vs. other), number of people living in the household, poverty-to-income ratio (PIR), and day of the week the 24 h recall interview was completed. These socio-demographic measures were obtained via the interviewer-administered questionnaires. NHANES investigators estimated each participant's PIR from their self-reported annual income. The PIR represents the ratio of an individual's annual household income to the federal poverty level for their household size the year the NHANES interview was conducted [22]. Days of the week for the recall interview were categorized to compare participants who had their interviews on the weekend (Friday-Sunday) to participants who had interviews on a weekday (Monday-Thursday). Statistical Analysis To examine the characteristics of the analytical sample, descriptive statistics were calculated (i.e., weighted means and frequencies). Analysis of variance (ANOVA) and chi-square tests were used to identify differences in socio-demographic measures across the three groups representing acculturation: FB Black adults (<10 years), FB Black adults (≥10 years), and U.S.-born Black adults. The weighted mean intake of each nutrient among the three groups and the weighted percentage of participants in the groups who met the dietary recommendations for each nutrient were calculated. Logistic regression was used to determine if the odds of meeting dietary recommendations for nutrient intake were significantly different among the three acculturation groups. Models were run to compare (1) both groups of FB Black adults to US-born Black adults and (2) FB Black adults (<10 years) to FB Black adults (≥10 years). Linear regressions were used to examine the association differences in HEI-2015 total scores between the three acculturation groups and both groups of FB Black adults. The unadjusted model included only the variables representing the three groups. 
The adjusted model also included the variables of age, sex, education level, marital status, PIR, number of household members, and day of the week of the 24 h recall interview. These socio-demographic variables were included because prior research has shown they are associated with dietary intake [23]. Confidence intervals that did not include the null value of 1.0 and had p values < 0.05 were considered statistically significant. All analyses were conducted using SAS version 9.4 [24]. Since NHANES employs a complex sampling scheme, appropriate sampling weights were applied to the descriptive statistics and regression analyses.

Results

Descriptive statistics stratified by the three groups are presented in Table 1. Among the 6508 non-Hispanic Black adults, the mean age was 44.6 years, 44.3% were male, and 17.6% had ≥college degree. Most of the study participants (65.8%) reported a marital status other than "married". Participants had three household members on average and a poverty-to-income ratio of 2.3. Significant demographic differences were observed across the three groups for every measure of interest except the PIRs. A higher percentage of FB Black adults (≥10 years) had ≥college degree compared to FB Black adults (<10 years) and US-born Black adults, and a higher percentage of FB Black adults (<10 years) reported being married compared to the other two groups. Descriptive information on nutrient intake is displayed in Table 2. Less than 3% of participants in all three groups met the intake recommendations for dietary fiber, and less than 4% met recommendations for total sugar intake. Less than 16% of participants in all three groups met the intake recommendation for sodium. While ≥60% of foreign-born adults met intake recommendations for saturated fat, only 43% of US-born adults met the saturated fat recommendation. Results from the logistic regression models examining associations between foreign-born status, length of time in the US, and odds of meeting recommendations for nutrient intake are displayed in the accompanying regression tables.

Discussion

This study aimed to determine if acculturation, place of birth, and length of time in the US are associated with nutrient intake and diet quality among non-Hispanic Black adults who participated in NHANES. It was hypothesized that FB Black adults who migrated to America fewer than 10 years ago would have better diet quality than US-born Black adults and FB Black adults who migrated 10 years ago or more. Overall, the findings from this study supported the hypothesis that FB Black adults (<10 years) had better diet quality than US-born Black adults. However, FB Black adults who migrated more than 10 years ago also had better diet quality than US-born Black adults. This finding does not align with the hypothesis that FB Black adults who have been in the US for more than 10 years would have diets similar to US-born Black adults. It appears that FB Black adults, regardless of their length of time in the US, had better diets than US-born Black adults. When comparing the FB groups, the odds of meeting nutrient recommendations were similar between the groups; however, the estimates from the linear regression model revealed that FB Black adults (<10 years) had slightly higher diet quality scores than FB Black adults (≥10 years). Unlike studies that focused on Latino and Asian immigrants [11][12][13][25][26][27], FB Black adults had better diet quality than US-born ones, regardless of the year they migrated to America.
This finding aligns with results from a study by Brown et al., which reported that being foreign-born is associated with significantly higher diet quality scores (as measured by the Alternative HEI-2010 and DASH scores) and greater intake of healthier foods (e.g., fruits, vegetables) among Black adults in the US [14]. Brown et al. also concluded that diet quality did vary significantly by length of time in the US among FB Black adults. Thus, it is possible that length of time in the US is not associated with dietary intake among foreignborn adults who self-identify as non-Hispanic Black in the same manner as immigrants from other regions of the world. Evidence from qualitative studies provides more in-depth information on the relationship between the measures that represent acculturation and dietary intake among immigrants of African descent. Paxton et al. found that West-African immigrants living in New York, NY, reported strong efforts to maintain their traditional diets over time, which typically comprised fruits, vegetables, and grains [28]. However, they found it difficult to maintain this diet in their new environment. The participants did see evidence of dietary acculturation among their children [28], which aligns with findings from a study by Jakub et al. [29]. Jakub et al. discovered that the children of African immigrants had diets closer in profile to American youth and were more influenced by their peers and environment [29]. It is possible that Black adults who migrate to the US try hard to maintain their traditional diets over time, and the effects of dietary acculturation are more evident in their children. Given the limited number of quantitative studies on this topic, additional research is needed to confirm these findings and connect behavioral factors (e.g., cooking practices, food-purchasing habits, food preferences, etc.) to dietary outcomes among FB Black adults and their children. As previously mentioned, prior studies have linked measures that represent acculturation, such as length of time in the US, to poorer dietary quality among immigrant populations [11][12][13][25][26][27]. A study by Thomson et al. reported that acculturation was associated with poorer dietary quality and higher body mass indexes among Mexican immigrants in the US [25]. It is likely that acculturation influences diet and health differently across immigrant populations in the US. Greater emphasis and study should be devoted to assessing these differences and their connection to racial/ethnic disparities in dietary behavior and chronic disease risk. Overall, it is important to note that all three groups had large proportions of individuals who were not meeting national nutrient recommendations. For example, a small percentage of participants in all three groups met recommendations for intake of dietary fiber, total sugars, and sodium. These findings align with evidence from population-based studies of nutrient intake and adherence to dietary recommendations that focused on non-Hispanic Black adults [5,6,23,30]. A study by Thompson et al., which examined differences in nutrient intake between non-Hispanic White and Black men living in the U.S., found that less than 5% of men met the recommendations for dietary fiber and total sugar intake [30]. Furthermore, a recent "What We Eat in America" assessment of usual intake among non-Hispanic Black adults reported that most Black adults in the US surpass national recommendations for sodium intake [31]. 
Meeting national recommendations for nutrient intake is important, as scientific evidence indicates strong associations between saturated fat, dietary cholesterol, sodium, and CVD [32]. Since Black Americans experience high prevalence rates of many CVD risk factors (e.g., obesity, metabolic syndrome, type 2 diabetes, and high blood pressure), it is important that the field identifies factors that influence dietary intake, such as acculturation [7,[33][34][35]. Strengths and Limitations This study has strengths and limitations. Use of the nationwide NHANES dataset was a strength because it included a large, diverse sample of non-Hispanic Black adults. In addition, use of HEI-2015 was a strength because it directly measured how an individual's diet aligned with the DGAs. Key limitations included the low sample size for FB Black adults (<10 years), which might have affected ability to observe statistically significant findings for some dietary measures. This study employed a cross-sectional design, so causal associations could not be studied. Because a significant number of study participants had missing data for the second 24 h recall interview, only data from the first recall interview were analyzed. Thus, HEI-2015 scores reflecting usual intake were not calculated. All findings on nutrient intake and diet quality solely reflect the consumption reported by participants on the first day of the dietary interview. As previously mentioned, the inability to examine Black-identifying Hispanic adults was a limitation. Individuals included in the analytical sample solely reflect non-Hispanic Black adults living in the US. Future studies should include Black Hispanic adults, which likely includes individuals from Latin America and the Caribbean with diverse cultural backgrounds. The primary independent variables (i.e., foreign-born status and length of time in the US) were a major limitation of this study for two key reasons. First, these variables are only proxy measures of acculturation. Although used in prior research, they do not capture the full extent and experience of acculturation among immigrant populations [14]. Future studies should use a validated acculturation scale tailored to the target population of interest. Second, these variables only permit a simplistic comparison of foreign-born to USborn Black-identifying adults, which does not capture the generational effects associated with immigration. Studies have found intergenerational differences in dietary change among West-African immigrants, with first-generation West-African adults exhibiting more dietary acculturation compared to their immigrant parents [36]. NHANES data do not provide data to determine their generational status. In addition, the data source does not have information on ancestry, cultural beliefs, family dynamics (e.g., gender roles, cooking behaviors), or relevant environmental factors (e.g., urban/rural status, access to healthy food retailers, etc.). Having this information would have facilitated a more in-depth analysis of dietary differences between foreign-born and US-born Black adults that accounted for the complexity of these associations and the historical diversity of these groups. Future studies should consider these limitations and conduct qualitative and quantitative research that addresses these gaps in knowledge. 
Conclusions

In summary, FB Black adults had higher odds of meeting several nutrient recommendations and had better diet quality compared to US-born Black adults, regardless of their length of time in the US. Understanding the similarities and differences among these groups is valuable for developing tailored dietary and lifestyle interventions and decreasing the risk of diet-related chronic diseases among non-Hispanic Black adults in the US. The lived experience of Black-identifying adults who migrate to the US should be studied in relation to dietary intake. Although length of time in the US appears not to be a salient factor, other factors may be relevant to the dietary behaviors of foreign-born Black adults, such as stress, underemployment, racial discrimination, and economic expectations from family and community back home. Overall, this study contributes to the bodies of knowledge about the diets of immigrant populations and differences in the diets of adult immigrants who self-identify as Black in the US. This study provides valuable knowledge to the field on diet quality and nutrient intake among non-Hispanic Black immigrants. The results may be useful to nutrition educators and practitioners working to improve the health of this minority population. Additional studies are needed to explore the importance of factors contributing to changes in diet due to acculturation and their overall impact on the health and health behaviors of immigrants who self-identify as non-Hispanic Black in the US. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2021-04-13T13:31:14.496Z
2021-04-13T00:00:00.000
233215878
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2021.661628/pdf", "pdf_hash": "3fa949c5264370e82a3ab0c88304f1efa799a458", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41135", "s2fieldsofstudy": [ "Business", "Psychology" ], "sha1": "3fa949c5264370e82a3ab0c88304f1efa799a458", "year": 2021 }
pes2o/s2orc
Adaptive Managers as Emerging Leaders During the COVID-19 Crisis The coronavirus disease 2019 (COVID-19) has taken the world by surprise and has impacted the lives of many, including the business sector and its stakeholders. Although studies investigating the impact of COVID-19 on the organizational structure, job design, and employee well-being have been on the rise, fewer studies examined the role of leadership and what it takes to be an effective leader during such times. This study integrates social cognitive theory and conservation of resources theory to argue for the importance of adaptive personality in the emergence of effective leaders during crisis times, utilizing the crisis of COVID-19 as the context for the study. We argue that managers with an adaptive personality tend to have increased self-efficacy levels to lead during a crisis, resulting in increased motivation to lead during the COVID-19 crisis. Furthermore, managers with increased motivation to lead during the COVID-19 crisis are argued to have enhanced adaptive performance, thereby suggesting a serial mediation model where crisis leader self-efficacy and motivation to lead during the COVID-19 crisis act as explanatory mechanisms of the relationship between the adaptive personality and performance of the manager. In order to test our hypotheses, we collected data from 116 full-time managers in Saudi Arabia during the COVID-19 crisis and used hierarchical linear regression as the method of analysis. The findings support all of the hypotheses. A discussion of the results, contributions, limitations, and future directions is included. INTRODUCTION Since the initial widespread of coronavirus disease 2019 beginning in early 2020, the world has been experiencing unprecedented times of disruptions and disorder ranging from economic losses, unemployment, and organizational and job-design overhauls all the way to health issues and increased mortality rates (Chong et al., 2020;Gallup, 2020;Guyot and Sawhill, 2020). According to the World Health Organization (2020), as of December 28, 2020, there has been a total number of 79,673,754 confirmed cases of COVID-19, including 1,761,381 deaths. Numerous studies have attempted to examine the impact of such a pandemic on the workplace and the employees (e.g., Caldas et al., 2020;Trougakos et al., 2020). For instance, a recent study by Fu et al. (2021) found that the anxiety levels of the employees are affected by the reported number of COVID-19 cases and the acceleration and velocity at which the reported number is changing, thus affecting the employees' work functioning (engagement, performance, and emotional exhaustion). However, fewer studies have looked at what constitutes effective leadership in the workplace during such a crisis and its potential antecedents (e.g., Hu et al., 2020;Yuan et al., 2021). Leaders play an important role in the workplace due to their capacity to influence the environment by providing employees with the necessary resources to overcome their job demands or mitigating potential resource loss (Bakker and Demerouti, 2017). For instance, a study by Fernet et al. (2015) found that transformational leadership is related to fewer follower job demands (e.g., emotional, physical, and cognitive demands) and increased job resources (e.g., quality of relationships, participation in decision-making, and job recognition), which indirectly lead to the followers having positive work attitudes and increased job performance. 
As a result, having an effective leader is especially crucial in times of massive resource loss and increased demands, such as the case with the COVID-19 crisis. Due to the unexpected and disorderly nature of crises, having flexibility and readiness to change as a manager is of utmost importance as such circumstances are characterized by constrained rationality, ambiguity, time pressure, and life and death stakes (Parry, 1990;Mumford et al., 2007;Sommer et al., 2016). In other words, managers who demonstrate adaptive performance (i.e., effective handling of emergencies and work stress, creative problem solving, constant learning, and interpersonal adaptability; Pulakos et al., 2000) are necessary to provide the most suitable resources and adjust the department/team's structure, job design, and targets in order to coincide with the COVID-19 crisis. According to the social cognitive theory (SCT; Bandura, 1986), the personal factors of the individual (i.e., cognitive, affective, and biological factors) affect their behavioral patterns. Therefore, a manager who demonstrates adaptive performance is more likely to have an adaptive personality. Although the concept of adaptivity has been thoroughly discussed in the literature (Judge et al., 1999;Kilcullen, 2004;Ployhart and Bliese, 2006;Hirschi et al., 2015;Rudolph et al., 2017), there is little research that has conceptualized it as a personality trait rather than a skill, motivation, or capacity. A recent study by Fuller et al. (2018) operationalized the concept of adaptive personality and defined it as "a predisposed willingness to change oneself in response to the needs and demands of a change in the environment. Individuals with adaptive personalities focus upon maintaining a good fit with their environment, so they are mindful of changes that occur and are ready to modify thought and behavior patterns to accommodate the new situation" (p. 12). Those with adaptive personalities tend to be calm during stressful situations and possess the personal resources needed to confidently embrace change and make the best out of it (Fuller et al., 2018). Scholars call for research that empirically validates constructs of adaptivity as a personal trait (Baard et al., 2014). In addition to emphasizing the role of personal factors in influencing the individual's behaviors, SCT also sheds light on the critical role of self-efficacy. Self-efficacy refers to the belief the individual holds regarding their capability to achieve the desired results (Bandura, 1999). It is based on the level of self-efficacy that individuals choose which challenges to undertake and how much energy to invest in overcoming them (Locke and Latham, 1990;Bandura, 1991). One such form of self-efficacy is the leaders' efficacy to lead during a crisis (i.e., crisis leader self-efficacy; Hadley et al., 2011). According to the Conservation of Resources (COR) theory, people are motivated to obtain, retain, and protect their resources (Hobfoll, 1989). As a result, individuals are more likely to be motivated to take on opportunities for resource gain or protection from resource loss when they perceive they can do so (resource investment principle; Hobfoll et al., 2018). In the context of crisis management, Hadley et al. (2011) call for research by proposing a theoretical framework in which crisis leader self-efficacy and motivation to lead during a crisis serve as two explanatory mechanisms of the relationship between the leader's characteristics and performance during a crisis. 
This study attempts to answer the mentioned calls for research and gaps by integrating SCT and COR theory in the context of the COVID-19 crisis, thereby offering multiple contributions to the crisis management literature (James et al., 2011). First, this study examines three antecedents of effective leadership during the COVID-19 crisis (i.e., leader adaptive performance). Second, it extends previous literature arguing that personality plays a role in predicting adaptive performance by empirically testing a newly developed measure of adaptive personality utilizing a sample of full-time managers in Saudi Arabia during the COVID-19 crisis (Huang et al., 2014;Park and Park, 2019). Third, the study examines crisis leader self-efficacy and motivation to lead during the COVID-19 crisis as two explanatory mechanisms through which the manager's adaptive personality affects his/her adaptive performance during the pandemic (see Figure 1). THEORETICAL BACKGROUND Introduced by Bandura (1986), SCT is a learning theory that states that individuals acquire new behaviors through observational learning and that the individuals' personal factors, the behavior itself, and the environment affect and are affected by each other, a concept known as triadic reciprocal causation (Bandura, 1999). Unlike other social learning theories, SCT emphasizes the role of personal agency such that people are producers as well as products of their environment (Bandura, 1999). Simply put, people are self-reactors who are able to motivate, regulate, and guide their behaviors instead of solely being controlled/shaped by the imposed environment. Perceived self-efficacy is considered as one of the core self-regulatory mechanisms through which someone is motivated to engage in a certain behavior or not (Bandura, 1999)-having the belief that an individual is able to produce the desired results influences their decision-making, perception of threats and challenges, and vulnerability to the imposed environment (Bandura, 1999). More specifically, those with high self-efficacy tend to be more motivated to engage in behaviors that enhance their well-being, provide them with more resources, and/or protect their current ones (Hobfoll et al., 2018). The COR theory, a motivational theory introduced by Hobfoll (1989), defines resources as "those objects, personal characteristics, conditions, or energies that are valued by the individual or that serve as a means for the attainment of these objects, personal characteristics, conditions, or energies" (p. 516). Although resources are usually thought of in terms of money, time, or objects, Hobfoll (1989) emphasizes the importance of personal characteristics, such as personality traits and skills, as invaluable resources in dealing with stressors. According to the COR theory, stressful situations are characterized by (1) perceived threat toward one's current resources, (2) loss of one's current resources, and/or (3) failure to gain additional resources following significant effort or investment (Hobfoll, 1989;Hobfoll et al., 2018). When individuals are faced with such stressful situations, they tend to act in one of two ways depending on their current pool of resources. If the individual has the necessary resources to deal with the stressful situation, they tend to utilize their current pool of resources to offset the resource loss. 
Moreover, if the stressful situation imposes circumstances of huge resource loss, such as in the case of a crisis, individuals are more likely to utilize their resources to also gain additional resources in the process as resource gains become more salient/important in such contexts (gain paradox principle; Hobfoll, 1989;Hobfoll et al., 2018). On the other hand, if the individual lacks the necessary resources to deal with the stressful situation, they tend to be more vulnerable to it and enter a defensive, aggressive, and potentially irrational state by engaging in behaviors of withdrawal or self-protection as a last resort (desperation principle; Hobfoll, 1989;Hobfoll et al., 2018). The drawn insights from SCT and COR theory can be illustrated in the type of behaviors managers engage in when dealing with stressful situations in the workplace, such as those of a crisis. In their theoretical framework of leader development and performance, Chan and Drasgow (2001) discussed the concepts of self-efficacy and personal resources as two important characteristics in influencing a leader's motivation to lead in a certain context and their performance as a result. Hadley et al. (2011) built upon the work of Chan and Drasgow (2001) and apply it to the context of crisis; more specifically, they introduce and develop a measure for the concept of crisis leader selfefficacy, which refers to the efficacy beliefs the leader holds about themselves regarding information assessment and decisionmaking in public health and safety crisis. The information assessment aspect of crisis leader self-efficacy involves the leader's beliefs regarding their capability to determine the flow of information during a crisis, collect and identify data needed for crisis resolution, and prevent/reduce errors and biases (Hadley et al., 2011, p. 634). On the other hand, crisis decision-making involves the leader's beliefs regarding their capability to generate response options and utilize the gathered data to evaluate, recommend, and choose the best course of action during a crisis (Hadley et al., 2011, p. 634). The authors argue that leaders with high self-efficacy to lead in a crisis are more likely to be motivated to lead and perform better during a crisis. Furthermore, they argue that crisis leader self-efficacy can be predicted by the leader's characteristics, such as individual differences, general leadership background, crisis training, and procedural preparedness (Hadley et al., 2011). Adaptive Personality and Crisis Leader Self-Efficacy Although the positive impact of proactive personality regarding self-initiated constructive change has been thoroughly discussed in the literature (Fuller and Marler, 2009;Spitzmuller et al., 2015), adaptivity is considered as a crucial, initial step when faced with situations requiring organizational change (Strauss et al., 2015), such as in the case of the COVID-19 crisis. Strauss et al. (2015) argue that adaptivity is crucial for subsequent proactivity as it creates critical resources during instances of organizational change by acquiring knowledge of and adjusting to changes in stakeholders' goals and strategy, enhancing one's self-efficacy to cope with such change, and maintaining positive relationships. Fuller et al. (2018) distinguish adaptive personality from proactive personality and their polar opposites by proposing the Change-Control Circumplex Model, which is based on two axes: control orientation and change orientation. 
Whereas control orientation refers to the tendency to the preference to feel in control of one's changing environment (Rothbaum et al., 1982), change orientation refers to the tendency to approach or avoid change (Fuller et al., 2018). Although both proactive and adaptive personalities are characterized by approaching change, proactive personality emphasizes primary control (change the environment to fit one's needs) while adaptive personality emphasizes secondary control (accommodate to environmental conditions; Fuller et al., 2018). Adaptive individuals tend to be present-oriented, flexible, quick learners, optimistic regarding change, and willing to accommodate change and try better ways of doing things (Fuller et al., 2018). Drawing insight from SCT, we argue that the flexible and accommodating nature of adaptive personality regarding embracing change is more likely to welcome numerous experiences of adaptation as imposed by the environment during the individual's lifetime (Bandura, 1986). Namely, instead of resisting change, such as in the case of passive or changeresistant individuals, adaptive individuals are more likely to make the necessary adjustments to fit into their environment when needed (Fuller et al., 2018). Thus, such a constant tendency to engage in behaviors of adaptation is argued to result in more learning experiences in dealing with various forms of change. Given the urgent, ambiguous, and dynamic nature of crises (Pearson and Clair, 1998;Boin et al., 2005;Mumford et al., 2007), being able to quickly adapt to the imposed situation is a critical resource for any leader (Hadley et al., 2011). Therefore, managers with an adaptive personality are more likely to have the confidence to lead during a crisis as they tend to have the necessary experience to back it up. In other words, we argue that adaptive managers are more likely to believe in their capacity to accurately assess the available information at the time of the crisis and make/recommend the necessary adjustments. Thus, we hypothesize the following: Hypothesis 1. Adaptive personality will be positively related to crisis leader self-efficacy. Crisis Leader Self-Efficacy and Leader Motivation During the COVID-19 Crisis The COVID-19 crisis has established a "new normal" for almost everyone due to its unprecedented far-reaching impact and the needed joint and collective effort by nations, governments, communities, and industries to overcome it (Maragakis, 2020;Mull, 2020;Solomon, 2020). The rapid spread of COVID-19 has only emphasized the need for evolving, adaptive countermeasures to keep up with such volatility, resulting in an environment of ambiguity, complexity, and dynamism that has affected numerous sectors (Chong et al., 2020;Djeebet, 2020;Evans, 2020;Lim-Lange, 2020). The question then becomes "what would increase managers' motivation to lead during the COVID-19 crisis?" Chan and Drasgow (2001) introduced the construct of motivation to lead and defined it as a "construct that affects a leader's or leader-to-be's decisions to assume leadership training, roles, and responsibilities and that affect his or her intensity of effort at leading and persistence as a leader" (p. 482). One core mechanism discussed by the authors regarding enhancing one's motivation to lead is one's beliefs of self-efficacy (Mitchell and Beach, 1976;Mitchell, 1980;Bandura, 1986;Chan and Drasgow, 2001). 
Drawing insight from the COR theory, personal resources such as crisis leader self-efficacy are more likely to play a vital role in how managers respond to crises such as COVID-19 (Hobfoll, 1989;Hadley et al., 2011). More specifically, we argue that managers tend to be faced with two options in terms of reacting to the COVID-19 crisis: (1) withdraw from the leadership role and/or responsibilities in a last attempt to save their current resources or (2) utilize their current resources to offset the resource loss associated with COVID-19 and probably compensate for the loss. The COR theory suggests that one's decision to withdraw or tackle a stressful situation will depend on one current pool of resources and its relevance to the situation (Hobfoll et al., 2018). A previous study empirically demonstrated that self-efficacy is positively related to motivation (Çetin and Aşkun, 2018). Therefore, we argue that a manager who is confident in their capability to lead during times of crisis is more likely to be motivated to lead during the COVID-19 crisis instead of vulnerably suffering the losses as they see themselves having the necessary resources to turn the tide in their favor. Thus, we hypothesize the following: Hypothesis 2. Crisis leader self-efficacy will be positively related to motivation to lead during the COVID-19 crisis. Furthermore, integrating the SCT and COR theory, we argue for adaptive managers' potential to be motivated to lead during the COVID-19 crisis due to their beliefs of self-efficacy to lead during a crisis. More specifically, due to the tendency of adaptive managers to welcome change and modify their ways when needed (Fuller et al., 2018), they are more likely to have accumulated a wealth of knowledge and experience in adapting to situations of ambiguity, complexity, and dynamism, resulting in them experiencing high levels of self-efficacy to lead in such situations, including crises. As a result, such high levels of crisis leader self-efficacy from years of experience are argued to enhance the manager's motivation to lead during an actual crisis, such as that of COVID-19. Thus, we hypothesize the following: Hypothesis 3. Crisis leader self-efficacy will mediate the relationship between adaptive personality and motivation to lead during the COVID-19 crisis. Adaptive Performance as a Form of Effective Leadership During the COVID-19 Crisis The COVID-19 crisis has brought about numerous changes to the structure of many organizations as well as their job designs (Foss, 2020a,b;Seetharaman, 2020). Given the sudden, imposed nature of the COVID-19 crisis, one effective form of action that managers are apt to take in response to the accompanied change in job requirements is to demonstrate adaptive performance (Allworth and Hesketh, 1999;Griffin et al., 2007;Jundt et al., 2015). Adaptive performance has been defined as "task-performance-directed behaviors individuals enact in response to or anticipation of changes relevant to job-related tasks" (Jundt et al., 2015, pp. 54-55). In the workplace, adaptive performance tends to be exhibited when individuals need to adjust their knowledge, skills, and abilities to "adopt new roles, acquire new skills, or... modify existing work behaviors" (Chan, 2000, p. 2) such that they are able to maintain their level of performance or reduce any performance loss during instances of change. 
Furthermore, adaptive performance can be both anticipatory and/or reactive such that it demonstrates not only behaviors of learning and preparation for anticipated changes but also react to ones that have already occurred (Jundt et al., 2015). Adaptive performance also includes cognitive and/or skillbased adaptations as well as interpersonal and structural ones as long as the individual and the organization can minimize the losses associated with change and reap its benefits when possible (Jundt et al., 2015). Managers who exhibit adaptive performance tend to (1) handle emergencies and stress by remaining calm during times of difficulty and ambiguity while quickly analyzing options for dealing with such times, (2) engage in creative problem-solving by employing and generating new, unique ideas, (3) always be on the lookout for information that will enhance their learning and improve their work methods, and (4) demonstrate interpersonal flexibility by welcoming other people's views and cooperating with them (Pulakos et al., 2000;Charbonnier-Voirin et al., 2010). These types of behaviors are more likely to minimize the resource loss associated with the COVID-19 crisis, rendering these behaviors an effective form of leadership during such a time. Therefore, managers who are motivated to lead during the COVID-19 crisis are more likely to dedicate their effort and time in a way that enhances their well-being, the well-being of their team, and the success of the organization as a whole; in other words, they are more likely to engage in adaptive performance as a behavioral manifestation of such motivation. Pulakos et al. (2002) investigated the taxonomy of adaptive performance using supervisor ratings of their employees' performance and found that self-efficacy and motivation are significant predictors of adaptive performance. Thus, in the context of COVID-19, we hypothesize the following: Hypothesis 4. Motivation to lead the during the COVID-19 crisis will be positively related to adaptive performance during the COVID-19 crisis. Building on the previous arguments and integrating the SCT and COR theory (Bandura, 1986;Hobfoll, 1989), managers are more likely to engage in adaptive performance if they believe in their capability to make a change in response to a situation; otherwise, it will seem like an unworthy investment of energy and time, which is also seen as a source of loss (Halbesleben and Buckley, 2004;Hobfoll et al., 2018). This also relates to Lazarus and Folkman (1984) concept of secondary appraisal, which states that the type of coping strategy an individual implements depends on the individual's appraisal of whether they have the necessary resources and ability to cope with the situation. Therefore, managers that have crisis leader self-efficacy are more likely to be motivated to lead during the COVID-19 and demonstrate such capability to reduce the losses associated with such a crisis by engaging in adaptive performance. Furthermore, such managers are more likely to have such high beliefs of self-efficacy and motivation to lead during the COVID-19 crisis because of their past experience in dealing with similar ambiguous, dynamic, and/or challenging situations, which is the case for managers with adaptive personality (Pulakos et al., 2002). Thus, we hypothesize the following: Hypothesis 5. Motivation to lead during the COVID-19 crisis will mediate the relationship between crisis leader self-efficacy and adaptive performance during the COVID-19 crisis. Hypothesis 6. 
Crisis leader self-efficacy and motivation to lead during the COVID-19 crisis will sequentially mediate the relationship between adaptive personality and adaptive performance during the COVID-19 crisis.

Participants

Online surveys were randomly distributed among full-time managers in public, private, and charitable sectors in Saudi Arabia through multiple channels (e.g., social media outlets, training courses, and executive MBA courses) with instructions emphasizing the targeted population. Furthermore, the data were collected during the summer of 2020 (May-August) to reflect the targeted context of the COVID-19 crisis. We asked every participant to state whether they currently work in a full-time managerial position or not at the time of taking the survey, thereby filtering out those who did not. An initial sample size of around 196 was collected. Utilizing the listwise-deletion method of missing data and deleting responses that failed the attention checks (e.g., "we appreciate your attention, please choose "strongly disagree" for this item"), the final sample size was 116. This method was used because the authors expected the data to be missing completely at random and to have sufficient statistical power (Newman, 2014). The sample size adheres to the recommended ratio of 15 observations per independent variable and meets the preferred sample size of 90 observations for the analysis in this study, as suggested by Hair et al. (2018). Furthermore, to achieve a power of 0.80 (i.e., 1 − β), limiting the probability of a type 2 error to 0.20 (i.e., β), with an anticipated medium effect size of 0.15 at an α of 0.05, we collected a sample larger than the minimum recommended sample size of 97 based on Cohen (1992). Thirty-two percent of the respondents were female, and their ages ranged from 25 to 74 years with an average of 43 years. Respondents' work experience ranged from 2 to 45 years with an average of 18 years. The sample characteristics and basic descriptive analysis are provided in Table 1.

Measurement

In addition to utilizing the English versions of the measures, we created Arabic versions of all the measures following Brislin's (1980) back-translation procedures. All items were measured on a five-point Likert-type scale where 1 = strongly disagree and 5 = strongly agree. Adaptive personality was measured using the 14-item scale developed by Fuller et al. (2018). We asked the participants to indicate to what extent they agree or disagree with a set of statements regarding their trait characteristics in general. An example item includes "I am flexible when it comes to making changes." Crisis leader self-efficacy was measured using the eight-item scale developed by Hadley et al. (2011). We asked the participants to indicate to what extent they agree or disagree with a set of statements regarding their self-efficacy to lead during a crisis. Item number 3 of the original scale was removed due to its low factor loading. An example item includes "I can anticipate the political and interpersonal ramifications of my decisions and actions." Motivation to lead during the COVID-19 crisis was measured using an adapted, eight-item version of Chan and Drasgow's (2001) general measure of motivation to lead, similar to that of Hadley et al. (2011), in order to reflect the context of the COVID-19 crisis.
In doing so, we reduced the total number of items from 27 in the original scale to the eight items that are most relevant to the COVID-19 crisis. We asked the participants to indicate to what extent they agree or disagree with a set of statements regarding their motivation to lead during the COVID-19 crisis. An example item includes "I am the type of person who likes to be in charge of others." Adaptive performance was measured using the 19-item scale developed by Charbonnier-Voirin et al. (2010) based on Pulakos et al. (2000) conceptualization of adaptive performance. We asked the participants to indicate to what extent they agree or disagree with a set of statements regarding their performance during the COVID-19 crisis. An example item includes "I quickly take effective action to solve the problem." Gender, age, and organizational tenure were used as control variables as these variables have been found to be associated with an individual's adaptive performance (Pulakos et al., 2000). Analysis Hierarchical multiple regression analysis was used to assess the direct effect among adaptive personality, crisis leader selfefficacy, motivation to lead during the COVID-19 crisis, and adaptive performance during the COVID-19 crisis. To assess the mediation effect, a test was conducted via the PROCESS macro (v3.5) using SPSS 27 software with the bootstrap sampling method (sample size = 5,000), as recommended by Hayes (2013). The bootstrap sampling method was used to generate asymmetric confidence intervals (CIs) for the mediating effect. RESULTS Harman's single factor test (Harman, 1967) was conducted to check for the existence of Common Method Bias (CMB). For this test, a substantial amount of CMB is present if a single factor emerges from the factor analysis, or one general factor accounts for most of the covariance among the variables (Podsakoff et al., 2012). Principal component analysis with varimax rotation on the questionnaire items revealed the existence of 14 distinctive factors with eigenvalues >1.0. These factors accounted for 70.88% of the total variance. Moreover, the first (and largest) factor accounted for 31.19% of the total variance, which is significantly <50% (i.e., the minimum threshold for influential CMB as per Harman's single factor test; Podsakoff et al., 2012). Since more than one factor emerged and no general factor accounted for the majority of the total variance, concerns of CMB were minimized and CMB was less likely to have significantly confounded the results of this study (Podsakoff et al., 2003). Also, the correlations among the study variables were examined to detect if they showed any sign of inflation (Spector, 2006). The correlations among the observable variables were within the acceptable range except for adaptive personality and performance, which is justifiable because they are closely related constructs, yet distinctive. This empirical evidence together with the consistency of the findings with the theoretical argument and previous research should alleviate any concerns related to CMB. Table 2 provides the means, standard deviations, correlation coefficients, and reliabilities of the study variables. All the internal consistency reliabilities of the study variables were acceptable for research purposes (above 0.70; Hair et al., 2018). 
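As a rough, hedged illustration of the common-method-bias check described above, the Python sketch below runs an unrotated principal component analysis on simulated item responses and reports the variance explained by the first (largest) factor; the actual analysis also applied a varimax rotation, which is omitted here, and the item data are invented placeholders rather than the study's survey responses.

```python
# Sketch of Harman's single factor test: if one factor dominates (>50% of the
# variance), common method bias is a concern. Item data here are simulated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_respondents, n_items = 116, 48        # approximate total item count across the four scales
items = rng.normal(size=(n_respondents, n_items))   # placeholder Likert-style responses

pca = PCA()
pca.fit(items)

eigen_gt_1 = int(np.sum(pca.explained_variance_ > 1.0))
first_factor_share = 100.0 * pca.explained_variance_ratio_[0]

print(f"Factors with eigenvalue > 1: {eigen_gt_1}")
print(f"Variance explained by the first factor: {first_factor_share:.1f}%")
print("Potential CMB concern" if first_factor_share > 50 else "No single dominant factor")
```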
Adaptive personality was found to be positively correlated with crisis leader self-efficacy (r = 0.58, p < 0.01), motivation to lead during the COVID-19 crisis (r = 0.62, p < 0.01), and adaptive performance during the COVID-19 crisis (r = 0.78, p < 0.01). Similarly, crisis leader self-efficacy was positively correlated with motivation to lead during the COVID-19 crisis (r = 0.61, p < 0.01) and adaptive performance during the COVID-19 crisis (r = 0.64, p < 0.01). Lastly, motivation to lead during the COVID-19 crisis was positively correlated with adaptive performance during the COVID-19 crisis (r = 0.65, p < 0.01). Table 3 summarizes the regression results for Hypotheses 1, 2, and 4. None of the models was susceptible to multicollinearity, as all had tolerance values well above 0.2 and Variance Inflation Factors (VIF) well below 5 (Bowerman and O'Connell, 1990). Hypothesis 1 was supported as adaptive personality positively predicted crisis leader self-efficacy in Model 2 (b = 0.62, p < 0.01). Hypothesis 2 was also supported as crisis leader self-efficacy positively predicted motivation to lead during the COVID-19 crisis in Model 4 (b = 0.75, p < 0.01). Lastly, Hypothesis 4 was supported as motivation to lead during the COVID-19 crisis positively predicted adaptive performance during the COVID-19 crisis in Model 7 (b = 0.47, p < 0.01). To test Hypotheses 3, 5, and 6, Hayes's (2013) PROCESS add-on was utilized. The results indicate that the indirect effect of adaptive personality on motivation to lead during the COVID-19 crisis through crisis leader self-efficacy was statistically significant (b = 0.29, SE = 0.09, 95% BCa CI [0.14, 0.49]), supporting Hypothesis 3. Furthermore, the results show that the indirect effect of crisis leader self-efficacy on adaptive performance through motivation to lead during the COVID-19 crisis was statistically significant (b = 0.23, SE = 0.06, 95% BCa CI [0.11, 0.35]), supporting Hypothesis 5. Lastly, the results show that the indirect effect of adaptive personality on adaptive performance through crisis leader self-efficacy and motivation to lead during the COVID-19 crisis was statistically significant (b = 0.22, SE = 0.06, 95% BCa CI [0.10, 0.36]), supporting the full serial mediation argued for in Hypothesis 6 (see Figure 2). DISCUSSION This study investigates the role personality plays in the emergence of effective leaders during the COVID-19 crisis. More specifically, it examines the effect of the newly developed construct of adaptive personality on full-time managers' adaptive performance in Saudi Arabia during the COVID-19 crisis. Furthermore, this study examines crisis leader self-efficacy and motivation to lead during the COVID-19 crisis as two sequential, explanatory mechanisms between adaptive personality and adaptive performance during the COVID-19 crisis, based on Hadley et al.'s (2011) theoretical framework. The findings indicate that managers with an adaptive personality are more likely to have increased levels of self-efficacy to lead during times of crisis, which supports previous research that has emphasized the importance of personality in the development of one's confidence to perform (Larson and Borgen, 2006;Fuller and Marler, 2009;Li et al., 2017).
The findings also indicate that crisis leader self-efficacy was significantly related to motivation to lead during the COVID-19 crisis, suggesting that managers who have high beliefs regarding their capability to lead in any crisis are more likely to be motivated to lead during the COVID-19 crisis. Furthermore, those managers were more likely to manifest such motivation by demonstrating adaptive performance, given its relevance at a time when adaptivity was much needed due to the sudden, imposed organizational changes (Jundt et al., 2015;Strauss et al., 2015;Park and Park, 2019). Theoretical Implications This study has multiple theoretical contributions. First, it contributes to the scholarly work on adaptivity, such as that of Hirschi et al. (2015) and Rudolph et al. (2017), by finding support for the reliability and predictive validity of adaptive personality, a construct newly developed by Fuller et al. (2018) and Baard et al. (2014). More specifically, the construct of adaptive personality in this study follows and provides empirical evidence for the conceptual framework discussed in Rudolph et al. (2017) based on the career construction model of adaptation (Savickas, 2005, 2013; Savickas et al., 2009; Savickas and Porfeli, 2012), such that adaptivity as a trait (adaptive personality) tends to result in adaptation results (adaptive performance) through adapting responses (e.g., self-efficacy). Second, drawing insight from SCT and COR theory, this study finds support for the role of individual differences in influencing the leader's behavior through self-efficacy and motivation to lead, as argued by the theoretical framework developed by Chan and Drasgow (2001). Taking the context of crisis into consideration, this study therefore provides empirical evidence for the theoretical framework adopted by Hadley et al. (2011), based on Chan and Drasgow's (2001) framework, such that individual characteristics tend to affect one's crisis leader self-efficacy, motivation to lead during a crisis, and, as a result, one's performance during the crisis. Third, this study contributes to the crisis management literature by investigating the role of individual differences in influencing one's coping outcomes during the COVID-19 crisis in Saudi Arabia, thereby expanding the findings to new contexts. For instance, a study by Zacher and Rudolph (2021) collected data from 979 individuals in Germany and found that individual differences in life satisfaction, positive affect, and negative affect result in different types and levels of coping strategies during the COVID-19 crisis (e.g., controllability appraisals, positive reframing, using emotional support, self-blame). Furthermore, utilizing a sample of 408 doctors and nurses in Wuhan City, China, another study by Yi-Feng Chen et al. (2021) found that proactive personality tends to influence one's performance, resilience, and thriving through strengths use during the COVID-19 crisis. Such findings from multiple countries emphasize the importance of individual differences and their persistent role in coping with the COVID-19 crisis. Practical Implications This study also offers multiple practical implications regarding crisis management, especially during the ongoing COVID-19 crisis. First, it recommends that organizations recruit and hire managers with an adaptive personality due to their increased adaptive performance during crises such as COVID-19, stemming from their greater motivation and confidence to lead during a crisis.
Second, although personality traits are relatively stable, they are not completely static (Robins et al., 2001;Damian et al., 2019); therefore, current managers should be assigned to training programs that enhance their adaptivity and adaptive behaviors so that they are better able to handle the COVID-19 crisis and other similar situations (Aguinis and Kraiger, 2009). Third, organizations should provide a culture of adaptivity for adaptive managers to thrive in and eliminate any factors that might hinder the manifestation of their motivation as adaptive performance (Schein, 2010). Limitations and Future Directions This study has limitations like any other. First, due to the nature of the studied constructs, the data collection was based on a self-report design in which the managers responded to statements regarding their own personality, beliefs, motivation, and performance, which might raise some concerns about CMB (Podsakoff et al., 2012). Although (1) a study by Fuller et al. (2016) indicates that CMB needs to be present at high levels before it becomes influential in single-source studies and (2) the results of this study regarding the Harman single factor test, correlational analysis, and VIFs mitigate any concerns relating to CMB (Harman, 1967;Kock, 2015), future research should further control for CMB when attempting to replicate the findings of the study using other techniques (e.g., the correlational marker technique and the CFA marker technique; Lindell and Whitney, 2001;Williams et al., 2010, respectively). Second, due to the self-report nature of the study, subjective measures of the leaders' adaptive performance were collected. Although this might raise some concerns regarding the validity of the outcome, Janssen (2001) noted that subjective measures of performance can be as effective as other-rated performance measures, since ratings of performance are prone to idiosyncratic interpretations and are likely to vary across different raters. Third, this study utilized a cross-sectional design, collecting all the data at a single point in time, which might raise concerns regarding the temporal precedence of the hypothesized relationships (Bowen and Wiersema, 1999). Thus, future research should utilize a longitudinal or time-wave design when replicating this study, collecting the data at multiple points in time (Ployhart and Vandenberg, 2010). The findings of this study also shed light on potential future research avenues. First, although the importance of adaptive performance during crises has been noted (Pulakos et al., 2000), future research would benefit from examining how the adaptive performance of managers, in the context of crises, relates to other, more distal forms of performance and leadership effectiveness measures (e.g., creative performance, contextual performance, role performance, department performance; Katz and Kahn, 1978;Campbell, 1990;Borman and Motowidlo, 1993;Harari et al., 2016). Another future research avenue involves investigating other explanatory mechanisms through which a manager's adaptive personality affects their adaptive performance. For instance, the concept of resilience can act as such an explanatory mechanism, as those who are resilient tend to have basic abilities to adapt to adverse events based on their individual, unit, family, and community resources (Masten et al., 2009;Britt et al., 2016).
In other words, adaptive managers are more likely to have increased levels of individual resources and, as a result, resilience, as they tend to have faced and adapted to challenging situations throughout their lives, rendering them more able to adapt to such situations in the future and, thus, to demonstrate this ability through adaptive performance. Lastly, future research can investigate the influence of the COVID-19 crisis as a moderator of the relationships investigated in this study, and how such a crisis might have a different, unique impact compared to other types or forms of crises (e.g., due to lockdown or isolation) on outcomes such as the utilization of online resources and/or teleworking (Belzunegui-Eraso and Erro-Garcés, 2020; Contreras et al., 2020). CONCLUSION In this paper, we investigate a newly developed measure of adaptive personality as a potential antecedent of what constitutes effective leadership during the unprecedented COVID-19 crisis. More specifically, we investigate crisis leader self-efficacy and motivation to lead during the COVID-19 crisis as two mediators that help explain the relationship between adaptive personality and adaptive performance during the crisis. The results suggest that adaptive managers are more likely to be confident in their ability to lead during a crisis and, thus, more motivated to lead during the actual COVID-19 crisis. As a result, they are more likely to invest time and energy in adapting to the situation at hand through behaviors such as effective handling of emergencies and work stress, creative problem solving, constant learning, and interpersonal adaptability, rendering such managers an invaluable asset to any department or organization. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Deanship of Scientific Research, King Abdulaziz University. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS ABaj: conceptualization, original draft writing, review and editing, and visualization. SBaj: methodology, formal analysis, investigation, review and editing, funding acquisition, and project administration. MA, ABas, and SBas: funding acquisition, resources, and conceptualization. All authors contributed to the article and approved the submitted version.
v3-fos-license
2016-05-04T20:20:58.661Z
2008-12-01T00:00:00.000
3222930
{ "extfieldsofstudy": [ "Psychology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scielo.br/pdf/clin/v63n6/12.pdf", "pdf_hash": "55c0c9c341ae4a4b3b395918a5a601083515e857", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41137", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "55c0c9c341ae4a4b3b395918a5a601083515e857", "year": 2008 }
pes2o/s2orc
Prevalence of Sexual Dysfunction and its Associated Factors in Women Aged 40–65 Years with 11 Years or More of Formal Education: A Population-Based Household Survey OBJECTIVE To evaluate the prevalence of sexual dysfunction and its associated factors in middle-aged women with 11 years or more of formal education. METHODS A cross-sectional, population-based study was carried out using an anonymous, self-response questionnaire. A total of 315 Brazilian-born women, 40–65 years of age with 11 years or more of schooling, participated in the study. The instrument used in the evaluation was based on the Short Personal Experiences Questionnaire. Sexual dysfunction was calculated from the mean score of sexual responsiveness (pleasure in sexual activities, excitation and orgasm), frequency of sexual activities and libido. Sociodemographic and clinical factors were evaluated. Poisson multiple regression analysis was carried out and the prevalence ratios with respective 95% confidence intervals (95%CI) were calculated. RESULTS The prevalence of sexual dysfunction was 35.9% among our study population. Multiple regression analysis showed that sexual dysfunction was positively associated with older age (prevalence ratios=1.04; 95%CI:1.01–1.07) and with the presence of hot flashes (prevalence ratios=1.37; 95%CI:1.04–1.80). Having a sexual partner (PR=0.47; 95%CI:0.34–0.65) and feeling well or excellent (prevalence ratios= 0.68; 95%CI: 0.52–0.88) were factors associated with lower sexual dysfunction scores. CONCLUSIONS Sexual dysfunction was present in more than one-third of women that were 40–65 years of age with 11 years or more of formal education. Within that age group, older age and hot flashes were associated with higher sexual dysfunction scores, whereas feeling well and having a sexual partner were associated with better sexuality. INTRODUCTION Female sexual function is complex and affected by physical, psychological and social factors. 1 The prevalence of female sexual dysfunction is high, ranging from 43% to 88%, 2,3 and it may significantly affect self-esteem and quality of life. Even sexual dysfunction of short duration can create frustration and anguish. When chronic, it may lead to anxiety and depression, harm relationships, and cause problems in other aspects of life. 1,4 Moreover, the clinical effects of sexual dysfunction can be augmented by the intensity of the full range of climacteric symptomatology. 5 Diseases and factors such as aging, arterial hypertension, smoking and pelvic surgery have been associated with female sexual dysfunction. 6 Personality, 7 lifestyle and culture-dependent variables should also be taken into consideration. 8 Although the epidemiology of male and female sexual function has been investigated in depth, the majority of studies continue to be confined to Europe, the United States (US) and Australasia. 9 Few studies have been carried out on the sexuality of climacteric women in Latin American populations, [10][11][12] and data on this subject in women with high school or university educations is particularly sparse. The aim of this study was to collect information on the prevalence of sexual dysfunction and its associated factors in Brazilian women of 40 to 65 years of age with eleven or more years of formal education. Sample size The target population was the female population of Belo Horizonte in the state of Minas Gerais, Brazil, aged 40-65 years, with at least 11 years of formal education, which consisted of 44,313 women in the year 2000. 
The necessary minimum sample size for a similar study was calculated to be 377 women, based on the assumption that 43% of the female population experiences sexual dysfunction, with an expected difference of 5% between the sample and the general population and a type I error (α) of 0.05. 13 In the present study, which was limited to women who answered all five questions used to calculate the sexual dysfunction score, the sample size was recalculated to evaluate any possible loss of precision. A sample of 315 women is expected to result in an absolute difference of 5.5%. Subjects This cross-sectional, population-based study was carried out using a self-response questionnaire that was anonymously completed by participants at home between May and September 2005 in the city of Belo Horizonte, Minas Gerais, Brazil. For the purposes of this study, the municipality of Belo Horizonte was stratified into nine regions. From these regions, 18 weighted areas (WA) were randomly selected. The weighted area consisted of a geographical area that was considered as the primary sampling unit (PSU). Each WA consisted of various census sectors. 14 In each selected WA, five census sectors were randomly chosen (secondary sampling unit). Next, five corners of these census sectors were randomly selected for visitation. The variables of the sampling plan, strata and PSU were included in the data analysis. The city of Belo Horizonte consists of 62 weighted areas (WA), containing a total of 2,563 census sectors. 15 All sectors were included in the randomization process. Research assistants initiated the selection of women at each of the five randomly selected corners in each sector, guided by maps of the location. They went to each household and verified whether there were any Brazilian-born women of 40-65 years of age living in the home and whether they had at least 11 years of formal education. If there were eligible women living at that address, they were invited to participate in the study. If they agreed to participate, a questionnaire was left to be answered and a date was scheduled for the completed questionnaire to be collected. If the eligible women were not at home, they were contacted by telephone and, if they agreed to participate, the questionnaire was delivered to their home. The principal investigator telephoned the participants and confirmed completion of the questionnaires. Completed questionnaires were collected by a messenger, placed in an unlabeled envelope, and put into a sealed post-box to be delivered to the principal investigator. When questionnaires were delivered to the principal investigator, the weighted area and the education level (11 years of schooling) were noted to homogenize the response. A total of 420 questionnaires were distributed. Forty-two women (10%) refused to participate in the study. The reasons given by the women for not participating were: lack of time, that they did not feel comfortable answering the questions, or that their husbands did not want them to participate in the study. Thus, 378 questionnaires were filled out and delivered to principal investigator; of these, 315 (83.3%) contained answers to all questions used to calculate the sexual dysfunction score. The delivered questionnaires that returned incomplete were not taken into consideration. Thus, 315 middle-aged women took part in the present study. The questionnaire used in the study consisted of two parts. 
In the first part, the participants answered questions regarding their sociodemographic characteristics (age, marital status, ethnic group, income, schooling and paid employment), clinical characteristics (previous surgical history, body mass index, depression, arterial hypertension, diabetes, urinary incontinence, history of cancer, hot flashes, nervousness, insomnia, hormone therapy and self-perception of state of health), reproductive characteristics (menopausal status, number of pregnancies) and behavioral characteristics (physical activity, smoking and presence of sexual partner). Next, they answered questions about their sexuality. The study protocol was conducted in accordance with the Helsinki Declaration and approved by the Internal Review Board of the institution. Evaluation of sexual dysfunction The instrument used to evaluate sexual dysfunction was based on the Short Personal Experiences Questionnaire (SPEQ). 16 The original version of this instrument was made available by investigators at the University of Melbourne, Australia. The questionnaire was independently translated from its original English into Brazilian Portuguese by two translators fluent in both languages. Next, the two versions were compared and a final version was obtained with the approval of both translators. This version was then tested in a group of 50 Brazilian-born women of 40-65 years of age with 11 years or more of schooling. To ensure cultural equivalence, all questions that generated doubt were once again adapted and tested until the questionnaire was completely comprehensible to all women in the pilot group. A final version of the questionnaire in Brazilian Portuguese was thus obtained. All of the questions referred to the month prior to the interview. Dependent variable The variable sexual dysfunction was calculated from the mean of the sum of the scores of 1) sexual responsiveness, which evaluated pleasure in sexual activities (graded from 1 to 6, where 1 reflected an absence of pleasure and 6 maximum pleasure), excitation (1-6) and orgasm (1-6); 2) frequency of sexual activities (1=never, 2=less than once a week; 3=once or twice a week, 4=several times a week, and 5=once a day or more); and 3) libido (1=it never took place, 2=it took place less than once a week, 3=it took place once or twice a week, 4=it took place several times a week, 5= it took place once a day or more). A score ≤ 7 was considered indicative of sexual dysfunction and a score > 7 was considered indicative of no sexual dysfunction. 16 Independent variables Age was dichotomized into < 50 years or ≥ 50 years of age. Menopausal status was classified as premenopausal, perimenopausal or postmenopausal. Menopausal status was defined as premenopausal when the women had regular menstrual cycles or a menstrual cycle similar to the pattern that had been predominant throughout their reproductive life. Women were considered to be in perimenopause if they had menstrual cycles during the previous 12 months, but with some alterations in their menstrual pattern. Women were considered postmenopausal if their last menstrual period was at least 12 months prior to the interview. 
17 In women who had undergone hysterectomy, menopausal status was classified as follows: women aged 40-44 years who had regular menstruation prior to hysterectomy were considered to be premenopausal; women aged 45-48 years who had irregular menstruation prior to hysterectomy were considered to be in perimenopause; and women more than 48 years of age, as well as those who had undergone hysterectomy with bilateral oophorectomy, were considered to be postmenopausal, based on a previous report that the mean age at menopause in Latin America is 48.6 years. 18 Marital status was dichotomized into married/living with a partner or other; ethnic group into white or non-white; schooling into 11 years or more than 11 years of formal education; family income into ≤ US$1300 or > US$1300 per month; physical activity into none/< 3 times a week or ≥ 3 times a week; and number of pregnancies into > 2 or ≤ 2. Body mass index (BMI) was dichotomized into < 25 or ≥ 25 kg/m². Paid employment was dichotomized into none/≤ 20 hours per week or > 20 hours per week. Smoking was dichotomized into never smoked or current/past smoker. The presence or absence (yes or no) of the following variables was also investigated: depression, arterial hypertension, diabetes, urinary incontinence, history of cancer, hot flashes, nervousness, insomnia and the presence of a sexual partner in the previous month. Hormone therapy was dichotomized into never used or current/past user. Self-perception of the state of health was classified as terrible/poor/average or good/excellent. Statistical analysis A bivariate analysis was performed to evaluate sexual dysfunction as a function of the independent variables. The chi-square test with Yates' correction or Fisher's exact test was applied. 19 Finally, Poisson multiple regression analysis was applied to the model to calculate the prevalence ratios (PR) and the respective 95% confidence intervals (95%CI). 20 The backward criterion strategy was used to select the variables. 21 For this analysis, the strata and the cluster/smallest geographical unit of the sampling plan were used. Stata software version 7.0 (Stata Corporation, College Station, Texas, USA) was used for the analysis. The criterion used for the inclusion of independent variables in the multiple regression analysis was a p-value of <0.25 in the bivariate analysis or in a simple logistic regression. P-values ≤ 0.05 were considered statistically significant. RESULTS In this sample, 46.7% of participants were 50 years of age or more; 65.9% were married or living with a partner; 70.8% reported having a sexual partner; 50.8% stated that they had more than 11 years of schooling; 59.5% had a family income ≤ US$1300; 51% had more than two children; and 33.5% reported practicing physical activity regularly, three or more times a week (data not presented as a table). The prevalence of sexual dysfunction was 35.9%. The prevalence of sexual dysfunction was 24.2% among premenopausal women, 37.3% among perimenopausal women and 45.3% among postmenopausal women. The 315 women who answered the five questions comprising the sexual dysfunction variable were compared to the 63 who failed to answer these questions. Women who were unmarried or not living with a partner (p<0.001) and those who reported hypertension (p=0.002) or urinary incontinence (p=0.036) were more likely to answer all questions contained in the section on sexual dysfunction.
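To make the scoring and modeling steps above concrete, the sketch below first builds the dysfunction indicator from the five SPEQ-based components and then estimates prevalence ratios from a Poisson model with robust standard errors. It is a minimal sketch under stated assumptions: summing the responsiveness, frequency and libido scores is one plausible reading of the composite described above (the exact SPEQ weighting should be checked against the original instrument), the column names and simulated data are hypothetical, and the complex survey design (strata and primary sampling units, handled in Stata in the study) is omitted.

```python
# Sketch: SPEQ-based dysfunction indicator and prevalence ratios (PR) from a
# Poisson model with robust (sandwich) standard errors. All names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def dysfunction_indicator(df: pd.DataFrame) -> pd.Series:
    """Composite of pleasure, arousal, orgasm, frequency and libido; <= 7 flags dysfunction
    (assumed aggregation: a plain sum of the five component scores)."""
    score = df[["pleasure", "arousal", "orgasm", "frequency", "libido"]].sum(axis=1)
    return (score <= 7).astype(int)

rng = np.random.default_rng(2)
n = 315
df = pd.DataFrame({
    "pleasure": rng.integers(1, 7, n), "arousal": rng.integers(1, 7, n),
    "orgasm": rng.integers(1, 7, n), "frequency": rng.integers(1, 6, n),
    "libido": rng.integers(1, 6, n),
    "age": rng.integers(40, 66, n),            # years
    "hot_flashes": rng.integers(0, 2, n),      # 0 = no, 1 = yes
    "has_partner": rng.integers(0, 2, n),      # 0 = no, 1 = yes
})
df["dysfunction"] = dysfunction_indicator(df)

X = sm.add_constant(df[["age", "hot_flashes", "has_partner"]])
fit = sm.GLM(df["dysfunction"], X, family=sm.families.Poisson()).fit(cov_type="HC0")
pr = np.exp(fit.params)        # exponentiated coefficients are prevalence ratios
ci = np.exp(fit.conf_int())    # 95% confidence intervals on the PR scale
ci.columns = ["2.5%", "97.5%"]
print(pd.concat([pr.rename("PR"), ci], axis=1))
```

Exponentiating the Poisson coefficients is what produces prevalence ratios of the kind reported in the abstract (e.g., for age, hot flashes and the presence of a sexual partner); the published estimates additionally account for the sampling design and the backward variable-selection strategy.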
DISCUSSION The objective of this study was to evaluate the prevalence of sexual dysfunction and to identify its associated factors in middle-aged, Brazilian-born women. The profile of the population of the present study should be emphasized in view of the scarcity of population-based studies on female sexuality, particularly among middle-aged women. In the present study, overall sexual dysfunction was evaluated in women with or without a sexual partner. In a study carried out in a US population, 22 Kinsey (1948) showed that the search for sexual pleasure involves the mind and entire body of individuals, not only their genitals. Given women's greater longevity, it is very probable that many will grow old alone; however this does not imply the end of their sexuality or loss of their need for intimacy, touch and affection. 23 The use of an internationally recognized questionnaire, the SPEQ, 16 which was adapted for use in Brazil, is another factor that deserves particular mention. As direct interviews on sexual activity during menopause may lead to constraints in responses, 24 it was important for the questionnaire to be self-response and anonymous. Various measures were taken to ensure that the women selected would feel at ease to respond honestly to the questions. For instance, research assistants were selected who had a similar profile to that of the women in the study sample in order to encourage empathy, and reassurance was given to participants that all answers would remain confidential. In the present study, the prevalence of sexual dysfunction was 35.9%, increasing from the pre-to the postmenopause. These data are in agreement with findings from other studies carried out in different populations. In a prospective observational study, Dennerstein et al 3,25 also found that from the beginning to the latter phase of the menopausal transition, the proportion of women with sexual dysfunction increased from 42% to 88%. In a study carried out in Chile, Castelo-Branco et al. 26 found that 51.3% of sexually active women presented sexual dysfunction. Abdo et al. 27 found at least one type of sexual dysfunction in 57.4% of women 41 years of age or older. Nevertheless, in our sample, multiple analysis showed no association between menopausal status and sexual dysfunction. One possible explanation for this finding may be that in women with a higher educational level, the percentage affected by sexual dysfunction is lower. 27 Sexual dysfunction was not significantly associated with age, which was dichotomized into < 50 years and > 50 years for the bivariate analysis; however, in the multiple regression analysis, when the variable was considered in a continuous manner, the correlation between sexual dysfunction and increasing age was significant. In a previous study 13 carried out in this same sample population, albeit with a more general evaluation of sexuality, a decline in sexuality as a function of increased chronological age was also found. A longitudinal cohort study carried out in Australia reported a significant negative effect of aging on the frequency of sexual activity, as well as on aspects of responsiveness (sexual excitation, pleasure and orgasm). 3,25 The Women's International Study of Health and Sexuality, a multinational, cross-sectional study carried out in Europe and in the United States, reported a decline in all aspects of sexual function with aging. 28 In a longitudinal study, Ford et al 29 also reported that an increase in age was associated with poorer sexuality. 
Castelo-Branco et al 26 evaluated 534 healthy, middle-aged Chilean women and found that the prevalence of sexual dysfunction increased from 22% in the 40-44-year old age-group to 66% in the 60-64 year olds. Hot flushes, depression, nervousness and insomnia were associated with scores ≤ 7 for sexual dysfunction in the present analysis. These results are in agreement with those of previous studies in which depression was associated with difficulties in vaginal lubrication in women in some regions of the world, whereas stress was associated with inability to achieve orgasm. 30 Other studies have also reported that psychological symptoms, stress and emotional problems may be related to a decline in sexual activity. 31 It was also found that psychological symptoms were common in climacteric women and were associated with hot flashes. This observation connects psychological symptoms with the menopausal transition and suggests that their cause may be biological. 32 Hot flashes are known to constitute a triggering factor for insomnia, which, in turn, leads to a reduction in energy and to depression, which has a negative effect on sexual function. 33 The women in the present sample who reported leading a sedentary lifestyle or who had a poor perception of their own health were more likely to be affected by sexual dysfunction. Other studies have reported similar findings. Greendale et al 34 reported that an increase in physical activity and satisfaction with life were related to improved sexual function. Mental health, 30,35 emotional well-being and positive self-image were associated with improved sexual function. 30,35,36 On the other hand, additional studies have found correlations between chronic diseases and an increase in sexual dysfunction 28,37 due to circulatory problems, 38 neurological function, hormone balance or systemic health. 39 It has been found that both an increase in systolic arterial pressure and the administration of beta blockers to treat hypertension may be detrimental to sexual function. 40 The present analysis shows that arterial hypertension was associated with the occurrence of sexual dysfunction, although the use of medication for hypertension was not evaluated in this study. Diabetes was another disease associated with sexual dysfunction. This finding is in agreement with other studies that reported that women with diabetes had a higher prevalence of sexual dysfunction compared to non-diabetic women. 41,42 In the present study, women who had a sexual partner were less likely to have sexual dysfunction. This is consistent with other studies on middle-aged women conducted in different geographical locations, emphasizing the relevance of the sexual partner as a protective factor against sexual dysfunction. 13,36,43 Nevertheless, previous studies have also described the importance of feelings for the partner with respect to sexuality. 44 These data suggest the need to carry out further studies to investigate, in addition to the simple presence of a partner, the quality of relationship of these couples. This study should be interpreted within the context of its limitations. The questionnaires that were returned incomplete were not taken into consideration in the evaluation of sexual dysfunction. The group composed of unmarried women or those not living with a partner and who reported hypertension or urinary incontinence, was associated with answering the entire block of questions concerning sexual dysfunction. 
Women with these characteristics who may have sexual dysfunction may be more motivated to answer all the questions. The data, which include information on hypertension, psychological symptoms and the use of hormone therapy, were obtained by participant self-report. Although recall bias cannot be ruled out, previous studies using self-reports suggest a high validity of health information, indicating that women with higher education levels provide more reliable data. 45 CONCLUSIONS Sexual dysfunction was found in more than one-third of women who were 40-65 years of age with 11 years or more of formal education. Informing women, giving them the tools necessary to change their own lifestyle, and treating climacteric symptoms and existing comorbidities may lead to improved sexuality and, consequently, to a better quality of life. Future studies are required to evaluate the effects of hormones and other drugs on the sexuality of climacteric women.
v3-fos-license
2014-10-01T00:00:00.000Z
2006-01-05T00:00:00.000
13284146
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/1471-230X-6-1", "pdf_hash": "021cc60e473fea278f8a44465b0c841805faea13", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41138", "s2fieldsofstudy": [ "Medicine" ], "sha1": "021cc60e473fea278f8a44465b0c841805faea13", "year": 2006 }
pes2o/s2orc
Development of bile duct bezoars following cholecystectomy caused by choledochoduodenal fistula formation: a case report Background The formation of bile duct bezoars is a rare event. Its occurrence when there is no history of choledochoenteric anastomosis or duodenal diverticulum constitutes an extremely rare finding. Case presentation We present a case of obstructive jaundice, caused by the concretion of enteric material (bezoars) in the common bile duct following choledochoduodenal fistula development. Six years after cholecystectomy, a 60-year-old female presented with abdominal pain and jaundice. Endoscopic retrograde cholangiopancreatography demonstrated multiple filling defects in her biliary tract. The size of the obstructing objects necessitated surgical retrieval of the stones. A histological assessment of the objects revealed fibrinoid materials with some cellular debris. Post-operative T-tube cholangiography (9 days after the operation) illustrated an open bile duct without any filling defects. Surprisingly, a relatively long choledochoduodenal fistula was detected. The fistula formation was assumed to have led to the development of the bile duct bezoar. Conclusion Bezoar formation within the bile duct should be taken into consideration as a differential diagnosis, which can alter treatment modalities from surgery to less invasive methods such as more extensive efforts during ERCP. Suspicion of the presence of bezoars is strengthened by the detection of a biliary enteric fistula through endoscopic retrograde cholangiopancreatography. Furthermore, patients at a higher risk of fistula formation should undergo a thorough ERCP in case a biliodigestive fistula has developed spontaneously. Background The recurrence of obstructive jaundice after cholecystectomy is estimated to occur in one to seven per cent of all cholecystectomy cases [1][2][3][4]. Symptoms may be due to retained stones which were unrecognized at the time of the initial operation, the development of a bile duct stricture or the presence of a long cystic duct remnant. Furthermore, parasitic infections of the hepatobiliary system, such as fascioliasis and ascariasis, account for other rare causes of recurrent bile duct obstruction, which may mimic the picture of choledocholithiasis [5][6][7][8][9]. We present another cause of cholestatic jaundice, not commonly considered in differential diagnoses: the obstruction of the extrahepatic bile ducts by concretions of fibrinoid materials (biliary bezoars), attributed to the existence of a fistula between the duodenum and the bile duct. Case presentation The patient is a 60-year-old Iranian female referred to our hospital with a several-week history of abdominal pain, severe pruritus, yellow sclera, light-colored stool, and nausea and vomiting. In addition to fever and chills, she reported intermittent episodes of sustained abdominal pain over the past two months, which worsened after meals and radiated to the right scapular area. Six years previously, she had undergone cholecystectomy without choledochoenteric anastomosis for symptomatic cholelithiasis. The patient was a known case of rheumatoid arthritis and diabetes mellitus, with her medications consisting of insulin, prednisolone, chloroquine and methotrexate. On admission, the patient had a temperature of 38.5°C and was noted to have icteric sclera. A right upper quadrant scar was seen on her abdomen.
The patient's abdomen was soft and non-distended with normal bowel sounds, albeit tender to deep palpation in the epigastrium and the right upper quadrant. There was no evidence of rebound tenderness or peritoneal signs. Laboratory data on admission were notable for an elevated total bilirubin of 7.1 mg/dl (nl 0.2-1.1); direct bilirubin of 4.7 mg/dl (nl up to 0.5); alkaline phosphatase of 1217 IU/L (nl 64-306); and SGOT of 78 U/L (nl 5-40). Figure 1 (Obstructing materials) shows the bezoar material in the shape of a CBD cast (solid arrow) and multiple stones (open arrow) removed from the dilated common bile duct by surgery. Abdominal ultrasound demonstrated dilation of the biliary tree with visible choledocholithiasis. Abdominal CT scan revealed a dilated biliary tract with air entrapment in the lateral side of the right lobe of the liver. The patient subsequently underwent endoscopic retrograde cholangiopancreatography (ERCP). The ERCP illustrated multiple stones, the most prominent of which was a 2.5-cm mass. Despite attempts at lithotripsy and sphincterotomy, however, ERCP failed to retrieve all the materials. Therefore, the remaining stone-like objects and materials were successfully removed by means of common bile duct (CBD) exploration (Figure 1). Intra-operative findings included multiple stones and a 2.5-cm nonhomogeneous mass impacted in a dilated CBD with a distorted anatomy due to adhesions, without any signs of previous anastomoses. A drainage T-tube was inserted through the CBD. Nine days after the surgery, post-operative T-tube cholangiography incidentally detected a long choledochoduodenal fistula (Figure 2). Since the whole management process from ERCP to T-tube cholangiography did not take more than 3 weeks, both the endoscopist and the surgeon must have failed to detect the fistula because of its overlap with the duodenum and the distorted anatomy. A histological assessment of the objects removed revealed fibrinoid materials with some cellular debris. The patient remained well during the one-year follow-up. Discussion This report describes a case of cholestatic jaundice caused by concretions of fibrinoid materials obstructing the extrahepatic bile ducts (biliary bezoars). It is noteworthy that there is a paucity of information regarding the formation of biliary tract bezoars. The first case report of bezoar-induced obstructive jaundice dates back to 1989, in which Seyrig et al. reported a case of cholestasis caused by an intradiverticular bezoar [10]. The second case report (1993) describes a brown pigment gallstone that had formed around a phytobezoar without a spontaneous biliary enteric fistula [11]. Lamotte et al. (1995) described the development of a biliary phytobezoar 15 years after surgical cholecystogastrostomy [12]. In another study, gallstones containing surgical suturing material were found in a number of patients who had undergone a previous cholecystectomy [13]. The formation of the fistula, detected incidentally through post-operative cholangiography, is believed to be the cause of bezoar development in the present study.
Conclusion Induced by intra-enteric materials transmitted through long fistulae, bezoar formation should be considered as an extremely rare differential diagnosis for obstructive jaundice, since it is not normally associated with such relatively common predisposing factors as choledochoenteric anastomosis or duodenal diverticulum. The ability of ERCP to detect fistulae in a case of obstructive jaundice can make less invasive measures (additional intra-ERCP views and efforts) preferable to surgery. The detection of biliodigestive fistulae in obstructive jaundice patients requires a thorough ERCP. Moreover, when the macroscopic characteristics of the stones resemble those of bezoar material, it is expedient that surgeons allocate more time and effort to detecting and ligating the possible fistula. We hope that this case report of fistula formation with subsequent bezoar production and obstructive jaundice will raise the awareness of gastroenterologists and surgeons about such an interesting phenomenon.
v3-fos-license
2018-12-01T17:11:18.001Z
2015-07-03T00:00:00.000
54061613
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=57775", "pdf_hash": "f01fb06e843b01c7299598666048e8018d629c3a", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41140", "s2fieldsofstudy": [ "Education", "Psychology" ], "sha1": "f01fb06e843b01c7299598666048e8018d629c3a", "year": 2015 }
pes2o/s2orc
Effect of the Motor Game "Exchequer Points" on the Topological Structure of the Relational Space: Case of 7-8 Year-Old Tunisian Pupils The relational structuring of space is crucial for the child in order to ensure efficient interaction with the environment. The didactic approach adopted here is based on the idea that motor games can improve the systemic and sensory-motor functioning of the body. The acquisition of a topological and spatial language is the perspective of this study, based in particular on the action of a game created for this purpose, which we have called "Exchequer points". The effects of this game on topological relationships were studied among second-year primary school students (N = 44) during the school year 2014-2015. The average age of the participants is 7.3 years. An ANOVA model for repeated measures was used for data analysis. Results showed that, after the learning program based on the motor game, the children of the experimental group (N = 22) significantly improved their performance on the topological relationships assessment. On the contrary, the children of the control group (N = 22) did not show significant differences between the pre- and post-measurement. Introduction Space is the physical, perceptual, conceptual or representational setting in which real or represented objects, mobile or immobile, animate or inanimate, are located and moved actively or passively, in a system of spatio-temporal relations (Fayasse & Thibaut, 2003). For this purpose, whether in motor activities or in the activities of daily life, the child's motor skills require precise knowledge of the position and orientation of the body and of objects in the space where he acts. In a spatio-temporal reference system, the dimensional and relational aspects of space need to be accurately defined to promote motor cognition and thus appropriate behaviour (Wade & Swanston, 2001). Consequently, an essential role of spatial perception is to provide access to information for the organization of action (Milner & Goodale, 1995). In this context, the surrounding world can only be understood in the mode of a possible overall action (Gibson, 1979). This process led to a functional definition of spatial perception (Previc, 1998). The perceived space is discontinuous, structured in particular according to the body's capacity for action, which depends on the properties of objects and of the perceived landmarks that define it (Neggers & Bekkering, 2000). In this study we have chosen to use the game in its functional spatial dimension, in order to arrive at a rational and efficient structuring of the child's environment. Indeed, our interest lies in the child's activity in a playful setting of space exploration. It is a complex situation in which the child has to find, each time, the suitable solution tailored to the proposed variables through the perceptual mechanisms of spatial navigation. This capacity is based on complex mechanisms which, if they do not develop properly, make the processing of visuospatial information complicated or impossible, which aligns with what Paoletti (1999) defined as motor education. This is an approach that emphasizes the child's daily use of motor experiences as a key to self-knowledge and a step towards objective and rational thought. This educational approach is in line with the idea that well-structured motor experiences and games allow children to discover general and disciplinary concepts (Paoletti, 1999).
Visual Perception of Space In the context of the visual and spatial apprehension of space, the existence of several visual processing steps is consistent with the model of Marr (1982), which proposes an organization of perception in several successive stages, building the visual percept from the highly informative retinal image up to the object identified in three dimensions. This model is supported by the hierarchical and anatomical patterning of visual cortical areas, which carry out increasingly complex processing of visual information (Felleman & Van Essen, 1991;Young, 2000). The mechanisms of visual information processing are described as being largely inferential, based on Bayesian principles of integration (Rao & Ballard, 1999). The most useful information would not be present at the retinal level, but within the visual information processing system (Scannell & Young, 1999): internal knowledge about how the world is structured contributes to processing incoming visual signals. This framework allows us to consider that the processing of spatial information is also based on internal knowledge: representations of the body and of the body's capacity for action. Computational approaches to motor intention indeed postulate that movement and its effects on the environment would be decisive for the structuring of sensory-motor invariants. The co-occurrence of motor and sensory signals during motor production would indeed build internal representations of the expected sensory consequences of intentional motor acts (Mossio & Taraborelli, 2008). This knowledge later makes it possible to attribute a motor meaning or a motor intention to observed intentional motor acts (Bidet-Ildéi, Orliaguet, & Coello, 2011). This sensory-motor knowledge would also allow space to be perceived in relation to the organization of potential actions. Visual information would then be processed in the cortex by the dorsal route (Ungerleider & Mishkin, 1982). This occipito-parietal pathway is specialized in the processing of visuospatial information (the "where?" pathway), as opposed to the occipito-temporal channel (the "what?" pathway). It is considered by Jeannerod (1994, 2003) as a "pathway of action". The processing of visuospatial information would be primarily located in the posterior parietal regions of the right cerebral cortex (Aleman et al., 2002). It is therefore crucial to spatial localization and to the visual guidance of movement in space, and selectively responds to the spatial aspects of stimuli such as direction or speed (Ungerleider, Courtney, & Haxby, 1998). According to this view, the visual system would be divided into two independent subsystems, one ventral, dedicated to vision for perception, and the other dorsal, dedicated to vision for action (Goodale & Milner, 1992;Milner & Goodale, 1995). Spatial Cognitive Map and Spatial Representation Visual perception thus underlies the basic skills and capacities necessary for autonomous locomotion that is effective and safe in the physical and social environment. It allows us to become aware of our position and to update information from the environment. Indeed, according to many authors (Gärling, 1995;Gärling, Book, & Lindberg, 1984;Russell & Ward, 1982), an awareness of the environment acquired and stored in memory plays an important role in people's ability to plan and accomplish their movements.
A significant amount of research has been conducted on the spatial aspects of mental models. As a result of this research, it was shown that individuals are able to build a spatial mental model that not only contains landmarks, but also information on the properties and geometric relationships of this space, called a "cognitive map" (Doré & Mercier, 1992). This notion refers, as defined by Tolman (1948), to an internal representation of the surrounding space. Thus, spatial representation is based on three prerequisites: cognitive processing of the environment (individual knowledge of a specific environment), spatial abstraction (the ability to manipulate abstract concepts such as topological spatial relations) and the ability to represent space in the form of cognitive maps (Pierre & Soppelsa, 1998). In fact, this covers two distinct spatial representations: a general, survey-type representation (map type) and a representation of paths (route type), illustrating that there are different strategies for the knowledge of space. The strategies used in the map-type and route-type representations have specific features whose origins are found in the different frames of reference used: egocentric (route type) and allocentric (map type). The Spatial Egocentric and Allocentric References The egocentric reference constitutes one of the bases of the organization of behaviour oriented towards extracorporeal space (Jeannerod, 2006;Jeannerod & Biguer, 1987). In other words, the perception of the spatial position of the objects to which movements are directed can be determined with respect to part or all of our body. To this end, the ability to locate points of the body develops together with the ability to use the body to move and to orient oneself. The human body is a fundamental axis system in orientation phenomena. It will be noted that the lateral (or median) axis refers to the symmetrical sides of the body, the frontal axis is given by the different functions of the body (the direction of gaze in particular), and the vertical (cephalo-caudal) axis is expressed by gravity, which may be detected in a standing position. Different terminology can thus be used in the literature to state the origin of egocentric coding. The egocentric encoding of an object can be retino- or eye-centered, cerebro-centered (Karn et al., 1997), trunk-centered (Darling & Miller, 1995), referred to a body segment specific to the task, such as the shoulder (Soechting et al., 1990), or referred to the direction of gaze. To each of these tasks corresponds an egocentric reference frame with a different bodily origin (Ghafouri et al., 2002).
In contrast, the map-type representation is based on knowledge of the topographical properties of the environment, such as the location of objects relative to a fixed allocentric coordinate system. It is therefore a representation that is independent of the position of the individual. Its role is predominant in the navigator's ability to determine the direction of places outside his field of vision or to establish spatial relationships between places whose links have not previously been explored physically. The structure of a map-type description is often hierarchical, that is to say, the space is divided into separate areas, and each area is described one after another. The map type, unlike the typical route representation, supports a reorganization of information such as detours, shortcuts, etc. (O'Keefe & Nadel, 1976). Moreover, the superiority of the map-type representation over the route-type representation has been confirmed at an experimental level. Indeed, much research has aimed at finding out whether spatial representations built on the basis of navigation (e.g., from a route-type perspective) have the same metric properties as representations based on a map (map-type perspective). To this end, Noordzij & Postma (2005) established a comparison task on bird-flight (straight-line) distances. This experiment required participants to adopt an aerial perspective consistent with the map-type description and not with the route-type description. The results show the superiority of the map-type description over the route-type description. This methodology has been reused in subsequent research: Péruch, Chabanne, Nese, Thinus-Blanc & Denis (2006) set up an experiment in which participants were asked to mentally compare the distances between different objects. In a first experiment, the two distances compared had the same starting point, whereas in a second experiment the starting points were different. In both cases, a greater number of correct answers and a shorter response time were observed with the map-type perspective. From these results it was reported that the estimation of distances is easier when spatial representations are constructed from a map-type description than from a route-type description of a complex space.
The ability to use multiple reference systems and to switch from one to another in accordance with the required tasks and information signals the acquisition of a stable reference system. According to Wohlwill (1981), the kind of reference system used by an individual depends on various situational factors such as the presence or absence of salient landmarks, the demands of a particular task and probably personal experience. The child's ability, following the acquisition of a permanent reference frame, to orient spatially relates to analytical skills and to the understanding of relations between visually perceived landmarks, as well as to the ability to remain oriented after a change in the position of the landmarks or of his own body (Barisnikov & Pizzo, 2007). It is therefore right to point out the importance of the motor game in all the situations and motor behaviours it can offer for the child to best explore his spatial potential. In fact, it allows children to use their creativity while developing their imagination, dexterity, and physical, cognitive and emotional potential. Education and child development professionals agree that play provides the child with movement experiences, creativity, and friendship in a way that emphasizes fun (Lester & Russell, 2011). In addition, play is important for the harmonious development of the brain (Ginsburg, 2007). Play also promotes a significant increase in physical activity (Tucker, 2008). A survey in the United States (Healthy Schools for Healthy Kids) among 500 teachers and 800 parents indicates that 90% of teachers and 86% of parents believe that active children learn better and behave better in class. Ginsburg (2007) points out that play is considered a great way to increase the level of motor activity for children, and that it is the joy of childhood. In accordance with the theoretical basis presented above, we developed a quasi-experimental study to explore the effect of a playful motor education program designed for the development of thematic concepts related to topological representations of space, concepts which are transversal to other academic skills and have a potential positive influence on children. Method This study is based on a game that we created and called "Exchequer points". The game aims at the child's motor learning, taking into account internal and external factors that influence the acquisition of representations of actions and spatial displacements, programming, the organization of movements in space and time, retroactive and proactive feedback, attention and memory. Certainly, these skills could support, following a transfer of learning, the development of transversal skills useful in other educational tasks such as handwriting, geometry, etc. Motor education likewise facilitates the development of disciplinary school skills, including not only mathematics and the native language, but also awakening to fine arts and music (Wauters-Krings, 2009). The game is a grid drawn on the floor of a primary school playground, with white paint, measuring 12 meters by 12 meters with 49 black points, one on each intersection. It takes place on a large square divided into 36 squares, in the manner of a chessboard. This game mainly targets spatial organization by activating the children's ability to situate themselves in the area, to determine the position they occupy in relation to landmarks and a coordinate system, and to match correctly the different movements to the different possible topological relations described by the various proposed variants. Spatial orientation is associated with perception, and spatial structuring is associated with abstraction and reasoning. Through the experiment on the principles of the game "Exchequer points" we tested, during the first term of the school year 2014-2015, the development of the spatial relationships of the child regardless of the reference system. Participants Forty-four students voluntarily participated in this study. They were schooled in two mixed classes of the second year of primary school, each containing twenty-two students. The classes belong to the same public school but have two different teachers. We chose two teachers with the same basic training and the same number of years of experience. To this end, we arranged to work with an experimental group and a control group. The average age of the participants is 7.3 years. These children attend a public primary school. Their parents' average socio-cultural level is defined by the father's occupation. All these participants are considered normal and well adjusted to schooling. They are all in the classes corresponding to their chronological age and are average students in all school subjects. Their parents were informed and gave their signed agreement to the participation of their children in the experimental research, and they had the opportunity at any time to withdraw their children from it. The results of this research guarantee anonymity and confidentiality, and the parents may be informed of their children's skills assessment. Procedure This study is divided into three parts. First, we conducted a pre-test on the two groups of children (2nd year of primary school) to verify the homogeneity of the sample, together with a test assessing their topological relationships. Secondly, for 12 weeks, with two sessions of 50 minutes per week, we provided, on the one hand, the experimental group with a learning program based on the game and, on the other hand, the control group with conventional learning. The third and final part is devoted to a re-test evaluating the topological relations of the two groups of students. The test used with the children to assess the topological relationships is the RTD (Topological and Directional Relation Test) by Barry (2010). For the test, the child is interviewed individually in a room of his school, in which he sits comfortably at a table facing the examiner. The child is shown 9 points initially presented on a "referent" sheet, which is laid flat on the table and oriented in portrait, and must try to recognize the place of a single item on a separate "stimulus" sheet. After the pass with the referent to the right, the examiner inverts the referent and stimulus sheets and repeats the procedure with the referent to the left. Three types of responses are recorded: -Correct answer -Mirroring answer: the child shows the point symmetrical to the expected one relative to a vertical axis passing through the centre of the sheet.
The game is a grid drawn with white paint on the floor of a primary school playground, measuring 12 meters by 12 meters, with 49 black points, one at each intersection. It takes place on a large square divided into 36 smaller squares, in the manner of a chessboard. The game mainly targets spatial organization by engaging the children's ability to situate themselves in the area, to determine the position they occupy in relation to landmarks and a coordinate system, and to match different movements correctly to the different possible topological relations described by the various proposed variants. Spatial orientation is associated with perception, while spatial structuring is associated with abstraction and reasoning. Through an experiment based on the principles of the game "Exchequer points", we tested, during the first term of the 2014-2015 school year, the development of the child's spatial relationships independently of the reference system.

Participants

Forty-four students participated voluntarily in this study. They were enrolled in two mixed classes of the second year of primary school, each containing twenty-two students. The classes belong to the same public school but have two different teachers; we chose two teachers with the same basic training and the same number of years of experience. On this basis, we arranged to work with an experimental group and a control group. The average age of the participants was 7.3 years. These children attend a public primary school. Their families' socio-cultural level, defined by the father's occupation, was of a middle level. All participants were considered typically developing and well adjusted to schooling; they were all in the classes corresponding to their chronological age and were average students in all school subjects. Their parents were informed of the research, gave signed consent for the participation of their children, and could withdraw their children from it at any time. The results of this research guarantee anonymity and confidentiality, and the parents may be informed of their children's skills assessment.

Procedure

The study was divided into three parts. First, we conducted a pre-test on the two groups of children (2nd year of primary school) to verify the homogeneity of the sample, together with a test assessing their topological relations. Second, for 12 weeks, with two 50-minute sessions per week, the experimental group followed a learning program based on the game, while the control group followed conventional learning. The third and final part was devoted to a re-test evaluating the topological relations of the two groups of students. The test used to assess the children's topological relations was the RTD (Topological and Directional Relations Test) by Barry (2010). For the test, the child is interviewed individually in a room of his or her school, sitting comfortably at a table facing the examiner. The child is shown 9 points laid out on a "referent" sheet, which is placed flat on the table in portrait orientation, and must try to recognize the location of a single item on a separate "stimulus" sheet. After the trials with the referent on the right, the examiner swaps the referent and stimulus sheets and repeats the procedure with the referent on the left. Three types of responses are recorded: -Correct answer -Mirror answer: the child indicates the point symmetrical to the expected one relative to a vertical axis passing through the centre of the sheet.
-Error: any response corresponding neither to a correct answer nor to a mirror answer. Correct answers are tallied separately from the total number of errors and the number of mirror answers according to the side of the "referent" sheet, giving 4 scores: two for the referent on the right (total errors and total mirrors) and two for the referent on the left (total errors and total mirrors). Using the calibration tables, the scorer converts the raw results into cumulative percentages so that the participant's results can be compared with those of children of the same age. Having collected the data, we subjected them to analysis of variance (ANOVA).

Data Analysis

For each measurement taken before and after training, an ANOVA was performed with the factor "type of learning" (motor game learning vs. traditional learning) as the between-subjects variable and "period" (pre-test vs. post-test) as the within-subjects variable. The factorial model was 2 × 2 (2 groups × 2 measurements). Post-hoc comparisons were made with the Sidak test and the level of significance was set at α = 0.05.

Results

In general, the results shown in Table 1 indicate that the two groups were homogeneous for all parameters (no significant difference between the two groups before learning). In addition, the experimental group shows significant differences between before and after training for all tested parameters. Furthermore, significant differences were recorded between the control group and the experimental group after training (except for the number of errors on the left and the cumulative percentage of errors on the left). Moreover, Table 1 shows that the progress (Δ = before − after) recorded by the experimental group is significantly different from that of the control group for all parameters (again except for the number of errors on the left and the cumulative percentage of errors on the left). The data obtained in the test assessment are analysed below by response type.

Number of Mirrors on the Right

Regarding the number of mirror answers with the referent on the right, the corresponding results are presented in Figure 1.

Discussion

This study aimed to evaluate the effects of a motor education program, based on a game requiring the exploration of space and localization from predefined landmarks, on the topological skills of the student. The results show that the students in the experimental group performed better than those who followed traditional learning. Indeed, we found that the learning effect was significant at all levels of the test (number and cumulative percentage of mirrors and errors, right and left). Consistent with many studies, moving is a motor and sensory experience, linked to memory, that supports the understanding of the organization of the spatial environment (Viader, Eustace, & Lechevalier, 2000; Bidet-Ildéi, Orliaguet, & Coello, 2011). For space to be represented, it must be experienced: as we move within it, we simultaneously change our perception of the environment. "Exchequer points" offers the student the opportunity to navigate in the game space while engaging his or her sensory-motor system, relying in particular on the construction of egocentric and allocentric reference frames which, working together or separately, continuously update the child's own extracorporeal "mapping" and directional (left-right) competence. The purpose of these systems is to allow landmarks to be taken and a "space of places" to be constructed, in which objects are identified and located as targets of action (Paillard, 1991).
Many other studies (Thommen & Rimbert, 2005; Pêcheux, 1990; Pradet et al., 1982) have likewise agreed that spatial language can only develop through mental and cognitive representations related to the arrangement of the environment. Spelke & Shusterman (2005) assume that the acquisition of a natural spatial language has to be combined with several different areas of our knowledge of space. Moreover, the game engages the zone of proximal development: learning located in this zone is oriented towards cognitive processes that the child has not yet acquired, but which become accessible with the support of a peer, a parent or a teacher. Thus, for Vygotsky, learning through play is good learning because it precedes development (Rivière, 1990). From this perspective, it seems legitimate to ask once again: should a pedagogy that aims to bring children beyond the game in their learning not start precisely from the game? The research findings, although of limited validity and generalizability due to the small sample size, showed that the motor game "Exchequer points" can play an important role in the learning of fundamental concepts relating to the topological structure of relational space, concepts which also lend themselves to cross-thematic and interdisciplinary teaching of school skills such as mathematics, fine art education, handwriting and, eventually, other disciplines.

Figure 6. Cumulative percentage of mirror answers on the left, before and after learning, for the control group and the experimental group. NS: not significant (p > 0.05); *** significantly different at p < 0.001.

Table 1. Means and standard deviations of the study parameters before and after training for both groups.
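The 2 × 2 design described in the Data Analysis section can be made concrete with a short numerical sketch. The snippet below is a minimal illustration, not the authors' analysis: the scores are hypothetical, and, as a simplified stand-in for the full mixed-design ANOVA with Sidak post-hoc tests, it compares the gain scores (Δ = before − after) of the two groups with an independent-samples t-test.

```python
# Minimal sketch of the group x period comparison described in the Data
# Analysis section. The scores below are hypothetical, NOT the study data,
# and the gain-score t-test is a simplified stand-in for the full 2 x 2
# mixed-design ANOVA (group as between-subjects factor, period as
# within-subjects factor) with Sidak-corrected post-hoc comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 22  # participants per class, as in the study design
# Hypothetical error counts on the RTD test (referent on the right).
control_pre = rng.normal(6.0, 1.5, n)
control_post = rng.normal(5.5, 1.5, n)        # little change expected
experimental_pre = rng.normal(6.0, 1.5, n)
experimental_post = rng.normal(3.5, 1.5, n)   # improvement after the game program

# Progress scores, delta = before - after, as reported in Table 1.
delta_control = control_pre - control_post
delta_experimental = experimental_pre - experimental_post

# Does the experimental group progress more than the control group?
t, p = stats.ttest_ind(delta_experimental, delta_control)
print(f"mean delta control      = {delta_control.mean():.2f}")
print(f"mean delta experimental = {delta_experimental.mean():.2f}")
print(f"t = {t:.2f}, p = {p:.4f}  (alpha = 0.05)")
```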
v3-fos-license
2018-07-10T00:14:25.507Z
2018-08-19T00:00:00.000
49480647
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19336918.2018.1491496?needAccess=true", "pdf_hash": "74d979bfeb22972f238975611169c771bec2e6c6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41142", "s2fieldsofstudy": [ "Biology" ], "sha1": "74d979bfeb22972f238975611169c771bec2e6c6", "year": 2018 }
pes2o/s2orc
Inositol polyphosphate-4-phosphatase type II plays critical roles in the modulation of cadherin-mediated adhesion dynamics of pancreatic ductal adenocarcinomas

ABSTRACT Inositol polyphosphate-4-phosphatase type II (INPP4B) has mostly been proposed to act as a tumor suppressor whose expression is frequently dysregulated in numerous human cancers. To date, little is known about whether and how INPP4B exerts its tumor-suppressive function on the turnover of the cadherin-based cell-cell adhesion system in pancreatic ductal adenocarcinomas (PDACs) in vitro. Here we provide evidence that INPP4B modulates the cadherin switch in certain PDAC cell lines by inactivating phosphorylated AKT. Knockdown of INPP4B in AsPC-1 results in a more invasive phenotype, and its overexpression in PANC-1 leads to partial reversion of the mesenchymal status and impediment of in vitro invasion but not migration. More importantly, E-cadherin (Ecad) is enriched in the early and sorting endosomes containing INPP4B, by which its recycling rather than degradation is enabled. Immunohistochemical analysis of 39 operatively resected PDAC specimens reveals that it is in poorly differentiated, non-cohesive tumors that INPP4B and Ecad expression is partially or completely compromised. We therefore identify INPP4B as a tumor suppressor in PDAC which attenuates AKT activation and participates in the preservation of Ecad in the endocytic pool and at the cell membrane.

Introduction

Epithelial-mesenchymal transition (EMT) is a developmental process that endows epithelial cells with partial loss of epithelial features and gain of a mesenchymal phenotype as a result of transcriptional reprogramming. The classical event frequently seen in cancer-associated EMT is the cadherin switch between E- and N-cadherin (Ecad, Ncad), under certain circumstances together with varying levels of vimentin (Vim) [1]. This mesenchymal state facilitates destabilization of the cell-cell adhesion complex and a change of cell polarity, resulting in increased invasiveness of tumor cells [2]. Advances achieved recently shed new light on the pivotal role of phosphatidylinositol-3,4-bisphosphate (PI(3,4)P 2 ) metabolism and signaling in the regulation of cancerous cellular events: motility and invasiveness [3,4]. Mounting evidence has established that PI(3,4)P 2 dominates a distinct by-pass of the PI3K/AKT pathway. Signaling transduced by PI(3,4)P 2 can be modulated by inositol polyphosphate 4-phosphatase type II (INPP4B), which preferentially hydrolyzes PI(3,4)P 2 to phosphatidylinositol-3-phosphate (PI(3)P) [5]. This dephosphorylation compromises the activation of AKT, which underlies the tumor-suppressing function of INPP4B in the context of some malignancies [6,7]. The model proposed by a relevant study indicates that INPP4B promotes the serum- and glucocorticoid-regulated kinase 3 (SGK3)-mediated degradation of the metastasis suppressor N-myc downstream regulated 1 (NDRG1) in breast cancer harboring oncogenic mutations in PIK3CA [8]. Additionally, NDRG1 can be recruited to sorting/recycling endosomes and participate in the recycling of Ecad [9]. We accordingly hypothesized that INPP4B could play a role in modulating the turnover of cadherins, thereby inducing stabilization of cell-cell contacts and suppression of cell invasion.
To our knowledge, the contribution of INPP4B in pancreatic ductal adenocarcinoma (PDAC) progression has not been investigated so far, while accumulating data manifest the existence of EMT-related switch between Ecad and Ncad in pancreatic cancer development [10]. Herein, we have presented the demonstration that INPP4B controls EMT marker turnover via rectifying AKT activation in the metastatic cascade of certain PDAC cell lines, and examined the association of INPP4B and Ecad expression with various clinicopathological parameters in human pancreatic tumor samples. We also demonstrate INPP4B is highly involved in endocytic and recycling dynamics of Ecad. INPP4B expression correlates with EMT markers of PANC-1 and aspc-1 in vitro We observed very low expression of INPP4B at both protein (Figure 1(a)) and mRNA levels (Figure 1(b)) in PANC-1 and SW1990 compared to AsPC-1 and BxPC-3. We selected AsPC-1 for knockdown use due to its highest INPP4B level among, while the PANC-1 was chosen for overexpression assay owing to its minimal INPP4B level. INPP4B upregulation in PANC-1 significantly downregulated the mesenchymal marker Ncad (1.28-fold reduction of transcript and 1.85-fold reduction of protein; P < 0.05), and upregulated the Vim (1.55-fold increase of transcript and 2.09-fold increase of protein; P < 0.05) and epithelial maker Ecad (1.43-fold increase of transcript and 1.59-fold increase of protein; P < 0.05) compared with that of the control (Figure 1(c,d), Figure S1A). Knockdown of target gene in AsPC-1 significantly decreased the Vim (1.75-fold reduction of transcript and 1.52-fold reduction of protein; P < 0.05) and Ecad (2.08-fold reduction of transcript and 1.61-fold reduction of protein; P < 0.05), with elevated Ncad level (4.12-fold increase of transcript and 3.31-fold increase of protein; P < 0.05) relative to the control (Figure 1(e,f), Figure S1B). These findings were further supported by the varying intensities of target protein staining when INPP4B expression was modified in immunofluorescence analysis (Figure 1(g)). The Ecad staining was originally located in cell membrane and peripheral region ( Figure 1(g), arrowheads). When it was upregulated by INPP4B, both the membranous and cytoplasmic expressions were simultaneously altered (Figure 1(g), boxed areas). Although cytoplasmic expression of Ncad was also detected when cadherin switch was evident in transduced cells, few membranous expression was obviously observed, and perhaps the lack of mature cell-cell junctions could account for this phenomenon. INPP4B regulates cadherin switch through inhibiting phosphorylation of AKT In order to confirm our hypothesis that INPP4B acted as a tumor suppressor in PDAC cells through rectifying PI3K/AKT signaling, phosphorylation status of AKT was tested in Ad-INPP4B and INPP4B-Rnai group by western blot. We subsequently realized that INPP4B predominantly mediated dephosphorylation of phospho-AKT (p-AKT) at Ser473 residues. In particular, the INPP4B overexpression in PANC-1 contributed more to Ser473 dephosphorylation than to Thr308 (1.22-fold decrease vs. 1.09-fold decrease), and INPP4B knockdown in AsPC-1 only resulted in Ser473 phosphorylation and activation of AKT (6.04fold increase; All P < 0.05; Figure 2(a)). To determine whether INPP4B controlled EMT-related protein turnover via inhibiting AKT activation, Ad-INPP4B infected PANC-1 was incubated in SC79 containing medium, and the INPP4B-siRNA transfected AsPC-1 was subjected to MK-2206 treatment. 
Immunoblot analysis showed that the INPP4B overexpression-induced reduction of Ncad, as well as increase of Ecad, was attenuated by SC79 which reactivated AKT mostly at Ser473 residue in PANC-1(1.87-fold increase vs. 1.18fold increase). In addition, the regulatory effect of INPP4B knockdown on cadherins in AsPC-1 was also impaired by MK-2206 which mainly restrained the Ser473 phosphorylation (15.92-fold reduction vs. 1.42fold reduction; All P < 0.05; Figure 2(b)). These data confirmed that INPP4B regulated cadherin expression in an AKT-dependent manner. However, INPP4Bmediated Vim expression alteration was also significantly disrupted by both the treatments, which denoted AKT signaling participated in this process but inversely regulated it versus Ncad. INPP4B is involved in the process of partial mesenchymal-epithelial transition (MET) or remodeling of cadherin-mediated adhesions We evaluated the transcript changes of ZEB1, SNAIL1, SNAIL2, TWIST1 and TWIST2, which were considered as putative EMT-associated transcription factors, in cells with altered expression of INPP4B. RT-qPCR analysis indicated that the expression of ZEB1 and SNAIL2 were notably downregulated in Ad-INPP4B group (All P < 0.05; Figure 3(a)), whereas INPP4B interference in AsPC-1 resulted in a significant increase of ZEB1, SNAIL1 and TWIST1 (All P < 0.05; Figure 3(b)). We failed to amplify the ZEB2 fragment from these cell lines using two pairs of primers, while it was positive in TPC-1 and A498 cell lines, which suggested the scarce expression of ZEB2 in PANC-1 and AsPC-1 (data not shown). We therefore concluded that certain EMT-associated transcription factors were differentially controlled by INPP4B in PANC-1 and AsPC-1. However, the morphological characteristics of the transduced cell lines revealed by phase contrast images were not markedly different to the mock controls. Specially, mock control PANC-1 cells displayed a mesenchymal-like morphology, and Ad-INPP4B infected PANC-1 cells were still elongated in shape (Figure 3(c)). Mock control AsPC-1 cells were observed to form tight colonies. After INPP4B knockdown, cells were more scattered but without evident shape change (Figure 3(d)). Accordingly, INPP4B could only modulate cell-cell contact or aid in partial MET, and the complete reversion of mesenchymal phenotype and gain of epithelial features was hardly observed. INPP4B contributes to the manipulation of pancreatic cancer in vitro invasiveness After 48 h of invasion, there were about 50 ± 6 cells per high power field (HPF) in Ad-INPP4B group and 34 ± 7 cells in INPP4B-Rnai group invaded through the Matrigel and Transwell membrane, comparing to 82 ± 7 in Con PANC-1 group and 16 ± 5 in Con AsPC-1 group (P < 0.05; Figure 4(a)). In terms of wound closing, the cell migration was allowed for 48 h and recorded in Figure 4(b). PANC-1 cells overexpressing INPP4B migrated at the similar rate as the control cells did (P > 0.05; Figure 4(b) left panel, Figure S1C), and the interference of INPP4B in AsPC-1 led to an acceleration in wound healing progress (P < 0.05; Figure 4(b) right panel, Figure S1C). These results implied that INPP4B manipulated the invasive behavior of pancreatic carcinoma in vitro. Ecad is stabilized by INPP4B in cycloheximide (chx)treated PANC-1 PANC-1 overexpressing INPP4B showed a less decrease in Ecad protein levels as compared to mock infected and uninfected cells (P < 0.05; Figure 5(a)) when treated with a protein translation inhibitor, CHX. This illustrated Ecad could be stabilized by INPP4B. 
RG108 upregulates INPP4B and ecad in PANC-1 Treatment of PANC-1 with RG108 heightened INPP4B expression approximately 3.09-fold in transcript level and 1.84-fold in protein level relative to control. There was also a 5.11-fold increase in Ecad transcript level and 2.18-fold increase in protein level (All P < 0.05; Figure 5(b,c)), more than we anticipated, which could be attributed to the upregulation of INPP4B and the revision of intrinsic methylation of CDH1 gene promoter in PANC-1 [11]. An increase of both the INPP4B and Ecad was also detected in DMSO-treated cells, although the statistical significance of this observation was borderline (0.01 < P < 0.05), which we could reasonably ascribe to the epigenetic impact of DMSO on cells in vitro [12]. Upregulated INPP4B expression after RG108 treatment is not initiated by its promoter region demethylation RG108 can suppress DNA methyltransferase (DNMT) as well as increase INPP4B expression in PANC-1. Thus, we investigated whether the INPP4B downregulation in PANC-1 was due to abnormal promoter demethylation or not. Four CpG enriched or CpG island containing regions spanning the promoter region and exon 1 in the INPP4B gene were amplified and evaluated for methylation level respectively ( Figure S2). The methylation scale of almost 85% of the total CpG units evaluated was below 20%. The highest methylation level (100%) was only found at the 7th CpG sites ( Figure 5(d)). When treated with RG108, though the methylation levels of certain CpG units declined (All P < 0.05; Figure 5(e)), the average methylation percentage across total CpG sites evaluated was not significantly reduced (P > 0.05; Figure 5(f)). Therefore, RG108 upregulated INPP4B expression without apparently inducing DNA demethylation of its promoter region. It might be ascribed to the intrinsic hypomethylation of overall INPP4B promoter region in PANC-1. INPP4B actively participates in the recycling of ecad In order to avoid false-positive result from INPP4B overexpression in double-labeling immunofluorescence of Ecad and INPP4B, the PANC-1 cells were pretreated with RG108 rather than Ad-INPP4B to discern the possible co-localization. Confocal laser scanning analysis revealed that punctate staining for INPP4B was observed at the cytomembrane and peri-membranal region (Figure 6(a), middle column). INPP4B was detected co-localized with recycling Ecad as time elapsed after calcium chelation (co-localizing green and red pixels in composite plan were pseudocolored in yellow). In detail, as the cells spread, structures positive for both the proteins resolved into tightly clustered morphology (Figure 6(a); boxed areas) and gradually diffused freely along the membrane if no mature cell-cell adhesive structures took shape (Figure 6(a); arrows). Clusters of the two proteins laterally moved to the cell junctions in a constrained fashion if two cells reached confluence (Figure 6(a); arrowheads). INPP4B increases ecad at early and recycling endosomes and suppresses its late endosomal lodging Western blot of equal amount of proteins from resulting cellular fractions showed that the two target proteins were mainly precipitated from Rab5 and 11 positive parts (Figure 6(b)). Rab5 was loaded as marker for early endosomes (EE), Rab11 for recycling endosomes (RE) and Rab7 for late endosomes (LE) [13,14]. When normalized by Rab5 and 11 in EE and RE fractions, the early and sorting pools of Ecad were enriched by INPP4B overexpression comparing to that of the control. 
We also normalized the corresponding Ecad to Rab7 in the LE fraction and observed that the degradation pool of Ecad decreased in Ad-INPP4B cells. Moreover, the Rab11/Rab5 ratio in the EE and RE fractions was elevated in Ad-INPP4B cells, which can be interpreted as INPP4B overexpression promoting the transport and recycling of membrane protein (all P < 0.05; Figure 6(c)). HM fractions were strongly positive for both proteins in the Ad-INPP4B group, which might in part be ascribed to their upregulated expression on the cytomembrane when the immunofluorescence result is taken into account.

INPP4B and Ecad expression in clinically resected pancreatic tumor specimens

Positively stained INPP4B and Ecad were mainly located in the cytoplasm of cancerous ductal cells (Figure 7(c-f)). We did observe some cancer cells with more cytoplasmic than membranous expression (Figure 7(e)), and cells with only membranous loss of expression (Figure 7(f)), which would indicate malposition of Ecad. The median age at diagnosis of the 39 patients was 57.3 ± 7.2 years. Correlations of the INPP4B and Ecad staining indices with pathological parameters are listed in Figure 7(g-i). (Figure 7 legend: relevance of target protein expression to tumor grade (g), lymph node metastasis status (h) and TNM stage (i) of tissues was assessed by Mann-Whitney U test. N0 indicates no lymph node metastasis; N1 indicates regional lymph node metastasis. Scale bar in images = 100 mm. Results show mean ± SD (n = 3). N/S and * indicate P > 0.05 and P < 0.05.) The staining of INPP4B inversely correlated with tumor grade and lymph node metastasis status, and a similar trend was also noted for Ecad (all P < 0.05; Figure 7(g,h)). A Spearman rank correlation analysis indicated a positive correlation between the two proteins in PDAC (r = 0.52, P < 0.05). However, TNM stage is a macroscopic, anatomy-dependent system (considering, for instance, involvement of the celiac axis) that does not truly or fully reflect the cancerous behavior of certain sets of pancreatic tumors, and thus we found no significant association of target protein expression with TNM stage (P > 0.05; Figure 7(i)).

Discussion

INPP4B was initially identified as a tumor suppressor due to its negative regulatory effect on the PI3K/AKT pathway, which outlines a network that steers important biological processes such as cell adhesion and migration, all of which are disrupted in several types of cancer [15]. Cadherins are a class of type-1 transmembrane glycoproteins that function as calcium-dependent junctions to bind cells together. Loss of Ecad at intercellular contacts is believed to be the first key step in the development of cancer-related EMT [16]. In this report, overexpression of INPP4B in PANC-1 resulted in a raised level of the epithelial marker Ecad and a decrease in the mesenchymal marker Ncad through dephosphorylation of p-AKT mainly at the Ser473 residue, which antagonized in vitro invasion without affecting overall migratory capacity. Conversely, silencing of the target gene in AsPC-1 cells promoted the cadherin switch from Ecad to Ncad in a p-AKT-dependent manner and led to increased invasion and migration. These data indicate that INPP4B is an upstream regulator of both Ecad and Ncad in these pancreatic cancer cells. AKT activation depends on the availability of PI(3,4)P 2 and phosphatidylinositol-3,4,5-trisphosphate (PI(3,4,5)P 3 ).
The INPP4B substrate PI(3,4)P 2 contributes predominantly to Ser473 phosphorylation, whereas PI(3,4,5)P 3 , substrate of phosphatase and tensin homolog (PTEN), contributes mostly to Thr308 phosphorylation [17]. We speculated that INPP4B hydrolyzed PI(3,4)P 2 to antagonize the hyperactivation of oncogenic PI3K/AKT signaling, which could provide an explanation for our findings in PDAC. However, the reciprocal feedback between PTEN, SHIP and INPP4B should be considered in future. The role of EMT in the progression of epithelial carcinomas and their metastatic dissemination was debated, because in most cancers the full conversion of epithelial/mesenchymal state rarely existed [1,18]. Unlike many previous works, Vim was oppositely regulated versus Ncad by INPP4B in PANC-1 and AsPC-1 cells. The limited variation of Vim in our study could not reach that far as overexpression do, which we thought would be a physiological response to INPP4B alternation rather than pathologic effect and required further investigation. This unconventional case was also noticed in angiomyolipoma cells and interpreted by TSC2/mTOR signaling deficiency [19]. Additional approaches were utilized to verify the potential link between INPP4B and EMT as a pathophysiological process, including light microscopy, confocal fluorescence imaging and RT-qPCR analysis of EMT-associated transcription factors. The morphologic characteristic of PDAC cells observed under phase contrast microscope was not significantly changed when INPP4B expression altered, while the fluorescence intensity of EMT-related protein staining was apparently influenced respectively. The scanty cytoplasm and large nucleus of PANC-1 and AsPC-1 made it hard to discern whether the cell shape seen was really unchanged or not. It was ever reported that the nuclear factors of SNAIL, ZEB and TWIST families exerted a strong transcriptional control on EMT, among which the SNAIL, ZEB1, and ZEB2 repressed Ecad expression by binding directly to the E-box sequences of the CDH1 promoter [20]. Our data suggested only certain ones of them were robustly regulated by INPP4B in PDAC cells. Altogether, these findings confirmed a link between the expression of INPP4B and partial reversion of mesenchymal status or just the cadherin switch in PANC-1 and AsPC-1. The model we proposed was not a perfect EMT/MET, but at least a transient regulation of cell-cell adhesion structures. Perhaps inhibition of cell proliferation or proteases that degraded extracellular matrix was involved in INPP4B-oriented suppression of PANC-1 invasion but not migration. The deterministic indicator role of Ecad in differentiation and invasiveness of various PDAC cell lines, including differentiated ones (Capan-1, Capan-2 and DAN-G), intermediate state (Hs 766T) and undifferentiated cells (MIA PaCa-2), was disclosed by a previous investigation [21]. Meanwhile, PANC-1, started from a human undifferentiated pancreatic carcinoma [22], was proved relatively minimal in INPP4B level among the four PDAC cell lines we selected. We also detected the depressed amount of INPP4B within the moderately differentiated cell line SW1990 [23]. The higher level of target protein was confirmed in BxPC-3 (originated from a moderately well differentiated foci without evidence of metastasis) and AsPC-1 (the ascitic tumor cells of a moderately well differentiated PDAC sufferer with no metastasis to internal organs of nude mice in vivo tumorigenicity test) [24,25]. 
In view of this finding, we sought to characterize the co-expression of INPP4B and Ecad in surgically resected pancreatic cancer tissues if existed, and correlate them respectively with lymph node metastasis status and tumor differentiation grade. We conclusively demonstrated that it was poorly differentiated, non-cohesive ones in which the INPP4B and Ecad were partially or completely compromised in their expression. Meanwhile, a heterogeneity in staining extent of tumor cells was found in a given individual, and the presence of a transitional phenotypic state might account for this phenomenon. Ecad has a half life of 5 h-10 h [26]. It is internalization by endosomes and subsequent lysosomal or proteasomal degradation or recirculation back to the cell surface that takes the center stage in the metabolism of Ecad. This vesicular to and fro movement among cell surface and endosomes provides a mechanism for manipulating the availability of Ecad for cell-cell junction in tumor progression [27]. Interestingly, several studies report that PI(3)P is mainly aggregated in EE while PI(3,5)P 2 at late endocytic compartments [28]. Aside from the given notion that INPP4B favorably mediated the turnover of PI(3,4)P 2 , we observed a greater effect on Ecad protein (1.6-fold increase) than on mRNA level (1.43-fold increase) when INPP4B was upregulated in PANC-1. So we hypothesized and aimed to provide the evidence that INPP4B could be an Ecad recycling participator recruiting to EE and RE. For this purpose, live cell labeling and recycling assay was conducted and intense stains for both the two proteins were seen at cell membrane and peripheral region. The EE was a dynamic compartment encompassing thin tubules (approximately 60 nm diameter) and large vesicles (around 300-400 nm diameter). Some separated tubular structures resolved into RE and distributed close to the centrosome for latter transport depending on microtubules [29]. Coincidently, subcellular location of INPP4B was predicted to be centrosome, cell membrane and intracellular compartment according to the Human Protein Atlas [30], which further substantiated our findings. As showed by boxed areas of Figure1(g), Ecad staining transformed into a filamentous appearance when INPP4B was overexpressed. Besides, functions of Vim contributed to the construction of cytoskeleton architecture within cells by interacting with microtubules [31]. These data allowed us to suggest that Vim should be motivated and recruited as a candidate for cross-bridging microtubules and Ecad containing vesicles if INPP4B was upregulated [32,33]. To date, little evidence of INPP4B inactivating mutations or deletions in human cancers has been provided [8]. Treatment of thyroid cancer cell lines with 5-Aza-2ʹ-deoxycytidine (5-Aza-CdR), which inhibited DNA methylation, led to INPP4B upregulation [34]. RG108 was a DNMT inhibitor and was proved to be efficient in reactivation of epigenetically silenced tumor suppressor genes without indications for cytotoxicity comparing to 5-Aza-CdR [35,36]. PDAC cell lines often exhibited higher CpG island hypermethylation frequency than clinically resected PDAC tissues [37]. It was postulated that INPP4B loss in PANC-1 would be due to aberrant gene promoter methylation. We then compared the methylation status of INPP4B promoter region in control and RG108 treated cells by MassARRAY methylation analysis but the result refuted our hypothesis. 
Therefore, INPP4B upregulation in RG108 treated cells was potentially indirect, mediated via the increase of yet to be defined transcription factors. RG108 treatment enabled the immunostaining of both the proteins in PANC-1 to monitor the recycling dynamic of Ecad relating to INPP4B, and might therefore propose an effective therapeutic agent for PDAC of INPP4B/Ecad loss or downregulated phenotype. The relatively moderate reduction of Ecad in CHXtreated Ad-INPP4B group than in control denoted that INPP4B directly or indirectly stabilized the Ecad pool due to increasing its longevity within cells, perhaps via endocytosis and recycling pathway. Afterwards, a discontinuous sucrose density ultracentrifugation was employed to reveal the endosomal protein burden of both the INPP4B and Ecad in PANC-1. The result manifested INPP4B co-localized with Ecad in EE and RE which were bound to renew the integral membrane proteins, but not in LE destined for degradation. One of the causative mechanisms underlying the Ecad downregulation in PANC-1 may be that INPP4B loss increases the PI(3,4)P 2 pool and prolongs AKT2 signaling on the endocytic membranes, followed by vesicular transport of protective cargoes to LE for lysosomal or proteasomal degradation [9,38]. However, we did not further purify RE from target fractions via immunoisolation using Rab11 antibody bound to immunomagnetic, which confined the level of evidence. AsPC-1 was originally prepared from the floating cells in cancerous ascites [25], which would mimic the circulating tumor cells (CTCs) in bloodstream to some extent. Comparatively speaking the anti-oncogenic feature of INPP4B was the case in primary tumor, however, it might be upregulated in a subgroup of CTCs and dormant micrometastases, which promoted Ecad re-expression in aiding anchoring and proliferation. A significant proportion of CTC platforms use antibodies conjugated immunomagnetic beads or nanoparticles to enrich CTCs basing on the expression of cancer-specific antigens [39]. If INPP4B is absent on benign blood cells and expressed by epithelial tumor-derived cells on cytomembrane when travelling [40], it will help in capturing a subpopulation of pre-colonizing CTCs. To this end, core needle biopsy of metastatic foci and blood sample collection for CTCs enrichment are in demand for further investigation. This study provides the first evidence that INPP4B oppositely regulates the expressions of Ecad and Ncad in a subset of PDAC cell lines in an AKT dependent manner. It functions as a tumor suppressor that inhibits AKT activation and suppresses in vitro invasiveness of PANC-1 and AsPC-1. Promoter region hypermethylation is not a mechanism for the silencing of INPP4B in PANC-1. Expression deficiency of Ecad and INPP4B are detected in some of the high-grade or non-cohesive human PDAC specimens. Moreover, INPP4B is involved in endocytic trafficking pathway, inter alia the EE and RE in which cargo protein Ecad 'revives'. The view that INPP4B are implicated in cell-cell adhesion renewal may aid in the development of novel INPP4B-based signature for prognosis and eventually a more refined anticancer treatment. Cell culture and transduction The human PDAC cell lines, AsPC-1, BxPC-3, PANC-1 and SW1990, were purchased from the National Infrastructure of Cell Line Resource (Beijing, China). 
All cells were maintained in DMEM (HyClone) or RPMI (HyClone) supplemented with 10% FBS (Gibco), 100 U/ml penicillin and streptomycin (Gibco), and incubated at 37 ℃ in a humidified atmosphere containing 5% CO 2 . For overexpression of INPP4B, cells were infected by a recombinant adenovirus Ad-INPP4B (Genechem) at the indicated multiplicity of infection. For knockdown purpose, a siRNA targeting the coding region of INPP4B (Gen-Bank Accession Number: NM_001101669) was chemically synthesized (GenePharma) and transfected into cells with Lipofectamine® 2000 Reagent (Invitrogen). Mock controls were transduced by vacant adenovirus or a non-targeting siRNA. Cells above were harvested 24 and 48 h post treatment for RT-qPCR and immunoblotting separately. RT-qPCR The cDNA was synthesized using Revert Aid RT Kit (Thermo Fisher Scientific) according to manufacturer's instructions. A total of 1 μg RNA was used for cDNA synthesis. RT-qPCR was performed on a Master cycler Ep Realplex (Eppendorf) using Fast SYBR® Green Master Mix (Invitrogen) with primers described in the supplemental material. PCR cycling conditions included initial denaturation at 95°C, followed by 35 cycles of denaturation for 15 s at 95°C, annealing for 30 s at 60°C-64°C and extension for 30 s at 72°C. The comparative cycle threshold (2 −ΔΔCt ) method mapped the relative expression of target gene. GAPDH was used as housekeeping gene for all data normalization. Chemicals treatment Cells transduced by Ad-INPP4B or siRNA were grown in the presence or absence of AKT activator SC79 (5 μM; ApexBio) or inhibitor MK-2206 (10 μM; ApexBio) for 24 h before harvested for western blot analysis. To inhibit protein translation, Ad-INPP4B and mock infected PANC-1 cells were treated with 10 μM CHX (Amresco) at 48 h post infection, and probed for INPP4B and Ecad via western blotting at 7 h post treatment. We then exposed PANC-1 to 100 μM RG108 (ApexBio) for 72 h through diluting the stock solution (RG108/DMSO 50 mM) in culture medium. For control purpose, PANC-1 was exposed to 0.2% DMSO only. Transcript and protein of INPP4B and Ecad in control versus RG108-treated PANC-1 cells were measured subsequently. Massarray quantitative methylation analysis DNA samples from the control and RG108-treated PANC-1 cells were subjected to bisulfite modification using EZ DNA Methylation Gold Kit (Zymo Research). Six pairs of primers for methylation analysis were designed using Methprimer (http://www.uro gene.org/methprimer/) and listed in the supplemental material. Sequenom MassARRAY platform (CapitalBio; Beijing, China) was used to analyze INPP4B methylation quantitatively (Gen-Bank Accession Number: NM_001101669). Mass spectra were obtained via MassARRAY Compact MALDI-TOF (Sequenom) and methylation ratios were generated using the Epityper software (Sequenom). Immunofluorescence microscopy To further substantiate the findings of immunoblotting, all cells were re-plated on fibronectin-coated coverslips and cultured for 24 h. They were then fixed with 4% paraformaldehyde and permeabilized with 0.5% Triton X-100. Coverslips were washed with PBS and then incubated with specific first antibodies respectively, followed by Alexa Fluor 594-conjugated anti-rabbit secondary antibody (Thermo Scientific, #A11012) labeling. The cells were then reacted with DAPI for nuclear staining. 
For live cell labeling and recycling experiment, RG108-treated PANC-1 cells were labeled with mouse antibody against Ecad (CST, #5296) on ice and subjected immediately to calcium chelation by addition of 0.02% EDTA. They were then restored back to complete medium at 37 ℃ and 5% CO 2 environment. Once anchored on coverslip, cells were fixed at different time points and probed for INPP4B with rabbit antibody (CST, #14,543), followed by Alexa Fluor 488 and 594-conjugated secondary antibodies (CST, #4412, #8890) labeling. All fluorescence images were acquired under an alpha Plan-Fluarobjective lens of Zeiss LSM 510 confocal microscope. Endosome purification The PANC-1 cells were scraped off the plate and homogenized in the sucrose solution (8.5% sucrose, 3 mM imidazole, with protease inhibitor cocktail). After removing of nuclei and cell debris, postnucelar supernatant (PNS) was subjected to sucrose gradient centrifugation. Target cell fractions were collected as previously described [9,34]. Fractions at the 25%-35% interphase contained RE and EE, and LE were recovered from uppermost portion of 25% phase. Heavy membranes (HM), recovered from lowest interphase, were consisted of Golgi apparatus, endoplasmic reticulum and plasma membrane. Proteins from the organelle fractions above were precipitated with methanol/chloroform for western blot analysis. Transwell invasion assay and wound healing assay Transwell inserts (Corning) with 8 μm pore polycarbonate membrane were coated with Matrigel (BD Biosciences). Cells suspended in medium containing 2% FBS were added to the upper chamber of Transwell system, and migration was allowed to proceed towards the lower section filled with complete medium for 48 h at 37 ℃. Invasion was determined by counting those cells that traversed the cell-permeable membrane. For wound healing assay, cells were grown in a 6-well plate containing complete medium until a monolayer was formed. After scraping with a 10 μl tip, cells were incubated in low serum medium (2% FBS) for 48 h. The 0 h images were taken right after wound creation by an inverted epi-fluorescence microscope. Wound healing percentage of transduced cells was measured by Image J software and normalized to controls. Tissue collection and immunohistochemistry analysis The study included 39 PDAC suffers who were diagnosed and received surgical treatment in the 2nd Department of Hepatobiliary Surgery, Chinese PLA General Hospital, whose archival paraffin-embedded pathological materials were available for immunohistochemical analysis. They were re-staged postoperatively according to the American Joint Committee on Cancer (AJCC) 2010 staging system. Clinical parameters included lymph node metastasis status, tumor differentiation grade and TNM stage. No patients underwent chemotherapy or radiation therapy prior to surgery. 5 μm sections were immunostained essentially and examined by senior pathologist who was blind to clinical data of the patients. A staining index was determined by adding together the scores for staining intensity (0: no color; 1: light yellow; 2: light brown; 3: brown) and percentage of positively-stained pancreatic tumor cells (0: < 5%; 1: 5-25%; 2: 26-50%; 3: 51-75%; 4: > 75%) apart from the fiber composition. For co-expression and prognosis analysis, correlations between scores of INPP4B and Ecad, as well as pathological parameters, were analyzed. Statistical analysis Each data point was displayed as the mean ± standard deviation of at least three independent biological experiments. 
Statistical tests used included the Student's t-test, Mann-Whitney U test and Spearman rank correlation analysis. A P-value < 0.05 was considered to have statistical significance.
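To make two of the quantification steps above concrete, the sketch below shows (i) the comparative cycle-threshold (2^-ΔΔCt) calculation described in the RT-qPCR subsection, with GAPDH as the housekeeping gene, and (ii) the immunohistochemical staining index obtained by adding the intensity score (0-3) to the extent score (0-4). The Ct values, scores and helper function names are invented for illustration only and are not data from the study.

```python
# Worked illustration of two scoring schemes described in the Methods.
# All numbers below are hypothetical examples, not data from the study.

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the comparative cycle-threshold (2^-ddCt) method.

    Each Ct of the target gene is first normalized to the housekeeping gene
    (GAPDH in the paper), then the treated sample is compared to the control.
    """
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Example: a target transcript in treated vs. control cells (made-up Ct values).
fc = fold_change_ddct(ct_target_treated=26.4, ct_ref_treated=18.0,
                      ct_target_control=28.1, ct_ref_control=18.1)
print(f"relative expression (treated/control): {fc:.2f}-fold")

def staining_index(intensity, extent):
    """IHC staining index = intensity score (0-3) + extent score (0-4)."""
    assert 0 <= intensity <= 3 and 0 <= extent <= 4
    return intensity + extent

# Example: light-brown staining (2) in 26-50% of tumor cells (2) -> index 4.
print("staining index:", staining_index(2, 2))
```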
v3-fos-license
2022-10-13T15:43:25.285Z
2022-10-08T00:00:00.000
252855096
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1424-2818/14/10/849/pdf?version=1665226605", "pdf_hash": "97d4d2a64ae1bf2b8c3d7536fe778ce9ed406559", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41143", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "9dfb304488d2abd433ebf42e725fd8f41e817603", "year": 2022 }
pes2o/s2orc
Complementary Sampling Methods to Improve the Monitoring of Coastal Lagoons : Monitoring the ecological status of marine coastal lagoons requires the integration of multiple indices. However, the efficacy of monitoring programs is complicated by the diverse array of habitats that conform coastal lagoons. In this study, we compared four sampling methods (25-m and 50-m beach seines, beam trawl and Riley push net) in the Ria Formosa coastal lagoon (South Portugal) for assessing fish assemblage and diversity. We compared species richness and assemblage structure with species accumulation curves and multivariate analysis, and assessed diversity patterns using taxonomic, phylogenetic and functional diversity indices. Variation in fish assemblage structure was mostly explained by gear type, and almost all diversity metrics varied not only according to sampling method but also depending on habitat characteristics and season. Based on operational costs and diversity patterns captured by each gear, we conclude that the combined use of 25-m beach seine and beam trawl is the preferred approach. The proposed methodology will provide the data necessary for assessment of ecological status of coastal lagoons. Introduction Estuaries and sheltered lagoons contain some of the most productive coastal marine habitats such as seagrass beds and salt marshes [1,2]. These ecosystems are essential feeding and nursery grounds for juvenile fishes and invertebrates, including many species with commercial and recreational value [3][4][5][6][7]. However, their long-term monitoring is particularly challenging due to the heterogeneous spatial distribution of species across different habitat types, coupled with seasonal shifts in assemblage structure and diversity [8,9]. Yet, their effective monitoring is fundamental, as an increasing number of anthropogenic threats (e.g., pollution, habitat loss, sea level rise, and overfishing) are degrading the state of these valuable ecosystems [10,11]. Further, in the context of the Water Framework Directive (WFD), member states must establish monitoring programs to provide information on long-term changes for each water body type (lakes, rivers, transitional waters, coastal waters) [12]. Fishes are an important biological component of coastal lagoons, performing key ecological roles that underpin ecosystem productivity and resilience (e.g., top-down control of prey populations, sediment reworking, and nutrient recycling) [13][14][15]. Due to their importance, changes in fish assemblage metrics (e.g., diversity, composition and abundance) are part of the quality elements for the assessment of the ecological status of different water bodies under the WFD. Until recently, the assessment of fish assemblages in coastal lagoons has mainly focused on traditional taxonomic diversity indices (TD, e.g., species richness, Shannon-Wiener diversity, and Pielou's evenness) [16]. However, these metrics treat all the species equally without considering their potential contribution to a range of ecosystem functions. In contrast, functional (FD) [17] and phylogenetic (PD) [18] diversity metrics incorporate information on species ecological similarities based on traits (i.e., morphological, ecological, physiological, and behavioural characteristics) and evolutionary information [19,20]. There is growing empirical evidence, in both terrestrial and marine environments, of the importance of these often-overlooked biodiversity components for ecosystem functioning and resilience [21][22][23]. 
Given the heterogeneity of habitats and seasonal variability in coastal lagoons, sampling their fish assemblages with the appropriate gear or combination of gears is essential for obtaining reliable data for the application of fish-based ecological indicators [24][25][26]. There are a variety of fishing gears for sampling coastal lagoons, so understanding their biases and whether they can replace or complement each other is crucial [27,28]. Beam trawls are fishing gears commonly used not only for commercial fishing activities but also for regular sampling and monitoring of fish communities in estuaries [3,24,29]. This active gear is suitable for sampling large numbers of fish, especially demersal species [30]. Other methods such as beach seines and push nets require different physical characteristics of the study site, since both need access from the shore to shallow waters. Trawls and seines frequently have variable catch efficiency, due to either differences in gear design or gear avoidance by certain fish groups [31]. This sampling bias, together with the fact that estuaries contain highly variable hydrographic and spatial-temporal characteristics, justify the need of multi-method surveys to capture an accurate representation of fish assemblage structure and diversity. Yet, few studies have examined to what extent the inherent bias associated with different sampling methods might undermine the ability of monitoring programs to detect shifts in important but often overlooked functional and phylogenetic diversity components [32]. In the present study, we compared four commonly used sampling methods for coastal lagoon systems-beach seines of 25-m and 50-m, beam trawl and push net-to assess their complementarity in fish assemblage metrics (composition and structure, taxonomic diversity, functional diversity and phylogenetic diversity). Specifically, we hypothesize that these four gears capture different representations of species assemblage and diversity patterns. Other studies have compared different fishing gears to determine their efficiency in sampling species composition and abundance [27][28][29]33], but this study is the first to assess differences among this particular combination of gears using less conventional phylogenetic and functional diversity indices. The results of this work provide useful information for management agencies aiming at identifying which method or combination of methods is more suited to track changes in the ecological status of these valuable but highly threatened ecosystems. Study Site and Sampling Stations The Ria Formosa is a tidal lagoon extending about 55 km along the south coast of Portugal, consisting of salt marshes, subtidal channels and tidal flats covering a surface area of approximately 170 km 2 , up to 6 m in depth [4,9]. With tidal elevations of 1.30 and 2.80 m at mean neap tide and spring tide, respectively, the minimum and maximum areas covered by water during spring tides are 14.1 and 63.1 km 2 [34]. Since it is located between the sea and the land, this lagoon has distinctive biological, ecological and hydrodynamic features, with a variety of different habitats that can be distinguished based on substrate type (sand, gravel or fine mud), depth, vegetation and distance to the sea [4]. The lagoon has extensive patches of seagrass (Cymodocea nodosa, Zostera marina and Zostera noltii) providing shelter from predators and adverse weather conditions, as well as food sources [9]. 
Approximately 90% of the area is a natural park corresponding to category V in the International Union for Conservation of Nature (IUCN) classification of protected areas [35,36] and has high ichthyological diversity compared with other equivalent ecosystems [37,38]. This area was integrated as a Natura 2000 site under the Birds and Habitats European Directives [39] and it was also declared a RAMSAR site for conservation of wetlands in 1981 [40]. Nevertheless, several economic activities take place (e.g., fishing, harvesting of bait, aquaculture, tourism, shipping, airport activity), putting pressure on this vulnerable ecosystem [41]. In order to sample the fish fauna in the different habitats present in the Ria Formosa, a variety of sampling methods were tested, and the choice of sampling stations was based on a stratified sampling strategy. Sampling stations were chosen by first dividing the lagoon into three areas with very distinct hydrodynamic characteristics: (1) areas of strong coastal influence, near the inlets and with a strong hydrodynamic regime influenced mostly by coastal waters (closer to barrier islands); (2) interior areas with salt marshes, which correspond to shallow vegetated areas greatly affected by the terrestrial environment and usually dry at low tide; and (3) main and secondary channels, that represent the deepest parts of the Ria Formosa. For these three main areas, 59 sampling stations were chosen based on the types of habitats present and the sampling gear to be used ( Figure 1). Habitats were classified according to the presence or absence of vegetation (VEG/UNVEG). Seasons were defined as autumn (AU, from October to December), winter (W, January to March), spring (SP, April to June) and summer (S, July to August). Monthly samples were collected from September 2000 to April 2002, with 37 stations chosen for the 25-m beach seine, 4 stations for the 50-m beach seine, 12 stations used for the beam trawl, and 6 stations for the Riley push net ( Table 1). The selection of sampling stations was based on visual surveys of the area and existing aerial photographs and maps [4]. Due to logistic problems, 92% of the target sampling was achieved for the beam trawl, Riley push net and 50-m beach seine, and 84% for the 25-m beach seine. For the latter sampling gear, the main factors affecting sampling were the short period for sampling (could only be done at low tide), the large number of locations, the distances between them, and occasionally, problems such as the net getting stuck, requiring repeating the sample. The two beach seine nets were deployed at low tides in the margins of the main channels only when the amplitude of the tide was less than 2 m, and during a period of 2 h before and 2 h after the low tide to reduce variability in tidal amplitude between sampling events. One of the beach seine nets was 25-m long, 3.5 high in the middle and was made of 9-mm mesh netting. The net was towed by a boat from one end and by two people on shore from the other end, resulting in an average sampled area of 1087 m 2 (based on GPS measurements). The 50-m beach seine was 3.5 m high in the middle and was made of 13 mm mesh [42]. For the setting of the net, one end was held on shore while the net was set in a circle; no towing took place. Based on GPS measurements, the average area sampled was 295 m 2 , but since 3 replicates were taken at each station and the samples pooled across replicates, the average total area sampled was 885 m 2 for each station. 
Beam Trawl A beam trawl 2.6 m wide and 0.45 m high at the mouth was used at low tides in the main channels. The cod end was 10 m long and made of 9-mm mesh netting. Tows of 300 m were performed at 1 knot, resulting in a swept area per tow of 780 m 2 . Riley Push Net The Riley push net was 1.5 m wide and 0.5 m high at the mouth, with a cod end split into two "trouser" legs of 2-mm fine mesh netting. Three 30-m samples were taken at each sampling station within interior areas of the Ria by one person who stood between the two legs of the cod end, resulting in a total swept area of 135 m 2 . Processing of Fish Collected The same method of processing samples was applied for all gears. Invertebrates and species such as seahorses (Hippocampus hippocampus and H. guttulatus) were caught and released alive, while the other fish species were placed immediately after capture in boxes with an ice flurry to minimize suffering and transported to the laboratory for processing. At the laboratory, the fish were sorted, identified to the lowest taxonomic level possible and counted. Total length was measured to the nearest mm. Detailed information about the sampling gears and habitats sampled together with photographs are available in Erzini et al., (2002) [4] (report available upon request). Assemblage Composition and Structure A Euler diagram was constructed in order to compare the overlap in species composition caught by beach seine (25-m and 50-m included in the same group), beam trawl and Riley push net. To estimate the relationship between the number of species observed and the sampling effort as a measure of sampling efficiency of the different gears used, species accumulation curves were computed via sample-based rarefaction [43]. The method calculates the expected species richness for each sample under random order from 1000 permutations of the data, and sampled area was used to standardize effort between sampling methods. For very high sampling efforts, the curves would eventually reach an asymptote matching the assemblage richness available to the method chosen and the more concave-downward the curve, the better sampled the community [44]. A multivariate regression tree (MRT) was constructed to explore the relative influence of sampling gear, season and habitat in explaining fish assemblage structure. MRT is a statistical technique that combines multivariate regression and constrained clustering, since it forms clusters based on a measure of species dissimilarity that are defined by a set of predictor variables [45]. In this case, fish abundance data were partitioned successively in two subsets by selecting one of the factors (gear, season or habitat) that maximizes the homogeneity of the resulting clusters [46]. Each cluster defines a species assemblage and is determined by the associated explanatory factors (gear, season or habitat); this procedure is graphically represented by a tree with nodes where the groups are split and terminal nodes define the final clusters. Optimal tree size was chosen by minimization of cross-validated relative error (CVRE) and smallest tree size. MRT analysis was chosen due to its ability to deal with categorical variables and highorder interactions among explanatory variables [45]. Discriminant species were identified within the tree as species that contribute most for the explained variance at each node. 
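Before moving on to the indicator-value step, the sample-based rarefaction used for the species accumulation curves can be illustrated with a short sketch. The code below is only a minimal illustration of the procedure described above (random re-orderings of the samples, here with a toy samples × species abundance matrix and 1000 permutations); the study itself used the 'vegan' R package with effort standardized by sampled area, and the matrix and function name here are assumptions for the example.

```python
# Minimal sketch of sample-based rarefaction (species accumulation curve).
# The abundance matrix below is a toy example, not the survey data.
import numpy as np

rng = np.random.default_rng(42)

# Toy data: rows = hauls (samples), columns = species, values = counts.
abundance = rng.poisson(lam=0.8, size=(30, 25))

def species_accumulation(matrix, n_permutations=1000, rng=rng):
    """Expected species richness after 1..n samples, averaged over random
    orderings of the samples (sample-based rarefaction)."""
    n_samples = matrix.shape[0]
    curves = np.zeros((n_permutations, n_samples))
    for k in range(n_permutations):
        order = rng.permutation(n_samples)
        seen = np.cumsum(matrix[order] > 0, axis=0) > 0  # species seen so far
        curves[k] = seen.sum(axis=1)                     # richness per step
    return curves.mean(axis=0)

curve = species_accumulation(abundance)
print("expected richness after 1, 5, 30 samples:",
      curve[0], curve[4], curve[-1])
```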
Indicator values (IndVal) were calculated for each species in each terminal group by multiplying a measure of specificity (A_ij, the mean abundance of species i in the sites of group j compared to all groups in the study) and a measure of fidelity (B_ij, the relative frequency of occurrence of species i in the sites of group j) [47]. This index ranges from 0 (no occurrences of the species within a group) to 1 (the species occurs at all sites within the group and does not occur at any other site). Species with high index values (≥ 0.2) for a cluster were considered indicator species [48]. Species abundance was converted to densities (numbers per sampled area) and standardized by dividing the species density at each sampling station by the total density for all species at that same station.

Diversity Indices

For the traditional taxonomic diversity analysis, the following metrics were computed for each sample of each gear: Shannon-Wiener diversity index (H), Pielou's measure of evenness (J) and species richness (S). Species richness represents counts of the number of species (S). The Shannon-Wiener index is defined as H = -Σ_i p_i ln(p_i), where p_i is the proportion of the total density contributed by species i. Abundance data were converted to densities (number per sampled area) before calculation of the indices, and a square root transformation was used to balance the contribution of dominant and rare species.

As a metric of phylogenetic diversity, the index of taxonomic distinctness (∆*) was chosen; it is defined as the average taxonomic path length between two individuals chosen at random from the sample, traced through a standard Linnean classification tree, conditional on them being from different species [49]. It is calculated by dividing taxonomic diversity (∆) by the Simpson index and is not affected by the evenness properties of the species abundance matrix. This phylogenetic diversity index takes the form ∆* = [ΣΣ_{i<j} ω_ij x_i x_j] / [ΣΣ_{i<j} x_i x_j], where ω_ij represents the taxonomic distance between taxa i and j, x_i and x_j are species abundances, and the double summations run over all pairs of species i and j [50]. Taxonomic distances were calculated among taxa at variable step lengths based on an aggregation matrix by species, genus, family and order. Species abundance matrices were transformed to square root density data and ∆* was calculated for each sample.

Functional diversity (FD) was measured by the Rao index of functional diversity, which represents the probability that two species chosen at random from the sample have different trait values or trait categories [51]. The Rao index combines matrices of species abundances with matrices of species dissimilarities based on trait differences among species [52] and is defined as Q = Σ_i Σ_j d_ij p_i p_j, where the sums run over the S species in the sample, p_i and p_j are the proportions of the ith and jth species and d_ij is the dissimilarity between species i and j. Following Bosch et al. (2017, 2021) [32,53], maximum body length, trophic breadth and trophic level were included as continuous traits (scaled between 0 and 1), while trophic group, water column position, preferred substrate and body shape were considered as categorical traits, using fuzzy coding for traits with more than 2 categories (Table 2). Information on traits was collected from the published literature and Fishbase [54], and when species-specific attributes were not available, values from species within the same genus and geographic range were used. The Rao index was calculated using the Macro excel file "FunctDiv.exl" [51] on density data (numbers per sampled area), first for each trait and then averaged for each sample across all traits together.
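To make the index definitions above concrete, the short sketch below computes the Shannon-Wiener index, Pielou's evenness (J = H / ln S) and Rao's quadratic entropy from a vector of species densities and a species-by-species trait-dissimilarity matrix. This is a minimal illustration written for this text (the study used R packages and the "FunctDiv" Excel macro); the function names and the toy data are ours.

```python
import numpy as np

def shannon_pielou(densities):
    """Shannon-Wiener H = -sum(p_i ln p_i) and Pielou evenness J = H / ln(S)."""
    x = np.asarray(densities, dtype=float)
    x = x[x > 0]                          # absent species do not contribute
    p = x / x.sum()
    h = -np.sum(p * np.log(p))
    s = len(x)                            # species richness of the sample
    j = h / np.log(s) if s > 1 else np.nan
    return h, j, s

def rao_q(densities, dissimilarity):
    """Rao's quadratic entropy Q = sum_i sum_j d_ij p_i p_j."""
    p = np.asarray(densities, dtype=float)
    p = p / p.sum()
    d = np.asarray(dissimilarity, dtype=float)   # S x S trait dissimilarities scaled to [0, 1]
    return float(p @ d @ p)

# Toy example: four species with equal pairwise trait distances
dens = [12.0, 3.0, 0.0, 5.0]
d = 1.0 - np.eye(4)
print(shannon_pielou(dens))
print(rao_q(dens, d))
```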
We tested for differences in diversity indices between sampling gears, season, habitat characteristics (vegetated and unvegetated), and main area using generalized linear models (GLM), applied for each diversity index independently. Second order interactions were included between gear-vegetation, and gear-season. The area consisted of 3 factors: inner, channels and outer areas (I, C and O in Figure 1), and there was no interaction included between gear-area since not all gears were used for all the three areas. The following error distributions were fitted: Poisson distribution for species richness (S) with log link function; normal distribution to species diversity (H), species evenness (J) and functional diversity (FD) (identity link function); and gamma distribution to phylogenetical diversity (PD) (identity link function). All analyses were carried out using R statistical software version 3.6.0 [56]. The packages 'vegan' [57] and 'biodiversityR' [58] were used for computing species accumulation curves and calculation of diversity indices (except for the Rao index of functional diversity). The multivariate regression tree (MRT) and species indicator values were calculated with the packages 'mvpart' and 'MVPARTwrap' [59]). GLMs were conducted with the 'stats' package from base R for model evaluation and residuals analysis. Assemblage Structure During this study, a total of 255,345 fish belonging to 106 species within 33 families (103 teleosts and 3 chondrichthyes) were captured. Atherina spp., Sardina pilchardus, Gobius niger and Pomatoschistus microps represented 70% (range of 42 to 94%) of the total catches across gears (Supplementary Materials, Table S1). In terms of species composition, there was considerable overlap between the different gears (40 species; Figure 2). Beach seines and beam trawl shared the highest number of species (23 species), while beam trawlpush net and beach seines-push net shared only 2 and 3 species, respectively. Beach seines accounted for 29 species that were not caught by the other gears, mainly species of the Sparidae, Labridae and Triglidae families. The beam trawl caught eight species that none of the other gears captured, with Soleidae being the most representative family (>85% abundance). Two species were only caught by Riley push net, Dentex macrophthalmus and Parablennius sanguinolentus, with forty-five species caught in common with the other gears. Overall, only the 25-m BS curve was close to reaching the asymptote, having by far the highest cumulative sampled area ( Figure 3). Nevertheless, for comparable sampling efforts (e.g., less than 0.05 km 2 ), both the 50-m beach seine and beam trawl caught more species than the 25-m beach seine. The species accumulation curve for the Push net is difficult to distinguish from the others due to low sampling effort. The best multivariate regression tree explaining variation in fish assemblage structure consisted of four nodes and five leaves (terminal nodes; Figure 4). This model explained 27.22% of the variation in species abundances. The first split in the tree explained 15.87% of the variability and separated the fish assemblage according to two combinations of fishing gears-beam trawl-push net, and 25-m and 50-m beach seine, for which the main species contributing to explaining this split belonged to the genus Atherina spp. The second node separated the assemblage between the other pair of sampling gears (push net and beam trawl). 
For the 25-m BS-50-m BS cluster (right side of node 1), the species assemblage was divided according to season, with Summer (S) containing a distinct assemblage when compared with the rest of the seasons. On the left side of the main node, the last split between Habitat types explained 4.57% of variation, with habitats containing vegetation (VEG) clearly distinct from non-vegetated (UNVEG). Overall, five distinct assemblages were identified, each one with a different set of indicator species. The first and second clusters were defined by samples obtained only with the Riley push net; the first group (I) is dominated by the common sand goby (Pomatoschistus microps) in sandy/muddy grounds; and the second group (II) contains three species of pipefish (Syngnathus typhle, S. abaster and Nerophis ophidion), Baillon's wrasse (Symphodus bailloni), rock goby (Gobius paganellus) and the two-spotted clingfish (Diplecogaster bimaculata) in seagrass habitats. The third assemblage (III) is defined by species captured with beam trawl and the analysis did not separate at the habitat level, with toadfish (Halobatrachus didactylus), two goby species (black goby, Gobius niger, and sand goby, Pomatoschistus minutus), two seahorse species (Hippocampus guttulatus and H. hippocampus), the grey wrasse (Symphodus cinereus), the small red scorpionfish (Scorpaena notata), and the flatfish Arnoglossus thori representing the indicator species for this assemblage. The last two groups (IV and V) were classified by samples collected both with the 25-m and 50-m beach seines, and the only split was defined by season. The summer assemblage was dominated by five sea bream species (black seabream, Spondyliosoma cantharus, Senegal seabream, Diplodus bellottii, White seabream, D. sargus, two-banded seabream, D. vulgaris, and gilt-head seabream, Sparus aurata), European pilchard (Sardina pilchardus), striped red mullet (Mullus surmuletus) and the European bass (Dicentrarchus labrax). In the other seasons, only the resident silverside species was identified as indicator species (Atherina spp.). Diversity Indices There were clear differences in all diversity measures between sampling gears, presence of vegetation and season ( Figure 5, Table S2 in Supplementary Materials). In terms of species richness, the highest values were for the 50-m beach seine (50-m BS) and beam trawl (BT) (panel A). Habitats with vegetation sampled with any of the four gears showed higher species richness than unvegetated locations. There were higher values of species richness (S) for 25-m BS, 50-m BS and beam trawl during summer months, and lower values for all gears in winter (panel B in Figure 5; Table S2). For species diversity represented by the Shannon-Wiener index (H), the highest values were registered for 50m BS and beam trawl (panels B and C). There were also higher values of H for vegetated locations compared to unvegetated locations, and higher values of H for stations sampled during summer months (particularly 50-m BS), and lower values of H during the winter period. There is a distinct effect of sampling vegetated habitats with push net, with high values of H when compared with habitats without vegetation sampled with 25-m BS (pvalue < 0.0001; Table S2). With regard to species evenness (J), there were slightly higher values for BT, and very low values for samples collected with push net in unvegetated locations (panels E and F). 
There was not an obvious effect of season, only slightly lower values for samples collected in spring with 25-m BS, and samples collected by push net had high variability. In terms of phylogenetic diversity (PD), 25-m BS had highest values (panels G and H). The GLM detected a significant interaction between push net and vegetation, meaning that vegetated habitats sampled with push net had higher phylogenetic diversity than unvegetated stations with 25-m BS. For functional diversity (FD), there were no major differences between sampling gears (panels I and J). There were higher values of FD in vegetated habitats sampled with any of the four gears, and there was a strong effect of sampling with push net in vegetated habitats compared with unvegetated. Summer and spring months show slightly higher values of FD but this is not very clear for samples collected with 25-m BS and 50-m BS. Discussion For the development and implementation of monitoring programs in estuaries and coastal lagoons, it is necessary to select the most adequate sampling methods for detecting spatiotemporal changes in species composition, abundance, and diversity, while also minimizing the costs and damage to the local assemblages of these sensitive ecosystems [28,32]. Here, we showed that the use of complementary sampling gears suitable for particular habitat types within a coastal lagoon capture a wide range of ichthyofaunal diversity that has been linked to the ecological status and the functioning of these highly productive ecosystems [60]. A strong seasonal influence was found in previous studies in the Ria Formosa, where the fish assemblage richness and abundance increased with the recruitment of marine juvenile migrant individuals during spring and summer into the lagoon [42,61,62]. This might explain the identification of two assemblages distinguished by season in the MRT and a significant effect of season in the GLMs in terms of species richness and diversity. Furthermore, different habitats had distinct characteristics not only in terms of type of substrate, but also depth at low tide, hydrodynamics and distance to the openings of the sea, environmental factors that play a strong role in structuring ichthyofauna diversity and abundance in the Ria Formosa [4]. For example, Ribeiro et al., (2012) [63] found that sampling with Riley push net in vegetated habitats resulted in higher species richness and diversity than unvegetated habitats, where higher densities of fewer species were encountered. This gives support to the results, where a distinct assemblage was identified (cluster II in MRT and significant interactions in the GLMs) characterized by species living in vegetated habitats that were sampled with the push net. Similar outcomes of seagrass and saltmarsh habitats containing maximum diversity and richness were found in other studies [9,37]. Not all diversity metrics (species richness, species evenness, Shannon entropy, taxonomic distinctness, and Rao's quadratic entropy for functional diversity) varied significantly between sampling gears (Table S2). The results show no significant differences in terms of functional diversity between 25-m beach seine and beam trawl, despite the latter having higher values of species richness and diversity (p-value > 0.05). This could be because of a large range of FD values across samples, especially for the 25-m beach seine. 
This variation can also be explained by the nature of each gear itself; some gears are more efficient in sampling particular habitats or species with particular functional traits than others. The beach seines are more efficient in sampling small pelagics since they fish the entire water column, particularly the 50-m beach seine (highest species accumulation rate, Figure 3), while the beam trawl catches mostly benthic and epibenthic species since it operates close to the seafloor. The Riley push net captures a large portion of small individuals, being most efficient for sampling juveniles of commercial species [63]. In fact, the beam trawl was used to sample the main deeper channels in the lagoon and captured mainly benthic species, while the beach seines were deployed in the secondary channels and caught more benthopelagic species, belonging to Sparidae and Labridae families. In addition, the deployment methods of the two beach seines differed, with the 50-m used to encircle while the 25-m net was first towed along the shore before being hauled to land. However, the analysis did not show significant differences between these two gears. Each index gave a distinct picture of the fish assemblage, with only a few similar comparisons between gears across indices. This shows the importance of considering alternative measures of diversity [32]. Other studies have found that functional diversity can be positively related with species richness and diversity [64], but this relationship is not always positive and linear [52]. Phylogenetic diversity and species richness also display different patterns of diversity and do not seem to be correlated [65,66]. Functional traits may change with species ontogenetic shifts, especially in areas that contain different life stages [67]. However, information on trait variability for the life stages of all species included in the analysis was not available but should be accounted for in future studies. Indices based on biomass can be useful complements to those based on abundance, especially for revealing different insights into temporal trends of functional diversity [68], but this is outside the scope of this study. Following the Water Quality European Standard from 2006, the recommended methodology for sampling species composition and abundance in generic transitional waters such as the Ria Formosa is with the beam trawl [69]. However, as found in our comparative study, other sampling gears such as beach seines and push net provide different representations of the fish assemblage, particularly when different dimensions of diversity (FD and PD) are incorporated. Even though the beam trawl samples contained higher species richness and diversity, they had lower phylogenetic diversity. For sampling with a push net, the fish assemblage changed drastically between habitats with and without vegetation and could only be operated at specific locations. As such, a combination of 25-m beach seine and beam trawl might be an effective sampling strategy to cover multiple aspects of diversity. As in Rotherham et al., (2012) [33], each gear used (beam trawl and multi-mesh gillnets) gave a unique picture of assemblages of fauna, so the most complete representation of the entire fish community required sampling with both methods. Several multi-metric fish indices use distinct sampling gears [25]. 
In Ireland for example, a standard multi-method approach (beach seine, fyke nets and beam trawl) is used in transitional waters for the implementation of the WFD and the development of an estuarine multi-metric fish index [70]. When designing a monitoring plan, the relative costs of deploying each sampling gear need to be taken into account (e.g., human resources, environmental impacts, time and financial expenditures). For this study, all the sampling gears required transport by boat to the sampling locations. The Riley push net required only one person to operate, while the beam trawl needed the skipper to navigate the boat and deploy the gear, and at least another person to help record data, label and store the catches. The beach seines demanded more human power (skipper and three people) to haul the nets and process the catches, particularly the 50-m beach seine. Additionally, the beam trawl operates with heavy ground gear that drags along the lagoon floor and disturbs bottom habitats. On the other hand, the beach seines need to be hauled by a group of people that were occasionally stepping on the seagrass patches which can damage these sensitive habitats. Although operating the beach seines was more labour intensive, these gears allowed sampling of a greater variety of habitats. In contrast, the beam trawl and push net were limited to the deeper channels and the shallow creeks, respectively. Conclusions Given the heterogeneity of habitats, variability among sampling gears, and seasonal effects, the use of a multi-gear approach would provide a robust assessment of the fish assemblage structure in coastal lagoons as the Ria Formosa. Combining the 25-m beach seine and beam trawl might be the most advantageous strategy given the limitations of sampling with a push net and the operational costs of the 50-m beach seine. This work thus contributes with new knowledge that adds to current guidance on selection of fish sampling methods in coastal lagoons, an essential parameter for the assessment of ecological status and biodiversity conservation of these ecosystems. This information is of paramount importance for implementing policies and management plans at local, regional, and national level to meet the objectives set out in the UN Sustainable Development Goals under the 2030 agenda. Supplementary Materials: The following supporting information can be download at https://www.mdpi.com/article/10.3390/d14100849/s1, Table S1: All species identified and counted for each gear; Table S2 Funding: This study received funds from the Commission of the European Communities (DG XIV C1/99/061) and Portuguese national funds from FCT -Foundation for Science and Technology through projects UIDB/04326/2020, UIDP/04326/2020 and LA/P/0101/2020. Institutional Review Board Statement: At the time when the fieldwork was conducted (2000)(2001)(2002), there was no code of practice to handle live animals, but the handling practices included the 3R approach ("Replacement, Reduction and Refinement") through subsampling highly abundant fish species, releasing of animals in good condition and practices for minimizing suffering and pain of animals. Permits authorizing the sampling of fish in the Ria Formosa lagoon were obtained from the Institute for the Conservation of Nature and Forests (ICNF), the Ria Formosa Natural Park (PNRF) and Port of Faro Maritime Authority (CPF/CPO/IPTM). Data Availability Statement: Data requests should be addressed to the corresponding author.
v3-fos-license
2018-04-03T05:25:45.405Z
1991-05-15T00:00:00.000
24403232
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1016/s0021-9258(18)31512-6", "pdf_hash": "753dbfc964774960a37fc2636b5d56605dbae198", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41145", "s2fieldsofstudy": [ "Biology" ], "sha1": "1bb0ff06188ca8b6f2acb26d62086504b8c0bc00", "year": 1991 }
pes2o/s2orc
Induced Factor Binding to the Interferon-stimulated Response Element INTERFERON-a AND PLATELET-DERIVED GROWTH FACTOR UTILIZE DISTINCT SIGNALING PATHWAYS* Interferon-a (IFNa) and platelet-derived growth factor (PDGF) each rapidly stimulate binding of nuclear factors from Balb/c 3T3 fibroblasts, to a 29-base pair regulatory sequence derived from the 5‘ upstream re- gion of the murine 2-5A synthetase gene. This regulatory sequence contains a functional interferon-stimu- lated response element (ISRE) and also functions as a PDGF-responsive sequence. We show that IFNa in- duces binding of a protein of molecular mass 65 kDa to the ISRE. Constitutively expressed ISRE-binding proteins of 98 and 150 kDa are also demonstrated. Excised complexes were loaded directly onto a 10% sodium dodecyl sulfate-PAGE system (26). The gel was then dried under vacuum and subjected to autoradiography. Plasmid Construction and Transient Transfection Assays-The ISRE oligonucleotide was ligated into the BarnHI site of the 5' poly linker of pBLcat2 (27), located 5' of the bacterial chloramphenicol acetyltransferase gene, driven by the herpes virus thymidine kinase promoter. The forward orientation was confirmed by diagnostic re- striction digests and sequencing. Transient transfections and chloramphenicol acetyltransferase activity assays were carried out as de- scribed (7, 13). receptor interactions are unknown. 2-5A synthetase is a double-stranded RNA (dsRNA)-dependent enzyme whose expression is transcriptionally induced by IFNa (3). 2-5A synthetase catalyzes polymerization of adenylate residues into a series of 2',5'-linked oligomers. 2-5A oligomers transiently activate a latent cellular endoribonuclease which acts as a translational regulator of gene expression (reviewed in Ref. 2). 2-5A synthetase activity is also induced by epidermal growth factor (4) or dsRNA, and platelet-derived growth factor (PDGF) (5) in fibroblast cells and in rat PC12 cells by nerve growth factor (6). The induction of 2-5A synthetase mRNA by PDGF occurs in the absence of new protein synthesis, suggesting that it is a direct response to this growth factor (5, 7). 2-5A synthetase activity in the liver decreases dramatically shortly after partial hepatectomy in rats (8) and has been shown to accumulate to high levels late in S phase in synchronized mouse embryo fibroblasts (9). These results have led to speculation that 2-5A synthetase activity is involved in regulating fundamental aspects of cellular metabolism such as growth and differentiation. In direct support of this we have shown that plasmid-directed overexpression of 2-5A synthetase activity results in a marked reduction in both growth rate and colony size of a human glioblastoma cell line (10). Recently, a number of groups have demonstrated the binding of IFNa-modulated nuclear factors to 5' cis-regulatory sequences of Type I IFN-inducible genes. We and others have identified one such regulatory sequence, residing in the 5' upstream region of the 2-5A synthetase gene (11-13), which is highly conserved in a number of IFNa-regulated genes of both mouse and human origin. This sequence functions as an IFN-stimulated response element (ISRE) (14) and specifically binds IFN-modulated nuclear factors in a manner which correlates with transcriptional activation of these genes. Factor binding, contact point analyses, and transcriptional studies have led to the elucidation of a consensus ISRE sequence, (A/G)GAAA(A/G)(N)GAAACT (where N is any nucleotide, Refs. [12][13][14][15][16][17]. 
Furthermore, an IFNa-inducible DNase I-hypersensitive site lies approximately 50 base pairs upstream of the ISRE in the IFN-inducible ISG-15 gene, implying that ISRE binding factors induce a transcriptionally open chromatin conformation immediately upstream of ISRE-containing genes (18). We have also demonstrated that the 2-5A synthetase ISRE sequence is PDGF-responsive in Balb/c 3T3 cells (7). Since the 2-5A synthetase ISRE represents an early target of both mitogenic (PDGF) and growth-inhibitory (IFNa) signals, we have sought to characterize ISRE-specific binding proteins, as well as the intracellular signaling pathway(s) by which activation of ISRE binding occurs. Although protein kinase 8765 was random primer-laheled to a specific activity of >lo" cpm/pg. Hyhridization and washing was a t 42 "C for this probe. Oligonucleotides-A double-stranded oligonucleotide, representing nucleotides -8O/-52 (relative to a putative translational start site) of the murine 2-5A synthetase gene (11)? which contains a functional ISRE, was synthesized on an Applied Riosystems 308A DNA synthesizer. An oligonucleotide representing nucleotides -55/-99 of the human IFN/j gene was synthesized for use as a control for ISRE binding reactions. CTGAAAGGGAGAAGTGAAAGTGGGAAATTCCWTG. Underlined in the ISRE is the binding element recognizing IFN-induced factors. Underlined in the IFNP IRE sequence are nucleotides involved in recognition of a factor exhibiting characteristics of NF-KB, which is activated by dsRNA (21)(22)(23). Oligonucleotides were synthesized with RarnHI-compatible linkers at the 5' terminus (GATC). Gel-purified oligonucleotides were mixed with an equimolar amount of their respective complements, heated to 65 "C for 15 min, and annealed at room temperature for 18 h. These preparations were used directly in labeling reactions. Extract Preparation and Electrophoretic Mobility Shift Assays-Electrophoretic mobility shift assays (EMSA) were done as previously described, using native 4% polyacrylamide gels run in a Tris/glycine buffer (pH 8.3) (7, 24). Nuclear extracts were prepared according to Dignam et al. (25), or whole cell extracts were prepared as follows. Cell pellets (2-4 X 10' cells) were extracted with buffer A (25) for 5 min on ice. Cells were then pelleted for 10 s a t 10,000 X g, and the supernatant was discarded. Pellets were washed twice more in buffer A and extracted with 100 pl of buffer C (25) for 5 min on ice. Cell debris was pelleted and the supernatant subjected to 15,000 X g for 30 min. This high salt, whole-cell extract (WCE) was dialyzed in 100 volumes of buffer D (25) extraction steps were carried out a t 4 "C. Oligonucleotides were end-""PIATP. labeled for use in EMSA by T4 DNA kinase in the presence of [y-UV-induced Cross-linking Assays-For cross-linking experiments, the ISRE synthetic oligonucleotide was random primer-laheled in the presence of [(u-:"'P]~ATP, according to standard procedures. TTP was suhstituted by bromodeoxy-UTP in order to render the labeled binding sites UV-sensitive. After incubation with nuclear or whole cell extracts (40-80 pg) binding reactions were resolved on native 6% polyacrylamide gels. These gels were then exposed to a 302 nm UV light for 60 min (FotoDyne UV 300 transilluminator), under an ice pack. The wet gel was then exposed to Kodak XAR-5 film and visualized complexes excised. Excised complexes were loaded directly onto a 10% sodium dodecyl sulfate-PAGE system (26). The gel was then dried under vacuum and subjected to autoradiography. 
Plasmid Construction and Transient Transfection Assays-The ISRE oligonucleotide was ligated into the BarnHI site of the 5' poly linker of pBLcat2 (27), located 5' of the bacterial chloramphenicol acetyltransferase gene, driven by the herpes virus thymidine kinase promoter. The forward orientation was confirmed by diagnostic restriction digests and sequencing. Transient transfections and chloramphenicol acetyltransferase activity assays were carried out as described (7, 13). RESULTS Photochemical Cross-linking of IFNa-induced Proteins to the ISRE-We and others have identified both constitutive and IFNa-induced factors, which bind specifically to ISRE sequences found in the upstream of IFNa-inducible genes (12)(13)(14)(15)(16)(17). We sought to characterize the DNA-binding protein components of these factors in Balb/c 3T3 cells (clone A31) by covalent, UV-induced cross-linking of IFNa-induced extracts to a labeled ISRE oligonucleotide. The molecular masses of these ISRE-binding proteins were estimated by resolution of cross-linked complexes on denaturing polyacrylamide gels (Fig. 1). IFNa induced two ISRE-specific complexes ( A and B, of ISRE-binding proteins regulated by IFNa. High-salt extracts (60-70 pg) of quiescent A31 cells treated with rHuIFNnAD (1000 IU/ml) for 15 min were incubated with 10" cpm of hromodeoxyuridine-substituted, random hexamer-labeled ISRE oligonucleotide under standard conditions. a, reaction products from IFN-treated extracts were run on 6% native polyacrylamide gels and UV-irradiated (300 nm) for 1 h. Complexes marked A and H are IFN-induced, and C represents a constitutive binding complex (see Fig. 2). b, cross-linked complexes (A-C) were resolved in small scale binding reactions (10 pg/lane), and slices of native gel corresponding to each complex were then excised from the native gel. Slices corresponding to each complex were collected (5-61 lane) and analyzed on a 10% sodium dodecyl sulfate-PAGE system, for molecular mass estimation. Kd, kilodaltons. As expected, no crosslinked bands were obtained from regions of the UV-exposed native gel which did not correspond to A, B, or C (not shown). Autoradiography was for 3 days with an intensifying screen. We isolated each of these complexes from native band mobility shift gels, for cross-linking analysis. Fig. 16 illustrates the results of such an experiment. Both IFN-induced complexes, A and B, contained a DNA-binding species of approximate molecular mass of 65 kDa. Constitutive complex C, seen in A31 nuclear and whole-cell extracts regardless of IFN treatment, contained 98-and 150-kDa DNA-binding species (Fig. l b ) . A and B migrated more slowly than C in native gels but contained a smaller DNA binding component than C, suggesting the presence of additional, non-DNA-binding proteins in the activated A and B complexes. PK Inhibitors Block PDGF-induced, but Not IFNa-induced, Gene Expression-To determine the nature of signaling pathways mediating ISRE-dependent gene activation by IFNa and PDGF, we have used the protein kinase inhibitors, staurosporine and K252a (28,29). 2-5A synthetase mRNA is induced by PDGF or IFNa/P, with similar kinetics (7). This activation has been shown, in both cases, to be mediated by the ISRE-containing sequence represented by the 29-base pair oligonucleotide used in the above analysis (7). When confluent A31 cells were pretreated with 10 nM staurosporine there was marked inhibition of 2-5A synthetase mRNA induction by PDGF, and no detectable induction with 100 nM staurosporine (Fig. 2). 
In contrast there was no inhibition of mRNA induction by IFNa at 10 nM staurosporine, with a slight inhibition seen a t 100 nM. Similar results were obtained with a structurally related inhibitor, K252a (28) (Fig. 2b). Direct activation of Ca"/phospholipid-depend- ent protein kinase C by treatment of cells with TPA did not induce detectable 2-5A synthetase mRNA (Fig. 2a). Control experiments indicated that both staurosporine and K252a treatment, at the higher concentration used, resulted in a marked generalized inhibition of protein phosphorylation (as measured by [y-:"P]ATP incorporation in TPA-and Bt2cAMP-stimulated cells) in the 3T3 cultures (data not shown). In order to confirm that the effect on PDGF-induced 2-5A synthetase mRNA was the result of a transcriptional inhibition, we tested the effect of staurosporine on PDGF-induced, ISRE-dependent transcription in transient transfection assays. We have previously shown that a plasmid carrying the Escherichia coli chloramphenicol acetyltransferase gene under the control of a single ISRE (pMuISREcatF), is inducible by IFNa and PDGF in A31 cells (7). Pretreatment of A31 tranfectants with staurosporine blocked PDGF-induced chloramphenicol acetyltransferase activity (Table I). Thus inhibition of kinase activity by staurosporine inhibited the ISRE-dependent PDGF transcription response in these cells. The above results suggest that ISRE-specific trans-acting factors are activated by different biochemical pathways responsive to PDGF-or IFNa-receptor interactions. To test this directly we employed a sensitive EMSA, which we have previously used to identify early PDGF-and IFN-modulated ISRE binding factors (7,13). Confluent A31 cells were treated for 2 and 15 min with PDGF or IFNa, and crude nuclear extracts or WCE were analyzed for ISRE binding, using the 29-base pair ISRE oligonucleotide as probe. Within 2 min of treatment IFNa-induced complexes (i.e. A and B of Fig. 1) were clearly evident (Fig. 3a), and these complexes showed a parallel increase in abundance a t 15 min post-treatment. In extracts from cells treated for 2 min with crude PDGF, no enhanced complex formation was seen, but by 15 min, PDGFinduced ISRE binding was clearly evident (Fig. 3a). T o ensure that equal amounts of active extract were assayed in each reaction, we measured binding to a labeled oligonucleotide representing regulatory sequences in the promoter of the human IFNP gene, and termed IRE anterferon gene _Regulatory Element, Ref. 30). A constant amount of constitutive binding to the IRE was observed regardless of cell treatment (Fig. 3b). PDGF-induced ISRE complexes comigrated in the native PAGE system with the IFNa-induced complexes A and B (Fig. 3a). While this result does not provide proof of identity between PDGF-and IFN-induced factors, the differential kinetic displayed in the induction of binding by IFNa and PDGF does suggest these two agents signal activation of ISRE binding factor(s) through different receptor-coupled pathways. To examine more closely this apparent separation in signal transduction pathways, we employed the EMSA on extracts made from cells pretreated with the kinase inhibitors, staurosporine and K252a, prior to IFNa or PDGF treatment (Fig. 4a). We confirmed that authentic murine IFNa//3 induced the same complex pattern as seen with the human hybrid rIFNaAD, used in the experiments shown in Fig. 3 (Fig. 4a, lanes 1 and 2 ) . The induction of ISRE binding activity by IFNa was not blocked by either staurosporine or K252a (Fig. 
4a, lanes 1-4), although staurosporine did inhibit this response to some degree. PDGF-induced ISRE binding, in contrast, was almost completely inhibited by either K252a or staurosporine treatment (Fig. 4a, lanes 5-9). It is possible the partial inhibition of IFN-induced binding by staurosporine is reflected in the inhibition of PDGF-induced binding, a much weaker response. The differential sensitivity of PDGF-and IFN-induced binding to K252a (compare lanes 4 and 9, Fig. 4a) makes this unlikely. rPDGF-BB homodimer and purified human PDGF-AB heterodimer preparations induced ISRE binding identical to crude PDGF, which was predominantly of the AB heterodimer type (Fig. 4a, lanes 5-7). These results indicate that IFNa-and PDGF-receptor interactions trigger different signaling pathways in the activation of ISRE-binding factors. This result is in accordance with the effects of these kinase inhibitors on PDGF-induced, ISRE-dependent transcription (Table I), and induction of 2-5A synthetase gene expression by these two agents (Fig. 2a). Since staurosporine and K252a did not block IFN-induced ISRE binding, we tested for the effect of activators of kinasedependent signal transduction on induction of ISRE binding. Neither TPA (Ca"/phospholipid-dependent protein kinase C activator) or Bt2cAMP (CAMP-dependent protein kinase A activator) treatment of intact A31 cells activated ISRE bind- sensitive to protein kinase inhibitors. a, WCE of A31 cells were incubated (5 pg/reaction) with a bodylabeled ISRE oligonucleotide, with binding and gel conditions as in Fig. 3. Lanes represent cells treated (15 min) as follows: 0, untreated; 1 and 2, 1000 IU/ml rHuIFNaAD or murine IFNa/P, respectively; 3 and 4, as in 1, with 15-min pretreatment of 100 nM staurosporine or K252a, respectively. 5, 200 units/ml crude AB-PDGF; 6, 10 ng/ml rBB-PDGF; 7, 20 ng/ml highly purified AB-PDGF; 8 and 9, as in 5, with 15-min pretreatment a t 100 nM staurosporine or K252a, respectively; 10 and 11, as in 5, including 200 neutralizing units/ml 7F-D3 anti-MuIFNn/P or polyclonal anti-HuIFNP, respectively. Induced complexes (AIR, according to Fig. 1) are indicated. The major constitutive band is labeled C, as in Fig. 1. The minor, nonspecific band (starred) is not seen consistently in these experiments. The autoradiogram shown represents a 20-h film exposure. b, confluent cultures of A31 cells were untreated, or treated with 1000 IU/ml IFNa, 100 ng/ml TPA, or 1 mM Bt2cAMP for 15 min, then extracted as in a. WCE (5 pg) from each treatment was analyzed by EMSA using labeled ISRE as probe. TPA treatment of parallel cultures resulted in a marked stimulation of c-jos transcription, as determined by nuclear runoff analysis (not shown). Bt2cAMP activity is routinely assayed in our laboratory by induction of neurite outgrowth in a human neuroblastoma cell line. c, WCE of IFNa-induced A31 cultures were pretreated a t room temperature for 10 min prior to analysis of ISRE binding activity by EMSA with the following: no compound (0), 10 mM NEM, 10 mM dithiothreitol (DDT), or 10 mM each of NEM and dithiothreitol. Subsequent incubation with end-labeled ISRE was under otherwise standard EMSA conditions (4, a-c). Complexes are marked AIR as in Fig. 1. ing factors (Fig. 4b). 
This result, in conjunction with the kinase inhibitor data, indicates that stimulation of Ca2+/phospholipid-dependent protein kinase C or cAMP-dependent protein kinase A is not sufficient to activate ISRE binding factors and, further, that these two kinases are unlikely to be involved in IFNα signaling of ISRE factors.

IFNα-induced 65-kDa ISRE Factor Has Characteristics of ISGF3

Recently, an IFNα-induced ISRE binding factor, ISGF3, has been identified in human cells, the activation of which correlates with IFN-induced transcription of the ISG54 and ISG15 genes. ISRE binding activity of HeLa cell ISGF3 is sensitive to treatment in vitro with NEM (31). Since binding of the IFNα-induced A31 complexes A and B (Fig. 1) is competed for by an ISRE derived from the human 2-5A synthetase gene, and the murine ISRE recognizes similar factors in extracts of IFN-treated human fibroblasts (data not shown), there appears to be conservation of IFN-regulated ISRE binding factors between these species. In order to determine whether the IFN-induced 65-kDa protein (Fig. 1) might be a component of murine ISGF3, we treated extracts of IFNα-treated A31 cells with NEM in vitro. ISRE binding activity of the complex containing the 65-kDa protein (Fig. 4c) was abolished by NEM treatment; thus this protein may be the DNA-binding subunit of murine ISGF3 (NEM would not necessarily have to work directly on p65). Identical results were obtained with the PDGF-induced ISRE complex (data not shown).

Antibodies to MuIFNβ Inhibited PDGF-induced 2-5A Synthetase mRNA Expression but Not ISRE Binding

Previous work on the PDGF-induced 2-5A synthetase response has suggested that it is an indirect result of IFNβ induction (5, 7). To address this possibility, we included a neutralizing antibody to MuIFNβ, 7F-D3, in the culture medium during PDGF treatment of A31 cells. These cells were treated for 6 h (mRNA, Fig. 2a) or 15 min (ISRE binding, Fig. 4a) with PDGF (with and without 200 neutralizing units of 7F-D3) or with an equivalent amount of anti-HuIFNβ as control. 7F-D3 efficiently inhibited PDGF-induced accumulation of 2-5A synthetase mRNA (Fig. 2a), indicating that IFNβ is involved in this response. We next determined whether 7F-D3 had an inhibitory effect on PDGF induction of ISRE binding factors. This antibody completely inhibits PDGF-induced chloramphenicol acetyltransferase expression from pMuISREcatF (7). Surprisingly, PDGF induced ISRE binding activity regardless of the presence or absence of the 7F-D3 antibody (Fig. 4a, compare lanes 5 and 10). Thus, induction of ISRE binding by PDGF did not appear to be mediated by IFNβ. This apparent discrepancy with the 2-5A synthetase gene expression data (Fig. 2a) is discussed below.

DISCUSSION

A number of recent studies indicate that ISRE sequences are highly conserved in the 5' upstream region of a number of IFNα/β-inducible genes. This sequence confers inducibility by IFNγ (32), IFNα (12-17), PDGF (7), and dsRNA (data not shown) on promoters normally unresponsive to these agents. Here we have used photoaffinity labeling experiments to reveal a minimum of three nuclear proteins which specifically contact the ISRE, exhibiting apparent molecular masses of 150, 98, and 65 kDa. Only the 65-kDa protein bound to the ISRE sequence in an IFN-dependent manner. PDGF induced an identical pattern of ISRE-specific complex formation as IFNα (Figs.
3 and 4) on nondenaturing gels, but the abundance of the PDGF-induced complex was much less than was seen with IFN (Fig. 3a). Thus we have not been able to definitively cross-link a 65-kDa (or any other) induced protein in extracts from PDGF-treated cells. Proof of the identity between PDGF- and IFNα-induced ISRE binding factors will require more rigorous biochemical analyses of these induced complexes. We propose the identity of the IFNα-induced 65-kDa ISRE-binding protein as p65ISP, for 65-kDa IFN-stimulated protein. p65ISP is almost certainly a component of the E (early) factor described by Stark, Kerr, and colleagues (15), which binds to the ISRE sequence of the IFNα-inducible 6-16 gene. A similar factor, ISGF3, has been described (31), which mediates activation of the ISRE sequence of the ISG-54 and ISG-15 genes (16). Active ISGF3 is a heteromeric complex of two identified protein subunits, ISGF3α and ISGF3γ. IFNα stimulates the stoichiometric association of ISGF3α and ISGF3γ in the cytoplasm of HeLa cells, followed by nuclear translocation (31). It has not been established which ISGF3 subunit binds to the ISRE. Similarly, active E factor is found in purified cytoplasts of human lymphoblastoid cells after IFNα treatment, in a form that is rapidly translocated to the nucleus in intact cells (15). The 6-16 gene ISRE sequence competes for induced factor binding to the 2-5A synthetase ISRE in vitro. These reports are consistent with identity between ISGF3 and E factor. Formation of HeLa cell ISGF3 in vitro was sensitive to NEM treatment, and NEM treatment of A31 extracts prevented formation of the IFNα-induced p65ISP complex in vitro (Fig. 4c). Thus p65ISP may represent a DNA-binding subunit of murine ISGF3. Another broadly inducible factor, NF-κB, undergoes nuclear translocation after TPA treatment of intact cells (21). Activation of NF-κB and nuclear translocation can be catalyzed in vitro by addition of the purified subunit of either Ca2+/phospholipid-dependent protein kinase C or cAMP-dependent protein kinase A, directly implicating these two kinases in signal transduction and physiological activation of NF-κB (33). NF-κB has recently been shown to be dsRNA-inducible (22, 23) and binds to the positive regulatory domain II in the dsRNA-inducible IFNβ gene IRE (30, 34). The 2-5A synthetase ISRE is also highly inducible by dsRNA in transient transcription assays, suggesting that NF-κB might be interacting with the ISRE (data not shown), consistent with the proposed overlap in factors mediating inducibility of IFN genes and IFN-responsive genes (35). A non-DNA-binding cytoplasmic protein of 65 kDa has been identified as an integral part of inducible NF-κB (36). However, p65ISP is distinct from the 65-kDa NF-κB subunit on the basis of its DNA binding activity. This distinction is further evident from our experiments with a dsRNA-inducible IRE sequence, shown in Fig. 3b. This sequence spans nucleotides -99/-55 of the IFNβ gene and contains four GAAANN motifs (35), including one centered in the NF-κB-binding site (23). Two of these GAAANN motifs are also present, as direct repeats, in the core ISRE. Neither IFNα nor PDGF induced factor binding to the IFNβ IRE site. In addition, the IFNβ IRE sequence did not compete with the ISRE for binding of p65ISP in vitro (data not shown). Finally, in contrast to NF-κB activity, which is unmasked by deoxycholate (21), IFNα-induced ISRE binding (E factor) is abolished by treatment of cytoplasmic extracts with deoxycholate (15).
We have previously shown that, like staurosporine and K252a, antibodies to murine IFNβ block PDGF induction of the ISRE/chloramphenicol acetyltransferase hybrid gene in A31 cells (7). Therefore, since IFNβ is active at picomolar concentrations, significant concentrations of the polypeptide may be synthesized in response to PDGF, even in the presence of cycloheximide, which inhibits protein synthesis incompletely (up to 95%; see "Discussion" in Ref. 37). This suggested the possibility that PDGF induces 2-5A synthetase gene expression indirectly, through the kinase-dependent induction of IFNβ. In order to test whether such an indirect mechanism is responsible for PDGF modulation of ISRE-binding factors, we determined the effect of a monoclonal antibody to murine IFNβ on the induction of ISRE factors by PDGF (Fig. 4). This is one of the antibodies shown to block PDGF-induced ISRE/chloramphenicol acetyltransferase activity in transient transcription assays (7). Surprisingly, we observed that PDGF stimulated ISRE binding in the presence of this antibody, even though PDGF-induced 2-5A synthetase mRNA accumulation was blocked under these same conditions (Fig. 2a). Similarly, it has recently been shown that the protein kinase inhibitor H-7 blocks IFN-induced 2-5A synthetase mRNA accumulation in human lymphoblastoid cells without affecting IFN-induced transcription (38). H-7 does not inhibit IFNα induction of ISRE binding factors (data not shown). Thus, early signals leading to activation of ISRE binding factors can apparently be separated from those required for accumulation of 2-5A synthetase mRNA. In the case of the PDGF-induced 2-5A synthetase gene, we conclude that activation of ISRE binding factors is a direct, early response to PDGF, but that this activation alone is inadequate for initiation of transcription from the ISRE (our data do not exclude the involvement of non-ISRE sequences in the overall response of 2-5A synthetase to PDGF). This observation may hold true for other ISRE-containing genes, and suggests that PDGF and IFNβ cooperate in the full induction of ISRE-directed gene expression early in PDGF-induced mitogenesis. We have previously shown that small amounts of IFN (detectable only by radioreceptor assay and antibody neutralization) are produced by confluent, arrested cultures of a human glial cell line (3). IFN is likely also present in the culture medium of the confluent 3T3 cells used here. IFNα has been proposed to signal cells through rapid generation of diacylglycerol and inositol 1,4,5-trisphosphate second messengers, suggesting activation of a Ca2+/phospholipid-dependent protein kinase C in IFNα signal transduction (40). In support of this notion, IFNα has been reported to induce c-fos transcription (41) and diacylglycerol production (39, 42) similar to PDGF. These results imply that mitogenic and growth-inhibitory factors share early transmembrane signal transduction pathways. We have shown here that PDGF and IFNα utilize signaling pathways exhibiting distinct kinase dependencies in the transcriptional activation of a common pattern of ISRE-dependent gene expression in 3T3 cells. Our data leave open the question of whether signaling events upstream of kinase activation might be shared by PDGF and IFNα. Clearly, there is an advantage to the cell in distinguishing transmembrane signal transduction pathways triggered by activation of specific growth-stimulatory (PDGF) and growth-inhibitory (IFNα) receptors.
v3-fos-license
2019-04-13T13:09:58.764Z
2012-09-19T00:00:00.000
55773927
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.omicsonline.org/open-access/generic-framework-for-multi-disciplinary-trajectory-optimization-of-aircraft-and-power-plant-integrated-systems-2168-9792.1000103.pdf", "pdf_hash": "53ab9f671c42a02b1ede9d6f38293dcde8742433", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41146", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "sha1": "cd2a4cec8539deb0c56bbbd2e403ff9523fd25b0", "year": 2012 }
pes2o/s2orc
Generic Framework for Multi-Disciplinary Trajectory Optimization of Aircraft and Power Plant Integrated Systems

Introduction

The air transport industry today is paying close attention to growing public concern about the environmental issues of air pollution, noise and climate change. The past decade has witnessed rapid changes both in the regulations for controlling emissions and in the technologies used to meet these regulations. Considering the critical nature of the problem regarding the environmental footprint of aviation, several organizations worldwide have focused their efforts through large collaborative projects such as the Clean Sky Joint Technical Initiative (JTI). Clean Sky is a European public-private partnership between the aeronautical industry and the European Commission. It will advance the demonstration, integration and validation of different technologies, making a major step towards the achievement of the environmental goals set by ACARE (Advisory Council for Aeronautics Research in Europe). The ACARE Vision 2020 and associated Strategic Research Agendas (SRAs) have successfully steered European aeronautics research in recent years by setting the objectives of reducing CO2 by 50%, NOx by 80% and noise by 50% compared to year 2000 [1]. Meeting these challenges is only possible with a strong commitment to the vigorous evolution of technologies and to achieving new breakthroughs. Over the last few years several alternatives have been proposed, most of them long-term solutions such as changing the aircraft and engine configurations and architectures. Hence all the manufacturers have also started to focus on, and develop their strategies around, the other possible options.
The management of the trajectory and mission is one of the key solutions identified for achieving the above goals, and it is a measure that can readily be implemented. In order to truly understand optimized, environmentally friendly trajectories, it is necessary to consider simultaneously the combined effects of aircraft performance, propulsion system and engine performance, environmental emissions, noise and the flown trajectory. GATAC (Green Aircraft Trajectories under ATM Constraints) is a multi-disciplinary optimization framework which is being collaboratively developed to meet this requirement by Cranfield University and other partners as part of the Systems for Green Operations - Integrated Technical Demonstrator (SGO-ITD) under the Clean Sky Joint Technical Initiative [1].

The GATAC Environment

This section presents an overview of the main features and capabilities of the GATAC multi-disciplinary optimization framework. It can be considered a state-of-the-art optimization framework with optimizers and simulation models to perform multi-objective optimization of flight trajectories under Air Traffic Management (ATM) constraints. The top-level structure of the GATAC framework is shown in Figure 1. The framework consists of the GATAC Core, the Model Suite, the Graphical User Interface (GUI) and the Post-Processing Suite. It interacts with a suite of models as configured at set-up time. The GATAC core is the core engine of the interaction framework and provides the connectivity between the various models. It also provides for the organization of an evaluation process (within the Evaluation Handler) and includes functionalities such as parameter stores, data parsing, translation functions and interfacing with models. It also supports the repeated calling of sets of models to enable trajectories to be evaluated step by step, with the number of steps defined by the user at set-up time. The core, therefore, is programmable: the user sets up the problem at hand within the Evaluation Handler by defining the connectivity between models and any data translation and other similar functions. This can be done either directly, using a purposely defined domain-specific language, or graphically via the GUI. In this way, the user effectively defines (formulates) the optimization problem. The optimization process takes place in the GATAC Core, which accesses an optimization function chosen from a suite by the user [2,3]. A key feature of GATAC is that the user can select any algorithm from the optimization suite without the need to modify the problem formulation, because the framework caters for normalization of data. Indeed, the algorithms in the optimization suite are designed to handle normalized variable parameters. The normalized parameters are de-normalized by the integration framework, as specified by the user, before being input to the evaluation handler. Similarly, the data output from the evaluation handler are normalized again before being input to the optimizer to close the optimization loop (Figure 1). Since the data exchanged between the optimization core and the models need to be defined according to the input and output data of each model and module, GATAC caters for the automatic definition of data structures by means of a dictionary. The automatic definition is carried out by GATAC at set-up time according to the output and input variables of the specific models and modules invoked in the problem definition. These data structures then enable the correct data transfer between the models and modules.
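To make the data flow just described more concrete, the sketch below outlines in schematic Python how an evaluation handler might chain an aircraft model and an engine model over a fixed number of trajectory steps, with the optimizer seeing only normalized variables and objectives. This is our own illustration of the concept rather than GATAC code: the class name, variable bounds, objective names and model interfaces are all assumed for the example.

```python
from typing import Callable, Dict, Tuple

Bounds = Dict[str, Tuple[float, float]]   # per-variable (lower, upper) bounds in physical units

def denormalize(x_norm: Dict[str, float], bounds: Bounds) -> Dict[str, float]:
    """Map optimizer variables from the normalized range [0, 1] back to physical units."""
    return {k: bounds[k][0] + x * (bounds[k][1] - bounds[k][0]) for k, x in x_norm.items()}

def normalize_objectives(obj: Dict[str, float], scales: Dict[str, float]) -> Dict[str, float]:
    """Scale raw objectives (e.g. fuel burn in kg, time in s) before returning them to the optimizer."""
    return {k: v / scales[k] for k, v in obj.items()}

class EvaluationHandler:
    """Chains user-supplied models step by step along a candidate trajectory."""

    def __init__(self, aircraft_model: Callable, engine_model: Callable, n_steps: int):
        self.aircraft_model = aircraft_model   # returns (thrust required, segment time, new state)
        self.engine_model = engine_model       # returns fuel flow for the requested thrust
        self.n_steps = n_steps                 # number of trajectory steps fixed at set-up time

    def evaluate(self, x_norm: Dict[str, float], bounds: Bounds, scales: Dict[str, float]):
        x = denormalize(x_norm, bounds)        # physical control parameters (altitudes, Mach numbers, ...)
        fuel, time, state = 0.0, 0.0, {"mass": x["initial_mass"]}
        for step in range(self.n_steps):       # repeated calling of the model set, one step at a time
            thrust_req, dt, state = self.aircraft_model(step, x, state)
            wf = self.engine_model(thrust_req, state)
            fuel += wf * dt
            time += dt
            state["mass"] -= wf * dt           # fuel burn reduces aircraft mass for the next step
        return normalize_objectives({"fuel": fuel, "time": time}, scales)
```

In GATAC itself this wiring is specified through the domain-specific language or the GUI, and the corresponding data structures are generated automatically from the dictionary; the sketch only shows the normalize, evaluate and de-normalize loop that sits between the optimizer and the chained models.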
GATAC can be run either on a single stand-alone machine or on a distributed system with multiple computers (Figure 2). In the latter case the model suite is replicated on a number of different machines, on each of which a daemon runs in the background. The daemon is event-triggered and is instructed to run particular models by the Framework Manager, where the GATAC core resides. When its particular job is complete, the relevant daemon returns the results to the GATAC core. In this way, the core maintains full control of the optimization process. Data exchange between the GATAC core and the daemons is achieved through Ethernet LAN connectivity between the respective computers. The model suite can therefore sit on a single machine or be distributed across different machines acting as hosts, with the data exchange between components carried out over the Ethernet LAN. Figure 2 illustrates the architecture and operating network of the GATAC distributed system [2,3].

The NSGAMO Genetic Optimiser

The NSGAMO (Non-Dominated Sorting Genetic Algorithm Multi-Objective) is one of the genetic-algorithm-based optimizers incorporated in the GATAC framework. This optimizer is able to perform optimization of two objectives, with or without constraints. Figure 3 shows the sequence of steps of the NSGAMO genetic algorithm. According to the flowchart, in the first step an initial population of test cases (candidate trajectories) is created randomly. The size of the initial population is determined by the product of the prescribed population size and an initialization factor (>= 1). A larger initial population size increases the probability of the optimizer converging to the global optimum but slows down the optimization process. The optimizer then sends all the cases to the GATAC framework for the evaluation handler to evaluate, and the results (the optimization objectives) are returned to the optimizer. On receipt of the results, the optimizer performs a fitness evaluation on the data (i.e. it qualifies the population). Once the best points of the first generation are identified, a second-generation population is created and the process is repeated. The process continues until the convergence criteria are met (either a maximum number of generations will have been generated and evaluated or Pareto convergence will have been reached). In order to reduce the computational time, the size of subsequent generations is reduced to the prescribed population size. To achieve this, only the best solutions of the previous population are selected to generate the next generation. New generations are created using different methods such as stochastic universal sampling, random selection and genetic operators (crossover and mutation). In the case of single-objective optimization the result is the best case, while for a multi-objective optimization the final result is a Pareto front [3]. The implementation of the NSGAMO algorithm allows the user to define, via a text file, the various parameters associated with the optimization, which include the population size, optimization method, mutation and crossover ratios, selection method, type of mutation and crossover, and other parameters. A detailed description of the testing and benchmarking of the optimizer performance is presented in reference [4].
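The loop described above can be illustrated with a simplified, self-contained example: an oversized random initial population, evaluation of two objectives, selection biased towards non-dominated (Pareto-optimal) solutions, and creation of new candidates by crossover and mutation. The code below is a generic two-objective genetic algorithm written for this text, not the NSGAMO source; the population size, operators and the toy objective function are all assumptions.

```python
import random

def evaluate(x):
    """Toy two-objective problem standing in for the evaluation handler."""
    f1 = sum(v * v for v in x)               # e.g. 'fuel'
    f2 = sum((v - 2.0) ** 2 for v in x)      # e.g. 'time'
    return (f1, f2)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def pareto_front(objs):
    """Indices of non-dominated individuals."""
    return [i for i, oi in enumerate(objs)
            if not any(dominates(oj, oi) for j, oj in enumerate(objs) if j != i)]

def crossover(p1, p2):
    a = random.random()
    return [a * x + (1 - a) * y for x, y in zip(p1, p2)]      # blend crossover

def mutate(x, rate=0.2, sigma=0.3):
    return [v + random.gauss(0.0, sigma) if random.random() < rate else v for v in x]

def ga_two_objectives(n_var=3, pop_size=20, init_factor=2, generations=50):
    # oversized initial population: prescribed size times the initialization factor
    pop = [[random.uniform(-5.0, 5.0) for _ in range(n_var)] for _ in range(pop_size * init_factor)]
    for _ in range(generations):
        objs = [evaluate(x) for x in pop]
        front = pareto_front(objs)
        # keep the non-dominated solutions first, then fill up to pop_size by a crude rank on f1 + f2
        order = front + sorted((i for i in range(len(pop)) if i not in front),
                               key=lambda i: sum(objs[i]))
        parents = [pop[i] for i in order[:pop_size]]
        children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(pop_size)]
        pop = parents + children                 # elitist: parents survive alongside offspring
    objs = [evaluate(x) for x in pop]
    return [(pop[i], objs[i]) for i in pareto_front(objs)]

if __name__ == "__main__":
    for x, f in ga_two_objectives():
        print([round(v, 2) for v in x], [round(v, 3) for v in f])
```

NSGAMO itself layers the selection schemes listed above (e.g. stochastic universal sampling) and the user-configurable operators on top of this basic pattern; the toy loop replaces them with a simple rank-and-truncate step.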
Engine model

The engine model developed for this study is based on the Trent 895, which is a 3-spool, high by-pass ratio turbofan engine with separate exhausts. The engine model is designated CUHBR (Cranfield University High By-Pass Ratio) and was modeled using data available from the public domain, making educated engineering assumptions where necessary. This engine has been selected to power the long-range aircraft which has been used to develop the aircraft performance model. The engine model has been developed and simulated using TURBOMATCH, an in-house gas-turbine performance simulation and diagnostics software developed at Cranfield University [5]. The tool is used to model the design point of the engine and to study its off-design performance. TURBOMATCH is a fully modular engine cycle simulator that can perform design point, off-design, steady state and transient simulations as well as degraded performance analysis of gas turbines. A TURBOMATCH engine model is assembled from a collection of existing interconnected elements called 'Bricks'. Individual bricks are controlled by a numerical solver and represent the thermodynamic equivalent of gas turbine components, including intake, fan, compressor, combustion chamber, turbine, duct and nozzle. Bricks are called up to model the architecture of the gas turbine, and a numerical solver is used to solve the mass and energy balances between the interconnected bricks. TURBOMATCH also allows for the modeling of different fuels, extraction of bleed air and shaft power off-takes, cooling air, component degradation, and reheating or sequential combustion. The outputs from the tool include the overall performance of the engine in terms of gross and net thrust, fuel flow and Specific Fuel Consumption (SFC), as well as the thermodynamic parameters and gas properties at the inlet and outlet of each component. Detailed operational parameters, such as efficiency, rotational speed, power required/power delivered, surge margin in the case of the fan and compressors, or thrust coefficients in the case of the nozzles, are also provided. For the purpose of this study, the engine is modeled by developing a representative input file that describes the configuration of the CUHBR engine. Figure 4 is a schematic of the CUHBR TURBOMATCH model. As shown in the figure, the LP turbine drives the fan, while the IP turbine and HP turbine drive the IP compressor and HP compressor respectively. It has been assumed that part of the air is bled from the HP compressor to cool the HP turbine and that no cooling air is bled for the IP and LP turbines. The secondary air system has been largely simplified and handling bleed has not been considered. The engine design point has been selected at maximum rated thrust during take-off under International Standard Atmosphere (ISA) conditions at sea level, and the engine mass flow, bypass ratio and overall pressure ratio have been obtained from the public domain. An iterative trial-and-error process has been required in order to match the performance of the engine model with the reference engine performance data at the design point as well as in the cruise phase. At the design point, assumptions are made with regard to the pressure ratio split between the compressors, component efficiencies, surge margin, cooling mass flows, duct/intake and burner pressure losses, burner efficiency, as well as bleed and power off-takes. The fan pressure ratio is iterated and optimized for maximum thrust and minimum SFC, and the Turbine Entry Temperature (TET) of the cycle is iterated until the calculated performance matches the reference engine data.
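To illustrate the 'brick' idea described above in code, the sketch below chains a few highly simplified component objects and passes a gas state between them. The station layout, component models and all numbers are invented for illustration only; real TURBOMATCH bricks solve coupled mass and energy balances with a numerical solver rather than the one-way pass shown here.

```python
# Illustrative only: a chain of simplified "bricks", each transforming a gas state.

CP, GAMMA = 1005.0, 1.4  # simplified constant gas properties (assumed)

class Compressor:
    def __init__(self, pr, eff):
        self.pr, self.eff = pr, eff
    def run(self, state):
        t, p, w = state
        # isentropic temperature rise divided by component efficiency
        dt = t * (self.pr ** ((GAMMA - 1) / GAMMA) - 1) / self.eff
        return (t + dt, p * self.pr, w)

class Burner:
    def __init__(self, t_exit, p_loss):
        self.t_exit, self.p_loss = t_exit, p_loss
    def run(self, state):
        t, p, w = state
        return (self.t_exit, p * (1 - self.p_loss), w)

# assemble a toy single-spool core: intake state -> compressor -> burner
state = (288.15, 101325.0, 100.0)           # T [K], p [Pa], mass flow [kg/s]
for brick in (Compressor(pr=20.0, eff=0.88), Burner(t_exit=1600.0, p_loss=0.04)):
    state = brick.run(state)
print(state)
```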
With the design point fixed, a series of Off-Design (OD) performance simulations has been performed in order to simulate the effects of ambient temperature, altitude, flight Mach number and TET on net thrust and SFC, as a further model verification process. Figures 5 and 6 show the variation of net thrust and specific fuel consumption for different flight Mach numbers at different altitudes under OD conditions. As the flight velocity increases, the performance of the engine is influenced by three main factors: momentum drag, ram compression and ram temperature rise. The momentum drag rises with the flight speed, with a consequent reduction of the net momentum imparted to the air by the engine. Therefore, the net thrust, which is defined as the difference between gross thrust and intake momentum drag, drops with rising flight Mach number. The second effect is the ram compression, and it has a double effect: firstly, it increases the nozzle pressure ratio and therefore the net thrust; secondly, it raises the inlet pressure and thus the air density along with the mass flow. The last effect is the ram temperature rise, which produces an increase of the air temperature at the fan inlet. This leads, at constant shaft speed, to a decrease of the non-dimensional power setting and hence of the thermal efficiency. The momentum drag and the ram compression are generally the main effects. At low speed, momentum drag is the dominant effect and the net thrust drops quickly with rising flight speed; as long as the Mach number is less than about 0.3, the effects of ram temperature rise and ram compression are small. At higher Mach numbers the effect of ram compression starts to become important and, as can be observed, the decrease in net thrust becomes more gradual. As shown in Figure 5, the net thrust decreases when the altitude increases at constant Mach number. When the altitude increases the air density drops, leading to a reduction of mass flow and hence of net thrust. The reduction of air density does not alter the non-dimensional power setting of the engine. Moreover, in the troposphere the reduction of net thrust due to the drop of air density is partly offset by the positive effect of the decrease in ambient temperature. Indeed, the ambient temperature falls linearly in the troposphere, from 15˚C at sea level to -56˚C at the top of the troposphere at 11 km. At constant shaft speed, when the temperature drops the non-dimensional power setting rises, leading to an increase in pressure ratio and therefore in net thrust. Figure 6 shows how the SFC increases with flight Mach number: in order to fly at a faster speed more fuel is required, and the increase of the fuel flow outweighs the decrease in net thrust, so the SFC rises with Mach number. Figures 7 and 8 show the effects of ambient temperature and TET on net thrust and SFC. A low TET gives rise to low thermal efficiency and low jet velocity, which creates a high propulsive efficiency but results in high SFC; similarly, a high TET leads to high thermal efficiency and high jet velocity, which results in low propulsive efficiency. Figure 7 shows the best compromise between thermal efficiency and propulsive efficiency for several ISA deviations.
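The qualitative Mach-number trend discussed above follows directly from the definition of net thrust as gross thrust minus intake momentum (ram) drag. The snippet below evaluates that relation for a crude single-stream approximation with assumed values, only to make the momentum-drag effect explicit; it is not part of the TURBOMATCH model.

```python
# Net thrust = gross thrust - momentum drag; SFC = fuel flow / net thrust.
# Single-stream approximation with assumed values, for illustration only.

def net_thrust(mass_flow, v_jet, v_flight):
    gross = mass_flow * v_jet              # momentum of the exhaust stream [N]
    momentum_drag = mass_flow * v_flight   # momentum of the captured air [N]
    return gross - momentum_drag

mass_flow, v_jet, fuel_flow = 600.0, 350.0, 2.0   # kg/s, m/s, kg/s (assumed)
for v_flight in (0.0, 100.0, 200.0, 250.0):
    fn = net_thrust(mass_flow, v_jet, v_flight)
    print(f"V={v_flight:5.1f} m/s  Fn={fn/1000:7.1f} kN  SFC={fuel_flow/fn*1e6:6.2f} mg/Ns")
```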
With the variation of ambient temperature there are two main effects that have to be considered [6]. The first effect is well described by Figure 9, where an ideal cycle between fixed values of overall pressure ratio is considered. Figure 9 shows that on a 'hot day' the compressor work will be greater than on a 'normal day'. This is due to the fact that compressing hot air requires more work. However, the turbine work is not affected by the ambient temperature because in this case the overall pressure ratio does not change. Consequently, on a hot day the difference between turbine work and compressor work will be smaller than on a normal day and therefore the net thrust will decrease. This effect is also reflected in a shift of the compressor operating point with a variation in ambient temperature. This is due to the fact that the non-dimensional rotational speed N/√T depends on the shaft speed and on the inlet temperature. Assuming a constant shaft rotational speed, on a hot day the ambient temperature increases and hence the non-dimensional rotational speed decreases. Therefore, the operating point will move to the left and downwards, so the pressure ratio and the non-dimensional mass flow will decrease. The opposite occurs on a cold day. For constant ambient temperature, with increasing TET the pressure ratio and the net thrust increase; for constant TET, with rising ambient temperature the net thrust drops and, vice versa, with decreasing ambient temperature the net thrust rises.

Aircraft Performance Model

The software that has been used to simulate the integrated aircraft-engine performance is called HERMES. It has been developed at Cranfield University in order to assess the potential benefit of adopting new aircraft, engine concepts and technologies [7]. The aircraft model is capable of simulating the performance of different types of aircraft, from a baseline aircraft to an advanced one, for a given civil mission. The aircraft model computation starts by reading the required input data from an input file. As described below, these data concern the general arrangement of the aircraft and the mission profile. Some of these data are usually available from the public domain or defined by the user. The user has to specify as an input the MTOW and the weight of the payload. Moreover, the user has to set either the fuel load or the mission range. In the first case HERMES will compute the mission range, whilst in the second case HERMES will assess the required amount of fuel to complete the mission. In the case that the user has set the initial amount of fuel, the value of the range will be considered as an initial guess and will not influence the resulting values. The range is calculated iteratively. In each iteration the fuel required for a trial distance is computed, and convergence is achieved as soon as the total fuel is consumed. This is obtained by calculating a trial OEW, i.e. by subtracting the assessed total fuel, which is given by the sum of the mission fuel plus reserve, and the payload from the MTOW. The trial OEW is then compared with the OEW set up in the input file and the difference is used to redefine the distance and the time spent in cruise. Convergence is achieved when the difference in OEW is within 0.1 %.
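The iterative range calculation described above (guess a range, compute the fuel needed, compare the implied OEW with the input OEW, and adjust the cruise distance until the OEW difference falls within 0.1 %) can be sketched as follows. The fuel model and all numerical values are placeholders, not HERMES internals.

```python
def mission_fuel(range_km):
    """Placeholder fuel model: fuel grows roughly linearly with range, plus reserve."""
    return 8.0 * range_km + 5000.0          # kg (assumed)

MTOW, OEW_INPUT, PAYLOAD = 300000.0, 140000.0, 30000.0   # kg (assumed)

range_km = 10000.0                          # initial guess for the mission range
for _ in range(100):
    total_fuel = mission_fuel(range_km)
    oew_trial = MTOW - total_fuel - PAYLOAD # OEW implied by the current range guess
    diff = (oew_trial - OEW_INPUT) / OEW_INPUT
    if abs(diff) < 0.001:                   # converged within 0.1 %
        break
    # a positive difference means fuel is still available: extend the cruise distance
    range_km *= 1.0 + diff
print(round(range_km, 1), round(total_fuel, 1))
```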
Input data module

The input data required for the aircraft model are the information regarding the geometry, configuration and required performance of the aircraft. These input data are used by the aircraft performance and aerodynamic modules to calculate the performance and aerodynamic characteristics of the aircraft.

Mission profile module

The mission profile is subdivided into different phases. The overall mission profile is defined by the user and is used by the aircraft performance module to compute the distance, fuel and time for each segment into which the mission is subdivided. In addition, TURBOMATCH refers to the mission profile in order to calculate the engine performance.

Atmospheric module

The atmospheric conditions for a given Mach number and altitude have a great influence on the aircraft and engine performance. Therefore, the atmospheric module calculates the ISA conditions both in the lower atmosphere and in the stratosphere. Moreover, the user has the possibility to alter the temperature from the ISA standard values in order to simulate non-standard conditions.

Engine data module

The performance of the engine greatly influences the aircraft performance. The engine data usually include: the maximum take-off thrust, required to assess the length, fuel and time required for the take-off; the maximum climb thrust and SFC, required to compute fuel consumed, horizontal distance covered, rate of climb and time to climb; and the cruise and descent performance.

Aerodynamic module

This module calculates the aerodynamic performance of the aircraft for the given flight conditions. The module elaborates the information regarding the mission profile, the aircraft and the aerodynamic properties in order to compute the drag characteristics in the form of a drag polar and drag coefficients. The drag polar can always be expressed using two main components of drag, one dependent on lift and the other independent of lift. Therefore, the total drag coefficient can be expressed as C_D = C_D0 + C_D1. The term C_D0 is the zero-lift drag coefficient and is a constant, while C_D1 is the lift-dependent (induced) drag coefficient and can be expressed as C_D1 = K C_L^2, where K is called the lift-dependent drag factor. Combining the previous two expressions, it is possible to write the well-known drag polar expression C_D = C_D0 + K C_L^2. The calculation of the zero-lift drag coefficient is performed using the component build-up method, which has the general expression C_D0 = (Σ_c Cf_c φ_c Q_c S_wet,c) / S_ref [5]. The flat-plate skin friction coefficient Cf and the form factor φ, which estimates the pressure drag due to viscous effects, are used to assess the subsonic profile drag of a particular component. The factor Q is used in order to take into account the effects of interference drag on the component. As highlighted in the previous expression, the product of the wetted surface of the component S_wet with Cf, φ and Q gives the drag contribution of component c. Using this method it is possible to calculate the drag arising from several components such as the fuselage, tail plane, fins, nacelles, outer wings and engine pylons. It also allows the estimation of the miscellaneous drag arising from deployed flaps, landing gear and trim conditions. The coefficient C_D0 is then calculated by dividing the total drag by the reference area S_ref, which is the plan wing area. The lift-induced drag is estimated using an expression from Ref. [8] whose coefficients C_1 and C_2 are functions of the wing aspect ratio and taper ratio and are used to take into account the wing planform geometry, while the coefficients C_3 and C_4 account for non-optimum wing twist and viscous effects respectively.
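A compact numerical illustration of the drag build-up just described is given below. The component list, skin-friction, form and interference factors, the reference area and the induced-drag factor K are invented placeholders; in particular, K is simply prescribed rather than derived from the C_1-C_4 wing-geometry corrections mentioned above.

```python
# Drag polar: C_D = C_D0 + K * C_L^2, with C_D0 from a component build-up
# C_D0 = sum(Cf * phi * Q * S_wet) / S_ref.  All numbers are illustrative.

components = [
    # (name, Cf, form factor phi, interference Q, wetted area S_wet [m^2])
    ("fuselage",  0.0020, 1.10, 1.00, 950.0),
    ("wing",      0.0025, 1.25, 1.00, 780.0),
    ("tailplane", 0.0026, 1.20, 1.05, 180.0),
    ("nacelles",  0.0028, 1.15, 1.30, 120.0),
]
S_REF = 428.0      # reference (plan) wing area [m^2] (assumed)
K = 0.045          # lift-dependent drag factor (assumed)

def zero_lift_drag():
    return sum(cf * phi * q * s_wet for _, cf, phi, q, s_wet in components) / S_REF

def drag_coefficient(cl):
    return zero_lift_drag() + K * cl ** 2

for cl in (0.3, 0.5, 0.7):
    print(f"C_L={cl:.1f}  C_D={drag_coefficient(cl):.5f}")
```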
Aircraft performance module

Information from the other modules is passed to the aircraft performance module. In turn, the aircraft performance module computes the overall performance of the aircraft for each segment into which the entire mission is divided. Typical outputs include: fuel consumption, distance covered, mission duration, engine thrust and SFC for the whole mission and for each flight segment. The calculation of the climb relies on the rate of climb, which is defined as the ratio between the change in height and the time, assuming zero wind velocity: ROC = dh/dt. During the calculation of the rate of climb, appropriate acceleration factors are included in order to take into account the following cases:
• when the aircraft is climbing in the stratosphere, the ambient temperature reduces, thus at constant Mach number the airspeed decreases because the speed of sound drops;
• during a climb at constant equivalent airspeed the true airspeed increases because the air density drops with altitude.
The time required to fly from an altitude h_1 to an altitude h_2 is then obtained by summing dh/ROC over the altitude intervals, and the total fuel is computed by summing the fuel burnt in each interval. The calculation of the flight range is a function of the engine and aircraft parameters and of the available quantity of fuel. In order to derive the equations implemented in the aircraft performance module it is necessary to define some fundamental variables. Firstly, for an aircraft in horizontal, steady-state flight at constant true airspeed V, the engine thrust has to be equal to the aerodynamic drag, so that F = D = W/E, where F is the engine thrust, D and L are the drag and lift of the aircraft respectively, W is the aircraft weight and E is the aerodynamic efficiency, which is defined as the ratio between the lift and the drag. Considering the definitions of the lift and drag forces, the aerodynamic efficiency can also be expressed in terms of the lift and drag coefficients, where ρ is the density of the air and S_ref is the wing plan area. The specific fuel consumption SFC of an aircraft powered by a turbojet or turbofan engine is defined as the ratio of the fuel flow Q to the thrust F. The specific range r_a is defined as the flight distance dR per unit of fuel consumed, so that r_a = dR/dm = V·E/(SFC·W). The integration of this equation gives the total cruise range. Three different flight schedules can be chosen, which lead to three different sets of assumptions for the variables:
• cruise at constant altitude, SFC and lift coefficient;
• cruise at constant airspeed, SFC and lift coefficient;
• cruise at constant altitude, airspeed and SFC.
Usually the second option is used. Therefore the lift coefficient, expressed as C_L = W/(½ ρ V² S_ref), has to remain constant. This allows one to conclude that the ratio of the aircraft weight to the air density has to remain constant. During the cruise the fuel is consumed, thus the weight of the aircraft decreases and the density has to decrease accordingly. This can be achieved by allowing the aircraft to climb. At the same time, with the decrease in air density the thrust will decrease, and it can be assumed that the true airspeed is constant. In practice aircraft are not allowed to climb continuously during cruise by air traffic control, so airlines adopt a stepped-climb procedure. For each segment at constant altitude it is possible to rewrite the range equation considering constant SFC, airspeed and lift coefficient, and the total range is the sum of the flight ranges of the individual segments. As pointed out above, in order to improve the overall efficiency of the aircraft, airlines perform what is known as a step-climb cruise.
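For the second flight schedule (constant airspeed, SFC and lift coefficient), integrating the specific-range expression over the fuel consumed gives a logarithmic, Breguet-type range relation. The short sketch below sums such a relation segment by segment for a stepped cruise; all inputs (speed, L/D, SFC, weights) are assumed values for illustration and this is not the HERMES implementation.

```python
import math

def segment_range_km(v_ms, lift_to_drag, sfc_kg_per_ns, w_start, w_end):
    """Breguet-type range for constant V, E and SFC:
    R = (V * E / (g * SFC)) * ln(W_start / W_end), with weights in kg."""
    g = 9.81
    return v_ms * lift_to_drag / (g * sfc_kg_per_ns) * math.log(w_start / w_end) / 1000.0

# stepped cruise: (fuel burnt in the segment [kg], L/D) per constant-altitude step (assumed)
steps = [(30000.0, 18.0), (30000.0, 18.5), (25000.0, 19.0)]
v, sfc, weight = 250.0, 1.6e-5, 280000.0     # m/s, kg/(N*s), kg (assumed)

total = 0.0
for fuel, e in steps:
    total += segment_range_km(v, e, sfc, weight, weight - fuel)
    weight -= fuel
print(round(total, 1), "km")
```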
Similarly, the user can subdivide the cruise into intervals and specify for each interval a different flight Mach number and altitude. During the descent phase the drag of the aircraft is greater than the thrust produced by the engine, leading the aircraft to glide. The descent starts at the cruise Mach number and the speed is reduced down to 250 knots at sea level. The calculation of the descent phase is similar to the climb calculation presented above (Figure 10). The user has to set up in the mission profile input file the different intervals of the descent phase. Using an iterative method, the flight time, rate of descent and horizontal distance covered are assessed. As mentioned before, in order to configure the aircraft model it is required to set up an input file with several parameters, including aircraft geometry, configuration, mission profile and weight breakdown. Table 3 reports the main performance parameters of the LRACPD available from the public domain. Regarding the geometry of the aircraft and the engine, some of the required information is listed, while the parameters that are not available in the literature have been assumed. The accuracy of the aircraft model has been verified against published data using the payload-range diagram, in which the aircraft range is plotted against the payload (Table 4). There are usually three baseline configurations:
• maximum payload range;
• maximum economic range;
• ferry range.
For the maximum payload range the aircraft takes off with both the maximum take-off weight and the maximum payload weight; therefore, the amount of fuel is given by the difference between the MTOW and the sum of the OEW and the maximum payload. For the maximum economic range, similarly to the previous case, the aircraft takes off at its maximum take-off weight, but this time with the maximum amount of fuel; therefore the range increases, and the payload carried is given by the difference between the MTOW and the sum of the OEW and the maximum fuel weight. For the ferry range, the aircraft takes off with no payload and with the maximum amount of fuel; the ferry range is the maximum range of the aircraft, and the take-off weight is given by TOW = OEW + FW_Max = 297,550 + 141,880 = 493,350 kg. Considering that the initial amount of fuel was known for each mission, the flight range has been calculated using HERMES and compared with published data. Figure 11 shows the comparison between the payload-range diagram of the LRACPD and that of the aircraft model. Due to the lack of more detailed information, a step cruise from 10,000 to 11,000 meters was assumed. Table 5 reports the values of the payload and fuel weight for each mission, along with the difference between the range of the real aircraft and that of the model.
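The three baseline points of the payload-range diagram follow directly from the weight bookkeeping above. The snippet below evaluates them for a set of assumed round-number weights; these placeholders are deliberately not the LRACPD data quoted in the text.

```python
# Payload-range corner points from simple weight bookkeeping (illustrative values).
MTOW, OEW, MAX_PAYLOAD, MAX_FUEL = 300000.0, 140000.0, 55000.0, 135000.0  # kg (assumed)

# 1) maximum payload range: full payload, fuel limited by MTOW
fuel_a = MTOW - OEW - MAX_PAYLOAD
# 2) maximum economic range: full fuel, payload limited by MTOW
payload_b = MTOW - OEW - MAX_FUEL
# 3) ferry range: no payload, full fuel (take-off weight may be below MTOW)
tow_c = OEW + MAX_FUEL

print(f"max-payload point : fuel = {fuel_a:.0f} kg")
print(f"max-economic point: payload = {payload_b:.0f} kg")
print(f"ferry point       : TOW = {tow_c:.0f} kg, payload = 0 kg")
```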
Emissions Prediction Model

The emission prediction model used in this work is the P3T3 empirical correlation model, which has been integrated as part of the Cranfield University HEPHASTUS emission prediction tool. This model estimates the level of emissions at altitude using a correlation with the emissions measured at ground level. The methodology is straightforward. Firstly, during the certification tests of the engine the emission indices are measured. These indices are subsequently corrected to take into account the variation of altitude and flight speed. In order to do that, it is necessary to know the combustion parameters for the operating conditions at both ground level and altitude. These parameters are: burner inlet pressure and temperature, fuel-to-air ratio (FAR) and fuel flow. In addition, the model takes into account the variation of humidity from sea level to altitude. The model is capable of predicting all the emissions, but in this paper the main focus is given to the NOx emissions only. The detailed model layout is shown in Figure 12. In the engine test results published by ICAO, the level of emissions and the other main parameters are measured for the standard engine operating conditions (take-off, climb-out, approach and idle). In order to apply the method, the combustor inlet temperature, inlet pressure and air mass flow also have to be known; even if these values are not measured during the ICAO tests, they can be assessed using gas turbine performance simulation software. At this point, similarly to EINOx, the burner inlet pressure and the FAR are plotted for different burner inlet temperatures. Then, using the combustor inlet temperature at altitude, it is possible to obtain the respective value of EINOx at ground level from the specific plot. This value of EINOx is then corrected to take into account the differences in FAR and combustor inlet pressure between ground level and altitude; the values of the pressure and FAR exponents establish the severity of the EINOx correction. Finally, a correction for the humidity influence is also taken into account. Having calculated the value of EINOx, the NOx emitted is obtained from the product of EINOx, the fuel flow FF in kg/s and the time in seconds. The variation of humidity with altitude relative to ISA sea level is taken into consideration, and the correction increases with increasing altitude. If the measurement of EINOx at sea level has been done on a day with a high level of relative humidity, say 60%, the correction with altitude will increase EINOx by around 12.5%. At typical cruise altitudes the error introduced by choosing different curves of relative humidity is small because the air is dry; ICAO suggests using 60% relative humidity for the calculations [9]. Over the years engine manufacturers have gathered a large amount of data from engine testing, which has facilitated the definition of the pressure exponent to be set in the model. In the rig tests the combustor inlet conditions are varied independently in order to establish their relative effect on NOx formation. The value of the pressure exponent is commonly in the range between 0.3 and 0.5 in typical cruise conditions [10]. This value varies as a function of the combustor type, the operating conditions and the measurement variability; an average value of 0.4 is normally used for all civil aircraft engines. Regarding the FAR, the data from the engine manufacturers show that during cruise the FAR is 10% richer than at ground level at constant combustor inlet temperature. The main advantage of using the P3T3 model relative to other emission models, such as physics-based stirred reactor models, is the low computational time required, because it is based on empirical correlations. The required computational time is a key feature for a model that has to be used in aircraft multi-objective trajectory optimization studies, considering the large amount of calculations involved.
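A minimal sketch of the correction chain just described is given below: interpolate the sea-level EINOx, P3 and FAR at the flight-condition combustor inlet temperature, then scale by the pressure ratio, FAR ratio and a humidity factor. The ground-test points, the exponents and the functional form of the humidity term are all assumptions chosen only to show the structure of the calculation, not values from the actual tool.

```python
import math

# Ground-level reference points (T3 [K], P3 [kPa], FAR, EINOx [g/kg]) -- assumed values
GROUND = [(650.0, 800.0, 0.0180, 10.0),
          (750.0, 1500.0, 0.0230, 22.0),
          (820.0, 2400.0, 0.0270, 35.0)]

def interp_ground(t3):
    """Linear interpolation of sea-level P3, FAR and EINOx versus T3."""
    pts = sorted(GROUND)
    for (t0, p0, f0, e0), (t1, p1, f1, e1) in zip(pts, pts[1:]):
        if t0 <= t3 <= t1:
            w = (t3 - t0) / (t1 - t0)
            return (p0 + w * (p1 - p0), f0 + w * (f1 - f0), e0 + w * (e1 - e0))
    raise ValueError("T3 outside ground-test range")

def einox_altitude(t3, p3_alt, far_alt, spec_humidity=0.0063,
                   n_pressure=0.4, m_far=0.0):
    """P3T3-style correction (assumed form): EINOx_alt = EINOx_gl * (P3_alt/P3_gl)^n
    * (FAR_alt/FAR_gl)^m * H, with H an exponential humidity factor.
    The FAR exponent is left at zero here as a placeholder."""
    p3_gl, far_gl, einox_gl = interp_ground(t3)
    humidity = math.exp(19.0 * (0.0063 - spec_humidity))
    return einox_gl * (p3_alt / p3_gl) ** n_pressure \
                    * (far_alt / far_gl) ** m_far * humidity

einox = einox_altitude(t3=720.0, p3_alt=900.0, far_alt=0.0235)
fuel_flow, seconds = 1.1, 3600.0                      # kg/s, s (assumed)
# with EINOx in g per kg of fuel, divide by 1000 to obtain kilograms of NOx
print(f"EINOx = {einox:.2f} g/kg, NOx = {einox * fuel_flow * seconds / 1000.0:.1f} kg")
```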
Emission Prediction Model Setup

The file used to set up the engine emission model requires information about the engine emissions, the combustor inlet pressure and temperature, the fuel flow and the fuel/air ratio for the four operating conditions at ground level. In the ICAO database it is possible to find only the data regarding the emission indices. Therefore, the values of combustor inlet temperature and pressure, fuel flow and fuel/air ratio have to be assessed using an engine simulation tool; TURBOMATCH has been used for this work. Table 6 indicates the relevant data available from the ICAO engine database. As can be noticed from the table, only the fuel flow and the emission indices are available, along with the power setting of each mission phase; therefore, a series of off-design simulations has been carried out using TURBOMATCH in order to find the other necessary engine performance data. In the OD section of the engine model input file, the value of TET for each phase of the mission is adjusted to match the fuel flow values of the ICAO database. Table 7 compares the fuel flows of the engine model in the different flight phases with the public domain data available in ICAO. Then, the values of pressure and temperature at the burner inlet and the fuel/air ratio have been set in the emission model input file.

Aircraft Trajectory Optimization

In this study the entire flight profile has been divided into three main phases: climb, cruise and descent. Three parameters have been used to define the flight trajectory: the aircraft speed (Mach number, TAS and EAS), the flight altitude and the mission range. The mission range has been kept constant for all the optimization studies; the study has therefore been focused mainly on the trajectory optimization between two fixed destinations. The climb and cruise phases are defined using 18 points and the cruise Mach number. However, in the optimization process only six design variables have been considered in order to reduce the required computational time. These design variables are five values of altitude and the cruise Mach number. The first four values of altitude are used to define the climb trajectory, whilst the last altitude point defines the cruise altitude, which is constant for the entire cruise. The other points are computed by interpolation between two consecutive design variables, maintaining a constant increment in altitude. For each design variable a boundary has been set to ensure that the resulting optimized trajectories are both feasible and have a monotonically rising climb altitude. These boundaries can be considered as explicit constraints since they are applied directly to the design variables. Table 8 shows the limiting values for each design variable. A gap in altitude between two consecutive variables has been considered in order to guarantee a constant increment in altitude. The speed (EAS) during climb was fixed in the aircraft performance input file to 250 knots for the first two climb segments and 320 knots for the three subsequent climb segments. Moreover, the climb and descent phases are flown at fixed power setting: for both phases the maximum power setting is selected, i.e. maximum thrust at the maximum TET permitted in the given flight phase. According to Laskaridis et al. [8], a common method is to climb at constant EAS.
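To make the parameterization concrete, the sketch below clips five altitude design variables (four climb points plus the cruise altitude) to illustrative bounds and builds a multi-point climb profile by linear interpolation between consecutive design altitudes. The bound values, the starting altitude and the number of profile points are assumptions loosely based on the description above, not the actual GATAC set-up.

```python
# Build a climb altitude profile from 5 altitude design variables (4 climb + cruise)
# by piecewise-linear interpolation between consecutive design altitudes.

BOUNDS = [(1000.0, 4000.0), (3000.0, 6000.0), (5000.0, 8000.0),
          (7000.0, 10000.0), (9000.0, 11000.0)]      # m, illustrative values

def clip_to_bounds(design):
    return [min(max(x, lo), hi) for x, (lo, hi) in zip(design, BOUNDS)]

def climb_profile(design, start_alt=475.0, n_points=18):
    """Piecewise-linear profile through the design altitudes, resampled to n_points."""
    knots = [start_alt] + clip_to_bounds(design)
    legs = len(knots) - 1
    profile = []
    for i in range(n_points):
        s = i / (n_points - 1) * legs          # position along the knot sequence
        j = min(int(s), legs - 1)
        frac = s - j
        profile.append(knots[j] + frac * (knots[j + 1] - knots[j]))
    return profile

design_vars = [3500.0, 5500.0, 7500.0, 9500.0, 11000.0]   # one example candidate
print(climb_profile(design_vars))
```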
As shown in Table 8, the maximum allowable altitude has been limited to 11,000 meters. This limitation is related to the fact that, as the altitude increases, the Reynolds number falls because the ratio of density to absolute viscosity drops. At a certain altitude the Reynolds number will fall below a critical value of about 10^5 and the flow about the blades of the compressors and turbines will start to separate. This situation leads to two main consequences: (a) the flow is not deflected as much as before, thus the compressor and turbine power drops, leading to a reduction of thrust; (b) the SFC increases because of the increased losses associated with the turbulent wakes, which in turn cause a reduction of the compressor and turbine efficiencies. According to Pilidis [6], at an altitude of 11,000 meters, for a large turbofan, the effect due to the drop of Reynolds number leads to a reduction of about 2% in thrust. In the engine performance model adopted in this work the effect of Reynolds number is not taken into account; therefore a limitation of 11,000 meters has been imposed in order to obtain more realistic results. The descent trajectory starts at the cruise altitude and speed and has been divided into 10 segments. For each segment the flight speed has been chosen as stated in Table 9.

Multi Disciplinary Optimization of Aircraft Trajectories

The overall optimization running sequence is shown in Figure 13. At the start of the optimization process the optimizer (GATAC) generates the first set of design variables. In this work the design variables are five altitude points and the cruise Mach number. The first four altitude points describe the climb trajectory whilst the last point corresponds to the cruise altitude. The design variables are written into the input file of the aircraft model along with all the other required parameters. The following step consists of the execution of the aircraft model. As already explained, the aircraft model also requires the specifications of the aircraft and the engine performance data. The execution of the aircraft model generates two output files. The first file concerns the aircraft performance; the results include mission duration, fuel consumption and distance covered for the whole mission and for each flight segment. The second file contains the performance of the engine for each phase of the mission; SFC, thrust, shaft speed as well as engine temperatures and pressures can be found in this file. Information regarding the mission profile, the fuel consumed and the engine performance during the mission is used to generate the input file for the emission model. In addition, the emission model requires other data regarding the combustion specification and the engine emissions; these data are read from a specific input file. The emission prediction model computes the values of NOx, CO and UHC emitted during the mission in kilograms. Then, the output data are read by GATAC and, based on these values, a new generation of design variables is created. This process is repeated until the optimization convergence criteria are satisfied.
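The running sequence of Figure 13 can be mimicked by a thin driver loop that chains the models and feeds the objectives back to the optimizer. In the sketch below the file-based interfaces are replaced by direct function calls, the two models are crude stand-ins rather than HERMES or the P3T3 tool, and a random search stands in for the GATAC optimizer; everything shown is an assumption made only to expose the loop structure.

```python
import random

def aircraft_model(design):
    """Stand-in for the aircraft model: returns (fuel burnt [kg], flight time [min], engine data)."""
    climb_alts, cruise_mach = design[:4], design[4]
    fuel = 1.4e5 - 0.5 * sum(climb_alts) / 4 - 3.0e4 * (0.85 - cruise_mach)  # toy trend
    time = 14195.0 / (cruise_mach * 980.0) * 60.0                            # toy trend
    return fuel, time, {"mean_TET": 1500.0 + 400.0 * cruise_mach}

def emission_model(fuel, engine_data):
    """Stand-in for the emission model: NOx roughly scales with fuel and TET."""
    return 1.0e-2 * fuel * (engine_data["mean_TET"] / 1600.0)

def evaluate(design):
    fuel, time, engine = aircraft_model(design)
    return fuel, time, emission_model(fuel, engine)

# crude random-search driver standing in for the optimizer loop
best = None
for _ in range(200):
    candidate = [random.uniform(2000, 11000) for _ in range(4)] + [random.uniform(0.75, 0.85)]
    fuel, time, nox = evaluate(candidate)
    if best is None or fuel < best[0]:
        best = (fuel, time, nox, candidate)
print(best[:3])
```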
Assumptions and Considerations

A number of assumptions have been required in the present study, including:
1. A procedure is implemented in the aircraft model to ensure that each point defining the climb trajectory has a higher altitude than the previous one.
2. The cruise phase is flown at constant altitude and constant flight Mach number.
3. The climb phase is flown at constant power setting. This means that the profile generated for every range and altitude selected is nearly the same.
4. The descent phase is not included in the optimization process.
5. The user cannot choose an arbitrary descent profile; it is automatically calculated by the HERMES code by interpolation between the cruise altitude and the landing altitude. Therefore, the descent profile is a function of the cruise altitude only.
6. The continuity of the flight speed is not guaranteed between the cruise and the descent phase. This could cause a variation of flight speed between the cruise and descent phases.
7. For simplicity, the taxi phases, take-off and landing have not been included in the mission profile and hence in the overall calculation and optimization. The consequence is that the total flight range considered by the optimizer comprises only climb, cruise and descent. The climb phase starts at 475 meters of altitude whilst the descent phase terminates at sea level.
8. Although in HERMES it is possible to take into account a flight diversion mission, it has not been considered in this work.
9. A deviation of +3 degrees with respect to ISA conditions has been assumed for the entire mission.

The following sections present the different optimization studies carried out in the GATAC framework. In each case the aircraft flight trajectory has been optimized keeping the aircraft and engine configurations unchanged, including a payload equivalent to 301 passengers. As stated above, the design variables utilized are associated with the flight altitude during the climb and cruise phases and with the aircraft flight Mach number during cruise. The trade-offs of conflicting objectives such as flight time, fuel burnt and NOx emitted have been considered in each case study (Table 10).

Case 1: fuel burnt vs flight time

This optimization study has been carried out for two conflicting objectives: minimum fuel burnt and minimum flight time. No other constraints were applied. Figure 14 illustrates the Pareto front obtained with the GATAC NSGAMO optimizer [11]. The mission range was set equal to 14,195 kilometers. The Pareto curves were generated after 100 and 300 generations as a series of points, where each point represents a trajectory with its own combination of design variables (altitudes and Mach number). For each point on the Pareto front it is impossible to further reduce one objective without worsening the other. The two extreme trajectories of the Pareto front in Figure 14, the fuel-optimized and the time-optimized ones, lead to important differences in terms of flight time, fuel burnt and emissions. This was expected considering the trade-off between minimum fuel and minimum flight time. The two trajectories differ by 8.63% (79 min) in flight time and by about 12.27% (14,790 kg) in fuel burnt. Less fuel burnt means lower emissions of NOx and CO2. While the emission of CO2 is directly related to the fuel burnt, the relation between NOx and fuel is different, as will be described in the following sections. However, the fuel-optimized trajectory leads to higher emissions during the descent phase than the time-optimized trajectory: considering that the aircraft is flying at a higher altitude in the fuel-optimized trajectory, the descent phase will be longer. Moreover, unlike the climb phase, the descent phase is fixed and is not part of the optimization process. The two trajectories are shown in Figure 15, based on the selected design variables in Table 11. In order to minimize the fuel burnt, the optimizer suggests a solution where the aircraft flies the cruise phase at the highest possible altitude of 11,000 meters and at a Mach number equal to 0.804. Generally, decreasing the speed and increasing the altitude lead to a reduction in drag and therefore in required thrust. This lower thrust requirement, in turn, means a lower engine power setting along with lower fuel burnt. However, other important aspects have to be considered.
A reduction in flight speed means more flight time, with a negative effect on the fuel burnt. Therefore the optimizer has to assess the best compromise. In this case the resulting cruise Mach number for the minimum fuel burnt trajectory is 0.804, which is higher than the minimum allowed value that was set to 0.75. Moreover, in order to reach the highest allowable altitude as quickly as possible, an increase of engine thrust and power setting is also required. It is interesting to notice in Figure 16 how, for the fuel-optimized trajectory, the optimizer proposes at segment 17 a much greater fuel consumption than for the time-optimized trajectory. This is done in order to gain height as quickly as possible, which leads to lower fuel consumption in the following segments. In Figure 16 it is worth noting that the climb phase comprises the first 18 segments and the following segments represent the cruise phase. Minimum flight time means maximum TAS. Therefore, in order to minimize the time during the cruise, the aircraft flies at the highest Mach number permitted, which is 0.85. The optimizer also suggests flying at the lowest altitude. This is consistent, since the speed of sound is highest at sea level and so, for a given Mach number, is the TAS. Moreover, as already explained, the thrust increases with decreasing altitude because of the higher air density. The altitude profiles of the two climb trajectories are shown in Figure 17. Again, it is possible to notice that the climb gradient of the fuel-optimized trajectory is greater than that of the time-optimized one: the aircraft has to accelerate as fast as possible in order to gain height. The acceleration to gain height for the fuel-optimized trajectory is well shown in Figure 18. The step in the flight Mach number is due to the passage from 250 knots to 320 knots of EAS during the climb phase, as described above.

Case 2: minimum flight time vs minimum NOx emitted

The next study that has been carried out regards the optimization of two conflicting objectives: minimum flight time and minimum NOx emitted. It is important to bear in mind that optimizing for fuel consumption is different from optimizing for NOx emissions. Similarly to the previous case, the trade-off of the two objectives leads to the characteristic Pareto curve shown in Figure 19. Points C and D represent the minimum flight time and minimum NOx trajectories respectively. A comparison of the results for the time-optimized and NOx-optimized trajectories is shown in Table 11. The minimum flight time trajectory has exactly the same values of flight time, fuel burnt and emissions already calculated in Case 1. Regarding the NOx-optimized trajectory, the total NOx emitted is about 1,457 kilograms and the reduction is up to 35.07% compared with the time-optimized trajectory; in the fuel-optimized trajectory the total NOx emitted is 1,519 kilograms. The flight time for the NOx-optimized trajectory is 1,065 minutes, while for the fuel-optimized trajectory it is 995 minutes. Considering that the two cruise altitudes are the same for minimum NOx and minimum fuel, in order to reduce the NOx emission the optimizer suggests flying at a lower speed than the fuel-optimized trajectory, with an increase in the flight time. Figure 20 shows the flight paths of the two optimized trajectories. Considering that the time-optimized trajectory is exactly the same trajectory described in Case 1, only the NOx-optimized trajectory requires further consideration at this point.
Regarding the NOx-optimized trajectory, it is interesting to notice that the optimizer suggests a solution where the aircraft flies at the lowest allowable Mach number and at the highest allowable cruise altitude. The highest altitude and the lowest speed minimize the engine thrust requirement; in turn, a low thrust requirement means a low TET, which results in low NOx emission. Figures 21 and 22 show the altitude and Mach number variations of the two climb trajectories against the flight distance. It is evident how, similarly to the fuel-optimized trajectory, in order to minimize the NOx emission the optimizer suggests reaching the highest admissible altitude as fast as possible. It is important to consider the fact that a large amount of NOx is produced at TET values in the region of 1700-1800 K and that NOx production increases exponentially with TET. In this respect, the trajectory optimized for flight time produces a large amount of NOx; one of the main reasons for this, besides the fuel burnt, is the high value of TET required to produce the thrust. Considering only the cruise phase, the time-optimized trajectory emitted about 409 kilograms more NOx than the NOx-optimized trajectory.

Conclusion

The multi-disciplinary optimization framework has been implemented in order to investigate the potential of greener trajectories as future possible solutions for the reduction of the aircraft environmental impact. The optimization framework comprises three different simulation models, namely the engine model, the aircraft model and the emissions prediction model, together with the GA-based NSGAMO optimizer. The multi-objective optimization studies have been carried out in the GATAC framework, focusing on the minimization of conflicting objectives such as fuel burnt versus flight time and NOx versus flight time for long-range trajectories. In the first optimization study a long-range mission of 14,195 kilometers has been considered. The results show a difference of 8.63% (79 min) in flight time and of about 12.27% (14,790 kg) in fuel burnt between the fuel-optimized and time-optimized trajectories. In order to minimize the flight time the optimizer suggests a solution where the aircraft has to fly at the minimum allowable altitude and the maximum flight Mach number. On the other hand, the flight trajectory that minimizes the fuel burnt is one in which the aircraft has to fly at the maximum permissible altitude. The cruise Mach number that minimizes the fuel consumed does not correspond to the minimum allowed Mach number but is the result of a compromise between fuel flow (power setting) and flight time. Regarding the minimization of NOx emissions, the results show that the optimizer suggests flying at the highest permissible altitude and at the lowest allowable cruise Mach number, which reduces the NOx emitted by up to about 35% with respect to the time-optimized trajectory.
Lowest order QED radiative corrections to five-fold differential cross section of hadron leptoproduction

The contribution of the exclusive radiative tail to the cross section of semi-inclusive hadron leptoproduction has been calculated exactly for the first time. Although the experience of inclusive data analyses suggests that the contribution of the radiative tail from the elastic peak is of particular importance, similar effects in the semi-inclusive process were only recently estimated in the peaking approximation. The explicit expressions for the lepton part of the lowest order QED contribution of the exclusive radiative tail to the five-fold differential cross section are obtained and discussed. Numerical estimates, provided for Jefferson Lab kinematic conditions, demonstrate rather large effects of the exclusive radiative tail in the region at the semi-inclusive threshold and at high energy of the detected hadron.

Introduction

The semi-inclusive deep inelastic scattering of a lepton on the nucleon represents an important tool for studying the strong interaction. The possibility of representing a semi-inclusive hadron leptoproduction (SIHL) cross section as a convolution of the virtual photon absorption by the quarks inside the nucleon and the subsequent quark hadronization allows one to investigate these mechanisms separately. The SIHL experiment provides not only the complete information on the longitudinal parton momentum distributions available in inclusive deep inelastic scattering (DIS) experiments, but also an insight into the hadronization process and into the parton orbital momenta. It is well known that SIHL events are altered by real photon emission from the lepton and hadron legs as well as by additional virtual particle contributions. Due to i) the fact that most of the outgoing particles in SIHL remain undetected and ii) the finite resolution of the experimental equipment, not all events with real photon emission can be removed experimentally. Moreover, the contribution of events with an additional exchange of virtual particles cannot be removed at all. As a result, the measured SIHL cross section includes not only the lowest order contribution, which is the process of interest (Fig. 1 (a)), but also the higher order effects whose contribution has to be removed from the data. Since the latter cannot be extracted by experimental methods, the corresponding radiative corrections (RC) have to be calculated theoretically. The primary step in the solution of the task of RC calculation in lepton-nucleon scattering is the calculation of the part of the total lowest order QED correction that includes real photon emission from the lepton leg as well as the additional virtual photon exchanged between the initial and final leptons and the correction due to the virtual photon self-energy. There are two basic reasons why other types of RC, such as the box-type contribution or real photon emission from hadrons, are less important. The first is that these corrections do not contain the leading order contribution, which is proportional to the logarithm of the lepton mass, and therefore their contribution is much smaller compared to the RC from the lepton part. The second is that the calculation of these effects requires additional assumptions about the hadron interaction, so it carries additional purely theoretical uncertainties, which are difficult to control. In the very first detailed SIHL experiments [1] RC were unknown and Monte Carlo simulations based on the approach from Ref. [2] were used to correct the data.
The results of this Monte Carlo method, however, have not been tested so far against direct calculations of the radiative effects. Meanwhile most experiments at high energies [3] neglected RC completely [4]. The calculations of the lepton part of the lowest order QED RC to SIHL cross sections were performed in Refs. [5,6] using the Bardin-Shumeiko covariant approach [7]. In Ref. [5] the radiative effects were calculated for the three-dimensional cross section of unpolarized and polarized SIHL (with longitudinally polarized target and lepton), and the FORTRAN code for numerical estimates was provided as a patch (named SIRAD) to the POLRAD code [8]. In Ref. [6] the RC for the unpolarized five-fold differential cross section were computed and the FORTRAN code HAPRAD was developed. However, in both papers the RC do not include the contribution of the radiative tail from the exclusive reaction at the threshold. In inclusive DIS experiments the analogous effects from the elastic radiative tail [2,9] give an important contribution to the observable cross section and, moreover, there exist kinematic regions (e.g. at high y or Q^2 and small x) where this contribution is dominant. This additional term of the RC to SIHL has until now been investigated only in the peaking approximation [10]. In the present Letter the contribution of the lepton part of the lowest order QED RC to SIHL due to the exclusive radiative tail is calculated exactly for the first time. This is done using the approach from Refs. [11,12] and the notations from Ref. [6]. The RC were calculated for the complete five-fold differential cross section. The technique of exact calculation of the lowest order RC (in α) is used in this Letter. The accuracy of the calculation is defined by the accuracy of the numerical integration, which can be easily controlled, whereas the actual values of the correction depend on the particular choice of the exclusive reaction parameters. The rest of the Letter is organized as follows. In Section 2 we define the kinematics of the investigated processes, obtain explicit expressions for the contribution of the exclusive radiative tail to the five-fold differential cross section of SIHL, and investigate their analytical properties by considering the soft photon limit. A discussion of the numerical results and concluding remarks are presented in Section 3. The explicit expressions allowing for the presentation of the results in closed form are given in the Appendix.

Kinematics and explicit expressions

Feynman graphs giving the Born contribution as well as the lepton part of the lowest order QED correction to the SIHL cross section from the exclusive radiative tail are shown in Fig. 1: the Born process in Fig. 1 (a) and the radiative tail in Fig. 1 (b,c). The radiative tail is generated by the real photon emission from the lepton leg accompanying the exclusive leptoproduction, l(k_1) + N(p) → l'(k_2) + γ(k) + h(p_h) + u(p_u). Following the notations of Ref. [6], we call h the hadron measured in the final state, which is observed in coincidence with the scattered lepton l'. The second hadron u, which completes the exclusive reaction, remains undetected. Here k_1 (k_2) is the four-momentum of the initial (final) lepton (k_1^2 = k_2^2 = m^2), p is the target four-momentum (p^2 = M^2), p_h (p_u) is the four-momentum of the detected (undetected) hadron (p_h^2 = m_h^2, p_u^2 = m_u^2), and k is the emitted real photon four-momentum (k^2 = 0). The set of variables describing the five-fold differential SIHL cross section can be chosen as x = Q^2/(2 p·q), Q^2 = -q^2, z = (p·p_h)/(p·q), t = (q - p_h)^2 and φ_h, where q = k_1 − k_2 and φ_h is the angle between the (k_1, k_2) and (q, p_h) planes in the target rest frame reference system (p = 0).
For the description of the real photon emission we will use three additional variables, which in the covariant approach of Refs. [7,11,12] are usually taken as R = 2 k·p, τ = (k·q)/(k·p) and the angle φ_k, with φ_k being the angle between the (k_1, k_2) and (q, k) planes in the target rest frame reference system. We will also use a set of Lorentz invariants whose explicit expressions, together with the coefficients a_{1,2,k}, b and b_k, can be found in Appendix B. It is also useful to define the non-invariant quantities describing the kinematics of the detected hadron, such as its energy E_h and the longitudinal (p_l) and transverse (p_t) components of its three-momentum with respect to the virtual photon direction in the target rest frame. These quantities can be expressed through the Lorentz invariants introduced above, with ν being the virtual photon energy in the target rest frame. Instead of the variable p_t^2, commonly used in SIHL analyses, we will use the variable t. This is dictated not only by the fact that t is Lorentz invariant, but also by the necessity to distinguish the forward and backward hemispheres, which are mixed in the p_t^2-differential cross section. At the intermediate energies of Jefferson Lab, the contribution of backward kinematics is significant, in particular for heavy hadrons detected in the final state. This backward kinematics is related to the target fragmentation mechanism described in terms of fracture functions [13]. One can also notice that the p_t^2-differential cross section is divergent in the completely transverse case p_l = 0, which complicates numerical integrations (see Fig. 3 and the comments after Eq. 26 for details).
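Although the explicit expressions are not reproduced here, the detected-hadron quantities follow from standard target-rest-frame kinematics once the definitions x = Q^2/(2 p·q) and z = (p·p_h)/(p·q) are adopted: E_h = zν, p_l is fixed by t, and p_t follows from the mass-shell condition. The small helper below evaluates them under those standard relations; it is an illustration consistent with the definitions above, not the authors' code, and the proton/pion masses and the sample kinematic point are assumptions.

```python
import math

M, M_H = 0.938272, 0.139570   # proton and pi+ masses [GeV] (assumed particle choice)

def hadron_kinematics(x, q2, z, t):
    """E_h, p_l, p_t of the detected hadron in the target rest frame,
    using nu = Q^2/(2 M x), E_h = z*nu and t = (q - p_h)^2."""
    nu = q2 / (2.0 * M * x)                    # virtual photon energy
    q_mod = math.sqrt(nu ** 2 + q2)            # |q| in the target rest frame
    e_h = z * nu
    p_l = (t + q2 - M_H ** 2 + 2.0 * nu * e_h) / (2.0 * q_mod)
    pt2 = e_h ** 2 - M_H ** 2 - p_l ** 2
    if pt2 < 0.0:
        raise ValueError("kinematically forbidden point")
    return e_h, p_l, math.sqrt(pt2)

print(hadron_kinematics(x=0.3, q2=2.5, z=0.5, t=-2.0))
```

The 1/|p_l| divergence mentioned above is visible here as well: as t approaches the value for which p_l vanishes, the Jacobian between t and p_t^2 blows up.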
According to Eq. 33 of Ref. [12], the contribution of one-photon emission from the lepton leg to the exclusive hadron leptoproduction cross section can be presented as the integral of the squared matrix elements described by Fig. 1 (b,c) over the phase space of the undetected hadron u and the photon solid angle. The integration in our case is three-fold, because the measured exclusive cross section is four-dimensional and the cross section with the emission of one additional photon is seven-dimensional. However, when we consider this contribution to the five-fold differential SIHL cross section, one photonic variable is fixed by the measurement, and the contribution takes the form of a two-dimensional integral. Specifically, we use the inelasticity v as an observable in SIHL, while the variable R is fixed by the observable and the two photonic variables. Hence the calculation of the exclusive radiative tail contribution to SIHL requires the integration over the other two unobserved photonic variables, τ and φ_k. The cross section responsible for the lepton part of the exclusive radiative tail (see Fig. 1 (b,c)) involves the squared matrix element M_R^2, which can be presented as a convolution of the leptonic and hadronic tensors. The former has the well-known structure, while the latter can be presented in a covariant way in terms of the Lorentz-invariant structure functions H_i, which can be related to the exclusive photoabsorption cross sections as shown in Appendix A. After the convolution of the leptonic and hadronic tensors the squared matrix element is obtained and, combining Eqs. 8 and 12, we obtain the contribution of the exclusive radiative tail to the SIHL cross section (Eq. 13). The integration limits over τ are given by τ_min,max = (S_x ± λ_q)/(2M^2). The quantities θ_i(τ, φ_k) entering this contribution are expressed through F_IR, θ_i2 and θ_i3, which are defined in Appendix B. The structure functions H_i depend on shifted kinematic variables, modified with respect to the ordinary ones by the real photon emission. The region of variation of these variables, together with their maximum and minimum values, is depicted in Fig. 2. In this figure one can see the Born point as well as the points that correspond to the so-called collinear singularities (the only ones used in [10] for the peaking approximation), in which the photon is emitted along the momentum of the initial or final lepton, together with the corresponding values of the shifted variables. If in Eq. 13 we restrict our consideration to soft photon emission only, the result has to be proportional to the Born contribution to the exclusive cross section, with a coefficient independent of the type of the considered process. To obtain this well-known relation between the contributions of the soft-photon emission and of the Born process to the cross section of the exclusive reaction, it is necessary to integrate Eq. 13 over z, keeping only the photons with energy ω in the limits ω_min ≤ ω ≤ ω_max ≪ all energies and masses. The corresponding Born contribution reproduces the cross section of exclusive leptoproduction and can be expressed in terms of the coefficients (15) and the structure functions. The integration variable z and the photon energy in the target rest frame are linearly related, and the limit z_0 corresponds to the situation when the emitted photon energy is equal to zero. Taking this into account, with λ_m = Q^2(Q^2 + 4m^2), we finally obtain the sought equality, which in the limit m → 0 reproduces the expected result for the cross section of soft photon radiation (e.g., see Eq. (7.64) of [14]).

Most recent experiments measuring SIHL are being performed at Jefferson Lab. In particular, the large acceptance of the CLAS detector allows for the extraction of information about the five-fold differential SIHL cross section in a rather wide kinematic region that covers almost the whole z-range as well as the entire φ_h-range. In this section we present numerical results for the five-fold differential SIHL cross section in CLAS kinematic conditions. One remark has to be discussed before presenting the numerical estimates. Consider the t-dependence of p_t^2 at fixed Q^2 and x and different z, presented in Fig. 3 for π^+ electroproduction in electron-proton scattering. The upper t-limit corresponds to the maximum value of the detected hadron longitudinal momentum p_l, while the lower one at low energy is given by the SIHL threshold, i.e., the point where the missing mass squared reaches its minimal value (m_π being the pion mass). A remarkable feature of this plot is that the curve z = 0.1 crosses the point where p_l changes sign, and both positive and negative values of p_l give the same p_t. As was mentioned above, due to the common denominator |p_l| the p_t^2-differential SIHL cross section diverges at this point. To estimate the value of the RC we introduce the radiative correction factor in the standard way, as the ratio σ_obs/σ_B, where σ_obs (σ_B) is the radiatively corrected (Born) five-fold differential cross section of semi-inclusive hadron leptoproduction. The analytical expressions for the RC obtained in the previous section can be applied to the leptoproduction of any hadron observable in lepton-nucleon scattering; however, we restrict our numerical studies to the case of π^+ production in electron-proton scattering.
The calculation of the RC factor requires a parameterization of the photoabsorption cross sections. We use the model developed by the MAID collaboration, MAID 2003 [15]. This model provides parameterizations for each of the required photoabsorption cross sections which are continuous over the whole kinematic region. It accurately predicts the behavior of the photoabsorption cross sections in the resonance region and has the correct asymptotic behavior at higher W and Q^2 by means of the fit from Ref. [16]. The numerical estimation of this effect requires knowledge of the structure functions within the kinematical restrictions for the shifted variables presented in Fig. 2. However, the most important region is concentrated near the s- and p-collinear singularities (see Eq. 18), where the integrand reaches its maximum value. A possible effect of the specific choice of the photoabsorption cross sections (e.g., the choice of the MAID 2003 model) can be investigated by comparing the predictions of the model with experimental data or with other model predictions in this particular region. For example, the CLAS kinematic restrictions (i.e. E_beam = 6 GeV, 1 GeV^2 < Q^2 < 7 GeV^2, 0 < p_t < 1.5 GeV) translate, for the collinear singularity region, into 0.07 GeV^2 < Q^2_s,p < 10 GeV^2, 1.17 GeV^2 < W^2_s,p < 10 GeV^2 and -8 GeV^2 < t_s,p < 8·10^-3 GeV^2. Since MAID 2003 describes the experimental data in this region sufficiently well [15] (at least for dσ_L/dΩ_h and dσ_T/dΩ_h, which provide the main contribution to the total cross section) and provides a convenient parametric form for all required photoproduction cross sections, the choice of MAID 2003 seems to be reasonable and practical. In applications to data collected in specific regions, especially in situations when the other two photoabsorption cross sections dσ_TT/dΩ_h and dσ_LT/dΩ_h give a rather large contribution (e.g., for measurements of the t- or φ_h-dependence), the model independence always has to be tested by comparison with data (or other models) in the collinear regions. This is especially important in the light of recent investigations which demonstrated that in certain cases MAID 2003 can be imperfect [17]. Examples of the RC factor including the exclusive radiative tail contribution are shown in Figs. 4, 5 and 6. It can be seen in Fig. 4 that the exclusive radiative tail contribution at z ∼ 0.1 is small or even negligible but rapidly increases with growing z near the SIHL threshold, i.e., when t → t_min. Such behavior arises from the first term in Eq. 14 due to the smallness of the denominator in the expression for R defined by Eq. 7. In contrast to the elastic radiative tail contribution to inclusive DIS, the minimally allowed value of Q^2 in the integrand of Eq. 13 does not reach the region close to the photon point Q^2 → 0, where the exclusive cross section increases rapidly. As a result, the so-called t-peak (or Compton peak), which often contributes essentially to the cross section of the elastic radiative tail [18,19], does not appear in the considered case, and the main contribution to the exclusive radiative tail comes from the collinear region. The absolute value of the exclusive radiative tail rapidly increases with growing invariant t (or with the missing mass of the detected lepton-hadron system). However, the SIHL cross section increases with t much faster, making the relative contribution of the exclusive radiative tail small or negligible at large t. Meanwhile, the situation changes to the opposite at small t, i.e.,
close to the threshold, where the exclusive radiative tail exceeds the SIHL cross section (Fig. 4). Moreover, one can see in Figs. 4 and 5 that the contribution of the exclusive radiative tail can significantly modify the φ h -distributions at intermediate t, distorting the usual A + B cos φ h + C cos 2φ h behavior. In experimental data analyses the five-fold cross sections are estimated in specific bins. There exist several schemes for how the radiative correction can be applied to the cross section observed in a certain bin. The simplest and most practical variant is to calculate the RC for the center of the bin, i.e., for the mean values of the kinematic variables defining the bin. If the bin is broad in one or several kinematic variables, then the Born cross section and the contribution of the exclusive radiative tail can change differently over the bin. In this case the integration over the bin has to be performed taking into account all experimental cuts used to form the bin. The exact procedure for applying the RC to data collected in such bins requires integration of the RC cross section over the bin. Since this procedure is computationally expensive, approximate procedures are often used. One possible scheme of such approximate integration is the so-called 'event-by-event' scheme, in which the RC factor defined by Eq. 28 is applied as a weight to each reconstructed event (a minimal sketch of this scheme is given below). Some of the experimental cuts used by experimentalists can substantially influence the RC factor for the bin. One frequently used cut is the cut on missing mass or inelasticity. This cut avoids the contribution of resonances and is also important for the RC calculation. Table 1 presents the results of the RC calculation for these two approaches to bin formation (i.e., with and without cutting the resonance region) as used in ref. [20]. As one can see, applying the cut on missing mass suppresses the contribution of the exclusive radiative tail. Note that there are experimental situations where RCs including the contribution from the resonance region are of great importance. Examples include the analysis of measurements of threshold reactions, e.g., "quark-hadron duality", which is based on a comparison of production in the resonance region with an extrapolation of DIS measurements, or the reanalysis of older Cornell data from [1], which were collected without any cuts on the threshold region. Therefore, our program includes an option to apply a cut on missing mass squared in the integrand of the cross section of the exclusive radiative tail. Summarizing, the exclusive radiative tail contribution to the complete five-fold differential unpolarized SIHL cross section has been calculated exactly for the first time. The respective FORTRAN code for the numerical estimation is open to the scientific community. Numerical analysis performed for the kinematical conditions of the current experiments at JLab demonstrated that the RC to the SIHL coming from the exclusive radiative tail is large in the regions of small t and close to the threshold, while for φ h ∼ 180° the kinematical region where the RC is important is much wider. This contribution significantly modifies the φ h -asymmetries of the SIHL cross section. The present approach is quite general and can be extended to other SIHL reactions, provided the exclusive cross section at the threshold is known.
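As a concrete illustration of the 'event-by-event' scheme mentioned above, the following Python sketch applies an RC weight to each reconstructed event when filling a binned, radiatively corrected yield. It is not taken from the FORTRAN program described in this Letter: the callable rc_factor standing in for Eq. 28, the event fields (Q2, x, z, pt2, phi_h), and the choice to weight each event by the inverse of the RC factor are all illustrative assumptions about how such a correction would be applied in practice.

```python
from collections import defaultdict

def born_level_yields(events, rc_factor, bin_of):
    """Approximate radiative unfolding in an 'event-by-event' scheme.

    events    : iterable of dicts holding the reconstructed kinematics of each event
    rc_factor : callable returning the RC factor (sigma_obs / sigma_Born, cf. Eq. 28)
                evaluated at the event kinematics (hypothetical interface)
    bin_of    : callable mapping an event to its analysis bin
    """
    yields = defaultdict(float)
    for ev in events:
        delta = rc_factor(ev["Q2"], ev["x"], ev["z"], ev["pt2"], ev["phi_h"])
        if delta > 0.0:
            # each observed event contributes 1/RC to the corrected yield of its bin
            yields[bin_of(ev)] += 1.0 / delta
    return dict(yields)
```

In this picture the bin-center approximation is recovered if rc_factor is evaluated only once per bin; evaluating it per event is what lets broad bins with strongly varying corrections be handled without an explicit integration over the bin.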
The calculated correction to the SIHL due to the radiative tail from exclusive processes is important, and its contribution always has to be taken into account in analyses of data from current and future experiments on the SIHL. Currently, this correction is either ignored or analyzed in the peaking approximation [10], the quality of which can be evaluated only by comparison with the exact formulae presented in this Letter. Several sources of systematic uncertainty have to be investigated in the data analyses, including i) the specific choice of the model for the photoproduction cross sections, ii) the quality of the peaking approximation, if this approximation is used instead of the exact formulae, and iii) the choice of the specific scheme of the radiative correction procedure in a given bin, if it is used instead of exact integration of the radiative tail cross section over the bin. Table 1: Relative exclusive radiative tail contribution at different φ h to the observed cross section for the kinematical points of ref. [20]. One of the authors thanks the Istituto Nazionale di Fisica Nucleare (Genova, Italy) for their generous hospitality during his visit.
Dose and life stage-dependent effects of dietary beta-carotene supplementation on the growth and development of the Booroolong frog This study is the first to show that dietary beta-carotene supplementation has a dose and life stage-dependent effect on vertebrate growth and development. Knowledge in this domain could assist with the captive breeding of threatened species. Introduction Dietary carotenoids are expected to directly impact vertebrate growth and development by acting as powerful antioxidants. Growth is associated with high metabolic activity and oxygen consumption, accompanied by free radical production and potential for oxidative stress . By scavenging and quenching free radicals produced during metabolism, carotenoids can reduce the oxidative damage caused to cells, tissues and DNA, enabling more efficient cell 1 division and differentiation (Edge et al., 1997), and consequently increased growth . In turn, an increase in growth (and more rapid changes in body size) should facilitate more rapid progression through developmental stages (Gillooly et al., 2002;Marri and Richner, 2014). Additionally, dietary carotenoids might influence growth and development indirectly by improving immune function and general health , or foraging performance (Toomey and McGraw, 2011). Despite sound theoretical arguments suggesting that carotenoid supplementation will improve vertebrate growth and development, this remains strongly debated because evidence for beneficial effects remains equivocal, both within and between taxa. In some species, clear benefits of carotenoid supplementation have been reported (e.g. Torrissen, 1984;Christiansen and Torrissen, 1996;Cucco et al., 2006;Arulvasu et al., 2013). However, in other species there have been no detectable benefits (e.g. Ř ehulka, 2000; Amar et al., 2004;Wang et al., 2006;Byrne and Silla, 2017), or negative effects (Cothran et al., 2015). This interspecific variation might exist because species differ in their capacity to utilize carotenoids, reflecting different evolutionary histories in carotenoid consumption. For instance, because herbivores consume higher amounts of carotenoids than carnivores, they might have a greater physiological capacity to absorb and process these micronutrients (Tella et al., 2004). Alternatively, much of the interspecific variation reported in the literature might simply reflect the fact that very few studies have tested for dose-response relationships. Within species, the effect of dietary carotenoids on growth and development may shift depending on the quantities of carotenoids consumed. Based on optimization theory, we should expect that benefits will be restricted to a specific range of metabolically-relevant concentrations, with growth and development optimized at intermediate concentrations (Christiansen et al., 1995;Christiansen and Torrissen, 1996). If carotenoids are provided at extremely low concentrations, effects on growth and development might be negligible, and any benefits might remain undetected. In contrast, carotenoids at extremely high concentrations may stunt growth and development because high concentrations could be toxic, and large amounts of energy might be required to detoxify and remove excess nutrients, limiting growth (Palozza et al., 1997;Cothran et al., 2015). 
High concentrations could also have pro-oxidant effects, whereby excess carotenoids elicit an increase in oxidative stress, either directly by generating additional ROS (Stoss and Refstie, 1983), or indirectly by suppressing the endogenous antioxidant system (Bouayed and Bohn, 2010). Adding to this complexity, is the possibility that optimal carotenoid concentrations differ between life stages. For most vertebrate species growth will be most rapid during juvenile life stages, so we should expect that benefits associated with ROS scavenging will be most pronounced during early development. Surprisingly, however, past studies testing for effects of carotenoids on growth have largely ignored the potential for life-stage effects, instead focussing on very narrow periods of either juvenile or adult development. Anuran amphibians (frogs and toads) provide a model group for investigating carotenoid dose-response relationships across different life stages because most species have a biphasic life cycle, with juvenile and adult life stages separated by a distinct period of metamorphosis (Wilbur and Collins, 1973). At each life stage, anurans consume different types of food (in general larvae eat algae and adults eat invertebrates), creating life stage specific differences in carotenoid intake. Growth is usually several orders of magnitude faster during the larval life stage, and assuming that this is when ROS production is highest, this is the life stage when the antioxidant properties of carotenoids will be most beneficial. Antioxidant properties may also be valuable during and immediately after metamorphosis. At this developmental phase, rapid cell division and morphological change can dramatically increase ROS production (Blount et al., 2000). Moreover, a pronounced spike in cellular oxygen concentration during the transition from aquatic to terrestrial life would be expected to cause oxidative stress . To date, the effect of dietary carotenoids on growth and development has been examined in six anuran species, with a focus on the combined effects of five common carotenoids; astaxanthin, cryptoxanthin beta-carotene, lutein and zeaxanthin Dugas et al., 2013;Cothran et al., 2015;Byrne and Silla, 2017). As for studies in other vertebrates, studies in amphibians have mainly tested for effects at one dose or life stage, and findings have been inconsistent. Despite these inconsistencies, benefits to growth and development have been reported for a number of anuran species from several families Dugas et al., 2013), suggesting that carotenoid-mediated growth benefits could be widespread in amphibians. From an applied perspective, knowledge of carotenoid dose effects on amphibian growth and development may have considerable conservation value. Globally, amphibians are declining faster than any other vertebrate class and the recommended conservation action for threatened species is captive breeding and reintroduction (Gascon et al., 2007). To date, however, the success of captive breeding programs has been limited, in part due to a lack of knowledge regarding amphibian nutritional requirements. Identifying whether specific doses of carotenoids improve anuran growth and development, and whether the importance of carotenoids varies between life stages, may help recovery teams more effectively generate mature individuals needed to maintain viable captive insurance colonies. 
Furthermore, rapid generation of healthy animals could increase the number of individuals available for release, which is known to be an important predictor of reintroduction success (Tarszisz et al., 2014). The present study aimed to investigate the influence of beta-carotene on the growth and development of the critically endangered Booroolong frog Litoria booroolongensis. The specific aim was to determine whether effects of dietary beta-carotene supplementation on growth and development are either dose (0, 0.1, 1 and 10 mg g −1 ) or life-stage (larval and post-metamorphic) dependent. Methods The procedures outlined below were performed following approval by the University of Wollongong's Animal Ethics Committee (approval number AE 14/21). Study species Litoria booroolongensis is listed by the IUCN as critically endangered (Hero et al., 2004) and in 2008, under the recommendation of the NSW Department of Environment and Heritage (OEH), a captive breeding program for the species was established at Taronga Zoo (Sydney, Australia). Experimental animals A total of 360 L. booroolongensis eggs (from four discrete clutches) were collected from a captive colony maintained at Taronga Zoo. Egg clutches were laid between 02 January 2015 and 17 January 2015, and all eggs hatched within 3 days of oviposition. Within 1-2 days of being laid, egg clutches were transported to the Ecological Research Centre at the University of Wollongong. Individual clutches were held in~700-1000 ml of carbon-filtered water in plastic aquarium bags enriched with atmospheric oxygen. Hatchling tadpoles from each clutch were held communally in 20-l aquaria (one clutch per aquaria) for a 2-week period prior to being transferred to individual experimental containers. Immediately after hatching, tadpoles were fed a basal diet of ground fish flake mixture (75:25 Sera Flora/Sera Sans; SERA, Heinsberg, Germany), ad libitum, every 2 days before the beginning of the experiment. A 2-week acclimation period was imposed in order to ensure all tadpoles were feeding properly and were of similar size at the beginning of the experiment. All tadpoles were 9-15 days old post-hatching at the commencement of the experiment. Experimental design Individuals were reared on one of four diets supplemented with different concentrations of beta-carotene. Beta-carotene was selected because it is one of the major carotenoids found in algae and herbivorous insects consumed by anurans (Takaichi, 2011), has been shown to significantly impact anuran reproduction , and has been identified in the skin tissue of wild Booroolong frogs (B. Tinning, 2016, pers. comm.). Larval and post-metamorphic basal diets (containing 0.015 and 0.005 mg g −1 total carotenoids, respectively) were supplemented with beta-carotene at the following concentrations: 0 mg g −1 (control), 0.1 mg g −1 (low), 1 mg g −1 (medium) and 10 mg g −1 (high). These supplement doses were chosen because they encompassed, and expanded on those previously used to investigate the effects of dietary carotenoids on amphibian growth, and other fitnessdetermining traits Dugas et al., 2013;Silla et al., 2016). Each treatment included 72 replicate individuals (288 individuals in total) and individuals remained on the same diet treatment throughout both larval and post-metamorphic life stages. The experiment commenced on 19 January 2015 and the larval stage of the experiment was completed by 13 July 2015, when the last tadpole metamorphosed. 
The post-metamorphic stage of the experiment commenced when the first individual metamorphosed on 3 March 2015 and was completed by 26 January 2016, when the last frog reached 7-months of age postmetamorphosis, and sex could be determined. Animals (n = 288) were individually housed in plastic containers (10 cm diameter and 10.5 cm high) throughout both life stages. Each experimental container was enclosed in a black plastic ring so that there was no visual contact between individuals. The experimental containers were held in sets of nine in plastic trays, which were positioned across three shelves in a constant temperature room. The experimental containers were aligned in rows of three, with each diet treatment alternating between rows in the following order; 0, 0.1, 1 and 10 mg g −1 (see Supplementary Fig. S1). The room was artificially illuminated on a 15:9-h, day: night cycle (including twilight lighting for half an hour each cycle to simulate dawn and dusk). In addition to overhead room lighting, UV-B lights (Reptisun 10.0 UV-B 3600 bulb; Pet Pacific, Emu Plains, Australia) were suspended~20 cm above each container, providing 6.5 h per day of UV-B light (between 10 am to 4:30 pm). Ambient temperature in the room was maintained at 22°C (range was 21.9-23°C). Larval husbandry and nutrition During the larval life stage, containers were filled with 550 ml of carbon-filtered water. Water changes and water quality testing were conducted according to methods described previously (Keogh et al., 2018). Over the duration of the experiment, ammonia concentrations remained low (<0.50 mg l −1 ), nitrate remained at 0 mg l −1 and pH remained at 7.4. Tadpoles were fed one of the four diets supplemented with different concentrations of beta-carotene (0, 0.1, 1 and 10 mg g −1 ; Supplementary Table S1). Each diet consisted of 990 mg of ground fish flake mixture and a combination of beta-carotene (0.1, 1 and 10 mg g −1 ) and cellulose microcrystalline powder (435 236; Sigma-Aldrich, Castle Hill, NSW; Supplementary Table S1). Total carotenoid concentrations in the basal diet were <0.015 mg g −1 (see Suppementary Table S2 for a carotenoid profile). Cellulose was used to ensure that each experimental diet consisted of the exact same quantity of feed. Cellulose is commonly used to create balanced designs in dietary studies because it has no nutritional value, is odourless, tasteless and cannot be digested (Dias et al., 1998;Deng et al., 2006). Importantly, in a dietary study run in parallel to our main experiment, cellulose was found to have no effect on the survivorship, growth or development of L. booroolongensis larvae (see Supplementary Table S3). Experimental diets were pre-prepared by suspending 1000 mg of feed in 10 ml of reverse osmosis water (RO water; Sartorius Stedim Biotech, Goettingen, Germany). Feed suspensions were stored in 20 ml syringes and frozen in opaque containers at −20°C until required. On feeding days, syringes were defrosted at room temperature, homogenised and administered in a dropwise manner to ensure an even proportion of feed was administered to each individual. Tadpoles were fed two drops of feed suspension (dry mass; 0.0184-0.0291 g) for the first 4weeks of the experiment, and then three-drops (dry mass; 0.0415-0.0706 g) for the remainder of the experiment (to support increased energetic demands). Tadpoles were fed treatment diets bi-weekly (Monday and Friday) for the duration of the experiment. This feeding regime ensured animals were fed ad libitum. 
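The cellulose balancing described above can be made explicit with a short calculation. The sketch below assumes, purely for illustration, that each 1000 mg batch of larval feed contains 990 mg of ground fish flake plus a 10 mg supplement fraction split between beta-carotene and cellulose; the actual recipe is given in Supplementary Table S1 and may differ in detail.

```python
# Illustrative feed composition per 1000 mg batch for each treatment dose.
# Assumption: beta-carotene and cellulose together always occupy the 10 mg not
# taken up by the 990 mg basal flake, so total feed mass is identical across
# treatments (the point of the cellulose balancing).

BATCH_MG = 1000.0
FLAKE_MG = 990.0
SUPPLEMENT_MG = BATCH_MG - FLAKE_MG  # shared between beta-carotene and cellulose

for dose_mg_per_g in (0.0, 0.1, 1.0, 10.0):
    beta_carotene_mg = dose_mg_per_g * BATCH_MG / 1000.0  # dose is quoted per gram of feed
    cellulose_mg = SUPPLEMENT_MG - beta_carotene_mg
    print(f"{dose_mg_per_g:>4} mg/g diet: flake {FLAKE_MG:.0f} mg, "
          f"beta-carotene {beta_carotene_mg:.1f} mg, cellulose {cellulose_mg:.1f} mg")
```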
Quantifying larval survival, growth and development To quantify tadpole survivorship, each individual was observed every second day and scored as alive or dead. To quantify tadpole growth, each individual was photographed once per fortnight using a Panasonic Lumix DMC-FT20 camera. To ensure accurate photos, water volume in each experimental container was dropped to 100 ml before taking the photo, and then immediately refilled to 550 ml. The development of individual tadpoles was assessed every 2 days using Gosner staging tables. At the emergence of forelimbs (Gosner stages 43-46), the water volume in each experimental container was dropped to 100 ml. To facilitate metamorphosis, a piece of sponge (diameter 9 cm) and a thin layer (<1 cm) of aquarium stones was added to the bottom of each container, and placed on an angle to allow metamorphosing tadpoles to climb from the water. The tadpoles were not fed during this developmental period as tail reabsorption met their nutritional needs. Tadpoles were checked daily, and once an individual had completely reabsorbed its tail, it was blotted with absorbent tissue (kimwipes), weighed and photographed next to a scale. The day of complete tail reabsorption was used to quantify time to metamorphosis. Images of tadpoles and metamorphs were used to determine body length using image analysis software (ImageJ version 1.45s). For each tadpole, head and tail lengths were measured separately, and total tadpole length was calculated by summing these two measures. For each metamorph, snout-vent length was measured. For each individual, two measurements were made and values averaged. Post-metamorphic husbandry and nutrition At metamorphic climax (Gosner stage 46), individuals were moved from their larval housing into new containers, which remained in original shelf positions. The post-metamorphic containers and cleaning regime used have been described previously (Keogh et al., 2018). Post-metamorphosis, frogs were fed twice a week with 2-4 day old crickets (Acheta domestica) until they reached sexual maturity at 7 months of age. Total carotenoid concentrations in the basal diet were <0.006 mg g −1 (see Supplementary Table S2 for a carotenoid profile). Beta-carotene was supplemented by dusting the crickets with 0, 0.1, 1 or 10 mg g −1 of beta-carotene powder. Diets were again balanced using cellulose to control the overall quantity of feed that frogs received. In addition, to ensure healthy bone growth all individuals received calcium supplementation by adding 200 mg of calcium powder containing vitamin D3 (Repti-Cal, Aristopet, Melbourne, Australia) to each gram of treatment powder. Quantifying post-metamorphic survival, growth and development To quantify adult survivorship, individuals were observed every second day and scored as alive or dead. To quantify adult growth rate, each individual was photographed and weighed every four weeks post-metamorphosis. Frogs were removed from their experimental containers, blotted dry with absorbent tissue paper and weighed to the nearest 0.01 g. Frogs were then photographed using a Panasonic Lumix DMC-FT20 digital camera. The snout-vent length [measured from the tip of the nose (snout) to the bottom of the anus (vent)] was measured for each individual from digital images using image analysis software (ImageJ, version 1.45s). For each individual, two measurements were made and values averaged. 
An index of body condition was calculated by taking the residuals of an ordinary least squares linear regression of body mass (g) against SVL (mm). Once the frogs reached sexual maturity (at 7 months post-metamorphosis), they were euthanised and sex determined following dissection. Statistical analyses The effect of supplement dose on: (i) larval survival until metamorphosis (0-8 weeks post-hatching), and (ii) metamorph survival until sexual maturity (0-7 months post-metamorphosis) was tested using likelihood ratio chi-squared tests. Two tadpoles and six metamorphs were excluded from each analysis because they died as a result of unexpected adverse events. Linear mixed effects models fitted with restricted maximum likelihood (REML) were used to test the effect of supplement dose (0, 0.1, 1 and 10 mg g −1 ) on: (i) tadpole body length (measured from snout tip to tail tip) at experimental weeks 0, 2, 4, 6 and 8, (ii) the body mass (g) of individuals at metamorphosis, and (iii) the time taken (days) until metamorphosis. In each model the treatment dose was the fixed categorical effect and clutch number (1-4) was included as a random effect. Linear mixed effects models fitted with REML were also used to test the effect of treatment dose on metamorph mass and metamorph body condition each month for seven months post-metamorphosis. In these models, supplement dose and individual sex were the fixed categorical effects, and clutch number (1-4) was included as a random effect. To normalize the data and stabilize variances, we log transformed all response variables before running analyses. For LME models that detected significant effects, differences between treatment levels were compared using post hoc Tukey HSD tests. All statistical analyses were performed using the JMP 11.0 software package (SAS Institute Inc., North Carolina, USA). Time to metamorphosis The time tadpoles took to metamorphose was highly variable, ranging from 49 to 175 days (mean ± SEM = 81.06 ± 2.042 days), and there was a significant difference in time to metamorphosis among diet treatments (LME: F 3,253 = 3.85, P = 0.010). Tadpoles supplemented with a dose of 1 mg g −1 metamorphosed 1-8 days faster than those at all other supplement doses (0, 0.1 and 10 mg g −1 ), and the difference in time to metamorphosis between tadpoles supplemented with a dose of 0 mg g −1 and those supplemented with a dose of 1 mg g −1 was significant (Tukey's HSD test: P = 0.006; Fig. 2). Our finding that larval growth and development was not improved at the low or high supplement doses indicates that the effects of dietary beta-carotene were dose dependent. A lack of any effect at the low supplement dose may have resulted because beta-carotene was preferentially invested in other fitness-determining traits, such as immune function or exercise performance (as per the carotenoid trade-off hypothesis; Lin et al., 2010). Alternatively, the concentration of beta-carotene used might simply have been too low to significantly alter cellular function and cause any significant change (Byrne and Silla, 2017). The lack of any beneficial effect at the high supplement dose might have several explanations. First, above a certain threshold concentration beta-carotene might begin to have toxic effects, resulting in energy being diverted away from growth and into detoxifying and eliminating excess beta-carotene. Such toxic effects of carotenoids have been reported in other species (Palozza et al., 1997; Cothran et al., 2015).
Alternatively, high concentrations might have elicited a mild pro-oxidant effect. Research with cell models has suggested that relatively high doses of beta-carotene can induce an overproduction of ROS due to either: (i) autoxidation of carotenoid metabolites, (ii) changes in the activity of cytochrome P450 enzymes and/or (iii) alterations to endogenous antioxidant defences (Palozza et al., 2003). If high concentrations of beta-carotene have a similar pro-oxidant effect in Booroolong frogs, oxidative damage to cell structures could have slowed somatic growth, resulting in an overall reduction in developmental rate. Dose-dependent effects were also evident post-metamorphosis. At the low supplement dose (0.1 mg g −1 ) there was no effect on post-metamorphic growth and development, but at both the intermediate (1 mg g −1 ) and high (10 mg g −1 ) doses there was evidence for negative growth effects. Significantly reduced body mass and body condition indicates that postmetamorphic frogs supplemented with intermediate to high doses of beta-carotene had reduced energy and protein reserves and were relatively unhealthy (Ferrie et al., 2014). As for larvae, negative effects of high beta-carotene doses may have resulted from beta-carotene either having a toxic effect or causing pro-oxidant activity, with these effects being more pronounced in post-metamorphic animals. Irrespective of the exact cause, our finding that the effect of beta-carotene dose differed in larval versus postmetamorphic animals supports our second hypothesis that the effects of dietary beta-carotene supplementation are life stage-dependent. The most likely reason for this difference is that individuals produce much higher levels of ROS as larvae than as adults due to significantly faster rates of somatic growth prior to metamorphosis, making the protective antioxidant capacity of beta-carotene more important during early life. In support of this argument, the average percent weight gain of tadpoles per week was more than double that of post-metamorphic individuals. In principle, ROS production in tadpoles is also likely to have increased exponentially as larvae approached metamorphic climax because this is a developmental period that requires extremely high metabolic activity (Pough and Kamel, 1984). Our findings have important conservation implications. Booroolong frogs are listed as critically endangered, and captive breeding and reintroduction are important recovery actions for this species. Supplying captive Booroolong frogs with dietary beta-carotene has the potential to support these actions in a number of ways. First, the capacity to more quickly generate large numbers of metamorphs might help managers to maintain colonies at optimal sizes and avoid issues associated with natural attrition and inbreeding. Second, more rapid generation of frogs could increase the number of individuals available for release, which could improve reintroduction success by overcoming issues linked to high dispersal, demographic stochasticity or low individual fitness at low population densities (i.e. the Allee effect; Armstrong and Seddon, 2008). Beyond benefits to growth, supplying frogs with dietary beta-carotene also has the potential to significantly improve reproductive success through positive effects on female fecundity and egg quality (see Dugas et al., 2013). Such improvements to captive breeding protocols could reduce the costs of recovery programs and make them more financially viable, and sustainable (Tarszisz et al., 2014). 
Given that anuran amphibians are declining faster than any other vertebrate group, and that captive breeding is a standard recovery action for threatened species globally, dietary supplementation of beta-carotene (and other carotenoids) could potentially benefit conservation programs for various threatened species. At present, however, it remains uncertain whether benefits to frogs are likely to be widespread. To date, the effect of carotenoids on amphibian growth and development has only been investigated in seven anuran species (including L. booroolongensis), and findings have been equivocal (Dugas et al., 2013; Cothran et al., 2015; Byrne and Silla, 2017). More broadly, our results add to the very small but growing body of evidence that dietary carotenoid supplements improve vertebrate growth and development. Carotenoids have been shown to influence growth and development in fish and birds; however, most studies have examined no more than two supplement doses, or have used a mixture of carotenoid compounds, or carotenoid-derived substances, such as spirulina, shrimp head meal, and marigold petal meal (Boonyaratpalin and Unprasert, 1989; James et al., 2006; Sinha and Asimi, 2007; Ezhil et al., 2008; Güroy et al., 2012; Arulvasu et al., 2013). Without testing the effects of carotenoid compounds supplemented at three or more doses, it is impossible to attribute beneficial effects to specific carotenoids and identify optimal supplement doses. To the best of our knowledge, our study is the first to demonstrate dose- and life stage-dependent effects of dietary beta-carotene supplementation on vertebrate growth and development. Considering the effect of carotenoids more broadly, only two studies (both in Salmo salar) have tested the effects of individual carotenoid compounds on growth and development at more than two doses. These studies both reported dose effects; fish fed higher doses of astaxanthin exhibited improved growth and survivorship (Christiansen et al., 1995; Christiansen and Torrissen, 1996). Furthermore, these studies provided evidence for life-stage effects; juvenile fish required a higher dose of astaxanthin to exhibit enhanced growth and survivorship. In most vertebrates, growth rates will differ significantly between juvenile and adult life stages. As such, we predict that consideration of the effects of different doses of carotenoids across different life stages will reveal that dietary carotenoid supplementation has more widespread and profound effects on vertebrate growth and development than currently demonstrated. Supplementary material Supplementary material is available at Conservation Physiology online.
A Blue-Light-Emitting 3 nm-Sized CsPbBr 3 Perovskite Quantum Dot with ZnBr 2 Synthesized by Room-Temperature Supersaturated Recrystallization : Recently, tuning the green emission of CsPbBr 3 quantum dots (QDs) to blue through quantum size and confinement effects has received considerable attention due to its remarkable photophysical properties. However, the synthesis of such a blue-emitting QD has been challenging. Herein, supersaturated recrystallization was successfully implemented at room temperature to synthesize a broadband blue-emitting ZnBr 2 -doped CsPbBr 3 QD with an average size of ~3 nm covering the blue spectrum. The structural and optical properties of CsPbBr 3 QDs demonstrated that QD particle size may decrease by accommodating ZnBr 2 dopants into the perovskite precursor solution. Energy-dispersive spectroscopy confirmed the presence of zinc ions with the QDs. This work provides a new strategy for synthesizing strongly quantum-confined QD materials for photonic devices such as light-emitting diodes and lighting. Normally, a mixed halide perovskite such as CsPbCl x Br 3−x can cover the entire blue spectrum [15,16]. However, this approach is constrained by lattice mismatch, phase segregation, and chlorine vacancies [17][18][19]. Specifically, the Cl vacancies are responsible for deep defects and the low PLQYs [13,17,19]. On the other hand, CsPbBr 3 has a stable structure with fewer crystal defects due to its well-fitted atomic radii [19]. However, the instability of the perovskites to photon, thermal, moisture, and operational challenges is still a bottleneck for the commercialization of perovskite QD-based devices. The instability problem is more pronounced for Cl-based QDs due to defects as well as the intrinsic distortion of the [PbCl 6 ] 4− octahedron [20], opening the channel for the degradation of materials. Moreover, mixed CsPbCl x Br 3−x perovskites are prone to both phase segregation and spectral instability [21]. To overcome these drawbacks, in situ and/or post-treatment Cl defect passivation has been investigated using versatile ligands. However, excess ligands may impede the charge-transport properties of the perovskite QDs [22]. As a result, many researchers have examined pure bromine CsPbBr 3 QDs as a source for blue emission by substituting Pb 2+ with a similar-sized metal cation considering Goldschmidt's tolerance factor, t = (r A + r X )/ √ 2 · (r B + r X ) , with 0.813 ≤ t ≤ 1.107 for structural stability, where r A , r B , and r X are the radius of A cation, B cation, and X anion in the ABX 3 perovskite structure, respectively [23]. For example, the blue shift of optical spectra was demonstrated by partially replacing Pb 2+ with a divalent cation (Cd 2+ , Zn 2+ , Sn 2+ , Cu 2+ ) and a trivalent cation (Al 3+ , Sb 3+ ) [11,[24][25][26]. Here, the blue shift is attributed to the lattice contraction as these ions possess a smaller ionic radius than Pb 2+ . As a result, the Pb-Br bond becomes shorter and increases interaction between Pb and Br orbitals, which is responsible for the blue shift [11,[24][25][26][27][28][29]. Besides emission, the dopant metallic cation can enhance the stability of the CsPbBr 3 not only by increasing the defect formation energy but also passivating the defect state of the QDs [26][27][28][29][30]. The other strategy is controlling the quantum size of CsPbBr 3 NCs in a quantumconfinement regime [28,29]. 
The two most common synthesis methods, i.e., hot injection (HI) and supersaturation recrystallization (SR), usually afford CsPbBr 3 QDs with a size of greater than 7 nm, resulting in green light emission. Hence, different techniques have been employed to control the CsPbBr 3 QDs in strong confinement regimes like ligand composition engineering (oleic acid/oleylamine ratio change) [31], hydrogen bromide (HBr) acid etching-driven ligand exchange [32], synthesis under thermodynamic equilibrium environment [9,33], and the two-step SR technique [34]. Importantly, to date, most of the blue-emitting CsPbBr 3 QD synthesis has been based on the typical HI method, which requires a high-temperature injection as well as an inert gas environment, limiting its practical application due to energy and material cost. In this work, we synthesized a blue light-emitting ZnBr 2 -doped CsPbBr 3 QD using the SR method at room temperature (without any inert gas but under ambient conditions). This approach is initiated by the principle of the size-dependent stoichiometry of Br − in CsPbBr 3 QDs (higher Br − contents in the smaller QDs) and the equilibrium between the QDs and colloidal dispersion medium [33,35]. The~3 nm-sized QDs were obtained by adding a controlled amount of ZnBr 2 into the perovskite precursor solution, resulting in a broadband blue emission. Synthesis of ZnBr 2 -CsPbBr 3 QDs The ZnBr 2 was prepared by the SR method at room temperature [36]. Under magnetic stirring, 0.4 mmol CsBr, 0.4 mmol PbBr 2 , 1 mL OA, and 0.5 mL OAm were dissolved in 10 mL of DMF for 2 h as a source of Cs, Pb, and Br. Under the same conditions, 2.5 mmol ZnBr 2 was dissolved in 5 mL of DMF for 2 h. Then, 0 and 200 µL of a ZnBr 2 solution and 1 mL of the perovskite precursor (PbBr 2 /CsBr) solution were simultaneously injected into Photonics 2023, 10, 802 3 of 13 10 mL of toluene under vigorous magnetic stirring. The synthesized QDs were centrifuged at 3500 rpm for 5 min. Then, the QD dispersion mixed with 5 mL ethyl acetate was centrifuged at 9000 rpm for 10 min. Finally, the precipitated QDs were used for further characterization after re-dispersing in toluene acting as an antisolvent. Computational Methods The electronic band structures of the 3 × 3 × 3 CsPbBr 3 supercells without or with zinc doping were calculated using Cambridge Serial Total Energy Package (CASTEP, Materials Studio 2017, Vélizy-Villacoublay, France). General gradient approximation (GGA) with perdew-burke-ernzerhof (PBE) exchange-correlation functional was used to calculate both geometry optimization and the electronic properties of the materials [37]. The Mohkhorst pack grid of 3 × 3 × 3 size was constructed for k-points in the Brillouin zone. The energy of 1.0 × 10 −5 eV/atom, force of 0.02 eV/Å, maximum displacement of 4 × 10 −4 Å, and maximum stress of 0.04 GPa were used for geometry optimization. Figure 1 shows the schematic explanation of the supersaturated recrystallization process for the synthesis of CsPbBr 3 QDs. As shown in Figure 1a, when a green-lightemitting CsPbBr 3 QD was synthesized, the perovskite colloidal dispersion was transparent yellow. On the other hand, when a blue-light-emitting CsPbBr 3 QD was prepared, that was partially cloudy without any yellowish color, implying that the QD size might be very small with a strong quantum size and confinement effect in nanoscale. The desired amount of ZnBr 2 as described in the experimental section was employed to reduce the size of CsPbBr 3 QDs in nanoscale. 
This approach is consistent with the literature reports [33,35], in which ZnBr 2 was a source of excess Br − ions to adjust the size of CsPbBr 3 QDs. Smaller QDs have a larger surface area, requiring sufficient amounts of surface ligands and/or Lewis base Br − ions. Results and Discussion The CsPbBr 3 QDs without/with ZnBr 2 dopant were synthesized by SR under ambient conditions [38]. Figure 2 shows the XRD patterns of the CsPbBr 3 drop-cast films [38][39][40][41], corresponding to the cubic phase of CsPbBr 3 [40]. Please note that CsPbBr 3 shows polymorphism depending on temperature: orthorhombic (≤88 °C), tetragonal (88 ≤ T ≤ 130 °C), and cubic (≥130 °C) [38][39][40][41]. However, in the case of nanoscale crystals, the cubic phase was frequently observed in comparison to the orthorhombic one, indicating the metastable state of the QDs, i.e., kinetically metastable or stable but thermodynamically unstable. The XRD patterns for the ZnBr 2 -doped CsPbBr 3 drop-cast film display more orientational order compared to the pristine CsPbBr 3 film without ZnBr 2 doping. Importantly, the otherwise identical XRD patterns indicate that the zinc-ion doping into the CsPbBr 3 NCs does not affect the crystal structure of this perovskite. This observation is consistent with literature reports confirming that the zinc ion maintains the crystal phase of CsPbBr 3 [42,43]. However, the presence of many small XRD peaks indicates that the orientational order is very low in the drop-cast CsPbBr 3 thin films composed of polydisperse nanoparticles. In this work, no post-annealing process, co-solvent addition, or antisolvent dispensing (so-called solvent engineering) was employed, each of which would affect the crystal orientation during film formation via the intermediate phase of the perovskite precursors. Importantly, nanocrystals have a high surface energy and, in the case of oleic acid, weakly bound surface ligands, which may allow further aggregation of QD particles in order to decrease free energies during thin-film drying, indicating a meta-stability of the QDs. However, further detail could be addressed in our future work regarding the phase transformation and stability of nanoscale QDs.
Figure 3 shows the EDS spectra for the CsPbBr 3 QDs. As shown in Figure 3, when ZnBr 2 was incorporated into the perovskite precursor solution, the synthesized CsPbBr 3 QDs exhibited the presence of the zinc element (see Figure 3b). Importantly, the zinc ions can reside in the CsPbBr 3 QDs in two ways, i.e., substitutionally (e.g., by substituting Pb 2+ ions) or interstitially, in the bulk and/or at the surface of the CsPbBr 3 QDs. The exact location is beyond the scope of the current study. However, based on literature reports [23], the zinc ions are known to stay in the CsPbBr 3 crystal structure by partially substituting the Pb 2+ ions in the bromide plumbate [29]. Figure 4 shows the HR-TEM images of the CsPbBr 3 QDs without ZnBr 2 (Figure 4a,b) and with ZnBr 2 doping (Figure 4c,d), respectively. Figure 4a revealed that the shape of the pristine CsPbBr 3 QDs is cuboidal [44,45] with an average edge length of ~22 nm, whereas the ZnBr 2 -doped CsPbBr 3 QDs are very small, ~3 nm (Figure 4d). These TEM images demonstrate that adding an appropriate amount of ZnBr 2 dopant into the perovskite precursor solution may allow control of the size of CsPbBr 3 in the strong quantum-confinement regime, affording the blue emission described below. Figure 5 shows the size distribution of the CsPbBr 3 QDs based on the aforementioned HR-TEM images. The pristine CsPbBr 3 QDs exhibit an average QD size of ~22 nm. However, the range is somewhat broad, from ~7 nm to ~50 nm, indicating the polydispersity of nanoparticles in this SR method at room temperature, although the chosen conditions (OA and OAm ratio) should affect this QD size and distribution.
Importantly, when ZnBr 2 was employed as a dopant for the CsPbBr 3 QDs, the QD size is in the range of ~1.5 to ~5.5 nm, with an average size of ~3 nm, guaranteeing the quantum size and confinement effect and a blue-light emission in this study. The optical properties of CsPbBr 3 without and with ZnBr 2 doping are summarized in Figure 6. As indicated above, the size of the ZnBr 2 -doped CsPbBr 3 QDs was about 3 nm, which is much less than the Bohr exciton radius of ~7 nm [14]. The very small size of the quantum dots was responsible for the significant blue shift of the absorption and emission peaks (Figure 6c) due to the quantum-confinement effect. Moreover, the blue shift might be assisted by lattice internal stress owing to the zinc-ion doping in the perovskite NCs. As shown in Figure 6c, the absorption peaks in the UV-vis absorption spectra were detected at 396.5 nm, 375.0 nm, and 356.0 nm, respectively. The absorption peak at 396.5 nm is attributed to the transition from Br(4p) to Pb(6p) orbitals, whereas the 375.0 nm peak corresponds to the transition from Pb(6s) to Pb(6p) orbitals [46]. The PL spectra are composed of multiple peaks in the deep blue spectral region, and Figure 6d shows the multi-peak fitted PL data. The peaks are at 470.5, 459.1, 432.4, and 409.2 nm, respectively. The broadband and multiple-peak emission is attributed to the polydispersity of the QDs' spatial size (see Figure 5b) [47]. The Stokes shift and the FWHM of peak 4 are 12.7 nm and 17.0 nm, respectively. These small values of the Stokes shift and FWHM imply that peak 4 results from band-edge radiation in the form of free excitons [46]. However, the other peaks have larger Stokes shifts (more than 35.9 nm) and FWHMs (more than 24.6 nm). This phenomenon may take place due to electron-phonon coupling, which is responsible for the increase of the Stokes shift.
Figure 7 shows the determination of the optical band gaps of the CsPbBr 3 QDs and ZnBr 2 -doped CsPbBr 3 QDs based on the Tauc model. From the UV-Vis absorption spectra, the bandgap was quantified by extrapolating the straight-line portion of the Tauc plot of (αhν) 2 vs. hν, in which h is Planck's constant, ν is the frequency of the incident photons, and α is the absorption coefficient [48]. As a result, the bandgap is 2.30 eV for the pure CsPbBr 3 and 3.02 eV for the ZnBr 2 -doped CsPbBr 3 , respectively. To examine the defect densities and size effects of the CsPbBr 3 QDs, the TRPL decay spectra were analyzed. The PL decay curve, as shown in Figure 8, can be well fitted with a bi-exponential function, Equation (1) [49], where A 0 represents a constant, A 1 and A 2 are the weights of the exponential components, and τ 1 and τ 2 indicate the short and long lifetimes originating from excitonic radiative recombination and trap-assisted nonradiative recombination, respectively. The average lifetime (τ ave ) was calculated using Equation (2), and the fitting parameters are shown in Table 1. Figure 8 and Table 1 show that the lifetime is shorter for the smaller-size QDs (ZnBr 2 -doped CsPbBr 3 QDs). A longer lifetime reflects a longer exciton (charge) diffusion length before recombination (radiative and nonradiative). It is rational for the smaller-size QDs to have a shorter lifetime [32,50,51] because smaller-size QDs have a higher surface area-to-volume ratio, resulting in a higher defect density and faster charge recombination. Here, it is noteworthy that any crystals with grains and grain boundaries (including small nanocrystals) are metastable (or unstable) compared to bulk single crystals, because the surface energy is high enough, indicating that a phase transformation could occur by lowering the energy to reach the equilibrium state, although the kinetics is unknown.
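For readers who wish to reproduce the lifetime analysis on their own decay traces, the sketch below fits the bi-exponential form implied by the description of Equation (1) (a constant A0 plus two exponential components with weights A1, A2 and lifetimes τ1, τ2) and then computes an intensity-weighted average lifetime. Both the exact functional form and the averaging convention taken for Equation (2) are assumptions based on common practice, not expressions copied from this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a0, a1, tau1, a2, tau2):
    """Assumed form of Equation (1): constant offset plus two exponential decays."""
    return a0 + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_trpl(t, counts, p0=(0.0, 1.0, 1.0, 0.5, 10.0)):
    """Fit a TRPL decay trace and return the parameters and an average lifetime."""
    popt, _ = curve_fit(biexp, t, counts, p0=p0, maxfev=20000)
    a0, a1, tau1, a2, tau2 = popt
    # Intensity-weighted average lifetime (one common convention for Equation (2)):
    tau_ave = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
    return popt, tau_ave
```

With such a fit, a shorter τ ave for the ~3 nm ZnBr 2 -doped QDs than for the ~22 nm pristine QDs would directly reflect the faster, more defect-assisted recombination discussed above.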
At this moment, it is notable that when we added ZnBr 2 into the CsPbBr 3 perovskite precursor solution, the doping effect and the QD size reduction occurred simultaneously. This fact indicates that it is very hard to examine a pure zinc-ion doping effect, because the particle size changes at the same time. Hence, we would like to investigate this kind of doping effect through theoretical calculations based on the 3 × 3 × 3 supercell-sized CsPbBr 3 materials, as shown in Figure 9a,b. To see the inherent electronic properties of Zn 2+ -doped CsPbBr 3 , we calculated the electronic band structure and density of states. The PBE-functional band structures of the undoped and zinc-ion-doped 3 × 3 × 3 CsPbBr 3 supercells are shown in Figure 9c,d, respectively. However, Ghaithan et al. noted that the PBE pseudopotentials underestimate the bandgap of lead halide perovskites [52]. In our calculation, the bandgap of the undoped 3 × 3 × 3 CsPbBr 3 supercell is about 2.01 eV (Figure 9c), whereas that of the doped sample was 1.72 eV (Figure 9d). Moreover, the bandgap analysis using the total density of states (DOS) shown in Figure 10a,b indicates that the bandgap decreased for the Zn 2+ -doped CsPbBr 3 . This trend of bandgap reduction when Pb 2+ is replaced by Zn 2+ in CsPbBr 3 is in line with Guo et al.'s results [53]. However, in the case of our experimental results, the simultaneous change in QD size upon ZnBr 2 doping makes it very difficult to isolate a pure doping effect. Hence, through the size reduction from ~22 nm (without Zn 2+ ) to ~3 nm (Zn 2+ -doped), the doped sample exhibited a wider bandgap (i.e., a blue emitter) than the non-doped sample because of the apparent quantum size and confinement effects. Conclusions This study demonstrates the successful synthesis of strongly quantum-confined ZnBr 2 -doped CsPbBr 3 QDs under ambient conditions using the supersaturated recrystallization method.
In this method, controlled amounts of ZnBr 2 were incorporated into the reaction medium to control the photophysical properties of the synthesized CsPbBr 3 QDs. As a result, 3 nm-sized blue-light-emitting CsPbBr 3 QDs were synthesized, covering the broad blue spectrum. The blue shifting of the emission spectrum is mainly attributed to the size reduction of the QDs. Our findings provide new insight into the synthesis of blue light-emitting CsPbBr 3 QDs at room temperature under ambient conditions. Future work may include the application of CsPbBr 3 QDs to photonic and optoelectronic devices including biosensors. Funding: This research received no external funding. The APC was funded by J.Y.K. Data Availability Statement: The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
v3-fos-license
2018-04-03T01:00:16.069Z
2015-10-12T00:00:00.000
6125807
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://academic.oup.com/nar/article-pdf/44/4/e33/16668212/gkv1027.pdf", "pdf_hash": "b5935181894db27abc225f4c28845df77ed340df", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41151", "s2fieldsofstudy": [ "Biology" ], "sha1": "6e5133c655e5dc5078a05b8188345493e5032a2e", "year": 2015 }
pes2o/s2orc
Whole transcriptome profiling reveals the RNA content of motor axons Most RNAs within polarized cells such as neurons are sorted subcellularly in a coordinated manner. Despite advances in the development of methods for profiling polyadenylated RNAs from small amounts of input RNA, techniques for profiling coding and non-coding RNAs simultaneously are not well established. Here, we optimized a transcriptome profiling method based on double-random priming and applied it to serially diluted total RNA down to 10 pg. Read counts of expressed genes were robustly correlated between replicates, indicating that the method is both reproducible and scalable. Our transcriptome profiling method detected both coding and long non-coding RNAs sized >300 bases. Compared to total RNAseq using a conventional approach our protocol detected 70% more genes due to reduced capture of ribosomal RNAs. We used our method to analyze the RNA composition of compartmentalized motoneurons. The somatodendritic compartment was enriched for transcripts with post-synaptic functions as well as for certain nuclear non-coding RNAs such as 7SK. In axons, transcripts related to translation were enriched including the cytoplasmic non-coding RNA 7SL. Our profiling method can be applied to a wide range of investigations including perturbations of subcellular transcriptomes in neurodegenerative diseases and investigations of microdissected tissue samples such as anatomically defined fiber tracts. INTRODUCTION Spatial asymmetries in protein localization in highly polarized cells such as neurons are thought to be guided, at least in part, by mechanisms establishing local diversity in levels of the underlying transcripts (1). Subcellular transcriptome profiling is an emerging field that explores such transcript abundance patterns by combining cell culture techniques for selective RNA extraction with amplification methods for profiling the low amounts of transcripts that usually can be extracted. In neurobiology, characterization of the axonal transcriptome has become of interest based on observations that diverse neuronal functions such as axon guidance and regeneration as well as presynaptic functions depend on local protein translation in the axon and axon terminal (2). In order to investigate the axonal transcriptome, neurons are typically grown in compartmentalized chambers and RNA extracted from the axonal side is then processed for further analysis. Since the amount of RNA contained within axons is typically low, amplification steps need to be applied. So far, axonal RNA has been subjected to serial analysis of gene expression (SAGE) or microarray analysis and up to thousands of RNAs have been cataloged (3)(4)(5). However, novel techniques utilizing next-generation sequencing methodologies may provide a more comprehensive understanding of the axonal transcriptome. Current methods for transcriptome amplification use oligo(dT)-based reverse transcription followed by either template switching and exponential amplification or in vitro transcription for linear amplification (6). However, for subcellular transcriptome profiling it might be desirable to capture the whole transcriptome including non-polyadenylated non-coding RNAs in order to obtain a more complete picture of local transcriptome diversity. 
A potential approach for whole transcriptome amplification would be doublerandom priming whereby an oligonucleotide containing a random 3 end is used for both reverse transcription and second strand synthesis followed by polymerase chain reaction (PCR) amplification (7). Here we present a double-random priming protocol for amplifying total RNA using off-theshelf reagents. We systematically optimized and controlled several parameters of the method and applied this protocol to diluted series of total RNA ranging from 5 ng to 10 pg. We generated high-throughput sequencing libraries directly from the PCR products and observed a robust gene-by-gene correlation down to 10 pg input RNA. In order to demon-strate the applicability of our method, we cultured embryonic mouse motoneurons in microfluidic chambers and investigated the RNA content of the somatodendritic and axonal compartment using our profiling method. We found the RNA repertoire present within the axonal cytoplasm to be highly complex and enriched for transcripts related to protein synthesis and actin binding. Beyond that we identified a number of non-coding RNAs enriched or depleted in motor axons. We validated our motoneuron transcriptome data by three independent approaches: quantitative PCR, fluorescent in situ hybridization and comparison with previously generated microarray data. Our results demonstrate that whole transcriptome profiling is a suitable method to quantitatively investigate very small amounts of RNA and, to our knowledge, gives the most complete view of the axonal transcriptome to date. Due to this we suggest that whole transcriptome profiling lends itself to a number of applications. For example, we envision that the transcriptome profiling method described here may be suitable for detailed investigations on the axonal transcriptome alterations occurring in motoneuron diseases in particular, and in neurodegenerative disorders in general. Specifically, disorders involving RNAbinding proteins or defective RNA transport mechanisms might be suitable areas of application of our method. There is growing evidence that disease-associated proteins such as SMN in spinal muscular atrophy and TDP-43 in amyotrophic lateral sclerosis are involved in axonal RNA transport such that loss of their function may critically affect the axonal RNA repertoire (8,9). Beyond that, RNA transport processes occurring in response to nerve injury or during nerve regeneration could be analyzed by whole transcriptome profiling (10). Furthermore, transcriptomes from other subcellular neuronal compartments such as dendrites or growth cones that can be isolated in microfluidic chambers or through laser capture microdissection could be profiled by the method described here. For example, both coding and non-coding RNAs have been shown to translocate into dendrites in an activity-dependent manner to facilitate synaptic modifications (11). Our method could be utilized to investigate such changes in a transcriptome-wide manner. Additionally, microdissection of the synaptic neuropil from hippocampal slices or other brain subregions followed by whole transcriptome profiling might give insights into transcript alterations that accompany or underlie synaptic plasticity in vivo. Finally, whole transcriptome profiling might be a useful addition to the existing repertoire of single-cell gene expression techniques due to its ability to monitor both coding and non-coding transcripts. 
Animals CD-1 mice were bred in the animal husbandries of the Institute for Clinical Neurobiology at the University Hospital of Wuerzburg. Mice were maintained under controlled conditions in a 12 h/12 h day/night cycle at 20-22 °C with food and water in abundant supply and 55-65% humidity. In agreement with and under control of the local veterinary authority, experiments were performed strictly following the regulations on animal protection of the German federal law and of the Association for Assessment and Accreditation of Laboratory Animal Care. Primary mouse motoneuron culture with microfluidic chambers Spinal cords were isolated from embryonic day (E) 12.5 CD-1 mouse embryos and motoneurons derived from them were cultured as previously described (5,12). Briefly, lumbar spinal cord tissues were dissected and motoneurons enriched using p75 NTR antibody panning. Microfluidic chambers (Xona Microfluidics, SND 150) were pre-coated with polyornithine and laminin-111 (Life Technologies) and 1,000,000 motoneurons were directly plated in the somatodendritic main channel of a microfluidic chamber. Motoneurons were grown in neurobasal medium (Invitrogen) containing 500 μM GlutaMAX (Invitrogen), 2% horse serum (Linaris) and 2% B27 supplement (Invitrogen) for 7 days at 37 °C and 5% CO2. CNTF (5 ng/ml) was applied to both the somatodendritic and the axonal compartment. To induce axon growth through the microchannels of the chamber, BDNF (20 ng/ml) was added to the axonal compartment. Culture medium was exchanged on day 1 and then every second day. RNA extraction and preparation of serial RNA dilutions Total RNA was extracted from mouse E14 spinal cords using the RNeasy Mini Kit (Qiagen). DNA was removed with the TURBO DNA-free kit (Ambion) and RNA concentration was measured on a NanoDrop. The total RNA was then diluted to the desired concentrations as follows. First, it was diluted to a concentration of 10 ng/μl in a total volume of 100 μl. Of this, 10 μl were mixed with 90 μl water to obtain 100 μl of 1 ng/μl RNA. Further 1:10 dilutions were prepared to obtain 100 μl each of 100 pg/μl and 10 pg/μl RNA. Using these dilutions, three replicate reactions each with 5 ng (5 μl of 1 ng/μl), 500 pg (5 μl of 100 pg/μl), 50 pg (5 μl of 10 pg/μl) and 10 pg (1 μl of 10 pg/μl) RNA were set up. For dilutions of Human Brain Reference RNA (HBRR, Life Technologies), 2 μl of the RNA at 1 μg/μl were mixed with 18 μl water to obtain 20 μl of 100 ng/μl RNA. Of this, 10 μl were mixed with 90 μl water to obtain 100 μl of 10 ng/μl RNA. 10 μl of the 10 ng/μl HBRR were then mixed with 2 μl of a 1:1000 diluted ERCC RNA spike-in mix 1 (Life Technologies) and 88 μl water. This 1 ng/μl HBRR/ERCC mix was used for 1:10 dilutions to obtain 100 pg/μl and 10 pg/μl RNA. The 1 ng/μl, 100 pg/μl and 10 pg/μl HBRR/ERCC dilutions were used as described above for setting up replicate whole transcriptome profiling reactions. For whole transcriptome amplification of compartmentalized motoneuron cultures, total RNA was extracted from the somatodendritic and the axonal compartment using the Arcturus PicoPure RNA Isolation Kit (Life Technologies) with a 10 μl elution volume. 1 μl of somatodendritic RNA and 10 μl of axonal RNA were used for reverse transcription. For the control experiment, 1 μl of undiluted somatodendritic RNA or 1 μl of somatodendritic RNA diluted 1:2000 in water was used, respectively.
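As a plain arithmetic check of the dilution scheme above (not part of the published workflow), the mass of RNA entering each reaction is simply the pipetted volume multiplied by the concentration of the corresponding dilution; a minimal sketch:

```python
# Sanity-check of the serial dilution arithmetic described above:
# mass of RNA per reaction = pipetted volume x concentration of the dilution.
dilutions_ng_per_ul = {"1 ng/ul": 1.0, "100 pg/ul": 0.1, "10 pg/ul": 0.01}

# (dilution used, volume in ul) for each reaction described in the text
reactions = {
    "5 ng":   ("1 ng/ul",   5.0),
    "500 pg": ("100 pg/ul", 5.0),
    "50 pg":  ("10 pg/ul",  5.0),
    "10 pg":  ("10 pg/ul",  1.0),
}

for label, (dilution, volume_ul) in reactions.items():
    mass_pg = dilutions_ng_per_ul[dilution] * volume_ul * 1000.0
    print(f"{label}: {volume_ul} ul of {dilution} -> {mass_pg:.0f} pg")
```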
Whole transcriptome amplification A detailed protocol for whole transcriptome amplification is included in the Supplementary Material. Briefly, 20 μl reverse transcription reactions were set up containing RNA, 0.5 mM dNTPs, 10 U RiboLock RNase inhibitor (Thermo Scientific), 100 U Superscript III (Life Technologies), 4 μl 5× First-Strand Buffer, 1 μl 0.1 M DTT and 2.5 μM MALBAC primer (13) (Supplementary Table S1). Reverse transcription was conducted at 37 °C and was allowed to proceed for 10 h to bring reactions to completion. A similar reaction condition has previously been used for cDNA synthesis from single cells (14). Reactions were inactivated at 70 °C for 15 min. cDNAs were purified with the QIAEX II Gel Extraction Kit (Qiagen) and eluted in 20 μl water. One microliter was removed and diluted 1:5 with water for evaluating reverse transcription efficiency by Gapdh quantitative PCR (qPCR). For second strand synthesis and final PCR amplification, Accuprime Taq DNA polymerase (Life Technologies) was used, which has previously been used for amplification of small amounts of RNA extracted in iCLIP experiments (15). Second strand synthesis was conducted in 50 μl reactions containing 18 μl purified cDNA, 1.725 μM MALBAC primer, 1 μl Accuprime and 5 μl Accuprime buffer 2. Reaction conditions were: 98 °C for 5 min, 37 °C for 2 min and 68 °C for 40 min. Second strand amplicons were purified with the QIAEX II Gel Extraction Kit and eluted in 20 μl water. PCR amplification reactions were set up as 50 μl reactions containing 19 μl purified second strand amplicons, 1 μl Accuprime Taq DNA polymerase, 5 μl 10× Accuprime buffer 2 and 3.15 μM MALBAC adapter primer mix containing equimolar amounts of each adapter (Supplementary Table S1). PCR reaction conditions for RNA dilutions were: 92 °C for 2 min followed by 12 cycles (5 ng), 15 cycles (500 pg), 18 cycles (50 pg) or 20 cycles (10 pg) of 92 °C for 30 s, 60 °C for 1 min and 68 °C for 1 min. For somatodendritic RNA 6 cycles and for axonal RNA 18 cycles were used. PCR amplicons were purified with AMPure XP beads (Beckman Coulter). To assess amplification efficiency, 1 μl of purified amplicons was diluted 1:5 in water for Gapdh qPCR. A total of 50 ng of the purified DNA was processed for library preparation using the NEBNext Ultra DNA Library Prep Kit for Illumina (NEB) in conjunction with the NEBNext Multiplex Oligos for Illumina (Index Primer Set 1) (NEB) according to the manufacturer's instructions. Libraries were amplified for eight cycles with the Illumina primers. Libraries were pooled and purified with AMPure XP beads for sequencing. Preparation of total RNAseq libraries Three replicate total RNAseq libraries from 5 ng HBRR input each were prepared using the NEBNext Ultra RNA Library Kit for Illumina (NEB) with omission of the mRNA isolation step. Instead, fragmentation was performed directly on total RNA by adding 4 μl of first strand reaction buffer (5×) and 1 μl random primers to 5 μl of the 1 ng/μl HBRR/ERCC mix (see above) and heating the mixture at 94 °C for 15 min. Afterward, the manufacturer's instructions were followed and final libraries were amplified with 13 cycles. Sequencing and read mapping Single-end sequencing was performed on an Illumina MiSeq machine using the MiSeq Reagent Kit v3 (150 cycles) and 1% spike-in of the phage PhiX control library. After demultiplexing of reads, quality assessment was performed using FastQC version 0.10.1 (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/).
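The amplification protocol above ties the number of PCR cycles to the amount of input RNA (12, 15, 18 and 20 cycles for 5 ng, 500 pg, 50 pg and 10 pg, respectively, and 6 versus 18 cycles for somatodendritic versus axonal RNA), i.e., roughly three additional cycles per 10-fold reduction in input. A minimal sketch of that lookup is given below; the interpolation rule for intermediate amounts is an assumption for illustration only and not part of the published protocol.

```python
# Sketch of the input-dependent PCR cycle numbers used in the protocol above.
# The explicit table comes from the text; the rule for intermediate amounts
# (~3 extra cycles per 10-fold dilution from 5 ng) is an assumption.
import math

CYCLES_BY_INPUT_PG = {5000: 12, 500: 15, 50: 18, 10: 20}

def pcr_cycles(input_pg: float) -> int:
    """Return the cycle number from the published table, or interpolate."""
    if input_pg in CYCLES_BY_INPUT_PG:
        return CYCLES_BY_INPUT_PG[input_pg]
    extra = 3.0 * math.log10(5000.0 / input_pg)   # assumed scaling rule
    return round(12 + extra)

for amount in (5000, 500, 50, 10, 100):
    print(f"{amount} pg input -> {pcr_cycles(amount)} cycles")
```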
Reads were trimmed using in-house scripts (Supplementary Figure S1). Illumina adapters were removed and only reads containing the minimal forward MALBAC sequence (5′-GAGTGATGGTTGAGGTAGTGTGGAG-3′) were considered for downstream analysis. If the reverse MALBAC sequence (5′-CTCCACACTACCTCAACCATCACTC-3′) was detected, the sequence was trimmed, identical reads were collapsed and 5′- as well as 3′-oligo-octamers were removed. If the reverse MALBAC sequence was not present, only the first 120 nt of the reads were considered. Collapsing was performed and the 5′-oligo-octamers were removed. The sequencing data described in this publication have been deposited in NCBI's Gene Expression Omnibus (16) and are accessible through GEO Series accession number GSE66230. For coverage plots and quantification of read mappings to rRNAs, intergenic, intronic, UTR and coding regions, the CollectRnaSeqMetrics tool of the Picard Suite version 1.125 (http://broadinstitute.github.io/picard/) was used with default settings. rRNA-interval files were downloaded. For saturation analysis, BAM files were subsampled using an in-house script. FPKM values were calculated using Cufflinks (see above) and after ERCC removal all entries with an FPKM ≥ 1 were considered expressed (17). For quantification of gene classes, all FPKM values of expressed genes with FPKM ≥ 1 were summed within each ENSEMBL type. We noticed that in the ENSEMBL mouse annotation the abundant ribosomal transcript Gm26924 was annotated as 'lincRNA'. Therefore, we included it manually in the gene class 'rRNA'. For the analysis of the ERCC RNAs, transcripts with FPKM below 0.1 were set to FPKM = 0.1. Unsupervised complete linkage clustering of significantly differentially expressed genes as detected by Cuffdiff was performed on the rows and columns using the Euclidean distance as a similarity metric. As input, log2(FPKM) values were used. For gene ontology (GO) term and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis we used the Database for Annotation, Visualization and Integrated Discovery (DAVID, http://david.abcc.ncifcrf.gov/home.jsp) (18). Quantitative PCR For evaluating amplification efficiency, 2 μl of diluted cDNA or amplicons were used per qPCR reaction. For validation of differentially expressed transcripts, 2 μl of diluted amplicons were used per reaction. qPCRs were set up as 20 μl reactions containing 1 μM each of forward and reverse primer and 10 μl 2× Luminaris HiGreen qPCR Master Mix (Thermo Scientific) and run on a Lightcycler 1.5 (Roche). Primers are listed in Supplementary Table S2. For validation of differentially expressed transcripts, Gapdh was used for normalization of total cDNA amounts. For each primer pair used in the study, the amplification product was subjected to agarose gel electrophoresis to ensure that only a single band of the expected size was produced. Furthermore, for each primer pair we determined the melting curve of the template and set the acquisition temperature to just below the beginning of the respective melting peak of the template when programming the Lightcycler. Additionally, for every qPCR run a water control was included to ensure that no unspecific products were amplified. High resolution fluorescent in situ hybridization High resolution in situ hybridization was performed using ViewRNA probesets following the manufacturer's instructions from Panomics with minor modifications. Briefly, motoneurons were cultured on polyornithine- and laminin-111-coated cover slips for 5 days.
Medium was removed, cells were washed two times in RNase-free phosphate buffered saline (PBS) and fixed with 4% paraformaldehyde in lysine phosphate buffer (pH 7.4) containing 5.4% glucose and 0.01 M sodium metaperiodate for 15 min at room temperature. Cells were permeabilized using a supplied detergent solution (Panomics) for 4 min at room temperature. Since coding mRNAs are highly masked by RNA-binding proteins in the axon, a protease digestion step was crucial to make target mRNAs accessible for probe binding. Therefore, for detection of β-actin and Gapdh transcripts in axons, cells were treated with a supplied protease at 1:8000 dilution for 4 min prior to hybridization. In contrast to coding RNAs, a protease digest resulted in an increased background and decreased signal for non-coding RNAs. Since these RNAs are highly abundant and less masked in the axon, omitting the protease digestion step resulted in optimized signal detection. Cells were incubated with probes diluted 1:100 in hybridization buffer at 40 °C for 3 h for non-coding RNAs and overnight for coding RNAs. For 7SK and 7SL, custom antisense and sense probes were designed by Panomics. Next, coverslips were washed three times with a supplied wash buffer (Panomics) at room temperature and preamplifier, amplifier and label probe oligonucleotides (diluted 1:25 in the corresponding buffers) were applied sequentially and incubated each for 1 h at 40 °C. After final washing of label probes, cells were rinsed briefly in RNase-free PBS two times and immunostained against Tau protein using standard protocols. Briefly, cells were blocked in PBS containing 10% donkey serum, 2% BSA, 5% sucrose and 0.2 mg/ml saponin for 1 h at room temperature. Primary polyclonal rabbit anti-Tau antibody (1:1000, Sigma T6402) was applied for 1 h at room temperature. Cells were washed thoroughly with RNase-free PBS and incubated with donkey anti-rabbit (H + L) IgG (Cy3, 1:500, Jackson 711-166-152) for 1 h at room temperature. Coverslips were embedded with Aqua Poly/Mount (Polysciences, 18 606) and subsequently imaged on an Olympus Fluoview 1000 confocal system. Maximum intensity projections of 4-micron z-stacks were acquired using a 60× oil objective with 800 × 800 pixel resolution. Images were processed using ImageJ (MacBiophotonics). Negative controls were carried out by using sense probes, omitting probes and adding only amplifiers and label probes, and using a probe against the Escherichia coli K12 dapB transcript, which encodes dihydrodipicolinate reductase. In addition, RNA digest was performed post fixation using RNase A (ThermoFisher Scientific, EN0531). RNase was added to the cells at a final concentration of 40 μg/ml in nuclease-free Tris-ethylenediaminetetraacetic acid (EDTA) (pH 7.5) and incubated at 37 °C for 1 h. Control cells were treated with nuclease-free Tris-EDTA (pH 7.5). Optimization of whole transcriptome amplification efficiency We estimate the amount of total RNA that can be extracted from the axonal side of motoneurons grown in compartmentalized cultures to be in the lower picogram range, such that amplification of reverse-transcribed cDNA is necessary to produce sufficient cDNA amounts for the generation of high-throughput sequencing libraries. For this purpose we optimized a PCR-based double-random priming protocol. Since we were interested in the total RNA content of axons, we did not remove ribosomal RNAs or use oligo(dT)-based reverse transcription to select for polyadenylated
RNAs. Instead, RNAs were reverse-transcribed with an oligonucleotide used previously for whole genome amplification [multiple annealing and looping-based amplification cycles (MALBAC)] containing a 3′ random octamer and a 5′ adapter sequence (13) (Figure 1A, Supplementary Table S1). The same oligonucleotide is used for one round of second strand synthesis, generating cDNA fragments that harbor the adapter sequence in a reverse-complementary manner at both ends. These fragments are then amplified by PCR using adapter oligonucleotides to produce sufficient cDNA amounts for generating high-throughput sequencing libraries. As an advantage of this approach, each transcript is covered by multiple cDNAs of varying length, thus preventing any bias in amplification (19). Whilst amplification of high input amounts of RNA occurs even under sub-optimal reaction conditions due to transcript overabundance, profiling of low input amounts of RNA might require fine-tuning of reaction parameters to improve transcript capture. For this purpose we first optimized the whole transcriptome profiling protocol using 40 pg total RNA obtained from embryonic mouse spinal cord (see Supplementary Methods 'Optimization of whole transcriptome amplification' section). We monitored amplification efficiency of the PCR reactions by removing aliquots every two cycles starting at cycle 10 and measuring yield by qPCR. Since for whole transcriptome profiling it is important to capture both abundant and less abundant transcripts, we monitored Gapdh (Figure 1B), representing an abundant transcript, as well as the less abundant Ubqln2 (Figure 1C) by qPCR. The following parameters were tested: (i) two different polymerases (Accuprime Taq DNA polymerase or the strand displacement polymerase Bst, Large Fragment) for second strand synthesis, (ii) four different primer concentrations for second strand synthesis (0.2, 1.725, 5 or 10 μM final concentration) and (iii) three different adapter primer concentrations for the final PCR (0.2, 3.15 or 10 μM final concentration). We found that all three parameters were critical for amplification efficiency. Whilst Bst and Accuprime Taq performed similarly for second strand synthesis of Gapdh, Accuprime Taq outperformed Bst for capturing Ubqln2. Therefore, we decided to use Accuprime Taq for second strand synthesis. Similarly, whilst a primer concentration of 0.2 μM was sufficient for Gapdh amplification, Ubqln2 amplification efficiency required at least 1.725 μM. Since a further increase did not improve detection efficiency for Ubqln2, we decided to use a 1.725 μM primer concentration for second strand synthesis. For the final PCR a primer concentration of 3.15 μM was optimal, and either a decrease or an increase was detrimental for amplification efficiency. Taken together, our results demonstrate that Gapdh was robustly amplified under a variety of conditions whilst capture of the less abundant Ubqln2 required fine-tuning of reaction conditions. To test whether non-coding RNAs are amplified with similar efficiency, we assessed levels of the long non-coding RNA (lncRNA) Malat1 at defined PCR cycles. As a result, Malat1 was amplified with similar efficiency to Gapdh, suggesting that non-coding RNAs are captured by our protocol (Supplementary Figure S2). Whole transcriptome profiling of serially diluted RNA Following the initial optimization of our whole transcriptome amplification protocol, we investigated its dynamic range using a dilution series of mouse spinal cord total RNA.
As a starting point we sought to determine a suitable number of PCR cycles to amplify second strand synthesis products derived from 5 ng total RNA. For this purpose, PCR aliquots were removed from 12 to 20 cycles and products were resolved by polyacrylamide gel electrophoresis (Figure 2A). After 12 cycles PCR amplicons were sized ∼150-600 bp. If more than 12 cycles were used, larger-sized fragments appeared, indicating overamplification. Thus, we reasoned that for 5 ng total RNA 12 cycles were suitable to obtain sufficient amounts of PCR products for visualization on a gel without overamplification. We also noticed the presence of non-specific products sized ∼25 bp. These products were inert and not amplified with increasing cycle numbers. Nevertheless, for amplicons that were further processed into high-throughput sequencing libraries (see below) we decided to purify PCR reactions with AMPure beads in subsequent experiments, which readily removed such non-specific products. Next, we applied whole transcriptome profiling to three technical replicates each of 5 ng, 500 pg, 50 pg and 10 pg mouse spinal cord total RNA. Following second strand synthesis, cDNA fragments were amplified for 12 (5 ng), 15 (500 pg), 18 (50 pg) and 20 (10 pg) cycles and PCR products were purified with AMPure beads. Final amplicons were similarly sized (Figure 2B) and Gapdh was reproducibly amplified with similar efficiency for all dilutions (Figure 2C). We used the PCR products directly for generating Illumina sequencing libraries without size selection and without adapter removal (Figure 2D). This was made possible by using four adapter primers of varying length (Supplementary Table S1) for PCR, which produced the heterogeneous 5′ ends required for Illumina cluster calling. After addition of the Illumina sequences and pooling of all replicates, the final sequencing library was sized approximately 270-1000 bp (Figure 2D, Supplementary Figure S3). Following high-throughput sequencing on an Illumina MiSeq we typically obtained ∼1.1-1.6 million reads per replicate (Supplementary Table S3). One 500 pg replicate gave rise to ∼2.7 million reads and one 10 pg replicate produced ∼950 000 reads. For data analysis we established a custom bioinformatics pipeline that screened reads for presence of the adapter sequence and utilized the random octamer region for 'molecule counting' of PCR duplicates (Supplementary Figure S1). More than 90% of reads for these samples contained the adapter sequence (Supplementary Table S3) and for all RNA input amounts the vast majority of sequencing reads was unique (Figure 2E). After removal of PCR duplicates, reads were mapped to the mouse genome in order to calculate normalized read numbers per transcript expressed as FPKM values. Comparison of transcript levels showed a high degree of gene-by-gene correlation between the individual technical replicates for genes with FPKM ≥ 0.001 (Supplementary Figure S4A). The Pearson correlation coefficient was 1.0 even for the technical replicates derived from 10 pg total RNA. In line with the correlation analyses, individual transcript levels were consistent across the technical replicates (Supplementary Figure S4B). However, we noticed that the variability in FPKM values for the lower-expressed transcripts increased between 50 and 10 pg input RNA. Since the Pearson coefficient might overestimate the correlations due to the presence of highly expressed transcripts, we also calculated the Spearman coefficient for all comparisons, which considers gene ranks.
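Concretely, both coefficients can be computed directly from the per-gene FPKM tables of two technical replicates, as sketched below; whether the Pearson coefficient was computed on raw or log-transformed FPKM values is not specified in this excerpt, so the sketch uses log10 values, and the replicate data shown are placeholders.

```python
# Sketch of the replicate comparison described above: Pearson correlation on
# (assumed) log-transformed FPKM values and Spearman correlation on gene
# ranks. The FPKM threshold (>= 0.001) follows the text; the arrays are
# placeholders for two technical replicates.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def replicate_correlation(fpkm_rep1, fpkm_rep2, min_fpkm=0.001):
    a, b = np.asarray(fpkm_rep1), np.asarray(fpkm_rep2)
    keep = (a >= min_fpkm) & (b >= min_fpkm)        # genes detected in both
    r_pearson = pearsonr(np.log10(a[keep]), np.log10(b[keep]))[0]
    rho_spearman = spearmanr(a[keep], b[keep])[0]   # rank-based, no log needed
    return r_pearson, rho_spearman

# Placeholder data: two noisy replicates of the same expression profile.
rng = np.random.default_rng(1)
true_fpkm = 10 ** rng.uniform(-2, 4, size=5000)
rep1 = true_fpkm * rng.lognormal(0.0, 0.2, size=5000)
rep2 = true_fpkm * rng.lognormal(0.0, 0.2, size=5000)
print(replicate_correlation(rep1, rep2))
```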
The Spearman coefficient was >0.8 for the 5 ng, 500 pg and 50 pg replicate comparisons and <0.7 for the 10 pg replicates. Therefore, we estimate that the threshold of input RNA until which our protocol may still reproducibly yield quantitative information is 50 pg. The number of detectable genes was 12 621, 12 748 and 12 681 for the 5 ng replicates, 12 712, 12 993 and 12 798 for the 500 pg replicates, 12 330, 12 337 and 12 334 for the 50 pg replicates and 10 553, 10 516 and 10 609 for the 10 pg replicates. The number of expressed genes (FPKM ≥ 1) common to all three technical replicates was ∼10 000 for 5 ng, 500 pg and 50 pg of input RNA and decreased to ∼7600 for 10 pg RNA ( Figure 2F) corresponding to >80% and ∼72%, respectively, of expressed genes in each dataset ( Figure 2G). Thus, in line with the correlation analyses transcripts are reliably detected for RNA input amounts >50 pg. Nevertheless, even though the detectability somewhat decreased for 10 pg input RNA the number of transcripts shared between the replicates was still substantially non-random at this level. We also generated saturation plots by randomly subsampling fractions of the total reads and measuring the number of genes that were detectable with FPKM ≥ 1 from these fractions. For all samples >90% of the final number of expressed genes were already detectable when half the number of total reads were subsampled ( Figure 2H). Since 11 out of the 12 replicates gave rise to <1.6 million reads we investigated the 500 pg replicate producing ∼2.7 million reads in more detail. When ∼1.6 million reads of this replicate were subsampled >98% of the final number of expressed genes were detectable (Supplementary Figure S5). This indicates that the sequencing depth was sufficient to achieve representative gene detection. Finally, we evaluated to what extent the measured transcript levels were preserved among the different input amounts of RNA. For this purpose we first determined the top 20 most abundant transcripts by FPKM value in the 5 ng samples and assessed their levels in the lower input RNA samples ( Figure 2I). The rRNA gene Gm26924 was the most abundant transcript in all samples. In addition, we found other non-coding RNAs to be highly expressed. Among these were 7SK (Rn7sk), which is involved in transcriptional regulation, 7SL (Metazoa SRP), which is a component of the signal recognition particle and Rpph1, an RNA which is part of the RNase P ribonucleoprotein complex that cleaves tRNA precursors. The most abundant protein-coding RNAs were Actb encoding ␤-actin, Lars2 encoding mitochondrial leucyl-tRNA synthetase 2 and Tuba1a encoding tubulin alpha 1A. Furthermore, transcripts encoding the translation factors Eef1a1 and Eif4g2 were highly expressed. The abundance of these top 20 transcripts was similar for all samples down to 10 pg total RNA suggesting that whole transcriptome profiling preserves their relative expression levels even at low in- put amounts of RNA as was already indicated by the correlation analysis. In order to investigate the concordance of measured transcript levels among the different RNA input amounts on a global scale we calculated correlation coefficients for all replicate comparisons ( Figure 2J, Supplementary Figure S6). Whilst the Pearson correlation coefficient was ∼1.0 for all comparisons the Spearman coefficient was >0.8 for all comparisons involving 5 ng, 500 pg and 50 pg replicates. 
This indicates that whole transcriptome profiling detects transcripts at similar levels for total RNA input amounts of >50 pg. Characteristics of transcript capture by whole transcriptome profiling Since we do not fragment the input RNA prior to library preparation we sought to investigate the ability of our protocol to capture different regions across a transcript. When we visualized the read distribution for the lncRNA Malat1 we obtained a non-uniform read density profile for all replicates suggesting that different subdomains contained in an individual transcript are differentially available for profiling ( Figure 3A). The most likely explanation for this observation is that the random octamers used for the two random priming steps are biased toward particular sequences, as has been shown before for random hexamers (20). Nevertheless, when averaged across transcripts reads were distributed uniformly along their middle portions ( Figure 3B) whilst the 5 and 3 ends were considerably underrepresented. For all input amounts of RNA ∼80% of bases outside ribosomal genes originated from UTR and coding regions ( Figure 3C). In contrast, only ∼14% of aligned bases were derived from intronic regions. Considering that introns by far exceed the length of exons the relatively low number of intronic alignments indicates that mostly spliced mRNAs rather than pre-mRNAs were present. Furthermore, ∼6% of aligned bases were within intergenic regions. Thus, whilst the large majority of transcripts originated from annotated genes a nevertheless sizeable fraction was derived from intergenic RNAs. Finally, we examined the potential of whole transcriptome profiling to detect transcripts belonging to different gene classes. For this purpose we analyzed what percentage of the total FPKM for each sample is derived from individual gene classes ( Figure 3D). Surprisingly, about 42-44% of the total FPKM was derived from protein-coding genes. In contrast, rRNAs contributed 36-39% which is substantially below the expected relative amount of rRNA of about 80-90% in cells. The rRNA fraction detected is similar for all amounts of input RNA indicating that there is no amplification bias with additional PCR cycles. One possibility for the detection of less rRNA than would be expected is that rRNA genes might not be covered comprehensively by the ENSEMBL annotation or are masked by other genes. To test this possibility we calculated separately what proportion of aligned read bases were located within annotated rRNA genes. As a result ∼60% of all aligned read bases were within rRNA genes which is still below the actual fraction of rRNA ( Figure 3E). This suggests that whole transcriptome profiling captures less rRNA than would be expected. Another gene class that contributes a sizeable proportion (7-10%) of FPKM values is annotated as 'misc RNA' (Figure 3D). This class contains non-coding RNAs such as 7SK, 7SL and Rpph1, all three of which are ∼300 nt in length. In contrast, snRNAs, which are <200 nt, were underrepresented contributing <0.5% toward the total FPKM. Taken together, these results suggest that whole transcriptome profiling reliably detects both coding and non-coding RNAs of at least 300 nt with rRNAs being relatively underrepresented. 
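The per-gene-class quantification used above (summing the FPKM values of expressed genes within each ENSEMBL type, with the manual reassignment of Gm26924 to the rRNA class noted in the Methods) reduces to a simple grouped sum. A sketch assuming a Cufflinks-style table with columns gene_id, biotype and FPKM is shown below; the column names and the toy rows are assumptions, not the authors' exact file format.

```python
# Sketch of the per-gene-class quantification: sum the FPKM of expressed
# genes (FPKM >= 1) within each ENSEMBL biotype and report the fraction of
# the total. Column names are assumed for a Cufflinks-style table.
import pandas as pd

def gene_class_fractions(table: pd.DataFrame) -> pd.Series:
    expressed = table[table["FPKM"] >= 1].copy()
    # Manual fix noted in the Methods: the abundant ribosomal transcript
    # Gm26924 is annotated as 'lincRNA' in ENSEMBL but counted as 'rRNA'.
    expressed.loc[expressed["gene_id"] == "Gm26924", "biotype"] = "rRNA"
    per_class = expressed.groupby("biotype")["FPKM"].sum()
    return (per_class / per_class.sum()).sort_values(ascending=False)

# Toy usage with made-up rows.
toy = pd.DataFrame({
    "gene_id": ["Actb", "Gm26924", "Rn7sk", "Malat1", "LowExpr"],
    "biotype": ["protein_coding", "lincRNA", "misc_RNA", "lincRNA", "protein_coding"],
    "FPKM":    [5000.0, 9000.0, 1500.0, 800.0, 0.4],
})
print(gene_class_fractions(toy))
```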
Whole transcriptome profiling of standardized total RNA containing ERCC reference RNAs The whole transcriptome profiling results from mouse embryonic spinal cord RNA suggest that the method can be used to identify RNAs of different gene classes and that the relative proportions of transcript abundance are preserved when probing varying amounts of input RNA. Since the RNA extraction method which we used for the mouse embryonic spinal cord RNA might de-select certain RNAs of smaller size, we also applied whole transcriptome profiling to the HBRR, which is a standardized total RNA from human brain that contains RNAs of all sizes including small RNAs. Additionally, we included ERCC spike-in control RNAs which allow associating measured transcript levels with molecule numbers. 5 ng, 500 pg, 50 pg and 10 pg HBRR were reverse-transcribed and amplified under the same conditions used for mouse spinal cord RNA. The amplified products ranged in size from 150 to 600 bp, similar to the products derived from mouse RNA (Figure 4A). As a negative control, we also set up one reaction without RNA (0 pg) and amplified it for 20 cycles in parallel. As a result, no amplification products were generated when input RNA was omitted, showing that the PCR amplicons formed from the serially diluted RNA are specific to the input provided. As a quality control step we performed qPCR for Gapdh on pre-amplified cDNA and amplified products. For all RNA dilutions Gapdh was amplified with similar efficiency (Figure 4B). For the negative control no qPCR signal was detectable in the pre-amplified or amplified sample. As before, we proceeded to generate libraries for high-throughput sequencing (Figure 4C). The final library for Illumina sequencing was similar in size to the sequencing library produced from mouse spinal cord (Supplementary Figure S3). Sequencing reads were processed computationally as before and mapped to the human genome. Similar to the mouse RNA samples, we observed a robust correlation of technical replicates, suggesting that whole transcriptome profiling can be reproducibly applied also to small amounts of human RNA (Figure 4D). Likewise, we evaluated the FPKM levels of the ERCC control RNAs which were mixed with the HBRR prior to its dilution. Similar to the HBRR transcripts, the observed amounts of ERCC transcripts were consistent among the technical replicates and showed a good concordance even for the 10 pg samples (Figure 4E). When we compared the ERCC FPKM values for different amounts of input RNA we also observed robust correlations among replicates (Supplementary Figure S7). This provides further evidence that whole transcriptome profiling preserves the relative levels of detectable transcripts for different input amounts of RNA spanning at least two orders of magnitude. Since the number of individual ERCC control RNAs in the stock is known, it is possible to calculate their numbers in the serial dilutions. For all RNA input amounts the measured ERCC FPKM values were highly correlated with the calculated number of transcript molecules present in the samples (Figure 4F). In the 5 ng HBRR samples ERCC transcripts with more than 400 copies were detectable. In line with the dilution, the minimum number of copies required for detection was ∼40 in the 500 pg samples. In the 50 and 10 pg samples transcripts with as low as 1-10 copies became detectable in individual replicates. Thus, the detection threshold for individual RNAs scales with the total amount of RNA present.
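The copy-number calculation referred to above follows from the known stock concentrations: the amount of each spike-in species pipetted into a reaction is converted to molecules with Avogadro's number. The sketch below illustrates this, assuming stock concentrations expressed in attomoles per microliter as in the vendor's concentration table; the example concentration and the combined dilution factor are placeholders, not the values of the actual pipetting scheme.

```python
# Sketch of the ERCC copy-number calculation referred to above. The stock
# concentration (attomol/ul) and the combined dilution factor below are
# placeholders; substitute the vendor's concentration table and the actual
# dilution scheme to reproduce the calculation.
AVOGADRO = 6.022e23

def expected_copies(stock_attomol_per_ul: float,
                    combined_dilution_factor: float,
                    volume_ul: float) -> float:
    """Molecules of one ERCC species pipetted into a reaction."""
    attomol = stock_attomol_per_ul / combined_dilution_factor * volume_ul
    return attomol * 1e-18 * AVOGADRO

# Placeholder example: a species at 15 attomol/ul in the undiluted mix,
# an overall 1:10,000 dilution into the reaction, and 5 ul per reaction.
print(f"expected copies: {expected_copies(15.0, 1.0e4, 5.0):.0f}")

# Comparing expected copies with measured FPKM then reduces to a correlation
# on log-transformed values, e.g. scipy.stats.pearsonr applied to
# log10(expected copies) and log10(FPKM), after flooring FPKM at 0.1 as
# described in the Methods.
```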
Taken together, these results indicate that the relative transcript levels observed by whole transcriptome profiling are consistent both on a global scale when assessing all transcripts as well as when subsets of individual transcripts are considered. Comparison of whole transcriptome profiling with total RNAseq Our results indicate that whole transcriptome profiling detects both coding and non-coding RNAs of as low as 300 nt length. In order to be able to compare our results with an alternative method that profiles total RNA we used a commercially available RNAseq kit and performed total RNAseq by omitting the initial selection step for polyadenylated transcripts. As input we used 5 ng HBRR containing ERCC control RNAs. We produced altogether three replicate Illumina libraries which were of similar size as those derived from whole transcriptome profiling ( Figure 5A, Supplementary Figure S3). The total RNAseq technical replicates were highly correlated ( Figure 5B), similar to the 5 ng technical replicates from whole transcriptome profiling. Importantly, whilst the FPKM values obtained from total RNAseq correlated robustly with those derived from whole transcriptome profiling we noticed a general 'shift' in the FPKM data toward the latter method ( Figure 5C). This suggests that, whilst there is a linear relationship between the transcript levels from both methods, the FPKM values for most transcripts are higher for whole transcriptome profiling than for total RNAseq. The same result was obtained when the levels of ERCC control RNAs were plotted separately ( Figure 5D). In order to determine the reason for this shift we analyzed the coverage of different gene classes by either method (Figure 5E). As a result we found that in total RNAseq rRNAs were represented by 77.6% of the total FPKM whereas in whole transcriptome profiling only 47.4% of the total FPKM were derived from rRNAs. When rRNA abundance was quantified separately 60.3% of all aligned bases were within rRNA genes for whole transcriptome profiling whilst for total RNAseq this was the case for 85.7% of aligned bases ( Figure 5F). Since the amount of rRNA determined by total RNAseq is within the actual cellular rRNA fraction of 80-90% these data now directly show that rRNAs are relatively less represented in whole transcriptome profiling data. In addition to rRNAs, we also found snRNAs detected to a higher extent by total RNAseq than by whole transcriptome profiling ( Figure 5E). In contrast, lncRNAs such as MALAT1, MEG3, NEAT1, RMST, MIAT and XIST had higher FPKM values in whole transcriptome profiling than in total RNAseq ( Figure 5G). Thus, the reduced detection of rRNAs and short non-coding RNAs such as snRNAs allows whole transcriptome profiling to cover other transcripts such as lncRNAs and those encoding proteins more extensively which increases their FPKM values. In agreement with this notion more genes were detected with FPKM ≥ 1 in all three replicates by whole transcriptome profiling (10 059 genes) than by total RNAseq (5938 genes) ( Figure 5H). The reproducibility of gene detection was similar for both methods ( Figure 5I). Finally, we compared both methods in terms of transcript coverage. Similar to mouse spinal cord RNA whole transcriptome profiling of HBRR covered transcripts uniformly with the exception of 5 and 3 ends ( Figure 5J) even though a slight bias toward the 3 half of transcripts was visible. 
In contrast, total RNAseq showed a more pronounced 5 bias indicating that more reads were derived from the 5 end of transcripts. In agreement, compared to whole transcriptome profiling more aligned bases were in UTRs than in coding regions in total RNAseq ( Figure 5K). Taken together, we reasoned that our whole transcriptome profiling protocol is suitable for investigating picogram amounts of input RNA typically obtained from axons of motoneurons grown in compartmentalized cultures. Whole transcriptome profiling of compartmentalized motoneuron cultures To investigate the transcriptome of motor axons we cultured wild-type embryonic mouse motoneurons in microfluidic chambers for 7 days in vitro as described before (5) (Figure 6A). We used our whole transcriptome amplification approach to analyze total RNA extracted from the somatodendritic and axonal compartments of five separate compartmentalized motoneuron cultures. For setting the number of PCR cycles required for amplification we used Gapdh since it has been detected in axons before (21,22) and we also observed it in the axons of motoneurons by fluorescent in situ hybridization (see Figure 7B). Following reverse transcription we observed an average Gapdh crossing point of 18.38 for the five somatodendritic samples and 28.35 for the five axonal samples corresponding to a ∼1000-fold difference in the amounts of RNA that could be extracted from these compartments (Supplementary Figure S8A). Therefore, we decided to amplify the somatodendritic cDNAs for six cycles and the axonal cDNAs for 18 cycles (Supplementary Figure S8A). We noticed that the two axonal samples containing the lowest amount of RNA as determined by the Gapdh qPCR crossing point also correlated poorly with the remaining three axonal samples with respect to geneby-gene FPKM values (Supplementary Figure S8B). Their average Gapdh crossing point is 31.33 which, when compared to the spinal cord samples, would correspond to a total RNA amount of ∼20 pg, which is below the threshold at which we found our method to be quantitative for spinal cord RNA. Therefore, for further analysis we only considered those three axonal datasets (and corresponding somatodendritic datasets) with estimated axonal levels of more than 20 pg (Supplementary Figure S8B). The number of expressed transcripts was similar in both compartments. 10 433 transcripts were detected on the somatodendritic side and 11 127 transcripts were detected on the axonal side ( Figure 6B). Likewise, transcript coverage was comparable for axonal and somatodendritic whole transcriptome profiling both on a global scale ( Figure 6C) as well as for individual transcripts as exemplified by the coverage along Malat1 ( Figure 6D). In order to evaluate the composition of somatodendritic and axonal RNA more closely we first determined the different classes of transcripts that could be detected in each compartment. We found that the RNA composition of axons is complex containing transcripts of multiple classes, similar to the somatodendritic side. In both compartments ∼50% of FPKM values were derived from annotated protein-coding transcripts, whilst the remaining 50% came from ribosomal and other RNAs ( Figure 6E). We noticed that in the axonal transcriptome mitochondrial rRNAs contributed twice as much to the total FPKM compared to the somatodendritic transcriptome (21.3% compared to 9.7%). This suggests that in the axonal cytoplasm mitochondria are, on average, relatively more abundant than in the cytoplasm of the soma. 
In contrast, cytoplasmic rRNAs were relatively less abundant in axons compared to the somatodendritic side (15.9% compared to 23.9% of the total FPKM). This difference in rRNA abundance also became apparent and was statistically significant when rRNAs were quantified separately ( Figure 6F). We also evaluated the relative abundance of individual non-coding RNAs in the axonal transcriptome ( Figure 6G). We first investigated the short non-coding RNAs 7SK and 7SL, which are of similar length (7SK: 331 nt, 7SL: 300 nt). We found 7SK relatively more abundant in the somatodendritic compartment, but also detectable in motor axons, which we validated by qPCR (see Figure 8A). In contrast, 7SL RNA was enriched in the axonal compared to the somatodendritic cytoplasm. We then analyzed the abundance of several lncRNAs, namely Malat1, Meg3, Rmst, Xist and Miat. All of these lncRNAs were present in the axonal compartment. Whilst relative levels of Malat1, Xist and Miat were similar in both compartments, Meg3 was enriched in the somatodendritic and Rmst in the axonal compartment. For Malat1 we validated its relative enrichment by qPCR (see Figure 8A). Finally, analysis of read mappings to gene segments revealed that both introns and RNAs derived from intergenic regions were more prevalent in axons than on the somatodendritic side (introns: 30.2% compared to 22.0%; intergenic: 14.9% compared to 7.8%) ( Figure 6H). Taken together, these data indicate that noncoding RNAs of different length and origin are present in motor axons. The non-coding RNAs 7SK, 7SL and 18S are located in axons of motoneurons A novel finding of our method is the detection of noncoding RNAs in the axons of motoneurons. Among these are 7SK and 7SL as well as the abundant rRNAs. In order to validate this finding using an independent method we performed fluorescent in situ hybridization (FISH) for these RNAs in cultured motoneurons ( Figure 7A). In agreement with its described function in transcription 7SK was abundant in the nucleus. Nevertheless, it was also detectable in the cytoplasm of the soma and in axons. In contrast, 7SL was abundant in the cytoplasm in line with its role in translation. Importantly, 7SL was readily detectable in motor axons which is in line with our whole transcriptome profiling data. 18S rRNA, a component of ribosomes, was similarly present in the cytoplasm and in motor axons. As negative controls, no FISH signal was detected when the probe was omitted (Figure 7A), when a sense probe for 7SK was used (Supplementary Figure S9A), when a probe for the E. coli transcript dapB absent in motoneurons was used (Supplementary Figure S9A) or when motoneurons were pretreated with RNase (Supplementary Figure S9B). We also performed FISH for two coding transcripts, Gapdh and β-actin (Actb). For whole transcriptome profiling we used Gapdh for setting the number of PCR cycles for the amplification of somatodendritic and axonal RNA. In line with these qPCRs we detected Gapdh in both compartments of cultured motoneurons by FISH ( Figure 7B). As a positive control, Actb, which has previously been detected in motor axons (8), was also abundant in the somatodendritic and axonal compartments. Whole transcriptome profiling identifies transcripts enriched and depleted in motor axons Next, we analyzed the presence of protein-coding transcripts in the somatodendritic and axonal transcriptome. 
For this purpose we first selected well-characterized synaptic marker proteins in our RNAseq data and validated their relative enrichment by qPCR ( Figure 8A). We found transcripts encoding the NMDA glutamate receptor subunit Grin3a and encoding the AMPA receptor subunits Gria1 and Gria2 to be enriched on the somatodendritic side in accordance with their postsynaptic localization. Additionally, the mRNA encoding Piccolo (Pclo), a protein involved in the organization of the presynaptic apparatus, was enriched on the somatodendritic side. In order to find transcripts significantly enriched in either compartment in an unbiased manner we performed differential expression analysis comparing the somatodendritic with the axonal datasets using Cuffdiff ( Figure 8B). We used the FDR-adjusted P-value, q, as a measure for significance. We found 545 transcripts enriched with q < 0.05 on the somatodendritic side and 468 transcripts with q < 0.05 enriched on the axonal side. Since six PCR cycles were used for pre-amplification of somatodendritic cDNA and 18 cycles for axonal cDNA we also performed a control experiment in order to test the effect of difference in cycle number on differential expression. For this purpose we used three replicates each of undiluted and diluted somatodendritic RNA, amplified the cDNA for 6 or 18 cycles, respectively, and performed differential expression analysis comparing the undiluted with the diluted RNA (Supplementary Figure S10). As a result, 13 transcripts were differentially expressed with q < 0.05. Of these, five were enriched in undiluted RNA and eight were enriched in diluted RNA. We overlayed these transcripts enriched in undiluted and diluted RNA with those enriched on the somatodendritic and axonal side and found two transcripts that were shared between undiluted and somatodendritic samples and three transcripts that were shared between diluted and axonal samples. After subtracting these from the list of transcripts enriched on the somatodendritic and axonal side 543 transcripts remained enriched in the somatodendritic compartment (Supplementary Table S4) and 465 transcripts remained enriched in the axonal compartment (Supplementary Table S5). For the latter we validated a number of candidates by qPCR ( Figure 8A). Their enrichment compared to the somatodendritic compartment was in agreement with the predictions by differential expression analysis. We also performed unsupervised clustering of the differentially expressed transcripts ( Figure 8C). As a result only the individual replicates for each compartment were related but there was no correlation between the compartments. This suggests that the transcripts identified by differential expression analysis are predictive for the somatodendritic and axonal transcriptome. The differential abundance of particular transcripts in the somatodendritic and axonal compartment might reflect a subcellular enrichment of specific physiological functions. In order to gain an overview over such functions we performed GO term and KEGG pathway analysis for the transcripts enriched in the somatodendritic and axonal compartment. For the analysis we used the 10 433 transcripts found to be expressed on the somatodendritic side and the 11 127 transcripts found to be expressed on the axonal side as background datasets. As a result, transcripts with synaptic functions ('synaptic transmission', 'synapse') were particularly enriched on the somatodendritic side ( Figure 8D, Supplementary Table S6). 
In contrast, we found transcripts with functions in protein synthesis enriched on the axonal side ('translation', 'Ribosome') ( Figure 8D, Supplementary Table S7). Among these were transcripts encoding ribosomal proteins such as Rpsa and eukaryotic translation initiation factors. Beyond translation we found transcripts involved in actin binding enriched in axons compared to the somatodendritic compartment. Notably, transcripts with functions in cell cycle regulation were also over-represented on the axonal side, including transcripts for cyclins and cyclin-dependent kinases. Comparison of whole transcriptome profiling results with available microarray expression data for compartmentalized neurons In order to evaluate the accuracy of transcripts detected by whole transcriptome profiling in compartmentalized motoneurons we compared our lists of transcripts with existing datasets generated by microarray expression analysis (see Supplementary Methods 'Comparison of whole transcriptome profiling of compartmentalized motoneurons with microarray data' section). As a starting point we chose the study by Saal et al. (2014) (5) in which the same cell culture set-up was used. In that study the extracted RNA was linearly amplified and probed with an Affymetrix Gene Chip R Mouse Genome 430 2.0 array harboring multiple probesets for each transcript. In order to compare our RNAseq data with the microarray expression levels we first generated a list of 17 587 transcripts that are covered by both microarray and whole transcriptome profiling and assigned either the microarray probeset showing the lowest or the probeset showing the highest expression value to any given transcript. For somatodendritic and axonal transcripts the correlation between RNAseq FPKM and microarray intensity values was low at ∼0.2 when the probesets with the lowest intensity values were assigned ( Figure 9A). When the probesets with the highest expression values were assigned to each transcript the correlation coefficients increased to >0. 5. This indicates that the probesets with the highest expression values are more representative of RNA levels and were used for further analysis. We then scanned the list of 17 587 transcripts that are covered by both microarray and whole transcriptome RNAseq for transcripts found to be expressed by either method. In the somatodendritic compartment 8245 transcripts were considered to be expressed by microarray and 8989 transcripts were considered to be expressed by whole transcriptome RNAseq ( Figure 9B). Of these, 6867 transcripts were common to both sets of transcripts corresponding to 83.3% of the transcripts detected by microarray and 76.4% of the transcripts detected by RNAseq. Thus, for the somatodendritic compartment microarray analysis and RNAseq identified a similar set of expressed transcripts. In the axonal compartment of motoneurons microarray profiling identified 5707 transcripts and RNAseq identified 9427 transcripts as expressed ( Figure 9B). Of these, 4998 transcripts were common to both methods which corresponds to 87.6% of the transcripts detected by microarray and 53.0% of the transcripts detected by RNAseq in axons. This suggests that in axons whole transcriptome profiling identifies a larger number of transcripts compared to microarray profiling. Next, we investigated a microarray dataset of transcripts expressed in axons of rat embryonic DRG neurons reported by Gumy et al. (2011) (4). In their study the authors report 2627 transcript probesets as expressed in DRG axons. 
Similar to the previous analysis we retained the probeset with the highest expression value for any given transcript and, additionally, removed those transcripts that we were not able to match with our RNAseq data. This produced a set of 1677 axonal DRG transcripts of which 1594 (95.1%) were present in the set of 11 127 transcripts that we detected by RNAseq in motor axons ( Figure 9C). However, the corre-lation between the RNAseq FPKM and microarray expression values for these 1594 transcripts was low at 0.33 (Figure 9D). Thus, whilst transcripts expressed in DRG axons also appear to be expressed in motor axons their individual expression levels vary between the cell types. DISCUSSION Subcellular transcriptomes of highly polarized cells such as neurons are complex containing both coding and noncoding RNAs. Their study would benefit from techniques that enable the simultaneous analysis of multiple classes of RNAs. Here we describe an optimized protocol for whole transcriptome profiling based on double-random priming using off-the-shelf reagents. This protocol was tested on serially diluted total RNA and applied to RNA derived from compartmentalized motoneuron cultures. Whilst whole transcriptome amplification techniques based on doublerandom priming have been described before and used successfully for amplification of low input amounts of RNA (7,19), we introduced several modifications. First, we used optimized parameters for second strand synthesis and PCR. We noticed that choice of polymerase during second strand synthesis as well as primer concentration during second strand synthesis and PCR had considerable impact on amplification efficiency. Thus, whilst abundant transcripts such as Gapdh might be captured under a wide range of reaction conditions, less abundant RNAs such as Ubqln2 might be more susceptible to such differences. Second, we found that one round of second strand synthesis using a Taq polymerase is sufficient for transcriptome capture. Existing protocols either use one or several rounds of second strand synthesis with a strand displacement polymerase (7,19). Third, we use PCR amplicons directly for Illumina library preparation without further enzymatic manipulations. Illumina MiSeq sequencing normally requires the first few bases to be heterogeneous since they are used for cluster calling (23). Therefore, low diversity samples require higher amounts of the spike-in control phage library PhiX to achieve 5 end heterogeneity. In order to overcome this limitation we used four adapter primers of various lengths simultaneously during the PCR to obtain diverse 5 ends of the amplicons. This allowed us to use only 1% of PhiX as spike-in. Fourth, we scan all reads for presence of the adapter sequence which ensures a stringent selection of reads derived solely from the amplification process. Finally, we used the random octamer sequence for molecule counting to eliminate PCR duplicates. We tested our protocol on serially diluted mouse and human RNA and also included external control RNAs. Even with modest sequencing capacity and read numbers of typically <2 million reads per sample we found that whole transcriptome profiling was scalable into the lower picogram range of input RNA. Whilst we estimated that 50 pg might be the lower limit of input RNA at which our protocol might still provide quantitative information a substantial number of transcripts was reliably detected even for 10 pg total RNA. 
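The probeset handling used in the microarray comparison above (keeping the probeset with the highest expression value per transcript and then correlating the collapsed intensities with the RNAseq FPKM values) can be sketched as follows; the column names, the merge key and the pseudocount are assumptions for illustration, not the exact identifiers or transformations used in the study.

```python
# Sketch of the microarray comparison described above: collapse multiple
# probesets to one value per gene by keeping the highest-intensity probeset,
# then correlate the collapsed intensities with RNAseq FPKM on a log scale.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr, spearmanr

def compare_with_microarray(array: pd.DataFrame, rnaseq: pd.DataFrame):
    # one row per gene: the probeset with the highest expression value
    collapsed = array.groupby("gene", as_index=False)["intensity"].max()
    merged = collapsed.merge(rnaseq, on="gene", how="inner")
    x = np.log2(merged["intensity"])
    y = np.log2(merged["FPKM"] + 1.0)      # pseudocount, an assumption
    return pearsonr(x, y)[0], spearmanr(x, y)[0], len(merged)

# Toy usage with made-up values.
array = pd.DataFrame({"gene": ["Actb", "Actb", "Gapdh", "Tuba1a"],
                      "intensity": [12000.0, 300.0, 8000.0, 4000.0]})
rnaseq = pd.DataFrame({"gene": ["Actb", "Gapdh", "Tuba1a"],
                       "FPKM": [3500.0, 2100.0, 900.0]})
print(compare_with_microarray(array, rnaseq))
```

Collapsing to the highest-intensity probeset mirrors the observation above that these probesets are more representative of the underlying RNA levels than the lowest-intensity ones.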
Importantly, relative transcript levels were preserved for different numbers of amplification cycles which indicates that expression values can be compared across different RNA input amounts. We also compared our method with total RNAseq which we conducted using a conventional kit and omitting the initial poly(A) selection step. Compared to total RNAseq, whole transcriptome profiling captured less rRNA and, thereby, detected 70% more transcripts. One possible reason for the difference in rRNA coverage seen between the two protocols might be that in whole transcriptome profiling RNA is left intact prior to reverse transcription such that the highly structured rRNAs might at least partially re-fold and thereby are less amenable to primer binding or reverse transcription under the conditions used in our protocol. The inaccessibility of structured regions for primer binding might also explain the relative under-representation of 5′ and 3′ UTRs in our whole transcriptome profiling data since UTRs are known to harbor structural elements for translational control (24). In contrast, for total RNAseq RNA is initially fragmented which might allow a more representative capture of structured RNAs, particularly rRNAs. Whatever the reason might be, we propose that profiling the whole transcriptome including rRNAs might provide some advantages, particularly for studies investigating the subcellular distribution of RNA. In axons, for example, the presence of rRNAs might be an indicator for the local translational potential, and differences in translational capacity have been associated with the ability for axonal regeneration (25). Since our method is capable of monitoring rRNAs and coding RNAs simultaneously at different levels of input RNA we envision that it is clearly suitable to study axonal transcriptomes in such a comprehensive manner. In order to demonstrate that our protocol can give such neurobiological insights we investigated the total RNA content of embryonic motor axons from which picogram amounts of RNA can be extracted in microfluidic chambers. For comparison we also extracted RNA from the somatodendritic side. Even though it has already been shown that the subcellular transcriptome contained within neuron extensions is complex (26), to our knowledge an unbiased approach to obtain the complete catalog of coding and non-coding RNAs present in axons has not been conducted so far. Surprisingly, our results suggest that motor axons contain a similar number of transcripts as the somatodendritic part and that the RNA composition is similar between both compartments. Nevertheless, we found a large number of transcripts with distinct functions enriched in either compartment. On the somatodendritic side we noticed an enrichment of transcripts with synaptic functions which most likely originate from dendrites (5). In axons, transcripts with functions related to protein synthesis and actin binding were over-represented. Actin binding proteins as well as the organization of the actin cytoskeleton play important roles in growth cone establishment (27) and defects of axonal translocation of β-actin mRNA have been observed in models of motoneuron diseases (8). Another interesting finding was the observed existence of cell cycle associated mRNAs in the axonal compartment which is in line with previous results showing an enrichment of cell cycle associated mRNAs in the axonal compartment of embryonic dorsal root ganglia (4).
Even though the associated protein products are considered to primarily function in the nucleus, there are hints that some of these mRNAs play important roles in axonal growth and axon pruning, neuronal migration, dendrite morphogenesis and dendrite spine formation as well as in synaptic plasticity (28). An important aspect of our whole transcriptome amplification protocol is that non-coding RNAs are captured including the abundant rRNAs. For example, we detected less rRNA on the axonal compared to the somatodendritic side of motoneurons. At the same time, transcripts encoding ribosomal proteins were relatively more abundant in axons compared to the somatodendritic compartment. Whilst we cannot rule out the possibility that the performance of our method is influenced by the initial amount of rRNA present, one possible consequence would be that an alteration of the ribosomal RNA-to-protein stoichiometry might affect the number of functional ribosomes and, therefore, modify the translational potential available for protein synthesis in axons (29). However, it is worth noting that ribosomal proteins have also been associated with extraribosomal functions. For example, the ribosomal protein Rpsa, which is involved in ribosome biogenesis and, as part of the 40S ribosomal subunit, in ribosome function, also binds cytoskeletal components such as actin and tubulin (30). Thereby, Rpsa locally targets ribosomes to the cytoskeleton and regulates cell migration through protein synthesis. Moreover, Rpsa acts as a laminin receptor and controls cell adhesion (31). In the brain, Rpsa mRNA is elevated during embryogenesis and declines in adulthood, further signifying its function in development (32). Nevertheless, the axonal presence of transcripts encoding ribosomal proteins as well as translation initiation factors points toward the possibility that regulated synthesis of these components can modify the translational potential of axons. Therefore, future studies might investigate to what extent these transcripts are being translated into functional components of the protein synthesis machinery. In addition to rRNAs, our method detected other non-coding RNA species with lengths of >300 nt. Among these were 7SK and 7SL, two structured RNAs of similar size which have been described for their roles in transcription and translation, respectively. In our motoneuron dataset we found 7SK more abundant in the somatodendritic compared to the axonal cytoplasm, in line with its nuclear function in transcriptional regulation (33). We confirmed this localization pattern by FISH in cultured motoneurons. In contrast, 7SL was enriched in the axonal cytoplasm. As part of the signal recognition particle, 7SL is involved in co-translational transfer of proteins into the endoplasmic reticulum. Therefore, its abundance in axons as well as the presence of rRNAs such as 18S might be further indication for the presence of the protein secretion machinery in axons (34). Importantly, our whole transcriptome profiling method captured lncRNAs more efficiently than total RNAseq. Considering that lncRNAs have received much attention over the last years due to their multifaceted roles in gene regulation, our method might provide an interesting opportunity for this field. In recent years it became clear that lncRNAs are not only especially enriched in the central nervous system and important for neurodevelopment but also seem to have different functions in neurodegenerative diseases (35).
Given this, it would be interesting to gain deeper insight into the localization and possible enrichment of these non-coding RNAs in different subcompartments of neurons, as this has not been possible so far. As a surprising finding we observed several lncRNAs in the axonal compartment. Even though lncRNAs have so far been predominantly studied for their nuclear roles in regulating gene expression (36), their presence in the axonal cytoplasm might indicate additional functions in translocation. RNA-binding proteins interact with these lncRNAs and thereby might mediate their axonal transport. For example, RMST has been found to interact with hnRNPA2/B1 (37). It is possible that such protein-RNA complexes might be transported subcellularly and, as part of larger transport particles, mediate subcellular trafficking of other RNAs. Therefore, our whole transcriptome profiling protocol could help to determine how the axonal transcriptome is affected by loss of these lncRNAs. In line with these data an interesting finding of our study was the detection of introns in motor axons. One possible explanation for this observation would be that unspliced pre-mRNAs or partially spliced mRNAs containing retained introns, respectively, are located in axons. Even though splicing normally occurs in the nucleus, the possibility for cytoplasmic splicing has been discussed (38). However, an alternative explanation for the presence of introns in axons could be that introns are not simply byproducts of the splicing process but might give rise to functional RNAs themselves that act independently of their associated mRNA (39). In either case, the apparent axonal presence of intronic RNAs warrants further investigation. In conclusion, we describe here a method for whole transcriptome profiling that is scalable, quantitative and cost-effective. It provides the major advantage that both non-coding RNAs and coding RNAs can be detected simultaneously. This also includes rRNAs, the levels of which might indicate translational capacity and, therefore, might be an important parameter when studying subcellular transcriptomes in the context of local protein synthesis.

SUPPLEMENTARY DATA

Supplementary Data are available at NAR Online.
Comparative approaches to gentrification

The epistemologies and politics of comparative research are prominently debated within urban studies, with 'comparative urbanism' emerging as a contemporary lexicon of urban studies. The study of urban gentrification has, after some delay, come to engage with these debates, which can be seen to pose a major challenge to the very concept of gentrification. To date, similar debates or developments have not unfolded within the study of rural gentrification. This article seeks to address some of the challenges posed to gentrification studies through an examination of strategies of comparison and how they might be employed within a comparative study of rural gentrification. Drawing on Tilly (Big Structures, Large Processes, Huge Comparisons. New York: Russell Sage), examples of four 'strategies of comparison' are identified within studies of urban and rural gentrification, before the paper explores how 'geographies of the concept' and 'geographies of the phenomenon' of rural gentrification in the United Kingdom, United States and France may be investigated using Latour's (Pandora's Hope. London: Harvard University Press) notion of 'circulatory sociologies of translation'. The aim of our comparative discussion is to open up dialogues on the challenges of comparative studies that employ conceptions of gentrification and also to promote reflections on the metrocentricity of recent discussions of comparative research.

Introduction

There is a growing interest in comparative research, particularly in urban studies where comparative urbanism is a vibrant subject of discussion (McFarlane and Robinson, 2012; Robinson and Roy, 2016; Ward, 2010), albeit one that has not hitherto featured in Dialogues in Human Geography. Here we rectify this omission by explicating the application of these debates to one research area where comparative research is prominent, namely the study of gentrification. As Bernt (2016: 1) observed, the arrival of comparative urbanism into gentrification scholarship raises challenges whose relevance constitutes 'a turning point not only for gentrification research, but also for the way we develop established concepts into a more global body of knowledge'. Bernt highlights how the rise of comparative research has led to an expansion in the geographical focus of gentrification studies, with attention paid to spatial variabilities in the concept, form and extent of gentrification. As Lees (2012: 157-158) comments, this interest preceded the emergence of the notion of comparative urbanism, with gentrification researchers having a long-standing interest in how 'theories of gentrification have travelled and how the process itself has travelled'. She adds that different forms of gentrification emerge 'in different places at different and indeed the same times' and that meanings associated with gentrification in one place may not translate easily, if at all, to other locations. Consequently, she argues, researchers need to 'critically debate the international significance of the term "gentrification"' and 'consider how comparison might take place' (Lees, 2012: 158). As such, gentrification research might be commensurable and reinvigorated by interest in comparative research. Yet, as Bernt (2016: 1) observes, the rise of comparative research has led to calls for abandonment of the gentrification concept, with Ghertner (2015: 522) wondering whether it is now 'time to lay the concept to bed'.
Bernt (2016: 1), while drawing back from such arguments, sees value in some of Ghertner's claims and observes that the impact of comparative research on gentrification is 'an increasingly open question'. We address this question via consideration of the potential and value of comparative research on rural gentrification. While identified as a somewhat 'neglected other' to the study of urban gentrification (Phillips, 2004), recent decades have seen increasing reference to rural gentrification, especially in the United Kingdom (e.g. Phillips, 2002; Smith, 2002a; Stockdale, 2010) and North America (e.g. Darling, 2005; Hines, 2012), but also in other countries (e.g. Hjort, 2009; Qian et al., 2013; Solana-Solana, 2010). There are, however, many countries where there has been little use of the concept, and even in places where it has been employed, rural gentrification remains a minor motif within rural geography and a peripheral constituent of wider gentrification debates. Theorizing from positions of marginality has been a major point of argument within elaborations of comparative urbanism (e.g. McFarlane, 2010; Roy, 2009, 2016), and we want to stimulate consideration of the extent to which framings other than the urban might contribute to elaborating comparative studies of gentrification. More specifically, we explore how a comparative study of rural gentrification in France, United Kingdom and United States could be developed to engage with the challenges identified by Bernt (2016). To develop its arguments, the article begins by considering strategies of comparison as outlined within comparative urbanism and explores how these have been performed within urban gentrification studies. Hitherto, discussions of comparative approaches within these studies have been narrow in focus, particularly when set alongside the literature on strategies, practices and politics of comparison associated with comparative urbanism. Drawing on Tilly (1984) and Robinson (2015), we suggest that practices of comparison enacted in gentrification studies are more diverse than are represented in existing literatures. From this starting point, the article argues that the strategies of comparison identifiable within urban gentrification studies are present within rural studies, albeit with differences in extent and focus. The article then focuses on a comparative study of rural gentrification in France, United Kingdom and United States, drawing on the concept of 'sociologies of translation', outlined by Latour (1999), to explore both the 'geographies of the concept' and 'geographies of the phenomenon' of rural gentrification (Clark, 2005). The article concludes by considering relationships between these two geographies of rural gentrification and strategies of comparison.

Comparative urbanism and urban gentrification

Comparative urbanism highlights the prevalence and complexity of comparison. Ward (2010: 473), for example, argues that 'comparison is practically omnipresent in much empirical social science research', while McFarlane (2010: 725) asserts that theoretical abstractions inevitably, albeit often implicitly, make comparative assertions, because 'claims and arguments are always set against other kinds of . . . possibilities or imaginaries'. Practices such as literature citation, for example, set up comparisons with existing bodies of knowledge. McFarlane claims that comparative practices should be explicitly discussed, with consideration paid to both epistemological methodologies and the politics of comparison.
The former involves consideration of the practicalities of comparison, such as language, resources, the delimitation of scope and focus, methods of comparison and the role and construction of comparative typologies. In relation to this last feature, Lagendijk et al. (2014) argue that comparative studies of gentrification often focus on establishing a metric to actualize interpretations and practices across spatial contexts. Examples include studies by Ley (1986, 1988) and Wyly and Hammel (1998), which variously illustrate difficulties in constructing comparative metrics, including 'readily available secondary data' (Wyly and Hammel, 1998: 305) failing to map onto conceptual arguments and/or be available across localities being compared (Ley, 1996). Metric-based analysis could be characterized as fitting within McFarlane and Robinson's (2012: 767) description of 'quasi-scientific' research focused on the identification of a narrow range of comparative traits, an approach they claim is 'inappropriate' given the 'multi-dimensional, contextual, interconnected, and endogenous nature of urban processes'. In the context of gentrification research, Lees et al. (2015b: 9) similarly argue that structured comparative approaches 'flatten cases' through focusing on 'a limited number of factors or categories'. They make no use of metric-based analysis, but rather propose practices of transnational 'collegiate knowledge production' (Lees et al., 2015b: 13; see also López-Morales et al., 2016). However, Lagendijk et al. (2014: 362) utilize assemblage theory to propose that, rather than either foster the articulation of generalized metrics or reject them as being 'untrue to reality', comparative studies of gentrification might recognize their presence within the 'worlds of gentrification' and study their 'actualisation and counter-actualisation' within a range of localities. This is a productive position, although it implies that comparative studies would only examine spaces where metrics were present, which might severely limit the scope of such studies. A range of positions on the value of metrics and typologies to comparative studies are being advanced within gentrification studies, although, as yet, there remains little sustained discussion of their epistemological significance or the practices required for alternative strategies of comparison. There is a significant difference here between discussions of comparative studies of gentrification and the literature on comparative urbanism which contains much greater epistemological reflection, with Tilly's (1984) identification of 'individualising', 'universalising', 'encompassing' and 'variation-finding' strategies (Table 1) being widely cited (e.g. Brenner, 2001; Robinson, 2011). We demonstrate that these have applicability to gentrification studies and, hence, can advance the development of comparative studies of gentrification, although as the article develops we layer in other understandings of comparison, derived from comparative urbanism and studies of gentrification.

Individualizing and variation-finding comparisons

Individualizing comparisons are evident in both gentrification studies and comparative urban studies. The focus in the former is on comparing instances of a phenomenon to identify the particularities of each case.
Gentrification examples include the comparisons of Carpenter and Lees (1995: 286) focused on a 'questioning of generalizations about the gentrification process and an emphasis on international differences', Musterd and van Weesep's (1991) examination of whether gentrification in Europe was an instance of a generalized process or involved specifically European dynamics, and Butler and Robson's (2003) study of neighbourhoods in London that emphasized the different compositions of gentrifiers in each locality. A recent, and epistemologically focused, example is Maloutas' (2012) criticism of the application of the concept of gentrification across contexts. He claims that this, first, leads to a decontextualization of the concept, which becomes increasingly abstract in order to be applicable across cases. An illustration is Clark's (2005: 258) creation, by 'realist abstraction', of 'an elastic yet targeted definition' of gentrification, an argument employed in developing the notion of 'generic gentrification' (Hedin et al., 2012). Maloutas (2012: 38-39) asserts, however, that abstract conceptions of gentrification produce a neglect of 'causal mechanisms and processes', in favour of a superficial focus on 'similarities in outcomes across contexts'. Second, Maloutas argues that while gentrification scholars have sought to decontextualize the concept, it remains marked by the context of creation. Specifically, he contends that the concept was developed in, and was of considerable significance in understanding changes within, cities such as London and New York. Attempts to make the concept travel to other time-spaces are, he claims, flawed because conditions in these contexts are different. Third, he argues that attempts to make gentrification travel are ideological, acting to project 'neoliberal framings' across contexts. Maloutas is an exponent of individualizing comparison, viewing concepts as inextricably linked to contexts. Such arguments when advanced within comparative urbanism have been subject to criticism, with Peck (2015: 179) commenting that such work can be particularist rather than comparative, promoting 'hermetically sealed' modes and sites of analysis. With respect to gentrification, Lees et al. (2015b: 7) state that Maloutas creates 'fossilisation not contextualisation', reifying the 'contextual epiphenomena' of gentrification, such as how it 'looked, smelled or tasted in some specific (North American and West European) contexts at very specific times', to create a simplified and static conception of gentrification that cannot be reasonably applied beyond its initial context. They add that while there are lessons to be learnt from comparative urbanism, 'we should not throw the baby out with the bathwater' (Lees et al., 2015b: 9) and seek to 'stand aside' from a 'flat ontology' dedicated to the appreciation of difference in favour of an ontology focused on 'social injustices and power relations'. It is further asserted, 'that a large number of well analysed cases help extract global regularities of the causes of gentrification' (Lees et al., 2015b: 6). While few gentrification researchers hitherto appear willing to fully embrace Maloutas' individualizing perspective, many studies implicitly employ it by drawing comparisons to pre-existing studies to emphasize the particularities of their study. 
Instances of variation-finding comparisons, which are identified by Tilly (1984) as strategies that seek to identify causes of variation across cases, include van Gent's (2013) 'contextual institutional approach', which, although focused on Amsterdam, explains variations from other studies of 'third-wave gentrification' via institutional practices (see also Hochstenbach et al., 2015).

Universalizing and encompassing comparisons

While individualizing and variation-finding comparisons can be identified in gentrification studies, universalizing and encompassing perspectives have a stronger presence. Universalizing comparisons focus on establishing that instances share common, and generally independently constituted, properties, with change within them viewed as largely driven by dynamics internal to these cases. The approach generally enacts 'an incipient monism' (Leitner and Shepherd, 2016: 231) in that certain features are seen to be significant to all the identified cases, and universalizing comparisons also often adopt 'developmentalist perspectives', with differences between cases viewed as reflections of differential positions within a common path. Examples of universalizing perspectives can be identified within gentrification studies. Early decades of gentrification studies, for example, involved 'legislative' debates (Phillips, 2010) concerning the applicability of various monist conceptions of gentrification to a widening number of cases. For authors such as Lambert and Boddy (2002), the spatial extension of locations identified as undergoing gentrification stretched the term to encompass so much difference that, as per Maloutas, it lost any specific meaning. For others, commonalities could be discerned within such differences. Reference has already been made to Clark's (2005: 260-261) adoption of realist abstraction, and he sought to use this to identify both generic 'underlying necessary relations and causal forces' associated with gentrification and features which, while crucially significant in understanding the formation and impact of gentrification in particular localities, were contingent in character. Recent years have seen a series of applications of these arguments to comparative studies of gentrification (Betancur, 2014; Lees et al., 2016; López-Morales, 2015; Shin et al., 2016). A different, but related, perspective was work, such as Smith (2002b) and Lees et al. (2008), suggesting that the character of gentrification was itself changing, such that early definitions were now inappropriate to identify the presence, processes and varied forms of contemporary gentrification. Strands of continuity, such as class transformation, displacement and capital flows into built environments, were, however, also identified. In both sets of work, the universalism of identifying continuities and/or abstract commonalities was tempered, to a degree, by recognition that gentrification could take a range of different forms. This was evident in 'stage-theories' of gentrification (Clay, 1979; Gale, 1979; Hackworth, 2007; Hackworth and Smith, 2001).
As discussed in Phillips (2005), these interpretations have been criticized for employing developmentalist logics, whereby gentrification is framed as a singular process impacting locations which move, or in some cases fail to move, through a predetermined series of stages, although attention has been drawn to differences in trajectories of change, to instances of non-development and to the multiplicity of gentrification forms present in a location at particular points in time (Ley and Dobson, 2008; Pattaroni et al., 2012; Van Criekingen and Decroly, 2003). Universalizing comparisons were also enacted in discussions of 'gentrification generalised', which often portrayed gentrification as a singular process 'cascading' both 'laterally' across national borders and 'vertically' down 'the urban hierarchy', until it reached 'even small market towns' (Smith, 2002b: 439) or 'unfurled to include rural settlements' (Atkinson and Bridge, 2005: 16). Such views encouraged the adoption of an implicit 'imitative urbanism', whereby processes of urban gentrification are seen to have 'travelled to and been copied in the Global South' (Lees, 2012: 156). Such perspectives are viewed as 'western-centric' by comparative urbanists influenced by post-colonialism (e.g. Robinson, 2004), as well as by gentrification researchers such as Maloutas (2012), Lees (2012) and Lees et al. (2015a, b), who highlight how such interpretations may act as 'deforming lenses' (Maloutas, 2012: 43), projecting occidental concerns and assumptions at the expense of recognizing specificities and differences. However, it can also be argued that these conceptions are overly urban-centric in their focus, viewing gentrification as originating in and diffusing from a selected number of metropolitan sites to other urban and, eventually, rural sites. This imagery neglects the identification of sites of rural gentrification soon after coinage of the term gentrification by Glass (see Phillips, 1993). Just as post-colonialists have highlighted how occidental concerns may be projected over cities of the South, researchers often position the urban as 'a privileged lens through which to interpret, to map and, indeed, to attempt to influence contemporary social, economic, political and environmental trends' (Brenner and Schmid, 2015: 155). Universalizing comparisons do not have to be coupled with diffusionist perspectives. Brenner et al. (2010: 202), for example, identify the possibility of 'accumulation of contextually specific projects', and Peck (2015: 171) argues for recognition of 'common, cross-contextual patterns and processes', while Robinson (2015) calls for examination of repetition as singular assemblings. In this perspective, repeated appearance is not seen as diffusion of a common process but as a series of singular outcomes of processes, practices and relations in operation within multiple localities. Such arguments resonate with urban gentrification scholarship. Lees et al. (2015a: 442), for example, argue for recognition of the 'transnational mobility of gentrification' and 'its endogenous emergence' in a range of locations, such that gentrification may be viewed as multiple and multicentric, although there are still said to be 'necessary conditions' (Lees et al., 2015b: 8) that need to be present before gentrification can be said to exist.
A similar, and in our view more productive, way of framing such arguments is to suggest that universalizing comparisons be viewed as 'genetic comparisons' (Robinson, 2015), identifying singularly constituted transformations in locations across which there are some recurrent features viewed as constitutive of gentrification, but in each case, these will have been produced within that locality. These recurrent features might be viewed as the abstract 'generic' dimensions of gentrification outlined by Clark (2005), although within a genetic approach these elements would be viewed as being as contingently created as the other elements of each case, rather than identified as established through some form of necessary relationship. As such, the genesis of the generic dimensions requires explanation in each instance rather than being viewed as foundationally determinant. Furthermore, while each case may involve, or be stimulated by, movement of resources and agents into that locality from beyond, it is likely that there will be at least some spatially and/or temporally specific elements. Such an approach would counter the monism and developmentalism that has been the focus of criticism. The final form of comparison identified by Tilly is 'encompassing'. Here, the aim is to situate instances of a phenomenon in relationship to each other, in such a way that their form can be seen to be in large part determined by such relationships. Such understandings can be clearly identified within gentrification studies. Examples include Smith's (1982, 1996) conceptualization of gentrification as a facet of uneven development and the globalization of gentrification (Smith, 2002b). In this latter work, Smith argues that gentrification has become global as various forms of capital sought to restructure new localities in their search for continuing profitability, with the vertical and lateral dispersal of gentrification discussed earlier being seen to stem from an 'influx of new capital' into gentrification projects and disinvestment and reinvestment of existing capitals from one area to another. Similarly, Atkinson and Bridge (2005) suggest that the 'unfurling' of gentrification in an increasing range of spaces, including rural areas, is the result of flows of finance, people, information and ideas from one gentrified area into another (see also Lees, 2006). More recently, Lees et al. (2016: 13) have identified their examination of 'planetary gentrification' as 'a relational comparative approach' involving investigation of how instances of gentrification are 'increasingly interconnected'. Emphasizing connections rather than similarities between cases of gentrification, these studies can be viewed as advocating encompassing rather than universalizing comparisons, although failing themselves to recognize these differences. Attention also needs to be paid to the status of these connections, with Robinson (2011) promoting use of the term 'incorporating comparisons' to recognize the significance of what she would later describe as the genetic elements of relational connections, that is recognizing their genesis as well as consequences.

Politics of comparison

In addition to fostering discussion of epistemology, comparative urbanism also highlights the politics of comparison. McFarlane (2010: 726), for example, argues that comparison is a political mode of thought because it can be employed 'as a means of situating and contesting existing claims . . . expanding the range of debate, and informing new perspectives'.
Comparative urbanism has been particularly associated with postcolonial perspectives (e.g. Robinson, 2004, 2011), it being claimed that comparison fosters the creation of 'readings of theory and the city' (McFarlane, 2010: 735) less marked by the cities and urban theorists of the North. Lees (2012: 155-159) draws heavily upon this argument, claiming that 'gentrification researchers need to adopt a postcolonial approach'. She suggests that work is needed on the mobilities and consequences of ideas of gentrification and on forms and practices of contemporary gentrification, with a key focus being postcolonially informed studies of urbanism in the Global South, although she adds that 'there remain important comparative studies to be made not just between the Global North and Global South' (Lees, 2012: 157-158). The remainder of this article explores the potential and value of comparative studies of rural gentrification, which, as mentioned earlier, have been identified as a neglected other to the study of urban gentrification (Phillips, 2004). Indeed, while postcolonial comparative urbanists have challenged 'metrocentricity', where this is understood as involving a concentration of research on metropolitan centres in the Global North (Bunnell and Maringanti, 2010), the term might also be viewed in urban and rural registers as well. Thomas et al. (2011) have argued that 'a defining element of social science education for a former inhabitant of Rural America is an overwhelming sense that you are ignored by your discipline', a comment that echoes Lobao's (1996: 3) commentary, although she argued that the study of rural space was not only often marginalized as the 'non-metropolitan' but that such a positioning could be a location of 'creative marginality' from which to transform the mainstream. The following section considers how comparative strategies outlined with respect to urban gentrification relate to studies of rural gentrification. We then explore how these strategies can be deployed in comparative studies of rural gentrification in France, United Kingdom and United States, drawing on Latour's (1999) concept of 'circulatory sociologies of translation' to illuminate the geographies of gentrification and geographies of 'articulating gentrification'.

Comparative studies of rural gentrification

It has been argued that rural gentrification studies are marked by localized case studies, with little examination of the distribution or processes of gentrification beyond these locations. This does not mean, however, that comparisons have been absent from rural gentrification studies. Reference has been made to the arguments of McFarlane (2010) that even localized studies make comparative claims, even if individualizing in character. Many rural gentrification studies include cautionary remarks concerning the transfer of ideas of gentrification from urban to rural contexts. Smith and Phillips (2001: 457), for example, coined the term 'rural greentrification', both to stress the 'demand for, and perception of, 'green' residential space from in-migrant' gentrifier households and to suggest that this feature 'stands in contrast to the 'urban' qualities which attract in-migrant counterparts in urban locations'.
Smith (2011: 603) later argues that studies reveal 'more and more incommensurabilities between urban and rural gentrification', while Guimond and Simiard (2010) assert that while rural researchers have drawn inspiration from urban gentrification studies, 'important nuances must be taken into consideration when applying urban theories of gentrification to a rural context'. The significance of contextual differences has been highlighted not simply with respect to urbanity and rurality, but within the rural: Darling (2005: 1015) argues that rural areas may be 'sufficiently differentiated to render the idea of an overarching, homogeneous "rural gentrification" suspect', indicating a need for 'a more refined and specific set of labels to indicate a variety of landscape-specific gentrification models'. Consideration might also be paid to the scale of landscape forms and how these connect to particular theorizations of gentrification. Contextual factors are significant to variation-finding as well as individualizing comparisons. The limited number of rural gentrification studies limits the scope for variation-finding comparisons, although it is possible to identify practices and processes that could cause variations in the gentrification of rural localities. As in urban contexts, governmental regulations and development controls are identified as agencies within the gentrification of rural localities (Gkartzios and Scott, 2012; Hudalah et al., 2016; Shucksmith, 2011) and clearly can be enacted differentially. Likewise, the nature and extent of rural space might condition the presence and/or form of rural gentrification (Darling, 2005; Phillips, 2005; Smith and Phillips, 2001), given differences are evident in the character of areas identified as experiencing rural gentrification: UK studies often focus on localities with extensive commuting, while North American studies tend to be in areas seen to be beyond extensive metropolitan influences (Figures 1 and 2). Nelson et al.'s (2010) and subsequent examinations of rural gentrification across the United States provide arguments for the adoption of both universalizing and encompassing comparisons. In connection to the former, they review existing research on rural gentrification in the United Kingdom, Spain and Australia, in order to identify mappable indicators of gentrification in non-metropolitan areas. This strategy assumes that processes of gentrification have high uniformity across rural contexts, an approach also adopted in subsequent work by Nelson and Nelson. However, this study also enacts an encompassing focus, identifying relational reasons for moving beyond localized case studies. Globalization is viewed as a major driver of rural gentrification because key constituents of urban to rural movements are middle and upper-middle classes who have benefited from globalized capital accumulation and rising land and property values. Nelson and Nelson argue that this positioning in global capital enables these classes to acquire the assets to locate in high-amenity destinations, with gentrification in these remote rural locations being consequential to relationships with, and within, a globalized economy. Nelson et al. (2015) repeat this argument, asserting that rural gentrification in amenity areas of the United States reflects a spatial fix of surplus capital accumulated in high wage urban-based careers in the globalized service sector.
Similar arguments, albeit focused on UK rural restructuring through the settlement of a commuting 'service class', were advanced by Cloke and Thrift (1987), who claimed this movement was driven by changes in the international division of labour. Cloke et al. (1991) also drew attention to how movements of this class could connect into flows of exogenous 'footloose' capital, while Phillips (2002) stressed flows of capital from agriculture and service provision into the gentrification of properties, as well as flows of labour power, ideas and people. Nelson et al. (2015) identify further global connections, with the gentrification of remote amenity locations stimulating movement of low-income Latino populations to, or more often in proximity to, these localities. Parallels with studies of service class migration to accessible UK rural areas can be seen, with Cloke and Thrift (1987: 328) arguing that rural service class growth entails 'growth of members of other classes and class fractions needed to service the service class'. Rural gentrification studies, like their urban counterparts, enact all four strategies of comparison identified by Tilly (1984: 145), an unsurprising finding given he argues that the strategies of comparison each 'have their uses'. Both Ward (2010) and Robinson (2011) have asserted that individualizing comparisons are among the most widespread form of comparison conducted in urban studies, and this appears to be the case also in rural gentrification studies, in part because of the predominance of localized case studies. Adoption of such a strategy provides an implicit critique of universalizing perspectives, although such viewpoints are evident in rural gentrification studies, as are encompassing comparisons. Variation-finding perspectives on rural gentrification are least developed, due in part to the lack of studies from which this approach could draw. All the identified strategies of comparison, and reflections on the value of comparative studies of rural gentrification, could clearly benefit from explicit examples of comparative research. The final section of this article explores how such studies could be developed by considering how a comparative study of rural gentrification could be pursued in France, United Kingdom and United States. In undertaking this, it will draw upon the concept of sociologies of translation as outlined by Latour (1999). A comparative study of rural gentrification in France, United Kingdom and United States provides an opportunity to explore reasons for, and consequences of, differential use of this concept, and whether this connects to differences in the presence of the phenomenon or what, following Lagendijk et al. (2014: 358), might be described as 'geographies of the articulation of the concept' and 'geographies of the phenomenon' of rural gentrification. They suggest that assemblage theorizations foster comparative studies exploring 'variations and complexities' associated with use of the term gentrification. Such an approach has parallels with Latour's (1999) concept of 'circulatory sociologies of translation' employed in Phillips' (2007) explorations of the use of concepts of gentrification, class and counterurbanization within rural studies. Latour's concept provides an effective way of developing comparisons that recognize the limitations and potentials of travelling theories.
Latour develops his concept of circulatory sociologies of translation as a way of 'enumerating' types of activities and actants that need to be enrolled in constructing concepts and knowledge. He argues that concepts are analogous to a 'heart beating in a rich system of blood vessels' (Latour, 1999: 108), being simultaneously at the centre of a circulating system and dependent on flows from other elements of the system. Drawing on this analogy, Latour argues that concepts be conceived as 'links and knots' at the centre of 'loops' of flow, or 'circulating sociologies of translation', which bring assets to sustain the development of the concept. These circulating sociologies are identified as autonomization, alliance building, public representation and mobilization ( Figure 3). Latour (1999) describes the enrolment of support for a concept or interpretation within worlds of academic activity and discourse as autonomization. Although there are no detailed sociologies of rural studies (although see Murdoch and Pratt, 1993), studies pointing to the significance of autonomization in understanding differential levels of engagement with the concept of rural gentrification in France, United Kingdom and United States can be identified. Kurtz and Craig (2009) and Woods (2009), for example, identify differential developments in UK and US rural geography. For Kurtz and Craig, the publication industry fostered differential engagements with theory, with UK rural studies being more theoretically inclined due to a focus on journal article and edited book production, while US rural studies were more empirically focused through an emphasis on regional book monographs. Woods (2009), while accepting this differentiation of rural geography, argued that processes of disciplinary institutionalization played an important role, creating in the United States a stronger theoretical orientation among rural sociologists than rural geographers, while UK rural geography became highly engaged in social theoretical debates in part because of institutional marginalization of rural sociology in this country. Thomas et al. (2011) provide a different account of the institutionalization of US rural sociology, stressing its severance from wider sociology. It is evident that geographers have more readily adopted the concept of rural gentrification than sociologists (although see Brown-Saracino, 2009;Hillyard, 2015), while its adoption within UK rural geography may reflect the significance of 'political-economy' perspectives in geography during the 1980s and 1990s. The subsequent turn towards culture that invigorated UK rural studies in the later 1990s also inspired considerations of the role of rural space as a motivator of rural gentrification (e.g. Phillips, 2002;Phillips et al., 2008;Smith and Phillips, 2001). Important disciplinary differences have been identified within French rural studies (Lowe and Bodiguel, 1990), although in both geography and rural sociology during the 1980s and 1990s, there was an emphasis on empirical studies, with limited engagement with social theory and epistemological reflections (Alphandéry and Billaud, 2009;Papy et al., 2012). This was despite notable French social theorists who have influenced gentrification studies in the Anglophonic world, such as Bourdieu and Lefebvre, undertaking early work in rural sociology (Elden and Morton, 2016;Phillips, 2015). 
Autonomization

While differences in levels of theoretical reflection within disciplines at particular moments in time can influence engagement with conceptions of gentrification, other processes are also influential, including enrolment of other concepts. Préteceille (2006) and Fijalkow and Préteceille (2007), for example, argue that gentrification's low uptake in France reflects a preference to use the concept of 'embourgeoisement', conjoined with concerns about the coherence and relevance of the gentrification concept within French contexts (cf. Rousseau, 2009). This preference may, however, have limited applicability within a rural context, where long-standing preoccupations with processes of agricultural change and the status of French peasants and small producers fostered disconnection with notions of embourgeoisement circulating in other social science discourses (Hervieu and Purseigle, 2008; Rogers, 1995). Another influence on French rural studies was its framing of rural space as a passive subject of urban change. The countryside was viewed as losing its specificity (Berger et al., 2005), either becoming urbanized (sometimes described as rurbanization) or more differentiated, such that there were no clear lines of distinction between the urban and rural (Hervieu and Hervieu-Léger, 1979; Jean and Perigord, 2009; Kayser, 1990). Large areas are ascribed an urbanized identity, without consideration of landscape character or public perceptions (Mathieu, 1990). These 'peri-urban' areas include accessible localities akin to those that formed the locus of UK studies of rural gentrification (Figure 1). Similarly, in the United States, conceptions of the exurban and the rural as simply non-metropolitan may contribute to rural gentrification being applied primarily in areas with low levels of urban commuting (Figure 2), although as mentioned previously, consideration might also be given to the differences in the scale of areas being characterized as rural: according to the OECD's (2016) 'national area classification', for instance, only 24.1% of the United Kingdom is designated as rural, compared to 77.8% and 40.9% of United States and France, respectively. Simultaneous with academic movements towards recognition of the peri-urban in France was growing public interest in issues of rural cultural identity (Bonerandi and Deslondes, 2008). Paralleling these changes was movement from quantitative assessments of population numbers/movements to qualitative consideration of how these connect to transformations in popular understandings of the countryside. These include studies of international migrants in French rural places (Benson, 2011; Barou and Prado, 1995; Buller and Hoggart, 1994; Diry, 2008), as well as a few studies explicitly referencing notions of rural gentrification (Cognard, 2006; Perrenoud, 2008; Puissant, 2002). However, across all three countries, discussions have generally been framed in registers other than gentrification, with terms such as amenity migration, counterurbanization, neo-ruralism, peri-urbanization, rural renaissance and social segregation and differentiation being preferred over gentrification. Phillips (2010) has discussed relationships between conceptions of rural gentrification and counterurbanization, arguing that in UK and US studies, the latter gained strength over the former not only through widespread circulation within academic channels of autonomization but also through the circulatory sociologies of alliance building and public representation.
Counterurbanization, it is claimed, drew strength from alignments with the intellectual contours of governmental statistics production and policymaking, while also making use of 'social abstractions well embedded in, or highly commensurable with, public normative consciousness' (Phillips, 2010: 553). Consequently, counterurbanization circulated relatively easily within public discourses, with Halfacree (2001: 400) highlighting how it 'spun out into popular debate', particularly within the United Kingdom, where narratives of residential migration to the countryside are reproduced across television documentaries and dramas, newspapers and popular fiction. The concept of gentrification, on the other hand, has social connotations of class that may have limited its uptake in public and policy contexts, although at times feeding into both (Phillips, 2002).

Alliance building and public representations

Applying such arguments to comparisons between the United Kingdom, France and United States suggests that circulatory sociologies of alliance building and public consciousness, as well as autonomization, may significantly differ. Reference has, for example, already been made to the significance of concepts such as peri-urbanism within French rural studies, and this concept gained significant academic impetus when included as a category in the Institut National de la Statistique et des Études Économiques official classification of French national spaces in 1996 (Le Jeannic, 1996). This change both reflected the conceptual success of the peri-urban within academic debates and institutionalized the peri-urban as a category of space deserving not only academic attention but also treatment as a subject for political and public discourse, although with respect to the latter, notions of urban and rural space still predominate. Similar arguments can be made with respect to the US General Accounting Office that classifies land using categories (e.g. urban, urbanized, urban cluster, metropolitan, micropolitan, nonmetropolitan and rural) that effectively cast the rural and nonmetropolitan as residual classifications with no consideration given to their material character or public perceptions of these areas. In the United Kingdom, by contrast, governmental spatial classifications have, at least in England and Wales, demonstrated parallels to aspects of popular constructions of rurality since 2004 (cf. Bibby and Shepherd, 2004; Bibby and Brindley, 2016; Phillips et al., 2001). One consequence is that areas close to urban areas have been identified as locations of 'rural' gentrification (Figure 1). There is evidence pointing to greater popular and policy engagement with the term gentrification in North America than in the United Kingdom or France. Guimond and Simiard (2010), for example, suggest that rural gentrification attracted the attention of television producers, as well as reporters, in Quebec's provincial and regional press. In the United States, rural gentrification research by Nelson figured in an article in the Wall Street Journal (Dougherty, 2008), while in relation to alliance building, the Housing Assistance Council, in cooperation with US Department of Housing and Urban Development, produced a high-profile report on rural gentrification (Housing Assistance Council, 2005).
Furthermore, while the term rural might not be applied by academics and policymakers to areas with high commuting to large urban areas, there are numerous cases of literary and filmic representations of such spaces that enact motifs of rurality and gentrification. Part of the policy interest in rural gentrification within the United States links to what has been described in urban studies as 'positive gentrification' (Cameron, 2003), whereby state agencies perceive there to be benefits from processes of gentrification, such as the influx of capital-rich migrants whose consumption, skills and enterprise might stimulate local development and employment. While subject to considerable criticism within urban studies (Smith, 2002b; Slater, 2006), this conception of rural gentrification has resonances with studies of migration to non-metropolitan areas in the American West (Beyers and Nelson, 2000; Gosnell and Abrams, 2011; Nelson, 1999), to Stockdale's (2006) work on rural gentrification and the impacts of rural in-migration in Scotland, and to the activities of some French local authorities which have sought to attract particular in-migrants, such as entrepreneurs or other 'project backers' (Richard et al., 2014). In relation to public representations, Lamont (1992) and Bennett et al. (2009) suggest there is greater acceptance of notions of hierarchical differentiations in cultural value in France than in the United Kingdom or United States, and conversely, less receptivity to identities constructed around socio-economic distinctions. Such arguments are of clear importance to the study of gentrification given that research has suggested that symbolic distinctions are of crucial significance to its formation (e.g. Butler and Robson, 2003; Rofe, 2003). Furthermore, connections between cultural values and academic interpretations of society have been highlighted by Savage (2010), who presents an historical account of changing concepts of culture within the UK middle classes, connecting these to developments in the conduct of sociology. Among the studies used to develop this argument was Pahl's (1965) research on Hertfordshire villages, which has been viewed as constituting a study of rural gentrification by people such as Paris (2008), despite it making no use of the term. For Savage, Pahl's study represents both a description and enactment of technocratic middle-class culture (Phillips and Smith, forthcoming). Circulatory sociologies of translation are, however, often far from direct: although the concept of gentrification appears not to have translated readily into French public and academic discourse, the writings of French social theorists such as Bourdieu, Latour, Lefebvre and Wacquant have exerted a profound influence on UK and US gentrification studies (e.g. Bridge, 2006; Butler and Robson, 2003; Phillips, 2010), although not on French rural studies.

Mobilization

The final circulating sociology identified by Latour (1999: 108) relates to practices and processes of inscription and translation through which objects of study become 'progressively loaded into discourse'. This circulation has long been the focus of epistemological and methodological discussion about the ability, or not, of concepts to connect to objects or situations, issues that have been, and continue to be, a focus of debate within gentrification studies. While there have been claims that the ontological debates over the meaning of the concept of gentrification have declined in significance (e.g.
Lees et al., 2008;Slater et al., 2004), the rise of comparative research has certainly challenged this, with Ghertner (2015: 552), for example, arguing that the concept 'fails in "much of the world"'. This argument, advanced in relation to studies of the Global South, has relevance even within the studies of the metropolitan North, given that there are both variegated understandings of the concept and numerous criticisms raised about its value. The complex geography to the adoption of the concept has been neglected both by its exponents and critics, as evidenced by use of the term rural gentrification, which not only is far from extensive in France but is also relatively limited even in the United Kingdom and United States. While processes of autonomization, alliance building and representation may profoundly influence the acceptance and development of the concept of rural gentrification, differential recognition of the concept in France, United Kingdom and United States may also reflect differences in the activities and dynamics of change occurring in the countryside in these countries. As such there is a need to conduct comparative research exploring if conceptions of rural gentrification provide differentially effective mobilizations of the rural 'pluriverse' (Latour, 2004: 40) in each country, or as it might also be expressed, to explore the geographies of the phenomenon, or phenomena, of rural gentrification, as well as its articulations. Clearly, given earlier discussions, there are a host of practical, methodological, epistemological and political issues to be considered in developing such comparative research. In the context of the present article, however, we will restrict ourselves to considering how Tilly's (1984) typology of strategies of comparison, along with Robinson's (2015) differentiation of genetic and generative comparisons, could assist in mobilizations of conceptions of gentrification applicable across rural France, United Kingdom and United States, as well as being of potential wider relevance in studies of gentrification. Genesis and generation within strategies of comparisons It has been argued that many studies of rural gentrification implicitly adopt an individualizing comparative perspective, although evidence of national differentials in the focus of studies (Figures 1 and 2) indicates potential for variation-finding comparisons exploring whether differences reflect the influence of contextual processes such as landscapes, planning regulations or property relations. Darling's (2005) work was discussed in relationship to the former, while UK studies have identified the latter two as important influences on the geography of rural gentrification, particularly its focus within smaller rural settlements (Phillips, 2005). Studies in the United States also highlight the significance of rural gentrification in transforming property and land-management practices Gosnell and Travis, 2005). Such work does not preclude identification of contextually specific understandings and practices and, when combined with analysis of the sociologies of translation operating within such contexts, can produce insights that speak back to prevailing conceptualizations of gentrification. Robinson (2015) argues for the development of comparative approaches that combine 'genetic' and 'generative' tactics of conceptual development. 
The former, as previously discussed, examine the genesis or emergence of seemingly common/repeated or related outcomes, while the latter explore how examination of 'different singularities or cases' generate insights and problems that provoke new lines of thought that can potentially be bought 'into conversation' with prevailing conceptualizations. These conversations might, as in individualizing comparisons, centre around differences between cases, although Robinson sees scope for generating connections which resonate across and from cases and hence can be of value within other strategies of comparison. Gentrification studies provide illustrations of such conversations. Focusing on the application of stage interpretations, a past conversation will be outlined, before considering a hitherto rather implicit one and one in need of development. In relation to the first, although, as previously argued, stage models are commensurable with universalizing and encompassing comparisons, they have been created generatively. Early-stage models of urban gentrification emerged from comparisons between innercity locations in North America (e.g. Clay, 1979;Gale, 1979). Later-stage models (e.g. Hackworth and Smith, 2001;Hackworth, 2007;Lees et al., 2008) drew on different theoretical understandings of gentrification and from recognizing forms of gentrification that differed from the 'classical' gentrification of the 1960s to 1980s, which came to be viewed as a 'pioneer' phase of gentrification, involving small-scale sporadic transformations of buildings. Pioneer/classical/sporadic gentrification became, and very much still act, as comparators to set against other forms of gentrification. A second generative conversation that gentrification studies should recognize is that stage interpretations are more multidimensional than often represented. Work of people such as Rose (1984) on 'marginal gentrification', for example, promoted differentiation of gentrification on the basis of assets or capital. Marginal gentrifiers, often associated with the onset of gentrification, were viewed as having limited amounts of economic capital yet relatively high levels of cultural capital. They were seen to be frequently displaced by an 'intensified gentrification', involving larger scale, more professional and capitalized agencies, and gentrifiers with more economic capital and, at least relatively, less cultural capital. In some locations, gentrification was seen to extend in scale to encompass not only large areas of residential properties but also other transformations, with Smith (2002b: 443) coining the phrase 'gentrification generalised' to refer to the formation of 'new landscape complexes' whereby not only housing but also 'shopping, restaurants, cultural facilities, . . . open space, employment opportunities' become gentrified. This form of gentrification was widely associated with the construction of new-build properties and heightened involvement of state agencies, but has also been connected, within the work of Ley (1996), Butler and Robson (2003) and Bridge (2001;, with a further decline in the significance of cultural capital as a 'channel of entry' (Phillips, 1998) into gentrified spaces. Some areas have also been identified as undergoing 'super-gentrification' (Butler and Lees, 2006) involving people with very high levels of economic capital. 
Concepts such as economic and cultural capital facilitate universalizing comparisons through simplifying or 'abbreviating' (Robinson, 2015) the complexity of everyday life by focusing on particular, repeated aspects. Given this, it is unsurprising that studies of the UK countryside have made comparisons between stages and assets identified in urban studies and processes of change observed in rural areas (Phillips, 2005;Smith, 2002a). It appears that many UK rural localities have experienced intensified and generalized gentrification, given their high levels of middle class residence (Phillips, 2007). In the United States, the 'American West' has been a focus of attention within rural gentrification studies (Figure 3), and according to , is an area where it appears most widely present, although also occurring more sporadically across rural areas in the Mid-West, the South and the Eastern seaboard. Even in the American West, however, rural gentrification is shown to be concentrated in a relatively small number of areas, with Hines (2012: 75) likening its geography to an 'archipelago' of change set within 'the midst of a relatively static, conservative, agricultural/ industrial "sea"'. In France, the progress of rural gentrification appears even more sporadic, as well as widely perceived via other process descriptors, such as international or neo-rural in-migration, tourism or peri-urban or new-build development. A study of the High Corbières has, however, suggested that neo-rural migration reflected an early sporadic phase of gentrification which was followed by inflows of people with both more economic assets and greater levels of cultural capital (Perrenoud and Phillips, forthcoming). Such research highlights that comparisons can generate connections between studies of rural gentrification and investigations framed through other concepts. They also point to how more multidimensional understanding of gentrification could be constituted by recognizing that economic and cultural capitals take a range of different forms. Ley (1996), for example, argues that gentrification can be associated with 'critical' or 'counter-cultural values'. As outlined in Phillips (2004), such arguments have rural counterparts, not least in the work of Smith and Phillips (2001) which highlighted the presence of what they characterize as 'New Age professionals'. Smith subsequently developed this argument further, highlighting how some areas are experiencing gentrification sparked and reproduced by householders seeking to realize a range of 'alternative' ways of living (Smith, 2007;Smith and Holt, 2005). These arguments chime with aspects of Hines' (2010; work in a North American context, as well as notions of neo-rural migration employed in France. Drawing on such arguments, it can be argued that some capital/asset-based analyses of gentrification employ what could be described as a three-dimensional differentiation of gentrifiers and gentrification (Figure 4). Three-dimensions, however, are insufficient, an argument that can be illustrated by considering the concept of 'super-gentrification'. This concept, which has been briefly discussed in a rural context by Stockdale (2010) and potentially has wider relevance, both within rural areas close to global cities such as London, Paris and New York and to remote amenity locations, has generally been used to describe people who are 'super-rich' in economic terms. However, studies suggest that there are a range of cultural dimensions that need fuller investigation. 
Super-gentrification, for example, has been identified with practices of conspicuous consumption, with Lees (2003: 2487) arguing that it involves 'intense investment and conspicuous consumption by a new generation of super-rich "financifiers"'. As such, super-gentrification can be seen to connect to objectified forms of cultural capital (Bennett et al., 2009; Bourdieu, 1986), which, as Phillips (2011) has observed, can be used to frame much of the analysis of culture and class conducted within UK rural studies in the 1980s and 1990s. Butler and Lees (2006), however, also suggest that, at least in the Barnsbury area of London, super-gentrifiers were predominately drawn from elite segments of the British education system (i.e. public or selective secondary schools and Oxbridge). As such, these gentrifiers had high levels of credentialed or institutional capital (Bourdieu, 1986) but also enacted a range of embodied forms of cultural and social capital reproduced through this educational system (Bennett et al., 2009; Savage, 2015). Such connections are not universal, with Butler and Lees (2006) drawing contrasts between their study and the work of Rofe (2003) and Atkinson and Bridge (2005) on the habitus of gentrifiers in other global cities, which appear to be more cosmopolitan in origin and cultural orientation. Savage, in a series of works (Bennett et al., 2009; Savage et al., 1992, 2013), has argued for recognition of a range of different forms of cultural evaluation beyond the classical high-low distinction (see also Lamont, 1992; Warde and Gayo-Cal, 2009). In some contrast, Perrenoud and Phillips (forthcoming) argue that rural areas of southern France are experiencing gentrification by people connected to the production of Parisian 'high culture', and who might be described as 'super-gentrifiers' in a cultural sense, as well as being well endowed with economic assets. Even within the study of super-gentrification, there is a need to move analysis beyond three dimensions, to recognize a range of different forms of cultural capital, an argument advanced more generally in relation to studies of rural gentrification by Phillips (2015). Also, earlier work (Phillips, 2004) on a 'composite' stage-interpretation of rural gentrification, highlighting labour, property and finance capital flows, provides an example as to how multidimensionality can be applied to the concept of economic as well as cultural capital. Comparison holds the potential for fostering the creation of more multidimensional asset-based studies of gentrification. Petersen's discussions of cultural omnivores provide an interesting example of this, not only suggesting that the concept of people engaging in both high and mass cultural activities could link into gentrification (Petersen and Kern, 1996), but also highlighting its emergence from comparative work inspired by Bourdieu's writings and how it catalysed critiques and revisions of Bourdieu's conceptualizations of cultural capital (Petersen, 2005). Concepts of capital and flow point to relationality, which is a third generative conversation that gentrification studies should develop. As outlined earlier, relationality is central to encompassing comparisons. However, as Wright (2015) has observed, there is considerable variability of relationality evident within so-called relational perspectives. He, for example, argues that the capital-based theorization of class developed by Savage et al. 
(2013) is an example of an 'individual-attributes' based approach that pays insufficient attention to the way that holding and use of assets by one person can causally connect to those of other people. He identifies more relational perspectives focused around the hoarding/closure of opportunities and relations of domination/exploitation, but his analysis is explicitly centred on economic conditions and activities. Consequently, he does not provide a template for developing multidimensional asset-based studies of gentrification, but his discussion of forms of relationality are significant, not least because they highlight the need to situate analysis of assets held by individual agents of gentrification into examinations of their relationships within wider fields. The designation of levels of capital held or required for gentrification, for instance, clearly varies according to the context in which they are being deployed. Rural studies, for example, have routinely made reference to migration as an opportunity to maximize the purchasing power of financial assets held by householders, be this through voluntary or induced down-sizing or through up-sizing via purchasing housing in areas where prices are lower than at current place of residence (Smith and Holt, 2005;Stockdale, 2014). There are also less widespread references to the significance of the spatial transferability or fixity of cultural qualifications and competencies (Cloke et al., 1998a;Fielding, 1982). Connections could be forged between this work and wider discussions of migration and cultural capital, particularly those, such as Erel (2010), that highlight the need to consider not only amounts and forms of capital migrants move with, but also how these are reconfigured and created through interactions in new locations of settlement. Overall, there appears to be considerable value in recognizing the genetic and generative role of comparisons across all the strategies of comparison identified by Tilly and indeed to employ all these strategies when seeking to mobilize conceptions of gentrification in relation to rural France, United Kingdom and United States. Among the implications of this perspective is that there are variations in both the strategies of Tilly and tactics of Robinson, and careful consideration needs to be paid to how these fold into each other as comparative studies are developed. Conclusion Taking debates within urban studies about gentrification and comparison as a starting point, this article has investigated how comparative studies of rural gentrification can be advanced. Drawing attention to Tilly's (1984) identification of individualizing, universalizing, encompassing and variation-finding strategies of comparison, the article identified elements of each in studies of rural and urban gentrification, before exploring how they can be developed within a comparative study of rural gentrification in France, United Kingdom and United States. The article has compared the uptake of the concept of rural gentrification through Latour's (1999) concept of circulatory sociologies of translation. Attention was drawn to the emphasis on theory, and particularly political-economic theories, within UK rural studies as compared to France and United States during the late 1980s and 1990s, facilitating engagement with concepts such as class and gentrification. 
UK geography also underwent a 'cultural turn' that encouraged explorations of rural space as a motivator of in-migration and of contestations between different residential groups. Such concerns were not just relevant to conceptualizations of rural gentrification, and across all three countries, other concepts more successfully enrolled advocates. In part, this success stemmed from alignment with the demands of other circulatory sociologies connected to governmental statistics production, policymaking and popular discourses. Cross-national differences may be significant, with rural gentrification obtaining greater popular and policy engagement in the United States than in France or the United Kingdom. Such differences play key roles in concept development and application, and the extent to which gentrification articulates with or mobilizes the world. Described by Latour as the circulatory sociology of mobilization, this aspect of concept development can be framed in terms of relationships between geographies of the concept of rural gentrification and geographies of the phenomenon of gentrification. More specifically, differences in the recognition of rural gentrification in France, United Kingdom and United States might reflect differences in the extent and form of gentrification occurring in these countries, as well as differences in the circulatory sociologies of autonomization, alliance building and public representations. Addressing such issues requires consideration of strategies of comparison. While adoption of a variation-finding strategy is difficult due to the small number of rural gentrification studies, and indications of a preference for individualizing comparisons among recent rural studies are evident, it is possible to identify arguments for adopting variation-finding, relational and universalizing strategies of comparison in rural gentrification research. In relation to variation-finding comparisons, the value of comparing gentrification across different types of rural areas was noted, an argument that could be extended to encompass comparisons across urban and rural spaces. The benefits of investigating national differences in planning regulations and property relations, and their role in conditioning the geographies of rural gentrification, were also highlighted. Variation-finding comparisons involve acceptance of elements of commonality across the cases being investigated, both with respect to the identification of generic contours of processes and the formation of contextual variation. There are connections here to universalizing perspectives. While universalizing approaches have been criticized as decontextualist, reductionist and developmentalist, viewing them 'genetically', as repetitions whose emergence always needs to be explained, avoids establishing a universalizing approach that creates 'concepts without difference' or an individualizing approach that establishes 'difference without conceptualization' (Robinson, 2015: 17). Employing a genetic approach can not only reinvigorate universalizing comparisons but can also be incorporated into individualizing, variation-finding and relational or encompassing comparisons as well. Furthermore, Robinson's highlighting of the generative role of comparisons within studies of gentrification is valuable. 
Focusing on stage interpretations of gentrification, three examples of generative comparisons were discussed, linked to their significance in their emergence, their role in fostering multidimensional understandings of gentrification and the potential value of recognizing different forms of relationality. Such examples reveal that rather than adopting a singular strategy or tactic of comparison, there is a value in employing them in combination. This article is the first to reflect on the merits of comparative approaches to the study of rural gentrification. Although focused on the development of a cross-national study of rural gentrification, we have framed our explorations through comparative engagements not only with studies of rural space but also with ideas from urban and wider geographical studies. This framing reflects, in part, two aspects of comparison highlighted by McFarlane (2010: 725). First, it enacts 'comparison as learning', as we have drawn upon literatures addressing issues that are, as yet, largely omitted from the discourses of rural studies. Second, it also involves an ethico-political impetus for comparison, in that we hope that our discussion would indeed 'speak back' to centres from where we have drawn insight, not least in raising questions about the metrocentricity of contemporary discussions of comparative research. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Economic and Social Research Council [grant number ES/L016702/1].
v3-fos-license
2019-05-13T13:05:11.318Z
2019-02-01T00:00:00.000
150769335
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scielo.br/pdf/rpc/v46n1/0101-6083-rpc-46-1-0014.pdf", "pdf_hash": "6211807ac6da814ad2b2cc34cfc108bf4ae4098b", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41158", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "6211807ac6da814ad2b2cc34cfc108bf4ae4098b", "year": 2019 }
pes2o/s2orc
Prevention of depression and anxiety in community-dwelling older adults: the role of physical activity

Background: With the growth of the elderly population in Brazil and the increasing impact of depression and anxiety, the importance of preventing these disorders has been highlighted. Studies have shown an inverse relationship between rates of depression/anxiety and physical activity, pointing out its role as a possible protective factor. Objectives: To conduct a randomized study with elderly adults in the community, who present with subsyndromal depression and anxiety, that will evaluate the effectiveness of physical activity with a collaborative stepped-care strategy; and to compare the effectiveness of physical activity in preventing subsyndromal depression and anxiety, with regard to the usual care group. Methods: The article contains the methodological description of an arm of a large study entitled “Prevention and Treatment of Depression in Elderly”, in which 2,566 Brazilian older adults were screened to identify clinically significant depressive and anxiety symptoms. Those with clinically significant depressive or anxiety symptoms, not meeting criteria for depressive or anxiety disorder, will be invited to participate in a randomized clinical trial with 2 intervention groups: a step-by-step preventive care programme using physical activity, and usual care. The effectiveness of physical activity in the prevention of depressive and anxiety disorders will be evaluated. Discussion: New health policies could be implemented, aiming to reduce the number of elderly people with depression and anxiety in primary care. In addition, training may be implemented for family health teams so that screening tools could be used to make an early identification of individuals with (or at risk of developing) mental disorders.

Introduction

The significant increase in individuals aged 60 and over in the Brazilian population pyramid points to an unprecedented and inevitable reality. The ageing index of the population grew from 21.0 in 1991 to 44.7 in 2012 1, while life expectancy at birth rose from 66.9 years in 1991 to 74.5 years in 2012 2. Considering that a Brazilian's life expectancy was 33.7 years at the beginning of last century 3, it has more than doubled in this timeframe. With the growing number of Brazilian older adults, the relevance of diseases that affect a significant portion of this population also increases. Depression and anxiety are among the major mental disorders in the elderly, and both are frequent causes of emotional suffering and lost quality of life 4,5. These disorders in the elderly are associated with functional impairment, high costs of health care, and increased mortality 6-8. Furthermore, it is known that depression can complicate the course and prognosis of cardiovascular diseases, stroke and other diseases 9,10. 
Regarding the occurrence of depression in individuals aged 55 or more living in the community, the literature reports diverging prevalence rates, ranging from 0.4% to 35% 11 .However, when clinical manifestations of depression are investigated separately, it is observed that cases of major depression are infrequent (weighted average prevalence of 1.8%), while episodes of minor depression prevail (9.8%).When all depressive syndromes with clinically significant symptoms are grouped, the average prevalence reaches 26% 12 .These results suggest that, in the elderly, episodes of minor depression and depressive syndromes are particularly relevant, as opposed to the higher frequency of severe depressions that is observed in other age groups. Epidemiological studies have shown that anxiety disorders are among the most common psychiatric disorders among people aged 60 or over, with a lifetime prevalence estimated over 15.3%, higher even than the estimate for mood disorders (11.9%) 13 .In another population survey 14 , the prevalence of anxiety disorders in the elderly was 10.2%, with generalized anxiety disorder (GAD) being the most common (7.3%), followed by phobic disorder (3.1%), panic disorder (1.0%) and obsessive compulsive disorder (0.6%).In Brazil, there are few community studies investigating the occurrence of anxiety disorders in the elderly [15][16][17] . The prevention of mental disorders has been considered one of the most viable alternatives available to reduce the impact that the emergence of new cases would have both on the quality of life of patients and on the healthcare system.Studies have shown that prevention is possible, reducing the risk of occurrence of new cases of mental disorders, especially regarding interventions for patients with subclinical symptoms 18 . The effectiveness of an indicated preventive-intervention stepped-care programme focused on depressive and anxiety disorders in the elderly in primary care in the Netherlands was evaluated in a clinical trial with 170 subjects over 75 years old with subsyndromal symptoms of depression or anxiety 19 .Subjects were randomly assigned to a preventive stepped-care programme with therapy or to usual care.This intervention halved the cumulative incidence of major depressive disorders or anxiety disorders after 12 months (from 0.24 in usual care to 0.12 in the stepped-care group).The results of this study provided evidence that the risk of the occurrence of depression and anxiety in the elderly can be reduced through the implementation of structured interventions in a group of people at high risk of developing these disorders.As interventions with low cost were offered first -and only when they failed to keep the symptoms of depression and anxiety at acceptable levels were more intensive interventions were offered -this programme proved to be effective, both clinically and in terms of cost 20 . One low cost intervention not assessed in the Dutch study described above relates to the use of physical activity programmes. 
In a literature review in which the dose-response effect of physical activity on depression and anxiety was evaluated, the authors concluded that the cross-sectional studies indicate that physical activity is associated with reduced depressive symptoms, having less evidence on the reduction of anxiety symptoms 21 .Data on the prospective studies are inconsistent, especially with respect to prevention.Evidence from randomized controlled trials showed a reduction in symptoms of depression, which can be attributed to the practice of resistance and aerobic exercises; there was limited evidence regarding anxiety disorders 21 . The relationship between the role of physical activity in the prevention of depression suggests two strands: depression decreases the practice of physical activity, and physical activity may be useful in preventing depression.Given the physical and psychological benefits from physical activity, in general, and from physical exercise, in particular, its practice by depressed elderly adults without comorbidities may promote the prevention and reduction of depressive symptoms 22 . Evidence suggests that physical activity can be used as an adjunct in the prevention of depression in the elderly.However, there are few studies in relation to anxiety, and the 'dose' of physical activity needed to assist in the reduction of depressive and anxiety symptoms has not been sufficiently investigated.We also note that the effectiveness of physical activity to prevent the onset of depression and anxiety in the elderly has also not been adequately evaluated in clinical trials, mainly through a survey of population coverage and characteristics. Aims of the study a) To conduct a randomized study with elderly adults in the community, who are users of the Basic Health Units of the Butantã region and who present with subsyndromal depression and anxiety, that will evaluate the effectiveness of physical activity with a collaborative stepped-care strategy.b) To compare the effectiveness of physical activity in preventing subsyndromal depression and anxiety, with regard to the usual care group. Study design This is a naturalistic randomized parallel clinical trial, comprised of two arms: an intervention group (stepped-care programme with physical activity) and a control group (usual care). Participants Eligible subjects are aged 60 or more, with subsyndromal symptoms of depression or anxiety, who are enrolled in one of the selected Basic Health Units of the Butantã Region, and who are capable of giving informed consent and have sufficient knowledge of the Portuguese language.Participants will be defined as having subsyndromal symptoms of depression and/or anxiety, when he/she gets a score greater than or equal to 13 in the Center for Epidemiologic Studies Depression Scale (CES-D) but does not meet criteria for a depressive or anxiety disorder, according to Mini International Neuropsychiatric Interview (MINI) 23 .Individuals who meet the criteria for major depression, dysthymia, bipolar disorder, mild cognitive impairment or dementia, use of substances, and/or anxiety disorder, or who are unable to consent to participate in the study, or who do not have mastery of the Portuguese language will be excluded.Elderly adults will be interviewed in their homes. Ethics approval and consent to participate The Ethics Committee for Research Project Analysis of the University of São Paulo approved this study (CAAE-14693013.4.0000.0068),and written consent forms were obtained for all participating subjects. 
Sample size The main study was designed with power to detect a 25% difference in the cumulative incidence rates of depressive and anxiety disorders, according to MINI/DSM-IV, in the physical activity group compared to the usual care group.It is expected that the incidence rates of depressive and anxiety disorders will be 35% for the usual care group, based on previous longitudinal studies 14 .In the group that will participate in the physical activity intervention, we estimated that the incidence rate of subsyndromal depressive and anxiety will be 10%. Using the formula below 24 , we calculated that 32 participants will be required in each group, assuming α = 0.05 and power (1 -β) = 0.80. To ensure that the groups end the 12-month follow-up with an "n" greater than or equal to 32 participants per group, we added 10% due to possible losses, ending with an "n" equal to 35 participants per group. Recruitment Eligible subjects for this study will be selected from a pool of participants of a larger study entitled "Prevention and Treatment of Depression in Elderly", conducted in São Paulo, Brazil.When the main study was readied in 2009, the population of São Paulo, according to projections of SEADE, was 10,998,813 inhabitants, with 11.53% of people aged 60 years or older.The council was composed of 96 districts, from which Raposo Tavares and Rio Pequeno were chosen because their Basic Health Units (BHU) and the teams of the Family Health Program (FHP) were managed by the Faculty of Medicine Foundation, with the coordination of a board of directors composed of professors of the faculty of medicine of the University of São Paulo.The West Region Project, as it was called, included the units of the Raposo Tavares and Rio Pequeno districts, with 5 BHU and 29 FHP teams integrated into the design and 98,716 people registered since the end of 2009. Interviews were conducted with 2,673 elderly people in their homes by lay interviewers, who applied a structured psychiatric interview, after receiving specialized training from professionals involved in the research.Of the total number of interviewees, 153 subjects had failed screening: 17 subjects were less than 60 years old, and 136 elderly people had at least one item absent in the Center for Epidemiological Studies-Depression (CES-D) or in the Mini Mental State Examination (MMSE), the two core questionnaires for screening the individuals in the study.However, 46 subjects from those who had at least one item absent for one of these two scales received further medical evaluation, with reapplication of the questionnaires; in these cases, the elderly were included in the valid sample of the study, and the 17 subjects who were younger than 60 years and the 90 individuals who had incomplete data on at least one of the two scales and did not receive further medical evaluation were excluded.This left a valid sample of 2,566 elderly participants. The main objective of this first phase of the larger study was to calculate the prevalence of clinically significant depressive symptoms.Older adults with clinically significant depressive symptoms and who do not fulfil criteria for depressive and/or anxiety disorders will be invited to participate in a randomized clinical trial with 2 intervention groups, with 35 older adults in each one.Figure 1 presents the flowchart of study participants. 
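The sample-size formula cited above (reference 24) is not reproduced in the text. Purely as an illustration, and not as a reconstruction of the authors' exact calculation, the sketch below applies a standard normal-approximation formula for comparing two independent proportions to the planning values stated in the protocol (expected 12-month incidence of 35% under usual care versus 10% under the intervention, α = 0.05, power = 0.80). Whether roughly 32 or 40 participants per group results depends on the variant assumed (one- versus two-sided test), so the figures printed below are indicative only.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80, two_sided=True):
    """Normal-approximation sample size per group for comparing two
    independent proportions (unpooled-variance textbook formula).
    This is a generic formula, not necessarily the one cited as
    reference 24 in the protocol."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Planning values from the protocol: 35% incidence (usual care) vs 10% (intervention).
print(n_per_group(0.35, 0.10, two_sided=True))   # -> 40 per group
print(n_per_group(0.35, 0.10, two_sided=False))  # -> 32 per group
# Adding roughly 10% for anticipated losses gives the 35 participants per arm used above.
```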
Outcome measures of the present study The primary outcome will be the cumulative incidence of major depressive disorder or anxiety disorders, after 12 months, assessed with the Mini International Neuropsychiatric Interview (MINI).Secondary outcomes will be the reduction of depressive and/or anxiety symptoms, evaluated with the Center for Epidemiologic Studies Depression Scale (CES-D) and improvement in quality of life, assessed with the 36-Item Short Form Health Survey (SF-36). CES-D The Brazilian version of the CES-D 25 will be used for screening for clinically significant depressive and anxiety symptoms.This scale contains 20 items (scores range from 0 to 60) and evaluates behaviours and feelings that occurred in the last two weeks.The cutoff point used in most of the studies to identify individuals at risk of depression is greater than or equal to 16, and higher values suggest a direct relationship with depression severity 26 .However, this cutoff point is not a rule, and other studies have used (or have found) higher or lower cut-off points for screening of depressive symptoms, depending on the characteristics of the samples evaluated [27][28][29] .One study that evaluated the effectiveness of the CES-D in screening for clinical depression in a sample of 1,005 subjects aged 50 or more found that the cut-off point that maximized both sensitivity and specificity for the total sample was 12 -the area under the ROC curve (AUC) was 0.86, with 76% sensitivity and 77% specificity 29 .Different factors (including socioeconomic issues) may contribute to the identification of the most appropriate CES-D cut-off point for each study 30 .Since we investigated a population living under adverse socioeconomic conditions, we predicted that there would be a large proportion of elderly people with depression or presenting mild cognitive impairment/dementia.Considering this specificity of our studied population, we reduced the cut-off point of the CES-D scale to greater or equal to 13, rather than using the standard cut-off of ≥ 16. Correlations between GAI and CES-D Anxiety symptoms will be assessed through the following four CES-D questions: 1.I was bothered by things that don't usually bother me; 2. I did not feel like eating; my appetite was poor; 3. I had trouble keeping my mind on what I was doing; and 4. My sleep was restless.To establish the correlations between the two scales, we have traced linear correlations according to the Pearson method between the CES-D score and the Geriatric Anxiety Inventory (GAI) score, in addition to the four selected questions (described above).We found statistically significant correlations of "strong" intensity (r = 0,714; p < 0.001 Pearson correlation). D-10 D-10 is a screening scale for depressive symptoms with 10 issues (scores range from 0 to 10), which was developed by our group to be used in a community prevalence of dementia study, evaluating the presence of symptoms in the last two weeks.Six of the items are based on the "Geriatric Depression Scale" (GDS) 31 , one item is from the CES-D, and three additional items were chosen through a consensus of researchers.In a previous community study, the D-10 showed internal consistency, measured by Cronbach's alpha of 0.72, and high agreement with the GDS-5 (rs = 0.85; p < 0.001).Considering the GDS-5 as the gold standard, the D-10 had a sensitivity of 78.8%, a specificity of 95.5%, a positive predictive value of 60.3%, and a negative predictive value of 98.1% 32 . 
MINI diagnostic interview The MINI is a short, structured diagnostic instrument (15 to 30 minutes) used to identify psychiatric disorders according to the DSM-IV and ICD-10 33 .It has been used in several epidemiological studies and in clinical psychopharmacology 19 ; it was translated and validated for the Portuguese language and administered by medical residents in a family medicine programme 23 .The following MINI modules will be administered by trained physicians: depressive disorder, dysthymia, suicide risk, (hypo)maniac episode, panic disorder, agoraphobia, social phobia, alcohol abuse/dependence, and generalized anxiety disorder. Cognitive assessment Cognitive assessments will be made with the CAMCOG brief neuropsychological battery, which is part of the Cambridge Mental Disorders of the Elderly Examination structured interview (CAMDEX) 34 .CAMCOG (scores range from 0 to 107) was translated and adapted to the Portuguese language 35 ; it takes 20 to 30 minutes to be administered and has 67 items -including the Mini Mental State Examination (MMSE) and Verbal Fluency -which assess orientation, language, memory, praxis, attention, abstract thinking, perception and calculation.The MMSE 36 is the cognitive screening test most widely used worldwide, and it evaluates five areas of cognition: orientation, registration, attention and calculation, recovery, and language, with scores ranging from 0 to 30 points.In the initial screening of this study, MMSE ≥ 13 will be used as the cut-off point 37 , considering the low level of education and the large number of illiterates in the studied population.However, as the subjects become candidates for randomization, the MMSE will be evaluated according to schooling.In these cases, the following cut points will be used 38,39 : illiterates < 18; 1 to 4 years of schooling < 23; 5 to 8 years of schooling < 25, and more than 9 years of schooling < 26. Another instrument of cognitive evaluation that will be used in the study is the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE), which contains 26 questions (scores from 1 to 5 points), in which the informant evaluates the patient's current performance in different life situations compared to the performance observed 10 years ago.IQCODE is valid for dementia screening in the general population 40 , as well as in clinical practice 41 .The application of the long (26 items) and short (16 items) IQCODE by our group suggests that the two versions can be used for the screening of mildto-moderate cases of dementia in Brazil 42 .The short version of the IQCODE will be applied to the relatives/informants of the elderly, using a cut-off point ≥ 3.53 to identify possible cases. B-ADL Functional assessment of patients will be done using the Bayer Activities of Daily Living Scale (B-ADL).The B-ADL has 25 questions comprising 13 areas of daily living, and it is applied to a relative or informant.The B-ADL is a brief instrument and can be used by general practitioners and in primary care, both for tracing and evaluating the effects of treatment and the progression of dementia 43 .The Portuguese version was validated by our group, showing high internal consistency (Cronbach's alpha = 0.981) and the ability to differentiate elderly patients with mild-to-moderate dementia 44 . 
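The screening rules described above (CES-D ≥ 13, a negative MINI for depressive and anxiety disorders, and an MMSE at or above the education-adjusted cut-off) can be expressed as a simple decision function. The sketch below is a hypothetical illustration using the cut-off values quoted in the text; it is not part of the study's actual data-management software, and the handling of exactly nine years of schooling is an assumption.

```python
def mmse_cutoff(years_of_schooling: int) -> int:
    """Education-adjusted MMSE cut-offs quoted in the protocol; scores
    below the cut-off suggest cognitive impairment and lead to exclusion."""
    if years_of_schooling == 0:   # illiterate
        return 18
    if years_of_schooling <= 4:
        return 23
    if years_of_schooling <= 8:
        return 25
    return 26                     # assumed to cover 9 or more years of schooling

def eligible_for_randomization(ces_d: int, mmse: int,
                               years_of_schooling: int,
                               mini_positive: bool) -> bool:
    """Subsyndromal symptoms (CES-D >= 13), no depressive or anxiety
    disorder on the MINI, and no suspicion of cognitive impairment."""
    return (ces_d >= 13
            and not mini_positive
            and mmse >= mmse_cutoff(years_of_schooling))

# Example: CES-D 15, MMSE 24, 6 years of schooling, MINI negative -> True
print(eligible_for_randomization(15, 24, 6, mini_positive=False))
```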
36-Item Short Form Health Survey (SF-36) SF-36 measures the quality of life related to health 45 .The thirty-six items assess the events in the last four weeks and are classified into eight separate areas: functional capacity, physical functioning, pain, general health, vitality, social role, emotional role and mental health.This scale has been translated, validated, and adapted to Brazil 46 and has been widely used in clinical studies of patients with cancer and other chronic diseases 47 . Assessments The evaluation of physical activity to prevent depression and/ or anxiety in the elderly in primary care has not been properly investigated.The intervention programme to be applied in this study consists of four steps (described below), lasting three months each: Step 1 -Watchful Waiting Participants with scores on the CES-D scale ≥ 13 who have a negative MINI score for depression and/or anxiety and are not suspected of having mild cognitive impairment or dementia will wait 3 months after medical evaluation to be re-evaluated.This period of "watchful waiting" is indicated to observe if the individual does not present spontaneous remission of depressive and/or anxiety symptoms.In the reassessment, the following questionnaires will be applied: the CES-D, SF-36, MMSE, and Verbal Fluency.If the subject continues to present subsyndromal depressive symptoms (with CES-D ≥ 13), he/she will be randomized into one of the arms of the study (physical activity or usual care). Step 2 -Physical Activity Intervention 1 This step is taught by a physical educator and takes place in the home, with a duration of 50 minutes for each session, over a 3-month period.Twice a week, participants will be taught to practice physical exercises for strength and stretching (elongation); planned goals for this intervention include the practice of aerobic physical activity (walking) at least 3 times a week.The evaluation of frequency and duration of physical activity will be assessed with a pedometer. Step 3 -Physical Activity Intervention 2 After these 3 months of intervention, the CES-D and the MINI will be re-administered.The participant who continues to present scores on the CES-D ≥ 13 and have a negative MINI score for depression and/ or anxiety will receive a new period of 3 months of assisted physical activity, with 24 more meetings at home, lasting 50 minutes each. Step 4 -Referral to Primary Care After that, if there is still a CES-D score ≥ 13, a further period of 3 months will begin, but the elderly will not receive any intervention, as in step 1, when "watchful waiting" is performed.At the end of this period the following questionnaires will be reapplied: the CES-D, SF-36, MMSE, and Verbal Fluency.If the CES-D scores remain high, participants will receive guidance about the need to receive a specific medication that they can discuss with their doctors. In the first contact with the subject, physical educators will conduct an assessment to verify if he/she is able to perform scheduled activities.Strength tests and the short version of the International Physical Activity Questionnaire Short Form (IPAQ-SF) will be applied 48 .This latter instrument consists of seven open questions that will allow for estimating the time spent by the elderly in different physical activities (hiking and physical efforts of different intensities) the week before starting the proposed interventions; physical inactivity will also be assessed. 
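The four steps above amount to a decision sequence driven by the CES-D score (and, after the activity blocks, the MINI) at each three-month re-assessment. The following schematic sketch summarizes that flow; the function name and return strings are illustrative labels chosen here, not wording taken from the study materials.

```python
def next_step(completed_step: int, ces_d: int, mini_positive: bool) -> str:
    """Schematic stepped-care flow: after each 3-month block the CES-D
    (and, where applicable, the MINI) is re-administered."""
    if mini_positive:
        return "incident depressive/anxiety disorder: primary outcome reached"
    if ces_d < 13:
        return "subsyndromal symptoms remitted: no further stepped intervention"
    if completed_step == 1:   # after watchful waiting
        return "randomize; intervention arm starts supervised physical activity"
    if completed_step == 2:   # after the first 3 months of physical activity
        return "second 3-month block of supervised physical activity"
    if completed_step == 3:   # after the second activity block
        return "3 further months without intervention (watchful waiting)"
    return "refer to primary care to discuss medication options"

print(next_step(1, ces_d=14, mini_positive=False))
```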
At each meeting with physical educators, standardized exercises (contained in the protocol drawn up for this purpose) will be performed, lasting approximately one hour. Subjects will be instructed to walk at least 3 times a week for a period of 30 minutes, and they should write down on a weekly record if scheduled activities were completed. The elderly participants will receive a pedometer, which is a mechanical counter that registers movements performed in response to vertical acceleration of the body, and he/she will be informed about its purpose.They will also be instructed to place the unit at the waist and to use it all day, removing it only at bedtime.The pedometer measures the daily steps of the individual and the daily caloric loss and covered distance; data will be downloaded once a week by responsible staff. Regarding the usual care arm, subjects in this group will have unrestricted access to usual care for depressive and/or anxiety symptoms.Their use of health services and use of prescribed medications will be recorded.Assessments in this arm will be done with the same questionnaires and in the same timeframe that will be used for the physical activity group. The randomization process In this study, we will adopt a stratified randomization process according to sex and age.The random distribution list of the subjects will be generated by the online software "Randomization" (http:// www.randomization.com). To guarantee a balance between the two groups (physical activity and usual care) in relation to sex and age, we will use stratified randomization at the time of inclusion in the study.Therefore, a random distribution list of these two covariates will be generated for allocation into the two groups of the study. Control of the concealment of the sequence of randomization and stratification will be guaranteed by the use of REDCap software 49 .Initially, the allocation list previously developed in "randomization.com" software will be imported into REDCap; then, stratification according to sex and age will be enabled in REDCap; finally, to add a new patient into the study, he/she will automatically be allocated to the study treatment group in accordance with the characteristics defined in the model of stratified randomization.The advantage of using the REDCap randomization module lies in four aspects: 1) It is not possible to have knowledge of the random distribution list; 2) you cannot change the sequence of randomization; 3) all user actions can be monitored, thereby enabling the data security control and confidentiality of randomization; and 4) the database manager (REDCap) can define the hierarchy levels for access to the randomization module, thereby avoiding the access by data collectors and researchers to the randomization control tools.Thus, we can say that the method chosen for randomization is in accordance with the CONSORT standards 50 . Data analysis All data will be collected using specially designed questionnaires and uploaded to a Web-based data programme, "REDCap", which was developed at Vanderbilt University 49 .Data will be stored on a central server, enabling automatic periodic reports that will be created to check the quality and consistency of data.The data collection questionnaires will be designed to be compatible with international standards, which will allow our data to be combined in the future with similar databases of researchers in Brazil and abroad.Statistical analyses will be performed using SPSS 22.0 software for Windows and STATA 9.0. 
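Allocation, as described above, is stratified by sex and age and concealed through REDCap's randomization module. The stand-alone sketch below illustrates the general idea of stratified permuted-block randomization; the block size and age bands are assumptions made for the example and do not reflect the actual REDCap configuration or allocation list used in the study.

```python
import random
from collections import defaultdict

ARMS = ["physical_activity", "usual_care"]
BLOCK_SIZE = 4                      # assumed: two assignments per arm per block
_blocks = defaultdict(list)         # one open block per stratum

def stratum(sex: str, age: int) -> tuple:
    """Illustrative strata: sex crossed with two age bands (assumed)."""
    return (sex, "60-74" if age < 75 else "75+")

def allocate(sex: str, age: int, rng=random) -> str:
    """Draw the next assignment from the stratum's shuffled block, so the
    two arms remain balanced within each sex/age stratum."""
    key = stratum(sex, age)
    if not _blocks[key]:
        block = ARMS * (BLOCK_SIZE // len(ARMS))
        rng.shuffle(block)
        _blocks[key] = block
    return _blocks[key].pop()

random.seed(2019)
for sex, age in [("F", 68), ("F", 70), ("M", 80), ("F", 66)]:
    print(sex, age, allocate(sex, age))
```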
To test the hypothesis that the intervention with physical activity will be more successful than usual care in reducing the risk of depressive and anxiety disorders, logistic regressions of the outcome (1 = disorder and 0 = no disorder) on the treatment indicator (0 = usual care and 1 = intervention) will be performed to estimate the odds ratio (OR) and relative risk (RR), which will describe the reduction in the risk of presenting a depressive or anxiety disorder in the intervention group compared to the control group. The 95% confidence interval will be reported. The valid final sample was classified into different strata of average household income, according to a classic Brazilian approach. This criterion takes into account the purchasing power of the subjects and their families, who live in urban areas, and presents four strata of average household income 51. Most of the participants (55.5%) were classified in an economic stratum of household income between US$ 543 and US$ 793 (the third level of the criterion). The second criterion level was represented by almost a quarter of the sample (26.7%), who presented household income between US$ 930 and US$ 1,792. About 8.7% of families (n = 225) were classified in the lowest criterion level, with an average household income equal to or less than US$ 380. The highest level of household income (≥ US$ 3,295) was represented by only 0.8% of the sample (n = 21) (Figure 2). It is interesting to note that two-thirds of the elderly sample declared themselves as the main providers of household income (69.5%).

Scales of depressive symptoms and cognitive and functional tests

In Table 1, we present the results of the evaluation scales of depressive symptoms and cognitive tests. Figure 3 shows the relative frequency of depressive symptoms (CES-D, D-10), general cognitive status (MMSE) and functional impairment (IQCODE) for the total sample (n = 2,566). The high percentage of subjects with clinically significant depressive symptoms (> 40.0%) deserves special mention. Compared with the results of a meta-analysis conducted by our group in elderly Brazilian adults living in the community 12, which found a prevalence of 26.0% for clinically significant depressive symptoms, the above data are astonishing. 
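The planned analysis regresses the 12-month outcome on the treatment indicator to obtain an odds ratio with its 95% confidence interval (the protocol specifies SPSS and STATA for this). Purely for illustration, the sketch below runs that step in Python on simulated data generated from the planning incidences of 35% and 10%, using a larger simulated sample than the trial for a stable example; the numbers it prints are not study results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated trial: 200 participants per arm; outcome coded 1 = disorder.
treatment = np.repeat([0, 1], 200)           # 0 = usual care, 1 = intervention
p_event = np.where(treatment == 1, 0.10, 0.35)
disorder = rng.binomial(1, p_event)

# Logistic regression of the outcome on the treatment indicator.
X = sm.add_constant(treatment)
fit = sm.Logit(disorder, X).fit(disp=False)
odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])

# Relative risk computed directly from the two arms.
rr = disorder[treatment == 1].mean() / disorder[treatment == 0].mean()

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), RR = {rr:.2f}")
```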
Discussion It is known that clinically significant depressive symptoms can increase the risk of developing depressive disorder by almost 40% 52 .Such symptoms are more frequent than major depression in elderly people living in the community (7.0 x 26.0%); are associated with significant psychosocial impairment; can increase the risk of physical incapacity, clinical diseases, and use of health services; and are not usually recognized by health professionals 12,18,53 .Nonpharmacological interventions may be effective in preventing the development of depressive disorders in the elderly and may be quite useful in primary care.A recent review of the literature has demonstrated that different therapy techniques (such as cognitive behaviour therapy, competitive memory training, reminiscence group therapy, problem-adaptation therapy, and problem-solving therapy) have been able to reduce depressive symptoms in individuals who 65 years old; this highlights their usefulness in clinical practice and the benefit of this type of interventions, which offers minimal risks of side effects, especially in the elderly, who constitute a group in which comorbidities are more frequent and pharmacological treatments increase the chances of drug interactions 54 .Consistent with these findings, a meta-analysis on brief psychotherapeutic interventions for elderly people with subsyndromic depressive symptoms has shown that they can reduce the incidence of depression by 30% 53 .Another type of non-pharmacological intervention, one practically not evaluated in representative population studies, is physical activity.Physical activity reduces the occurrence of problems of balance, coordination and agility; decreases bone mass loss; increases cardiorespiratory function; promotes muscle strengthening; and reduces the risk of falls and fractures in the elderly 55,56 .Another benefit that deserves to be highlighted is its role in forming social bonds, expanding contact networks and emotional support 57 .To the best of our knowledge, this is the first community-dwelling study, with the methodology described, aimed at the elderly population with subsyndromic depressive symptoms. In our randomized clinical trial, we may find that the elderly participants are resistant to perform physical activity with the necessary regularity.However, the presence of the physical educator in the participant's residence, the instructions provided and the benefits of the practice of the physical activity will be constantly reinforced; this will likely guarantee the minimum accomplishment of the exercises at least twice per week.In addition, the elderly participants will use a pedometer that will record their movements and caloric losses during the study period, allowing the team to monitor whether the exercises are being practised or not. Another problem that we need to consider is the drop-out rate for the study, both for those who are in the intervention arm with physical activity and for those in the control group.If this occurs, in addition to having a trained professional visit the residence of the individual to learn what happened and see if we can help to solve the problem, the medical team will also contact the elderly participants to solve any doubts, as well as to encourage him/her to continue in the study.We believe that with these initiatives we will reduce the number of participant losses during the period, strengthening the bond with the subject of the research. 
Regarding the socioeconomic characteristics of the sample studied, it is interesting to note that although most of the elderly are retired, approximately 70% of the sample declared themselves as the main providers of family income. This situation is commonly reported in times of economic crisis 58. The distribution of subjects among the four categories of family income was relatively similar to the results reported for the metropolitan region of São Paulo 51. Most of the subjects were classified as having an intermediate family income (US$ 543-793 and US$ 930-1,792), although it was possible to perceive a higher proportion of elderly people in the US$ 543-793 category when compared to the results reported for São Paulo. The frequency of subjects in the sample classified into the highest category of the criterion was also different from the results reported for São Paulo, being approximately five times lower. Nevertheless, for the lowest category of family income, the frequency of the elderly was similar to the results reported for São Paulo. The less favoured socioeconomic characteristics of our sample reinforce the possible value of the study if it demonstrates the effectiveness of a cheap and scalable intervention in this type of setting in the prevention of depression in the elderly. Depending on the results that will be obtained in this study, new health policies could be implemented, aiming to reduce the number of elderly people with depression seen in primary care. In addition, training may be implemented for family health teams so that screening tools could be used to make an early identification of individuals with (or at risk of developing) mental disorders.

Figure 3. Relative frequency of depressive symptoms, general cognitive status and functional impairment (n = 2,566).

Table 1. Results of depressive symptom scales and cognitive tests (n = 2,566); comparison between the results of our sample and the results of a population sample from the metropolitan region of São Paulo.
v3-fos-license
2017-06-27T23:02:22.415Z
2014-10-30T00:00:00.000
2297729
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jeatdisord.biomedcentral.com/track/pdf/10.1186/s40337-014-0027-x", "pdf_hash": "69bf1608b8b0dcbbd2030ebf31adf46f5fcf71b7", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41159", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "0efab6633ca19d47af7ce3bee2ea351720c447a8", "year": 2014 }
pes2o/s2orc
Meta analysis on the efficacy of pharmacotherapy versus placebo on anorexia nervosa

Background
Anorexia Nervosa (AN) has a devastating impact on the psychological and physical well-being of affected individuals. There is an extensive body of literature on interventions in AN; however, more studies are needed to establish which form of pharmacotherapy is effective. The few meta-analyses that have been done are based on one type of medication only. This article is the first to present data on the three most commonly used forms of pharmacotherapy. The primary objective of this meta-analysis was to create an overview and to determine the efficacy of three forms of pharmacotherapy (antidepressants, antipsychotics, hormonal therapy) compared to treatment with placebo in patients with AN.

Method
A systematic literature search was performed to identify all randomized controlled intervention trials investigating the effectiveness of pharmacotherapy for AN within the following databases: PubMed, PsycINFO, Embase and Cochrane Library. In addition, 32 relevant reviews and meta-analyses were screened for additional intervention studies. A meta-analysis was performed on a total of 18 included studies (N = 869). Efficacy was measured in terms of weight gain or weight restoration.

Results
The pooled effect sizes indicating the difference between antidepressants and placebo, and between antipsychotics and placebo, on weight were not significant. Because of the small sample size, no meta-regression or subgroup analyses could be conducted. The pooled effect size indicating the difference between hormonal therapy and the placebo condition on weight (all weight measures) at post-treatment was 0.42 (95% CI: 0.11 ~ 0.73), which was significant. For hormonal therapy heterogeneity was high (I2 = 64.70). No evidence for publication bias was found. Meta-regression analyses of the weeks of medication treatment (slope = −0.008) yielded a significant effect (p = 0.04).

Conclusions
In this study we found that hormonal therapy has a significantly larger effect on weight compared to placebo in the treatment of AN. However, for these analyses heterogeneity was high, which means that these results have to be regarded with caution. We found that antidepressants and antipsychotics had no significant effect on weight compared to placebo in the treatment of AN, although the power to detect significant effects was too low. Electronic supplementary material: the online version of this article (doi:10.1186/s40337-014-0027-x) contains supplementary material, which is available to authorized users.

Anorexia nervosa
Anorexia Nervosa (AN) has a devastating impact on the psychological and physical well-being of affected individuals. The disorder is characterized by restriction of caloric and energy intake leading to a low body weight, an extreme fear of gaining weight, and preoccupation with behavior that avoids weight gain, and it is usually accompanied by a significant disturbance in the perception of the shape and/or size of a person's body. In addition, body shape and weight have an undue influence on the affected person's self-esteem and self-evaluation [1]. At least 90% of individuals with AN are female. The prevalence is 0.5%-1.0%. The course and outcome of AN are highly variable. AN has a high comorbidity with depressive symptoms, anxiety and obsessive-compulsive symptoms. The etiology largely remains to be discovered.
However, it is clear that a combination of multiple factors (genetic, neurobiological as well as psychosocial) leads to the development of this disease [2]. Treatment goals in AN include restoration of normal body weight, treatment of physical complications, establishment of a normal and healthy eating pattern, and improvement of body image and self-esteem. Equally important is the improvement of other comorbid psychological symptoms. Because AN is associated with other psychopathology, the treatment is often long and may involve several stages and intervention types [2].

Treatment
Pharmacotherapy is often used in the treatment of AN. The fact that pharmacological interventions are established forms of treatment for several disorders that overlap with AN has led many to conclude that pharmacotherapy may be useful in symptom reduction in AN [3]. According to Claudino et al., the rationale for pharmacological treatment of AN is based on neurobiological research into the control of appetite and food intake, on biological models of AN, and on clinical observations and uncontrolled studies [4]. The focus of pharmacological interventions in AN depends on the phase of illness. In the acute phase drugs are given to increase body weight and reduce AN symptoms (such as recurring thoughts about weight, caloric intake, depression, anxiety and obsessive/compulsive symptoms). In the second phase pharmacotherapy is expected to improve underlying psychopathology and prevent relapse [4,5]. Various types of medication have been studied in the treatment of AN: antidepressants, antipsychotics, nutritional supplementation and hormonal medication [3,5-7]. Agras and Robinson conclude in their review that there are no evidence-based psycho-pharmacological treatments available for either adolescent or adult patients with AN [8]. Two recent meta-analyses have reported on the efficacy of antipsychotics in anorexia nervosa [9,10]. Two other meta-analyses have been published on the efficacy of antidepressants and estrogen preparations [4,11]. In these meta-analyses different outcome measures were used, for example not only weight but also bone health [11]. The overlapping conclusion of these articles is that more research needs to be done, and they recommend the use of similar outcome measures. In this article we have taken a step in that direction. This is the first study to present meta-analyses on the three most commonly used forms of pharmacotherapy (antidepressants, antipsychotics, hormonal therapy) and to report on the primary outcome measure for anorexia nervosa: weight.

Lack of evidence
Considering the abovementioned findings, evidence on pharmacotherapy compared to placebo treatment is scarce. Despite a considerable number of trials performed to elucidate the efficacy of three different forms of pharmacotherapy on AN, it remains unclear how far the results can be attributed to placebo effects. This can be explained by a number of factors: many patients with AN are difficult to engage in medical treatment and are unwilling to participate in randomized controlled trials, and many of these patients are so ill that they require a multiplicity of interventions [5]. As Powers and Santana state, surprisingly few studies have been undertaken for AN and, moreover, few studies have actually evaluated medications known to cause weight gain [12].
Although there are several studies published on AN as a symptom of other diseases, for example cancer, we chose to focus on AN as an eating disorder and on randomized controlled trials with a placebo condition. There are various systematic reviews exploring pharmacotherapy for AN, but as far as we know only four meta-analyses have been published on the subject: one meta-analysis by Claudino et al. [4] that examines the efficacy of antidepressants for AN, one that reports on the effects of estrogen preparations [11], and two on the efficacy of antipsychotics [9,10]. The present study adds to the available body of evidence by providing an overview of the three most commonly used pharmacological treatments for AN: antidepressants, antipsychotics and hormonal medication. The meta-analysis of Sim et al. [11] does not report on weight as a primary outcome measure, but on bone density loss. They also included cohort studies with "no medication" control groups instead of placebos. Thus, the present meta-analysis is the first one to report on the effects of hormonal medication on weight restoration. Furthermore, this meta-analysis contains an up-to-date search; the meta-analyses on antidepressants and estrogen preparations [4,11] performed their searches until April 2005 and March 2008, respectively. We chose to focus on pharmacotherapy versus placebo conditions only, while previous meta-analyses have also included studies comparing pharmacotherapy with treatment-as-usual [9] and studies comparing pharmacotherapy with pharmacotherapy [4,10]. Finally, subgroup analyses were performed where possible; only two previous meta-analyses have performed subgroup analyses [9,10].

Search strategy
This meta-analysis is part of a broader meta-analysis project on eating disorders. An extensive electronic database search for open and randomized controlled trials (RCTs) was conducted within the following databases: PubMed, PsycINFO, Embase and Cochrane Library. The search terms (both text words and MeSH terms) included a wide range of combined terms indicative of eating disorders (e.g. anorexia nervosa, bulimia nervosa, binge eating disorder, eating disturbance) and therapy (e.g. psychotherapy, nutrition therapy, counselling). Additional file 1 ("PubMed search string") contains an example of the search terms used. The complete search terms and filters used are available on request from the corresponding author. The screening process consisted of a number of steps. During all screening phases, the references were rated by three independent researchers (JdV, GK, LH). Disagreements were discussed and resolved by consensus, and in cases of unresolved disagreement a senior reviewer (JD) was consulted. Studies were then either included or excluded from further analysis. The first step consisted of the application of the inclusion criteria to the 9722 abstracts and titles. Studies were selected if they (a) reported an author/had an abstract, (b) were about treatment of eating disorders, and (c) were written in English or Dutch. In the second phase the database was split into three eating disorder groups (Anorexia Nervosa, Bulimia Nervosa and Binge Eating Disorder). In the third phase we screened 32 earlier reviews and meta-analyses concerning treatment of eating disorders for additional relevant studies.

Selection of studies
The next step was to proceed with the anorexia nervosa studies. For this meta-analysis we focused on randomized controlled trials.
A total of 139 potential anorexia RCTs remained for a subsequent full-text screening. The database was split into two forms of treatment, pharmacotherapy and psychotherapy. In this meta-analysis, we included studies if they (a) were a randomized controlled trial, (b) compared pharmacotherapy with a placebo-controlled condition, and (c) reported on patients with Anorexia Nervosa with a minimum age of 12 years. In addition, (d) outcome had to be measured in terms of weight gain. Studies in the acute and maintenance phases of treatment were both included.

Meta-analysis
We conducted meta-analyses on a) antidepressants, b) antipsychotics and c) hormonal medication. These were the most commonly used pharmacological treatments. For all three we conducted a separate meta-analysis on weight, the primary measure, comparing pharmacotherapy versus placebo.

Primary outcome measures
Most randomized controlled trials report some kind of weight measure as primary outcome measure. Only published data were used. The selection of primary outcome measures for this meta-analysis includes a range of weight-related variables: efficacy at the end of treatment, measured in terms of weight gain or weight restoration. Effect sizes (Cohen's d) were computed for each of the primary studies. The post-treatment effect sizes were calculated by subtracting the average post-treatment score of the pharmacotherapy condition from the average post-treatment score of the placebo condition and dividing the result by the pooled standard deviation of both conditions (a worked sketch of this computation is given at the end of this section). Effect sizes of 0-0.32 are considered to be small, whereas effect sizes of 0.33-0.55 are moderate, and effect sizes of 0.56-1.2 are large [13]. To calculate the pooled mean effect size, we used the statistical computer program Comprehensive Meta-Analysis (version 2.2.021; Biostat, Englewood, NJ, USA). Only measures explicitly describing weight at post-treatment were used. When means and standard deviations were not presented, we used other statistics (e.g. t-value, p-value) to compute the effect size (n = 4). When neither the means and standard deviations nor a statistical test between the relevant scores was presented, the study was excluded because the data were not suitable for this meta-analysis. As an indicator of homogeneity, we calculated Q-statistics. A significant Q-value rejects the null hypothesis of homogeneity. We also calculated the I2 statistic, which is an indicator of heterogeneity in percentages. A value of 0% indicates that there is no observed heterogeneity; higher percentages indicate increasing amounts of heterogeneity, with a value of 75% indicating a high amount of heterogeneity [31]. Publication bias was tested according to Duval and Tweedie's trim-and-fill procedure [14] using Comprehensive Meta-Analysis. We ran the publication bias analyses on all primary outcome measures.

Subgroup analyses
Subgroup analyses were performed using the procedures implemented in Comprehensive Meta-Analysis (version 2.2.021; Biostat, Englewood, NJ, USA). In these subgroup analyses, studies were divided into two or more subgroups. For each subgroup the pooled mean effect size was calculated, and a test was conducted to examine whether the subgroups' effect sizes differed significantly from one another. We used the mixed-effects model for subgroup analyses, which pools studies within subgroups with the random-effects model but tests for significant differences between subgroups with the fixed-effects model.
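The following short Python sketch illustrates the computations described above: a between-group Cohen's d using a pooled standard deviation, an approximate sampling variance per study, and DerSimonian-Laird random-effects pooling with the Q and I2 heterogeneity statistics. It is an illustrative reconstruction only; the trial summaries below are hypothetical, the sign convention (positive values favour the treatment arm) is our choice, and the published analyses were run in Comprehensive Meta-Analysis rather than with this code.

```python
import math

def cohens_d(mean_treat, sd_treat, n_treat, mean_plac, sd_plac, n_plac):
    """Between-group Cohen's d at post-treatment, using the pooled SD.
    Sign convention here: positive values favour the treatment condition."""
    pooled_sd = math.sqrt(((n_treat - 1) * sd_treat ** 2 + (n_plac - 1) * sd_plac ** 2)
                          / (n_treat + n_plac - 2))
    return (mean_treat - mean_plac) / pooled_sd

def variance_of_d(d, n_treat, n_plac):
    """Approximate sampling variance of d for two independent groups."""
    return (n_treat + n_plac) / (n_treat * n_plac) + d ** 2 / (2 * (n_treat + n_plac))

def pool_random_effects(ds, vs):
    """DerSimonian-Laird random-effects pooling with Q and I^2 heterogeneity."""
    w = [1.0 / v for v in vs]                        # fixed-effect (inverse-variance) weights
    d_fixed = sum(wi * di for wi, di in zip(w, ds)) / sum(w)
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, ds))
    df = len(ds) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0   # between-study variance estimate
    w_star = [1.0 / (v + tau2) for v in vs]           # random-effects weights
    d_pooled = sum(wi * di for wi, di in zip(w_star, ds)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return d_pooled, (d_pooled - 1.96 * se, d_pooled + 1.96 * se), q, i2

# Hypothetical post-treatment BMI summaries for three placebo-controlled trials.
trials = [
    # (mean_treat, sd_treat, n_treat, mean_plac, sd_plac, n_plac)
    (17.8, 1.9, 20, 17.1, 2.0, 20),
    (16.9, 2.2, 15, 16.6, 2.1, 16),
    (18.2, 1.7, 25, 17.5, 1.8, 24),
]

ds = [cohens_d(*t) for t in trials]
vs = [variance_of_d(d, t[2], t[5]) for d, t in zip(ds, trials)]
pooled, ci, q, i2 = pool_random_effects(ds, vs)
print(f"pooled d = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), Q = {q:.2f}, I^2 = {i2:.1f}%")
```

Running the sketch on these hypothetical inputs yields a pooled d with its 95% confidence interval and an I2 value interpretable on the 0-75% scale described above.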
We conducted subgroup analyses for the following characteristics:
- Mono treatment (only pharmacotherapy or not)
- Setting: inpatient, outpatient or other (not reported or a combination of in- and outpatient)

The risk of bias quality assessments were based on the domains described by the Cochrane Collaboration [15]. The domains were evaluated for each study by two independent researchers. A code was given for each domain: yes (= the study reports correctly on this domain and there is no risk of bias), no (= the study reports on this domain but according to the description the domain could be biased) and unclear (= the study does not provide sufficient information to make an assessment). Disagreements were discussed and resolved by consensus, and in cases of unresolved disagreement a senior reviewer (JD) was consulted. The domains that assessed the risk of bias are the following:
- Sequence generation: describes whether there was a random component in sequence generation
- Allocation concealment: describes whether assignment could be foreseen
- Blinding: were participants and/or personnel blinded to the treatment condition
- Incomplete data: did the study report on missing outcome data
- Selective outcome: were all expected outcomes reported
- Risk of bias: assesses total risk of bias

Meta-regression analyses were performed in order to assess whether pre-treatment mean weight (kg and BMI), mean age, duration of illness (in months) and number of weeks of treatment predicted the effect sizes. A significant positive or negative slope suggests that the variable is associated with the outcome.

Power calculation
Because we expected only a limited number of studies, we conducted a power calculation to examine how many studies would have to be included in order to have sufficient statistical power to identify relevant effects. We conducted a power calculation according to the procedures described by Borenstein and colleagues [16] (an illustrative sketch of this type of calculation is given after this section). We hoped to find a sufficient number of studies to be able to identify a small effect size of 0.3. These calculations indicated that we would need to include at least 20 studies with a mean sample size of 30 (15 participants per condition) to be able to detect an effect size of d = 0.30 (conservatively assuming a medium level of between-study variance, τ2, a statistical power of 0.80, and a significance level, alpha, of 0.05). Alternatively, we would need 15 studies with 40 participants each to detect an effect size of d = 0.30, or 14 studies with 50 participants.

Study characteristics
All 18 studies were randomized controlled trials, reporting on a total of 869 subjects. The control conditions included 438 subjects and the experimental conditions 431 subjects. Table 1 presents the study characteristics of these studies. The majority of the studies (N = 12) included adult patients, two reported on adolescents and three reported on both adults and adolescents. The number of patients in the experimental conditions ranged from 7 to 55 per study. Fourteen studies recruited participants from clinical populations. In the remaining four studies a combination of clinical and community recruitment was used. Most of the studies included patients with both the restricting and the binging/purging variant of AN (N = 16); one study reported on the restricting type only, and for one study it was unknown. Mean pre-treatment weight scores ranged from 14 to 18.1 (BMI). This indicates that patients were severely underweight. A normal, healthy BMI ranges from 20 to 25.
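As a rough illustration of the power calculation referenced above, the Python sketch below estimates the power of the summary effect in a random-effects meta-analysis for a given number of studies, per-arm sample size and true effect size. The exact settings used with Borenstein and colleagues' procedure are not reported in the text, so the modelling of "medium" between-study variance as two-thirds of the typical within-study variance and the two-tailed alpha of 0.05 are assumptions made here for illustration only.

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_random_effects(d, n_per_group, k, tau2_ratio=0.67):
    """Approximate power of the summary effect in a random-effects meta-analysis.

    d           : true standardized mean difference to detect
    n_per_group : participants per condition in each study
    k           : number of studies
    tau2_ratio  : between-study variance as a fraction of the within-study
                  variance (0.67 is assumed here as a 'moderate' level)
    """
    n1 = n2 = n_per_group
    v_within = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    tau2 = tau2_ratio * v_within
    v_summary = (v_within + tau2) / k          # variance of the pooled effect
    lam = d / math.sqrt(v_summary)             # non-centrality parameter
    z_crit = 1.959964                          # two-tailed alpha = 0.05
    return 1 - normal_cdf(z_crit - lam) + normal_cdf(-z_crit - lam)

# Scenarios quoted in the text: 20 studies of 30 (15 per arm),
# 15 studies of 40, and 14 studies of 50 participants, all for d = 0.30.
for k, n in [(20, 15), (15, 20), (14, 25)]:
    print(f"k = {k}, n/group = {n}: power ≈ {power_random_effects(0.30, n, k):.2f}")
```

Under these assumptions the three scenarios quoted in the text all come out near or above a power of 0.80 for d = 0.30, which is consistent with the authors' statement.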
Six studies reported on inpatients, ten on outpatients, one on patients in a day care program, and for one the setting was unknown. Only two studies had male patients. The majority of studies reported on hormonal treatments (N = 10 comparisons), six reported on antipsychotics and four reported on the use of antidepressants. A broad variety of adjunctive treatments were mentioned, such as individual therapy, group therapy, family therapy, behaviour therapy, caloric repletion, meal supervision, behavioural incentives and more.

The quality of the 18 included studies was not optimal. Although all studies were randomized controlled trials, only five studies were assessed as having a low risk of bias based on the Cochrane domains [15]. The rest of the studies were assessed as having a high risk of bias. Sixteen trials reported blinding for the assessors and/or the patients, and for two it was unknown. A random component in the sequence generation was reported in only nine studies; the rest of the studies used a non-random approach (N = 2) or did not report it (N = 7). Allocation concealment was done properly in only seven studies and with a high risk of bias in eleven studies. We also evaluated the handling of incomplete data, focusing on whether or not intention-to-treat analyses were performed. Incomplete data were adequately imputed in ten cases. Two studies performed completers-only analyses and six did not report it. Finally, fourteen studies reported all their outcomes, two studies reported only significant outcomes and for two studies it was unclear. Table 2 presents the quality assessment per study.

Weight was assessed in various ways. We report combined effect sizes. To ensure that this did not influence the results, we performed the meta-analyses including, for example, only BMI or only kg. This did not have an effect on the results; we therefore report on all weight measures. Some studies had weight measures such as weight gain and IBW gain [17,23,26,27,29,33]. To ensure that this was a valid approach we looked at the equivalence of the weight measures of the groups at baseline. Almost all studies (except Halmi, 1986) mentioned explicitly that there were no significant differences in weight at the time of randomization.

Pharmacotherapy versus placebo
Before performing the three meta-analyses of the most commonly used pharmacotherapy, we performed a meta-analysis comparing all three forms of pharmacotherapy with placebo. There were 20 studies that reported outcomes on weight. Figure 2 presents the outcomes of the meta-analysis. The pooled effect size at post-treatment was 0.33 (95% CI: 0.14~0.52), indicating a significant effect. Heterogeneity was medium/high (I2 = 40.08). There was one outlier [25]; when removed, the effect size decreased and became 0.25, still remaining significant. When the outlier was removed, heterogeneity became low (I2 = 0). There was no evidence of publication bias. Subgroup analyses yielded no significant differences between subgroups. Meta-regression analyses for mean age (slope = 0.014), duration of illness (slope = 0.014) and mean weeks of [...]

Note to Table 2: yes (= the study reports correctly on this domain and there is no risk of bias); no (= the study reports on this domain but according to the description the domain could be biased); unclear (= the study does not provide sufficient information to make an assessment).

Antidepressants versus placebo
There were 4 studies that reported outcomes on weight (see Figure 3).
The pooled effect size indicating the difference between antidepressants and the placebo condition on weight (all weight measures) at post-treatment was 0.26 (95% CI: −0.03~0.56), which was not significant (Table 3). The heterogeneity for antidepressants was low (I2 = 0). The effect size comparing antidepressants with placebo at post-treatment was higher when adjusted for publication bias (d = 0.35; 95% CI: 0.06~0.64; number of trimmed studies = 1, right of mean), indicating a significant effect. Because of the small sample size no meta-regression or subgroup analyses could be conducted.

Antipsychotics versus placebo
Figure 4 shows that there were 6 studies that reported the effects of antipsychotics on weight. The pooled effect size indicating the difference between antipsychotics and the placebo condition on weight (all weight measures) at post-treatment was 0.25 (95% CI: −0.09~0.60), which was not significant (Table 3). The heterogeneity for antipsychotics was low (I2 = 0.00). No evidence for publication bias was found. Because of the small sample size no meta-regression or subgroup analyses could be conducted.

Hormonal pharmacotherapy versus placebo
There were 8 studies that reported on weight and 10 comparisons because of multiple conditions in some studies ([24,25]; see Figure 5). Grinspoon et al. [24] administered 30 mg and 100 mg of recombinant human insulin-like growth factor I. In Grinspoon et al. [25] the therapeutic conditions consisted of recombinant human IGF-I and oral contraceptive administration. The pooled effect size indicating the difference between hormonal therapy and the placebo condition on weight (all weight measures) at post-treatment was 0.42 (95% CI: 0.11~0.73), which was significant (Table 3). For hormonal therapy heterogeneity was high (I2 = 64.70). No evidence for publication bias was found. There was one outlier [25]; when removed, the effect size decreased and became 0.26, still remaining significant. When the outlier was removed, heterogeneity became somewhat lower (I2 = 23.15). Meta-regression analyses of the weeks of medication treatment (slope = −0.008) yielded a significant effect (p = 0.04). Meta-regression analyses of BMI at the beginning of treatment (slope = −0.02326) and mean age (slope = 0.03932) yielded no significant effects (p = 0.83 and p = 0.05, respectively). Subgroup analyses for hormonal therapy did not yield any significant results. In this meta-analysis we included two studies in which two hormonal treatments were compared with the same control group [24,25], thus resulting in multiple comparisons in the same analysis. Because those comparisons are not independent from each other, this may have resulted in an artificial reduction of heterogeneity and have influenced the pooled effect size. We therefore performed a sensitivity analysis by including only one effect size per study. First, we conducted the analysis including only the comparison with the largest effect size from each such study, and then we did the same with the smallest effect size. As can be seen in Table 3, excluding the highest effect sizes resulted in a smaller (still significant) effect size and a large reduction in heterogeneity (I2 = 1.26).

Main findings
To our knowledge this is the first meta-analysis focusing on three forms of pharmacotherapy in the treatment of AN. By analyzing these data we aimed to provide a more detailed overview of the effectiveness of these treatments. When grouping all medication together, we found that pharmacotherapy is more effective than placebo.
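As a small arithmetic check on the significance statements above, a pooled effect size together with its 95% confidence interval implies an approximate standard error, z-value and two-tailed p-value. The Python sketch below applies this to the intervals reported in the text for hormonal therapy (0.42, CI 0.11 to 0.73) and antipsychotics (0.25, CI −0.09 to 0.60); it assumes symmetric normal-approximation intervals, which is our assumption rather than something stated by the authors.

```python
import math

def z_and_p_from_ci(d, lower, upper):
    """Recover the approximate z-value and two-tailed p from a pooled effect
    size and its 95% CI, assuming a symmetric normal-approximation interval."""
    se = (upper - lower) / (2 * 1.96)
    z = d / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Values as reported in the text (Table 3).
for label, d, lo, hi in [("hormonal therapy", 0.42, 0.11, 0.73),
                         ("antipsychotics",   0.25, -0.09, 0.60)]:
    z, p = z_and_p_from_ci(d, lo, hi)
    print(f"{label}: z ≈ {z:.2f}, two-tailed p ≈ {p:.3f}")
```

With these numbers the hormonal comparison comes out around z ≈ 2.7 (p ≈ 0.008) and the antipsychotic comparison around z ≈ 1.4 (p ≈ 0.16), matching the significant/non-significant pattern reported in the text.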
Grouping all medication together allowed an increase in power in order to perform subgroup and meta-regression analyses. Unfortunately, these did not yield significant results. When performing meta-analyses for the three most common medications separately, we found that hormonal therapy has a significantly larger effect on weight compared to placebo in the treatment of AN. This is a moderate effect size [13]. However, for these analyses heterogeneity was high, which means that these significant results have to be regarded with caution. The sensitivity analyses support this conclusion. Meta-regression analyses suggest that fewer weeks of hormonal treatment are associated with a better effect (a significant negative slope). It is possible that anorexia patients benefit in the short term when it comes to (hormonal) medication, but fail to achieve a better recovery in the long term. There are indications that, for example, alterations in the regulation of the hormone leptin may play a part in the persistence of anorexia nervosa. During recovery of anorexia patients, normalization of leptin levels seems to precede normalization of weight; this may be a contributing factor to the difficulties patients experience with maintaining normal weights, in this case for longer treatments [35]. Larger effect sizes for weight do not necessarily mean normalization of weight. Furthermore, weight goals differ depending on the treatment phase (acute phase or maintenance phase). In the acute phase the aim of treatment is mostly weight gain. However, weight gain alone cannot be considered a successful treatment of AN, as other symptoms may continue to exist (e.g. absence of menstruation, preoccupation with body weight). Most trials focus on weight restoration; unfortunately this is only one aspect of the complex pathology of AN.

We found that antidepressants had no significant effect on weight compared to placebo in the treatment of AN. Claudino et al. [4] found that antidepressants failed to improve not only weight but also eating-related psychopathology when compared to placebo. However, when we adjusted for publication bias, the effect became larger and significant, favouring antidepressants over placebo treatment. Claudino et al., however, only included subjects in the acute underweight phase, while this meta-analysis had broader criteria (acute and maintenance phase). This meta-analysis also suggests that antipsychotics had no significant effect on weight compared to placebo in the treatment of AN. The latter is in line with the findings of Kishi et al. [9] and Lebow [10]. Kishi et al. conclude on the basis of their results that "taken together, the currently available evidence seems to tilt the risk-benefit balance against antipsychotics in patients with anorexia nervosa". Considering the grave consequences of AN, all small steps that are helpful in treatment should be taken into consideration. However, the results of this study should be interpreted within the study's limitations.

Antidepressants and antipsychotics are the most commonly used pharmacological treatments for anorexia nervosa in the Netherlands. Yet in both cases we failed to reveal efficacy when compared to placebo. There has been a lot of research trying to explain the placebo response, focusing on non-specific factors, expectancy and conditioning. Some studies have tried to identify "placebo-prone personality types", with no consistent conclusions.
There are some indications that patients who use others as a healing resource and build positive relationships tend to benefit more from placebos, a hypothesis that supports the role of continuity of care and effective interpersonal relationships in producing those placebo effects [36]. The limited findings of antidepressant and antipsychotic trials also raise an ethical dilemma: should those medications be used to treat anorexia nervosa in the acute phase? Clinicians should consider whether it is more beneficial to treat anorexia nervosa patients with pharmacotherapy in the stabilisation or relapse-prevention phase, aiming not only at weight but also at secondary symptoms such as depression (see also the conclusion of Claudino et al. [4]). The design of the included studies also raises moral dilemmas. For example, how ethical is it to withhold treatment from anorexic patients and offer them a placebo instead? Furthermore, not all studies reported funding information. This aspect is essential when medication trials are published.

Limitations and recommendations
Several limitations of our meta-analysis caution against over-interpretation of the results. One methodological shortcoming is that the quality of the 18 included studies was not optimal and some of the studies had very small sample sizes. Six out of eight studies of hormonal therapy, for example, had an unclear or high risk of bias, yet those studies yielded positive findings. Power was an important problem: we could not include the number of studies that were necessary according to the power calculations, which unfortunately made it impossible to run subgroup analyses for antidepressants and antipsychotics. The studies contain relatively small patient samples, giving the results a narrow foundation. In addition, there is major variation in treatments and patient groups. The heterogeneity in the comparison between hormonal therapy and placebo was large. Furthermore, some patients in the experimental as well as the control conditions received various forms of adjunctive treatment, varying from very intensive clinical treatment to weekly outpatient sessions, making it difficult to draw conclusions. Some studies included adolescents as young as 12 years old; changes in weight could be attributed to growth and not medication, especially in longer RCTs. Other methodological shortcomings of the studies involve the outcome measures and the reporting of outcomes. Although we have tried to report on many outcome measures and subgroups, this on the one hand reduced our sample sizes considerably, making it very hard to draw conclusions; on the other hand, it led to many irrelevant results. Furthermore, the primary outcome measure was improvement of weight, as this is considered the first step in recovering from AN. Most of the trials therefore reported on weight, but unfortunately did so in various ways, making aggregation of the results very difficult, if not impossible. Moreover, a number of studies only reported significant results, leaving out non-significant but possibly relevant data and information. Because we included published data only, this may have caused a bias in our analyses. Some evidence of heterogeneity was found between studies. Two main reasons for this could be that the studies pursued similar objectives (improvement in the treatment of AN) but differed in their specific goals and in the expected actions of the pharmacological interventions.

Recommendations
Although considerable research has been devoted to AN, evidence concerning treatment efficacy is scarce.
As demonstrated in this meta-analysis, trials reporting on pharmacotherapy compared to placebo had major limitations. It is therefore extremely important that researchers systematically use the same outcome measures. Furthermore, the use of dichotomous data or of categories of levels of improvement in symptoms would facilitate future research. There is an urgent need for more and better-quality studies with a better operationalization of the terms of improvement for AN. RCTs need to focus not only on weight restoration but on a broader definition of improvement. In these future studies, researchers must also attend to issues of statistical power and research design.
v3-fos-license
2023-05-05T13:04:30.141Z
2023-05-04T00:00:00.000
258487037
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://mh.bmj.com/content/medhum/early/2023/05/03/medhum-2022-012491.full.pdf", "pdf_hash": "3b624d33b3c0cf57ef2397573ea45e2221377b6a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41160", "s2fieldsofstudy": [ "Medicine" ], "sha1": "37a1775b0427067bae868e0b43064f0869a9e43c", "year": 2023 }
pes2o/s2orc
Authority and medical expertise: Arthur Conan Doyle in The Idler Arthur Conan Doyle’s medical and writing careers intertwined and his work has a history of being read in the light of his medical expertise. He wrote at a time when the professionalisation and specialisation of medicine had resulted in an increasing distance between the profession and the public, yet general practitioners relied financially on maintaining good relationships with their patients and popular medical journalism proliferated. A variety of contrasting voices often disseminated narratives of medical science. These conflicting developments raised questions of authority and expertise in relation to the construction of medicine in the popular imagination: how is knowledge constructed? Who should disseminate it? How and by whom is authority conferred? How can the general population judge experts in medical science? These are questions explored more widely in Conan Doyle’s writing as he examines the relationship between expertise and authority. In the early 1890s, Conan Doyle wrote for the popular, mass-market periodical The Idler: An Illustrated Magazine. His contributions to it address these questions of authority and expertise for a lay audience. First establishing the medical context of doctor/patient relationships in which these questions arose, this article undertakes a close reading of these mostly rarely studied single-issue stories and articles as a means of ascertaining how Conan Doyle and his illustrators identified the relationship between competing narratives, expertise and authority. It argues that rather than maintaining a distance between public and professional, Conan Doyle’s illustrated work demonstrates to his readers that there are ways to successfully navigate the appearance of authority and recognise expertise as they confront entangled representations of advances in medical science. 
INTRODUCTION When tubercular patients travelled across Europe to Berlin hoping to access Dr Robert Koch's experimental treatment, the scientific details of the remedy were being deliberately kept secret.An anonymous writer in WT Stead's Review of Reviews condemned this concealment: 'According to the rule of the profession, no cures wrought by secret remedies can ever be examined into.All dealers in secret remedies are quacks' (Anonymous 1890, 547).In spite of this convention, the profession expressed intense interest in the secret cure.Koch had no intention of making details of his discovery public while the remedy's efficacy was yet to be proven, but 'exaggerated' and 'distorted' reports forced him to provide some account (although he still did not reveal the origins and preparation of the remedy) (Koch 1890, 557).However, Koch's colleague, Professor Von Bergmann, was to lecture on the subject in November 1890 in Berlin.At the time a general practitioner, Arthur Conan Doyle set off to Germany to seek the evidence for himself.During this trip, he failed to gain admission to Von Bergmann's presentation or meet Koch, but did access patients receiving the treatment and discussed the lecture with others present.One result was a character sketch produced for the Review of Reviews expressing admiration for Koch, 'the noblest German of them all' (Conan Doyle 1890, 556).This episode in Koch's and Conan Doyle's careers highlights a conflict between expertise and authority in narratives of medical science.Political pressures forced Koch to announce the cure before he was ready; he was still developing his expertise away from the public.Once the cure was made public, the press, including Conan Doyle, constructed Koch as a hero.As Laura Otis puts it, 'the voice of central authority could outweigh scientific evidence' (Otis 1999, 25).This was a political central voice constructing narratives of national and imperial authority through a sympathetic press. 
Nonetheless, a feature of fin-de-siècle massmarket periodicals was 'to hold apparently contradictory discourses in suspension' (Tattersdill 2016, 19) and, when we examine Conan Doyle's presentation of Koch in the Review of Reviews, we find a variety of contrasting voices.The sketch is sandwiched between the aforementioned anonymous reflection on Koch (which connects his work to that of infamous quack Count Mattei) 1 and Koch's own defence of his decision to maintain secrecy around his discovery.One piece anonymously suggests potential quackery due to absent evidence, one portrays a heroic endeavour without having observed the hero, and one allows the expert himself to speak but he chooses to stay silent about the important details.Together these pieces do not comprise a definitive or cohesive presentation of Koch and his cure; rather, they leave the reader to draw their own conclusions based on evidence that has a range of obvious limitations.This mulitvocal, contradictory presentation of a key concern of medical science raises questions of authority and expertise in relation to the construction of medicine in the popular imagination: how is knowledge constructed?Who should disseminate it?How and by whom is authority conferred?How can the general population judge experts in medical science?These are questions explored more widely in Conan Doyle's writing as he examines the relationship between expertise and authority; those Original research with expertise do not always have authority in Conan Doyle's narratives, nor does authority (understood as the power to influence) always coincide with expertise. Researchers have examined the significance of the Koch episode for Conan Doyle and his contribution to popular understandings of medical science.Douglas Kerr argues that this was a key event in Conan Doyle's writing career; he was 'a writer in pursuit of a story' who achieved a 'journalistic scoop' that would be his first piece for a national newspaper (Kerr 2010, 47, 46).In also reporting this story in the Review of Reviews, he contributed to the development of the martial metaphor as a way of comprehending the threats of the microscopic world, as Servitje (2021, 169-72) details.Emilie Taylor-Pirie (2022) identifies the event as a catalyst for Conan Doyle's 'passion for the narrative romance of medicine' (144) and finds he positions 'scientific brilliance' opposing 'microscopic threat' (148).These readings recognise an admiration for the heroic Koch in this battle with bacteria.They also build on Laura Otis' depiction of this opposition as having an affinity with Conan Doyle's Sherlock Holmes stories, narratives of empire concerned with smallness as the hero detective confronts threats to the national body (Otis 1999, 96-100). Conan Doyle's role in a wider narrative of scientific heroism and empire is, then, well documented, predicated on the overlap between medical science and the consulting detective Holmes.There are, however, other facets to his shaping medicine in the popular imagination found elsewhere in his oeuvre, not least because his medical and writing careers intertwined.In his own time his work was read in the light of his medical expertise.A 'Pen Portrait' of him in the Windsor Magazine claimed that 'his medical experience has been of the utmost value to him' (Cromwell 1896, 367).In the same issue, MacLauchlan (1896, 369) details the effects of Conan Doyle's medical training on his authorship claiming that, '[t]he scientific touch must in some degree colour all his work.' 
(MacLauchlan 1896, 371).Conan Doyle himself would later, in Memories and Adventures, explain that his serial The Stark Munro Letters drew on his early years in medical practice, and that, famously, his tutor when he studied medicine, Joseph Bell, inspired the character of Sherlock Holmes (Conan Doyle, 1924, 52, 69). In addition to the studies related to Koch detailed above, further readings of Conan Doyle's work evidence arguments about nineteenth-century medical professionals.Peterson (1978, 93-7) reads The Stark Munro Letters as representative of the experience of a career in general practice; Furst (1998) uses some of Conan Doyle's stories, including 'The Doctors of Hoyland' (122-3) discussed below, to evidence the changing power dynamics in the doctor/patient relationship; and Kerr (2013, 41-78) is interested in Conan Doyle's presentation of divisions within the medical profession, in the way he contrasts consultant and generalist in order to investigate knowledge and the role of the expert itself. 2 Conan Doyle's work is also understood to have mediated Victorian cultural conceptions of the practice of medicine.For Moulds (2021), medical fiction played a part in the development of different professional medical identities and she demonstrates how a variety of Conan Doyle's stories contributed to the emergence of those subdisciplines.However, his work not only represented late-nineteenth-century medical practice.The Koch episode suggests a new avenue of investigation in the relationship between his work and medicine: entangled in the inclusive and diverse contents of popular periodicals, 3 his writing encouraged his reader to interrogate the construction of narratives around medical science through presentations of expertise and authority. His hagiographical presentation of Koch shifted over time (Kerr 2010, 47).By 1924 he wrote more critically of the scientist and with more empathy for the public who had pinned such hopes on his cure.Kerr views this shift as influenced, in part at least, by Conan Doyle's postwar opinion of Germany.But Conan Doyle's contempt for 'scientific arrogance' (Kerr 2010, 47) and sympathy for a public who learnt of medical advancements through popular media can be found much earlier than 1924: between 1892 and 1894 Conan Doyle published seven short pieces in The Idler: an illustrated magazine, a periodical which actively positioned its readers in close relationship with its contributors.Proximity between the public and experts had significance in medicine during this period: it was a contradictory time in the practice of medicine when professionalisation and specialisation had resulted in an increasing distance between the profession and the public, yet general practitioners' financial success depended on maintaining good relationships with their patients and popular medical journalism proliferated.It is unsurprising then that questions of authority and expertise were pertinent.Like the three pieces on Koch in the Review of Reviews, Conan Doyle's Idler publications both raise and illuminate such questions.They comprise four stories that would later appear in Conan Doyle's collection of medical fiction, Round the Red Lamp ('The Los Amigos Fiasco' (Conan Doyle, 1892c), 'The Case of Lady Sannox' (Conan Doyle, 1894a), 'Sweethearts' (Conan Doyle, 1894b) and 'The Doctors of Hoyland' (Conan Doyle, 1894c)), another story with a medical theme ('De Profundis' (Conan Doyle, 1892a)) and two non-fiction pieces ('The Glamour of the Arctic' (Conan Doyle, 1892b) and 'My First 
Book: VI.-Juvenilia' (Conan Doyle, 1893)). 4 There is a critical precedent for reading these pieces together in their periodical context.Jonathan Cranfield's study of Conan Doyle and the Strand Magazine (the enormously successful magazine that published his Sherlock Holmes short stories) emphasises the affordances of reading his fiction in its periodical contexts: recovering forgotten texts, and establishing a 'symbiotic' relationship between fiction and non-fiction (Cranfield 2016, 2-3).Here I explore texts that have received far less critical attention than the Holmes stories, and bring two of his better known medical tales ('The Case of Lady Sannox' and 'The Doctors of Hoyland') into dialogue with rarely discussed pieces, a dialogue that would have been attended to by regular readers of the Idler.My analysis of these texts in relationship with each other affords a new insight into the author's contribution to popular understandings of medical authority and expertise.Rather than emphasising a distance between public and professional, Conan Doyle's illustrated work demonstrates to his readers that there are ways to successfully navigate the appearance of authority and recognise expertise as they confront entangled representations of advances in medical science. RECOGNISING MEDICAL EXPERTISE AND AUTHORITY Questions of authority and expertise are evident in developments in the medical profession towards the end of the nineteenth century, in particular those developments affecting all manner of professional relationships.The professional structure of medicine crystalised during the latter part of the nineteenth century.Following the Medical Act of 1858, which created the General Medical Council and initiated the registration of qualified medical practitioners, the separate groups of physicians, surgeons and apothecaries further cohered as medicine evolved from a dependent occupation to a profession with its own authority towards the turn of the century.The development Original research most pertinent to my discussion was the changing relationship between the medical practitioners and their patients as medicine evolved.There was a distancing of medical practice from associations with trade, although many starting out in medicine found that economics 'was nevertheless the bottom line in professional survival' (Digby 1999, 4); there was also an emphasis on demonstrative professional courtesy and an attendant retreat of medical self-criticism from the public eye and thus public judgement (Peterson 1978, 250-55).Yet medical practitioners depended financially on maintaining good relationships with the public and this public was nonetheless interested in and familiar with medical developments.These relationships were complex, often mediated by the press.Frampton (2020) details the development of nineteenth-century medical periodicals and finds that the end of the century saw both specialised publications (by their nature distanced from laymen) and journals that aimed to 'mak[e] medical knowledge accessible and interesting to lay audiences' (Frampton 2020, 450).She explains that this worried established publications: the Lancet for example, raised the dangers of self-diagnosis (Frampton 2020, 450).But the Lancet itself was a journal that set out to make medical knowledge accessible, which suggests a conflict around who has the authority to distribute medical knowledge to laypeople and the effects of this distribution.Adopting new therapeutics that had been enthusiastically discussed in the press 
could effect public approval and accompanying success for the doctor using them: for example practitioners who made use of cocaine's anaesthetic qualities to treat patients absorbed some of the prestige surrounding the drug in the late nineteenth century (Small 2016).This in turn garnered a more general public perception of practitioners' heroism and 'the physician's unassailable moral primacy' (Small 2016, 4).Patients developed conflicting expectations of the doctor/patient relationship, demanding both the latest treatments and the comfort of more traditional practice (Furst 1998, 179). Successful doctor/patient relationships depended not only on treatment and the latest medical expertise, however.'Observing medical etiquette was conceived as an important route to achieving professional acceptance and success, particularly for those who lacked social contacts or capital' (Moulds 2019, 104).Etiquette guidance for doctors recognised that conduct towards patients could disguise other deficiencies too: 'if one is especially polished and elegant in manner, and moderately wellversed in medicine, his politeness will do him a great deal more good with many than the most profound acquaintance with histology, microscopic pathology and other scientific acquirements' (Cathell 1889, 42).More than once this guide advises that politeness and sensitivity impress the patient more than medical knowledge, and emphasises the importance of patients' perception and expectation of events.For example, it states that the physician should 'not make your visit so brief or abrupt as to leave the patient feeling that you have not given his case the necessary attention' (Cathell 1889, 46, my emphasis).What is important is the patient's own perception of requirements, rather than the medical requirements of the case, unsurprising when '[y]our professional fame is your chief capital' (Cathell 1889, 39).This advice encouraged doctors to perform an expected appearance of authority, an authority that could have little to do with medical expertise. 
These developments that made establishing the veracity of medical authority difficult for patients had potentially serious consequences.One problematic symbol exemplifies this.In the 1880s and 1890s, the red lamp, hung to advertise medical services, also symbolised the potential disjuncture between the appearance of authority and the reality of expertise and the difficulty of distinguishing between the two.Understanding what was at stake in this symbol reveals the significance of Conan Doyle's writing in the Idler.'Anybody […] is at liberty to hang a red lamp over his door and […] to sell drugs and poison without restriction, and to "make up" prescriptions with impunity' (Anonymous 1883, 642).With little other recommendation than the lamp, those seeking urgent care could find themselves receiving treatment from someone entirely unqualified to provide it.This was the case in 1882 when a child died due to misdiagnosis by a doctor's assistant; the child's parents believed him to be a doctor due to the red lamp outside his house (Anonymous 1882, 694).Following an 1894 inquest, a contributor to the British Medical Journal summarises the issue thus: In framing any attempt to suppress quackery the fact must be recognised that great numbers of the masses are quite willing to accept the red lamp as sufficient, while many of the classes, it is to be feared, take a certain sceptical pleasure in believing that non-qualification is at least one index of an open mind and a thing therefore rather worthy of encouragement.(Anonymous 1894(Anonymous , 1261) ) When Conan Doyle (1894d) published his collection of medicine-related short stories, Round the Red Lamp, readers would have recognised the title's associations with the disconnect between giving the appearance of authority and having the expertise to justify it.A reading of his medical stories in the Idler, however, recognises that concerns about and understandings of expertise and authority were part of wider cultural questions of knowledge creation, dissemination and consumption that nonetheless affected how the public related to medical practitioners. CONAN DOYLE AND THE IDLER While Conan Doyle framed Round the Red Lamp with a preface and final story that suggest a reading that emphasises a distinction between expert medical practitioner and general reader (a likely patient), the publication context of the Idler inflects his medical stories differently, bringing the public and practitioner into close proximity imaginatively.In the Idler the stories overlap with other contexts and particularly with the idea of the gentlemen's club. 
The Idler first appeared in February 1892 under the editorship of Jerome K Jerome and Robert Barr (Dunlap 1984, 178).It was announced with some fanfare: the Review of Reviews described it as a 'formidable rival' for the Strand Magazine (Anonymous 1892, 188).Published circulation figures suggest its popularity, 5 as does its persistence until 1911, although with slight changes to its title and with other editors (Dunlap 1984, 181-2).In the context of the Idler, illustration contributes to Conan Doyle's narratives providing emphases not available in other contexts.A reading such as mine, which places significance on the periodical context in creating meaning, must necessarily attend to illustration.A range of scholarship suggests how we might read nineteenth-century illustrated texts.Patten (2002, 91) explains that '[i]llustrations are not mimetic.They are not the text pictured.[…] To say, then, that an illustrator illustrates a text might mean that the artist enlightens the text'.In other words, they enhance our understanding of the narrative, rather than merely represent it.Sillars (1995, 17) explains how technological developments in print 'created a unified discourse of word and image', thus suggesting we read an illustrated text as a whole, all parts contributing to its meaning.And Julia Thomas has demonstrated the significance of reading these texts as unified discourse.She finds that '[m]eanings are generated Original research in the very interaction between the textual and the visual, the points at which they coincide and conflict' (Thomas 2004, 15).Illustrations work with the verbal text to make meaning, thus influencing our reading of Conan Doyle's work in the context of the Idler. The Idler was a journal comfortable with being seen to meet mass-market demand at a time when the New Journalism was elsewhere stigmatised (Fiss 2016, 418).Indeed, it featured a column in which its contributors 'treat [ed] the notion of exclusivity itself as a joke' (Fiss 2016, 420).This serial column was known as 'The Idler's Club'.Laura Kasson Fiss's study of it explains that '[m]agazine readership is commonly associated with mass consumerism, commercial transaction, and anonymous global distribution, whereas the club suggests a cozy friendship among select members that excludes everyone else' (415).Fiss finds the tone of the club column to foster a relaxed atmosphere in which the reader has some intimacy with literary celebrities; it 'imitated club talk' (420).Thus this column created 'an intimate imagined space for its mass readership' ( 415).If we are to read the column as Fiss suggests, as a 'microcosm of the journal as a whole' ( 426), then such an imagined space inflects the contents of the Idler with an intimacy that brings Conan Doyle's readers close to him when he publishes there and positions his work as already resistant to distance between professionals and laypeople.Rather than produce narratives emphasising an unassailable distinction between the medical practitioner and the layman, his writing in the Idler positions readers where they can consider what it means to be on the brink of the unknown and how knowledge is created.In this context, Conan Doyle's writing encourages his readers to challenge appearances of authority in medical practice, and to recognise instead the significance of expertise, with its dependence on experience and effort. 
DE PROFUNDIS 'De Profundis' illustrates the competing narratives of interpretation that can challenge expertise in medical science.Conan Doyle's first piece in the Idler concerns illness, diagnosis and death (Conan Doyle, 1892a).At first it seems to be a straightforward tale that suggests the folly of resisting professional help.The story features John Vansittart and his wife, and is narrated by his agent.The couple are due to sail to Columbo when Vansittart becomes ill.When he first announces he is unwell, the narrator tells Vansittart that 'a touch of the sea will set you right' and he responds: 'I want no other doctor' (151).Vansittart's behaviour and appearance suggest that he is drunk and indeed the narrator convinces himself of this stating '[s]ad it was to see so noble a young man in the grip of that most bestial of all the devils' ( 151).Yet we soon discover the narrator's misdiagnosis: Vansittart has 'not had a drain for two days' (152).Vansittart himself diagnoses London to be the cause and proposes being at sea as the cure.The narrator attempts to persuade Vansittart to see a medical doctor, but he refuses, preferring to trust in the presence of the ship's surgeon to ensure his safe journey.Once aboard, Vansittart's illness is diagnosed as smallpox and his subsequent death at sea implies the folly of self-diagnosis.Conan Doyle demonstrates the confident ease with which a layman will mistakenly interpret the evidence and the dangers of refusing to engage with experts.However, the story's conclusion complicates this. Anticipating the conclusion, the narrator's introduction imagines a mother 'seeing' her dying son and he states: 'Far be it from me to say that there lies no such power within us' (149).He acknowledges the possibility of the supernatural here, but firmly encourages scepticism: 'once at least I have known that which was within the laws of nature to seem to be far upon the further side of them' (149).The conclusion of Vansittart's story illustrates his point: the significance of the title is that the body of Vansittart rises out of the depths in what appears at first to be a vision.The narrator and Vansittart's wife are sailing to Madeira, thinking they will be able to meet up with and nurse the sick man there.When his body bursts out of the depths and sinks back below, and unaware that he has already died aboard ship, his wife believes she sees a vision signifying he has died at that very moment.When they catch up with the Captain of Vansittart's ship, they find in fact he had died 8 days previously and been buried at sea.This knowledge changes the narrator's interpretation of what they have seen; he concludes it must have been Vansittart's actual body.The vision appeared at the exact place of burial, the surgeon says the weight was not well attached, and the noises could be explained by the presence of sharks.Nonetheless, 'a clearer case of a wraith has seldom been made out, and since then it has been told as such, and put into print as such, and endorsed by a learned society as such' (157).So, in the case of Vansittart's illness, the narrator's and Vansittart's misdiagnosis suggests a need to rely on expert intervention, but in the case of Vansittart's death, the narrator's very plausible explanation drawing on a surgeon's expert evidence contradicts that of other experts.What 'De Profundis' does then, is at once emphasise the need for informed interpretation but reminds the reader that experts may interpret differently, that there are competing narratives to 
navigate in order to understand the world. THE GLAMOUR OF THE ARCTIC Conan Doyle's second piece for the Idler explores the difficulties of navigating competing narratives in the pursuit of knowledge more expansively. Although only its conclusion addresses medical science, 'The Glamour of the Arctic' demonstrates the effect of combining multiple interpretations and expertise from different sources as they intersect with the popular imagination in the construction of knowledge. Such a scenario reflects the general reader's encounters with medical science in the popular press, such as the Koch episode in the Review of Reviews. In 1880 Conan Doyle paused his medical studies to work as a surgeon on board the Arctic whaling ship Hope. Here he experienced an adventure he would describe in his autobiography as 'a strange and fascinating chapter of [his] life' (Conan Doyle, 1924, 41). Twelve years later, and four months after 'De Profundis', 'The Glamour of the Arctic' wove together scenes from life on a whaling ship and the whales they hunted (Conan Doyle, 1892b). Whaling was in decline and its practice altered from the way it was conceived in popular imagination. At the same time, the Arctic and its history were yet to be fully apprehended. The text wavers between expertise, mystery and imagination. Conan Doyle describes a useless (in commercial terms) rorqual whale: it is 'eighty foot' but its spray is 'like smoke' and 'where green is turning to black the huge flickering figure […] glid[es] under the ship' (627). It is both measurable and insubstantial; he uses the word 'strange' here to describe both the sight and sound of the whale, yet he is able to compare the gullet size of different types (627). The same occurs when he comes to the harpooner. He is quick to dispel any notions his readers may have gathered from books; he explains that this 'gallant seaman, who stands at the prow of a boat waving harpoon over his head, with a line snaking out into the air behind him, is only to be found now in Paternoster Row' (627). This is a figure relegated to print, and, it seems, for good reason. Quite simply, 'one could shoot both harder and faster than one could throw' (627-8). However, at the same time Conan Doyle wants to cling to the popular but outdated image, and welcomes its outrageousness and impossibility, in print at least. The glamour of the Arctic is not only a result of physically experiencing it, then; it is also effected imaginatively, through popular understandings perpetuated on the page. The popular imagination holds sway in the face of new expertise, and mystery is stubbornly persistent even as factual understanding is attained.
This imbrication of knowledge and awe also occurs when he relates the whaler's experience.The romance of the Arctic colours their expertise.Indeed, he notes that '[s]ingular incidents occur in those northern waters, and there are few old whalers who have not their queer yarn' (635).His inclusion of anecdote and tall tales only adds to the hazy glamour with which this text veils the Arctic; he weaves imaginary understandings with the facts he presents.Making use of anecdote also affords him the opportunity to suggest the possibility of finding a passage to the North Pole, though he gives the caveat that 'some little margin must be allowed, no doubt, for expansive talk over a pipe and a glass' (633).He writes of 'gnarled and rugged' old ice, 'impossible to pass', and relates an 1827 attempt where it seemed the impassable ice persisted to the pole (634).Then he adds the whaler's view that there have been times when they have seen no ice at a similar distance North to that 1827 voyage.These are differing experts' tales and the reader is left to wonder what could be the truth of it .The accompanying illustrations add to the sense of competing views. 'Blocked', an illustration by A Webb, intersects the different stories of the attainability of the North Pole (634.See figure 1). Here the ice obscures the ship and the crew are absent.Webb gives no hint of them among the rigging or on deck or attempting to man the boats.The edges of the drawing of the ship are sharp and straight, contributing to the impression of an absence of movement.The whalers' experience is thus diminished.The stillness of the ship contrasts with the looming of the ice jaggedly rising in the foreground.Its dominance and its apparent thwarting of the ship's progress make it difficult to imagine that the ice could clear.Thus the image casts at least some doubt on the whaler's account of ice-free seas. On the opposite page, however, the photograph of the Shetlander illustrates a suggestion made by Conan Doyle that the possibility of clear passage to the pole could be tested (635.See figure 1).He suggests that a crew comprising 'Scotch and Shetland seamen from the Royal Navy' could make yearly expeditions with a chance of finding clear sea (635).This photograph is sharp, solid.This Shetlander, titled so as to appear representative of a type, looks the reader right in the eye, his mouth set firm.There is an appearance of authority in this image; it appears to be a reliable image of confidence and determination, strengthening the plausibility of Conan Doyle's proposition.The reader is encouraged to judge authority from appearance. 
These two images across the spread of the two pages have a similar effect to Conan Doyle's narrative shifts. We move from an emphasis on impassable ice on one page to a type of man with the tenacity to overcome the growing obstacle on the next. The shifts between Webb's illustrations and the photographs as they imbricate with the text both allow for the precision and romance that create 'The Glamour of the Arctic' and give a sense of the spell cast by a whaling voyage among icy seas. The illustrated text as a whole suggests what Conan Doyle might mean when he writes 'You stand on the very brink of the unknown' (633). The competing narratives attendant on the production of new knowledge can fail to provide clarity, but the desire to achieve it is intoxicating. In contemplating that brink, Conan Doyle implies in his concluding paragraph that in writing 'The Glamour of the Arctic' he has been dazzled. He states, 'There is a good deal which I had meant to say' as if somehow distracted from his purpose. He at once destabilises his authority to relate the things that 'throw the glamour over the Arctic' as he also acknowledges that these missing utterances have 'all been said much better already' (638). Nonetheless, his final move is to present himself as innovative in writing on the region, and he does so in a way that draws on his own area of expertise: he attends to the Arctic's 'medical and curative side'. The haze seems to lift: 'Davos Platz has shown what cold can do in consumption' and 'in the life-giving air of the Arctic Circle no noxious germ can live' (638). Straightforward and with clarity, Conan Doyle presents what he terms a 'safe prophecy': 'before many years are past, steam yachts will turn to the North every summer, with a cargo of the weak-chested, and people will understand that Nature's ice-house is a more healthy place than her vapour-bath' (638). It seems an ideal grounded scientifically in the Arctic air, not in its enchanting glamour. It is an ideal seemingly far from maintaining the romance that so appealed to the author.
But Conan Doyle's words are not the end of the narrative; rather, Webb's final illustration (638. See figure 2) brings us back to both the whale and the expert seamen of whom Conan Doyle was so in awe. This illustration differs greatly from Webb's other images. Where they reflected the Arctic haze through soft washes, here Webb hatches lines. There is no haze in his visual representation of his ideal end. And so both Conan Doyle and Webb conclude with a clarity emerging from the haze of the rest of the piece. However, Webb seems utterly at odds with Conan Doyle. In bringing us back to the whale and its potential for devastation, his illustration seems to mock Conan Doyle's idealisation of the medical benefits of an Arctic voyage where these creatures lie beneath the surface. The begging seaman, apparently injured by a whale, smiles nonetheless, perhaps encouraging us not to pity him but to laugh at Conan Doyle's proposition given the danger. And yet the seaman's illustration of his disaster returns us to Conan Doyle's earlier description of harpooning a whale. He wrote: 'should the whale cast its tail in the air, after the time-honoured fashion of the pictures, that boat would be in evil case, but, fortunately, when frightened or hurt it does no such thing, but curls its tail up underneath it, like a cowed dog, and sinks like a stone' (628). If that is the case, then the seaman's picture, the potential danger of whales, seems to be pure imagination. Conan Doyle's prophecy of curative voyages can exist with the other ideal, the romantic notions of the whale, and of whaling. The glamour evoked by Conan Doyle, Webb and whoever selected and set the photographs, layering experience and hearsay with the measurable and the elusive, suggests that expertise can sit comfortably with incomprehension and awe. 'The Glamour of the Arctic' portrays a value in imaginative, uncertain ways of comprehending the world, and Conan Doyle's, and indeed the Idler's editors', authoritative weaving together of these very different types of expertise suggests the desirability of drawing on very different experts to make sense of experience. THE LOS AMIGOS FIASCO 'The Los Amigos Fiasco' (Conan Doyle, 1892c) was listed separately from the list of medical stories in Conan Doyle's accounts diary (Conan Doyle, 1892d). However, its subsequent inclusion in Round the Red Lamp suggests Conan Doyle considered it to be related to the issues of medical authority exemplified by the lamp of the title. Where 'Glamour' explores the difficulties of untangling competing expert and imaginative understandings, 'Los Amigos' contrasts expertise with unearned authority. It is certainly not a serious story; it concerns a failed attempt at execution by electrocution, which, far from killing the prisoner, seems to have made him invincible. The only medical presence in this tale is the surgeon attending the execution and the prisoner's reference to a swathe of doctors puzzled by his troublesome arm (rendered 'good as new' by the electrocution). This humorous tale goes further than suggesting that medical expertise is finite; it positively encourages its readers to laugh at authority, in other words to discount its appearance of power, when it is unearned.
Significant to Conan Doyle's purpose here are the aspects of the tale dealing with the refusal of those in authority to accept the opinion of an amateur expert. The narrator is a journalist, John Murphy Stonehouse, who, in the interest of financial reward, sets out to 'tell the true facts' about the case (548). Duncan Warner is condemned to be the first to be executed via the powerful dynamos of Los Amigos, and the town council have appointed four experts to oversee proceedings. Three of these experts appear to be considered so because of professional success. The fourth, Peter Stupnagel, was 'regarded as a harmless crank who made science his hobby' who never published his results, although Conan Doyle clearly implies his expertise, for 'he was eternally working with wires and insulators and Leyden jars' (550, my emphasis); in other words he is continually gaining experience of that about which he will later explain. Stupnagel sits quietly through the meeting where the group plan the execution, but as it draws to a close he contradicts their decision to sextuple the charge from the strength previously used in New York. He states, 'Gentlemen […] you appear to me to show an extraordinary ignorance upon the subject of electricity. You have not mastered the first principles of its action upon any human being' (551). He continues to question their 'assumption', asking 'Do you know anything, by actual experiment, of the effects of powerful shocks?' (551). Their 'pompous' response sets up Conan Doyle's argument about what constitutes expertise: 'We know it by analogy' (551); in other words, they lack experience. In the face of presumption, Stupnagel offers evidence about the effect of electricity on the human body, which is duly ignored, and the committee outvotes him. The incredible failure of Warner's execution, rendering him apparently immortal, signifies that practical experience (in this case experiment), rather than authoritative position, is the reliable source of information. As the story concludes, a marshal declares to Stupnagel: 'You seem to be the only one who knows anything' (556). Stupnagel's expert knowledge was what mattered, not the elevated professional positions of the others when that position was underpinned by ignorance. Conan Doyle suggests in this funny, but otherwise slight, story, that expertise can be achieved by anyone committed to developing it, and that those with authority may have unwarranted confidence in their own ability to theorise. Thus he provides his readers with a way to begin to untangle some of the competing narratives around medical science, like those depicted at the conclusion of 'De Profundis'. Certainly, new knowledge must draw on expertise gained by experience and not theory alone.
MY FIRST BOOK: VI.-JUVENILIA If 'Los Amigos' contrasts expertise and authority, 'My First Book' connects them as Conan Doyle reflects on how he developed both, implicating the public in the production of professional authority.He explores the development of expertise (his own) in more depth and identifies the basis of authority to lie in public relation to others (Conan Doyle, 1893).This illustrated biography presents Conan Doyle as an authoritative story-teller from a young age.The illustrator, Sydney Cowell, depicts the schoolboy Conan Doyle sitting on a desk, 'elevated' as the author puts it, while smiling boys gaze up at him 'little boys all squatting on the floor' (635.See figure 3).Both Cowell and Conan Doyle thus make a distinction that confers authority on the story-teller.Furthermore, we learn that his school friends were willing to pay for his work: 'I was bribed with pastry to continue these efforts, and remember that I always stipulated for tarts down and strict business, which shows that I was born to be a member of the Authors' Society' (635).Of course, there is an intentional humour here but nonetheless, in describing this public authoritative performance, Conan Doyle is constructing an image of a member of a profession.He implies that authority and professionalism occur in public.Like Stulpnagel, Conan Doyle's expertise, however, is developed much more privately. That development is facilitated through practice and application, in this case reading.Conan Doyle's account describes the effect of reading as both conferring special knowledge and affording a visceral experience, even while he presents himself as exceptional.This in turn suggests his expertise as a story-teller.First, he describes years reading unusually intensely, joking about a rumour that 'a special meeting of a library committee was held in [his] honour, at which a bylaw was passed that no subscriber should be permitted to change his book more than three times a day'.And he continues to explain that he 'managed to enter [his] tenth year with a good deal in [his] head that [he] could never have learned in classrooms' (634).Here then, Conan Doyle is presenting himself as a prodigious reader who has learnt more from his private reading than at school.This depiction suggests that a personal motivation in developing knowledge is most productive. If that discussion of his development could seem perhaps arrogant, his discussion of learning his writer's craft and his humorous loss of a manuscript reveals he clearly did not feel himself to be immediately expert: 'Of course it was the best thing I ever wrote.Who ever lost a manuscript that wasn't?But I must in all honesty confess that my shock at its disappearance would be as nothing to my horror if it were suddenly to appear again-in print.If one or two of my earlier efforts had also been lost in the post my conscience would have been the lighter' (637).His humour is disarming; there is no arrogance here.With his openness about his failures, Conan Doyle portrays himself as possessing humility, but his persistence, and the use of the term apprenticeship, suggest a levity accompanied by determination.He may laugh, but no one could accuse him of not taking writing seriously.This is hard work. 
This focus on hard work and persistence from the outset suggests an inevitability to Conan Doyle's success as well.The article commences with a portrait of the adult author buttoned up in his jacket, tie at his throat, by George Hutchinson (see figure 4), and this is facing the first page of the narrative; the author's head is turned towards the text.Underneath is a facsimile of Conan Doyle's signature: 'yours very truly A Conan Doyle' lending his account authority and adding to the impression of intimacy that we have seen fostered in the Idler (632).Then, on the first page of the narrative, facing in the opposite direction is a smaller illustration, this time by Sydney Cowell: 'I was six' (see figure 4).Cowell depicts the young Conan Doyle in the act of writing and the author appears little different from his adult image: similar dress, similar hair.Visually, there is a direct connection between the two illustrations, implying a connection between his childhood and his position as a popular author.His personal, passionate learning from an early age has led, through determined practice, to the expert, authoritative writer George Hutchinson depicts.Thus 'My First Book' suggests that Conan Doyle's authority is deserved.This self-fashioning will inflect his medical stories in later issues of the Idler that explore authority and how it may be earned in a medical context. THE CASE OF LADY SANNOX Like 'The Los Amigos Fiasco', 'The Case of Lady Sannox' (Conan Doyle, 1894a) proposes to the readers of the Idler that at times the layperson is more competent than those with undeserved authority.This time though he makes the reader an active participant in that proposition and goes further to suggest that the skills associated with medical expertise are wasted if not brought to bear on wider experience.The story also suggests that morality has a significant impact on whether authority is deserved or not. 
He commences with a highly suggestive first scene that depicts the end of the story and is emphasised by the illustration 'Smiling pleasantly upon the universe' (see figure 5): the brilliant doctor has been left entirely without authority, 'smiling pleasantly upon the universe, with both legs jammed into one side of his breeches, and his great brain about as valuable as a cup full of porridge' (331). Conan Doyle contrasts the scale between the vast compass of the Doctor's gaze and the meagre capacity of his brain's value, alongside the lopsidedness of his immobilising dressing, and thus presents Doctor Stone as entirely incapacitated, at odds with himself. The illustration uses a (perhaps accidental) awkward scale to portray the doctor: his head appears too small when compared with his legs, while his crumpled shirt somewhat swamps his bent body. Here is a man whose arrogant failure to use his expertise has debilitated his mind. Conan Doyle reveals that Stone broke the moral code: 'two challenging glances and a whispered word' and he commences an affair with a married woman, the Lady Sannox of the title (332). Clearly Conan Doyle intends us to connect two incidences of disorder. His narrative then follows Dr Stone until he finds himself gruesomely disfiguring his mistress in a horrific scene. He receives a mysterious visitor, who turns out to be the cheated husband, but Stone's diagnostic skills fail him as he is deceived by the disguise in his hurry to oblige his visitor in order to earn an excessive sum of money quickly before he makes an illicit visit with his mistress. Haste and misreading mean he encounters (but fails to recognise) Lady Sannox before he expects to, and the results not only disfigure his mistress, but also his mind. Doctor Douglas Stone is highly skilled yet an incompetent reader; he fails to take his analytical skills outside of the medical sphere and employ them more widely. In contrast, Conan Doyle clearly intends his readers to decipher his clues as he builds our expectations; the evidence suggests something terrible awaits. The clues Conan Doyle places throughout the text serve to provide the reader with a more complex message than one of simple morality. Doctor Stone exhibits an arrogance that results in misdiagnosis, and his failure to read what is clearly evident to the reader causes the tragic ending. Conan Doyle makes his clues readable in such a way as to position the reader as superior to the Doctor. He raises our suspicions as the narrative focuses on the episode with the mysterious visitor demanding his wife have part of her lip removed; in such a short fiction it must relate to the affair with Lady Sannox. Consequently we read the clues as pointing to the horror to come. The merchant visitor offers Stone an 'extraordinarily high fee' and yet takes him to a 'mean-looking house', the interior of which causes the doctor to 'glanc[e] about him in some surprise' at the lack of furniture or even carpet (335, 338). Many things seem wrong with this set-up: the woman is hidden; the merchant insists that no anaesthetic is used; and he says to Stone, 'I can understand that the mouth will not be a pretty one to kiss' (339). When Stone 'grasp[s] the wounded lip with his forceps', the reader recognises, and potentially recoils from, what he fails to see: he is about to disfigure his lover (339). Not only is Doctor Stone's authority undeserved, the arrogance that accompanies it undoes him. When read alongside the other contributions Conan Doyle makes to the Idler, this story demonstrates
that authority is something to be earned, but something that depends on more than just professional knowledge. THE DOCTORS OF HOYLAND 'The Doctors of Hoyland' (Conan Doyle, 1894b), the final single issue piece Conan Doyle produced for the Idler, returns to some of the themes of previous pieces; he demonstrates the folly of assumptions about recognising authority, and suggests that it should be conferred by expertise.At the same time, he is concerned with the impact of change on authority, addressing the question of what happens when expert knowledge is disseminated in such a way as to expand a profession, in this case the slow trickle of women into medical practice. Dr James Ripley visits a new doctor, Dr Verrinder Smith, who encroached on his territory by setting up a practice in his village.As with 'The Case of Lady Sannox', Conan Doyle writes clues for the reader, in this case clues to this interloper's identity as Ripley sees them: first, the hall of the new doctor's home (which also incorporates a consulting room) contains 'two or three parasols and a lady's sun bonnet'; second, the woman who greets him 'held a pince-nez in her left hand' as if she has just left off reading .Conan Doyle has immediately preceded this with a detailed description of the new doctor's books; he builds his clues carefully.And yet when this woman announces that she is the doctor he seeks, 'Dr Ripley was so surprised that he dropped his hat and forgot to pick it up again' (230).In this case, it is less arrogance than simple assumption-a failure to attend to change in one's own area of expertise.Ripley expected a man.Conan Doyle shows Ripley's utter disappointment as leading him to interrupt his strict adherence to social codes: he behaves rudely and resists addressing the other doctor correctly.Indeed, Conan Doyle says that Ripley 'felt that he had come very badly out of it' (232).And he has; his response is a failure to convey all the expected signs of authority in his profession.Conan Doyle portrays the new doctor as superior to Ripley, not only in her social behaviour but also in her medical practice, for example she cures patients of ailments that had previously seemed unremitting or hopeless.Like Stulpnagel and Conan Doyle himself, she develops her expertise through practice; she reads the Lancet and has more up-to-date pamphlets than Ripley (231).Her authority is conferred by her adoption of conventional professional behaviour and her application of new knowledge.Expertise must necessarily move with the times. CONCLUSION I began this essay with the questions raised by Conan Doyle's character sketch of Koch as one of three competing narratives of medical science entangled in the Review of Reviews for the reader to evaluate.These questions were concerns related to the professionalisation of medicine and the relationships between the medical expert and the layman in the 1890s, where general practitioners depended on affecting an appearance of authority recognisable to their patients, and where patients, even though they had access to a proliferating popular medical press, were still susceptible to quackery.Known for his medical expertise, Conan Doyle's illustrated work in the Idler bridges the gap between professional and layman as he writes in a publication that created an atmosphere of intimacy between its writers and readers, producing narratives that encourage their consideration of those questions. 
Conan Doyle's work invites his readers to consider the question of how knowledge is constructed. Both 'De Profundis' and 'Glamour' emphasise that competing narratives afford uncertainty, but that instability is part of the romance of being on the brink of the unknown. Imagination and mystery are difficult to resist and expertise does not negate them. Knowledge in these texts seems to involve holding these narratives together rather than conceiving of them as in competition. When it comes to the dissemination of knowledge, expertise trumps authority for Conan Doyle in the Idler, and if authority (understood as the power to influence) is pompous and laughable, then he suggests we should indeed laugh. His own authoritative influence is clearly connected to his hard-earned expertise, yet laughter is important again as a way to evade arrogance. This emphasis on expertise offers his readers a way to approach medical practitioners that might change the doctor/patient relationship from the deceptive one advised in Cathell's guidance to doctors, to something more useful for the patient. Not that his writing in the Idler suggests character is unimportant; Dr Stone's downfall implies that morality is a significant indicator of reliable medical authority and Dr Verrinder Smith conforms to expected professional behaviour. Most of all, Conan Doyle's Idler narratives encourage the reader to have confidence in their judgements of the authority of their medical practitioners: the intimacy inherent in its periodical form (indicating the public are not so far removed from authority) and Conan Doyle's careful placement of readable clues suggest his audience's superiority over fallible medical practitioners. As the dissemination of medical knowledge was expanding within the medical profession, Conan Doyle's connected writing in the Idler considers what it means to be a layman confronted with this evolving profession. He finds that the romance of discovery resists being consumed by expertise, but expertise is of the utmost importance and that the layperson can and should challenge unearned authority.
Twitter Anne Chapman @anniechapman
Contributors AC is the sole author of this work.
Figure 1 Double page from 'Glamour of the Arctic' (adapted from Conan Doyle, 1892b, 634-5) featuring the illustration 'Blocked' by A Webb and a photograph of 'A Shetlander'.
Figure 4 Double page of 'My First Book-Juvenilia' (adapted from Conan Doyle, 1893, 632-3) featuring a portrait of Conan Doyle by George Hutchinson and Sydney Cowell's 'I was Six'.
v3-fos-license
2016-05-12T22:15:10.714Z
2015-11-26T00:00:00.000
16600839
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://dmsjournal.biomedcentral.com/track/pdf/10.1186/s13098-015-0105-5", "pdf_hash": "b894836ff3ac70be62dcfd8c8a446bf7a45ced73", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41162", "s2fieldsofstudy": [ "Medicine" ], "sha1": "b894836ff3ac70be62dcfd8c8a446bf7a45ced73", "year": 2015 }
pes2o/s2orc
Trends in mortality due to diabetes in Brazil, 1996–2011 Background Over recent decades, Brazilian mortality registration has undergone increasing improvement in terms of completeness and quality in cause of death reporting. These improvements, however, complicate the description of mortality trends over this period. We aim to characterize the trend in diabetes mortality in Brazil and its five regions in adults (30–69 years), from 1996 to 2011 after corrections for underreporting of deaths and redistribution of ill-defined causes and “garbage codes”. Methods Starting with official data from the Brazilian Mortality Information System (SIM) for adults aged 30–69 in the period 1996 to 2011 for diabetes (ICD-10 codes E10-14), we redistributed garbage codes using methods based on the Global Burden of Disease Study (2010), redistributed ill-defined causes based on recent Brazilian investigations of similar cases and corrected for underreporting using official estimates of deaths. Results With these corrections, age-standardized mortality fell approximately 1.1 %/year for men and 2.2 %/year for women from 1996 to 2011. The rate of decline first accelerated and then decelerated, reaching stable rates in men and minimal declines in women from 2005 onward. Regional inequalities decreased during the period in both relative and absolute terms. Conclusion Mortality due to diabetes declined in Brazil from 1996 to 2011, minimally in men and considerably in women. The lesser declines in recent years may reflect the increasing prevalence of diabetes, and suggest that current efforts to prevent diabetes and minimize the impact of its complications need to be reinforced to ensure that declines will continue. Background Diabetes, a state of hyperglycemia defined by greater risk of microvascular damage (retinopathy, nephropathy and neuropathy), is increasingly recognized as an important public health problem. Diabetes reduces life expectancy, augments morbidity due to microvascular and macrovascular complications (ischemic heart disease, stroke and peripheral vascular disease), increases premature mortality, and diminishes the quality of life [1]. It has been estimated that 387 million people had diabetes in 2013 and that 592 million will have the disease in 2035. Approximately half of those with diabetes are under 60 years of age, and 77 % of those with diabetes live in low-and middle-income countries. This scenario is of great concern given that Type 2 diabetes, the most common form of the disease, is likely to continue to rise as a consequence of population ageing and urbanization, as well as of the current obesity epidemic, resulting in very high direct and indirect costs to individuals and to society [2]. Although not fully understood, the epidemic of diabetes may result not only from increased incidence, but also from improved survival. Improved survival has been demonstrated in some developing countries [3]. The Global Burden of Disease (GBD) Study reported a 9.0 % increase in standardized mortality from diabetes between 1990 and 2013, with diabetes progressing from the 26th to the 17th leading cause of years of life lost globally. In 2013 diabetes was the 7th cause of years of life lost in Brazil [4]. Scant data exist with respect to trends in diabetes mortality for Brazil, a high-middle income country in which known diabetes prevalence is 6.2 % [5], and in which approximately 50 % of diabetes is estimated to be undiagnosed [6]. 
Increases in diabetes mortality have been reported in recent decades [7][8][9], possibly due in part to a greater diabetes prevalence and better recognition of diabetes as a cause of death. These analyses, however, have not incorporated corrections for problems in mortality reporting. Analyses of trends in mortality in Brazil remain a challenge. Although the Brazilian Mortality Information System (SIM) is universal and consolidated, coverage of deaths and quality of information on causes of death are unequal across space and time, with sub-enumeration of deaths and a high proportion of ill-defined causes among registered deaths in some areas [10,11]. Thus, analyses of trends require the accounting of these factors to avoid bias in comparisons across regions and over time. Analyses incorporating these corrections have found a sharp decrease in the age-standardized mortality for non-communicable diseases in recent decades [12], mainly due to falls in cardiovascular and chronic respiratory diseases. This could result in improved survival among diabetic individuals, and therefore contribute to the increased diabetes prevalence in Brazil. However, over the same period, declines in diabetes mortality were modest [13]. This study aims to characterize further the trend in diabetes mortality in Brazil and its five regions in adults (30-69 years), from 1996 to 2011, using the method for correction of under-registration of deaths currently recommended by the Ministry of Health together with a new method of reallocating ill-defined causes, redistributing not just those formally declared as ill-defined but also those initially reported within the so-called "garbage codes". Methods We used the Brazilian Mortality Information System (SIM) for the period 1996 to 2011 to obtain the reported numbers of deaths of adults aged 30-69 years from the public website of the Ministry of Health [14]. Procedures used were similar to those employed by the GBD2010 [15], unless indicated. Deaths whose underlying cause was diabetes were selected using all ICD-10 diabetes codes (E10.0-E14.9). After reallocating the small fraction of deaths with missing information for sex and age of death, data were corrected following three steps. First, we considered as "garbage codes" all deaths from nonspecific causes within ICD-10 chapters of defined causes (i.e., all chapters except Chapter XVIII). We defined the fraction of each specific ICD-10 garbage code to be redistributed to "target" diabetes codes, separately by sex, age and region, adapted from the list of garbage codes of GBD-2010, adding deaths redistributed from these garbage codes to those directly reported as due to diabetes [15,16]. Second, we redistributed codes from ill-defined causes of deaths (Chapter XVIII of ICD-10). We did this separately by sex, five-year age group and region, in the proportions similar to those found for diabetes during routine investigations carried out by state and local health departments in the country since 2006 [17].These proportions, here called IDC redistribution coefficients (RD-IDC), were defined for each year, region and sex. We used data from the same year's investigation for redistribution in the years between 2006 and 2011, and the mean RD-IDCs over 2006-2011 for the period 1996-2005, which preceded this investigation of ill-defined causes. 
Third, we corrected the numbers produced in the second step for underreporting of deaths for the years 1996 to 2011, by applying the inverse of the ratio of reported/ estimated deaths by the Ministry of Health [18]. This step produced the corrected number of total deaths in each sex and five-year age group, in each geographical region. We next produced mortality rates by applying population denominators obtained from the 1991, 2000 and 2010 Brazilian censuses from IBGE to these numbers of deaths. Intercensus population estimates by age and sex were obtained by logarithmic interpolation of the census population. We then performed direct standardization to the 2010 Brazilian population to produce yearly age standardized death rates (/100,000 population) overall and by region and sex. We investigated time trends in these standardized mortality rates, from 1996 to 2011, with a linear regression model which assumes a constant (linear) trend over the series, to test the hypothesis of a positive or a negative trend (slope different from zero). To adjust for the presence of first order autocorrelation, the residuals of the regression were modeled as a first order autoregressive process [19,20]. It was then possible to test if the mortality series presents a significant increasing or decreasing trend. Finally, to explore nuances in trends, we used a state space model [21], which does not assume trends to be fixed but rather variable over time. The Ethics in Research Committee of the Hospital de Clínicas de Porto Alegre (No. 100056) has approved the use of information from surveillance databases for the investigation of chronic diseases by the Collaborative Center for the Surveillance of Diabetes, Cardiovascular and Other Chronic Diseases of the Federal University of Rio Grande do Sul. Given that databases employed had no personal identifiers, no patient consent was necessary. Results A total of 294,203 deaths due to diabetes were officially reported between 1996 and 2011 (Table 1). The type of diabetes was unspecified for the vast majority (91.2 %) of deaths. Acute complications (ICD code final digit .0 or .1) were responsible for 10.6 % of reported deaths, renal complications 19.1 %, peripheral circulatory complications 6.1 %, other complications 12.4 %, while deaths "without complications" corresponded to 51.9 % of the total deaths. We present on Table 2 the proportions of each group of "garbage codes" redistributed to diabetes. Non-specified causes of renal failure were the most likely to be so redistributed: 57.3 % of deaths due to this cause (2979 deaths in 1996 and 2911 in 2011) were redistributed to diabetes. Table 3 demonstrates the fraction of ill-defined (chapter XVIII-ICD10) codes that were then redistributed to diabetes. In all, 55,195 deaths officially coded as illdefined were redistributed to diabetes. The percent that was redistributed varied considerably by region and sex, being less in the Center-West region and women having approximately double the percent redistributed of men. As a result of correction for underreporting of deaths, a total of 41,472 deaths were then added, the net addition being most pronounced for the North and the Northeast regions. Table 4 shows the effect of all these corrections on the total number of deaths and on the age-standardized mortality rates for 1996 and 2011, the first and last years analyzed. In 1996, the corrections resulted in 11,616 additional deaths due to diabetes, an increment of 85 % in total deaths. 
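To make the correction procedure concrete, the following minimal Python sketch applies the three correction steps and the direct standardization described in the Methods to a single sex, region and calendar year. Every count, redistribution fraction, RD-IDC coefficient, completeness ratio and population figure below is an invented placeholder, not a value from the study.

```python
# Illustrative sketch of the three correction steps and direct age
# standardization for one sex, region and year; all numbers are placeholders.

# Reported diabetes deaths (underlying cause E10-E14) by 5-year age group.
reported = {"30-34": 120, "35-39": 180, "40-44": 260, "45-49": 400,
            "50-54": 610, "55-59": 890, "60-64": 1300, "65-69": 1800}

# Step 1: deaths reassigned from "garbage codes" (e.g. a fixed fraction of
# non-specified renal failure) to diabetes, already resolved per age group.
garbage_reassigned = {"30-34": 5, "35-39": 9, "40-44": 14, "45-49": 22,
                      "50-54": 35, "55-59": 52, "60-64": 80, "65-69": 115}

# Step 2: ill-defined causes (ICD-10 Chapter XVIII) redistributed with an
# RD-IDC coefficient: the share of investigated ill-defined deaths found to
# be due to diabetes in this stratum.
ill_defined = {"30-34": 60, "35-39": 75, "40-44": 95, "45-49": 130,
               "50-54": 170, "55-59": 220, "60-64": 300, "65-69": 420}
rd_idc = 0.035

# Step 3: correction for underreporting, dividing by the estimated
# completeness of the SIM (reported/estimated deaths) for this stratum.
completeness = 0.85

corrected = {age: (n + garbage_reassigned[age] + rd_idc * ill_defined[age]) / completeness
             for age, n in reported.items()}

# Direct standardization: age-specific rates weighted by a reference
# (e.g. 2010 census) age structure.
population = {"30-34": 950_000, "35-39": 900_000, "40-44": 850_000, "45-49": 780_000,
              "50-54": 690_000, "55-59": 560_000, "60-64": 430_000, "65-69": 320_000}
std_weights = {"30-34": 0.20, "35-39": 0.18, "40-44": 0.16, "45-49": 0.14,
               "50-54": 0.12, "55-59": 0.09, "60-64": 0.07, "65-69": 0.04}

rates = {age: 1e5 * corrected[age] / population[age] for age in corrected}
asr = sum(std_weights[age] * rates[age] for age in rates)
print(f"Age-standardized diabetes mortality: {asr:.1f} per 100,000")
```

In the actual analysis the garbage-code fractions and RD-IDC coefficients vary by code, sex, age group, region and year, and completeness is taken from the Ministry of Health estimates.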
With improvements in mortality reporting, fewer deaths were redistributed by 2011 (7,334), representing an increment of 30.7 % in total deaths due to diabetes. Correction exerted a greater effect on rates in the North and Northeast, regions with the highest diabetes mortality rates in 2011 for both men and women. Figure 1 shows the trend in mortality due to diabetes for Brazil, from 1996 to 2011. Comparing the uncorrected (dashed lines) with the corrected (solid lines) rates, we observe that a picture of low and, if anything, rising rates is transformed into one of higher rates in decline, especially for women. The decline is approximately twice as large in women (1.01 deaths/100,000/year, 2.2 %/year, P < 0.001; red line) as in men (on average 0.49 deaths/100,000/year, 1.1 %/year, P < 0.001; blue line), leading to a 30.5 % drop for women, and a 14.3 % drop for men, over the 15-year period. Figure 2 shows this decline separately in each of Brazil's five regions. Declines were largest in both relative and absolute terms in the Northeast and Southeast. In the Northeast, rates declined 1.9 %/year and 38.6 % overall for women and 0.74 %/year and 17.7 % overall for men. Figure 3 shows the evolution of the trend over time according to the state-space model, which estimates the change in rate in each year compared to the previous year. The trend over time is one of annual declines of varying size since the beginning of the series. In the period from 1998 to 2006, the decrease from 1 year to the next was more accentuated for women. Since 2007, little change in rate has been observed for both men and women. Discussion Our findings demonstrate a decline in standardized diabetes mortality (ICD-10 codes E10-E14) of approximately 1 %/year for men and 2.2 %/year for women from 1996 to 2011 in Brazil. This decrease in mortality due to diabetes became apparent only after corrections for ill-defined causes of death and underregistration. The rate of decline first accelerated and then decelerated over the period. The trend was observed in all regions, and attenuated regional inequalities in diabetes mortality in relative and absolute terms. These findings highlight the need to incorporate the progressive improvements in the mortality system in Brazil over the last two decades when describing trends during the period. At the end of our series, in 2011, correction for remaining mortality system deficiencies resulted in an increment of approximately 30 % to deaths reported as being due to diabetes. In 1996, the increment was approximately 85 %. Without these corrections to account for improvements in the quality of mortality reporting, the decline was hidden by the increasing coverage and the increasingly correct attribution of diabetes as the underlying cause in the mortality registry. The inclusion of these corrections, as seen in Fig. 1, changed the interpretation of trends over the period from one of stability in women and a slight increase in men, to declines in age-adjusted mortality in both sexes. Earlier studies of diabetes mortality in Brazil, focused on state capitals to minimize the limitations of the mortality information system for Brazil as a whole and covering the initial years of our series, have found varying declines in mortality in some capitals in the Northeast and Southeast [8,9]. Analyses of mortality trends in Brazil taking into account the variability across space and time of the insufficiencies of the system have received great attention over the past 5 years.
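As a sketch of the trend test described in the Methods (a linear trend with first-order autoregressive errors), the code below fits a slope to a yearly series of age-standardized rates via a simple Cochrane-Orcutt iteration; this is one common way to fit such a model and is not necessarily the exact estimation used in the study, and the rate series itself is invented for illustration.

```python
import numpy as np

# Yearly age-standardized rates (per 100,000); an invented series for illustration.
years = np.arange(1996, 2012)
rates = np.array([46.0, 45.2, 44.9, 43.8, 42.6, 41.5, 40.3, 39.4,
                  38.2, 37.1, 36.4, 36.0, 35.8, 35.7, 35.6, 35.5])

def ols(x, y):
    """Ordinary least squares of y on [1, x]; returns coefficients, residuals, design."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta, X

x = years - years.mean()                 # centred time index
beta, resid, _ = ols(x, rates)
for _ in range(25):                      # Cochrane-Orcutt iterations
    rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
    y_star = rates[1:] - rho * rates[:-1]          # quasi-differenced response
    x_star = x[1:] - rho * x[:-1]
    beta_star, resid_star, X_star = ols(x_star, y_star)
    beta = np.array([beta_star[0] / (1.0 - rho), beta_star[1]])
    resid = rates - (beta[0] + beta[1] * x)        # residuals on the original scale

# Slope (annual change in the rate) and its AR(1)-adjusted standard error.
sigma2 = np.sum(resid_star ** 2) / (len(y_star) - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X_star.T @ X_star)[1, 1])
print(f"trend = {beta[1]:.2f}/100,000 per year, rho = {rho:.2f}, t = {beta[1] / se:.1f}")
```

A negative slope with a large t statistic corresponds to the significant downward trend reported above; the state-space model then relaxes the assumption of a single fixed slope.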
Applying corrections for underreporting and ill-defined causes of death to mortality due to the four main non-communicable diseases in Brazil revealed a sharp decrease for cardiovascular and chronic respiratory diseases and a modest decrease for diabetes and cancer [12]. Progressive refinements in the methods for these corrections also revealed modest declines in standardized mortality due to diabetes: 0.89 %/year from 2000 to 2009 [22]; 1.7 %/year from 2000 to 2011 [13]; and 1.64 %/year in women and 0.40 %/year in men from 2000 to 2011 [23]. Our findings, more detailed and focused on diabetes, were based on more updated correction algorithms, including redistribution of deaths initially coded in garbage codes. The trend we found, producing a U-shaped curve of rate change, particularly among women, with a deceleration from 2005 onward, not previously reported, deserves reflection. These rates, and their change over time, summarize the effects of competing forces within a very complex epidemiologic picture. Over this period, the prevalence of diabetes has increased considerably. Data are sparse with respect to the prevalence of diabetes in the 1990s in Brazil, almost always being based on self-report. The self-reported prevalence, estimated from a national survey in 1998 of those 20 or older, was 3.3 % [24]. It increased almost 100 %, to 6.2 % in the Brazilian National Health Survey (Pesquisa Nacional de Saúde, or PNS) of those 18 or older, conducted in 2013 [5]. The increase in prevalence may have resulted from greater diagnosis as well as from greater incidence of diabetes. Thus, a considerably larger fraction of the population was at risk to die of diabetes and to have this reported as a cause of death in more recent years. As such, improving mortality among those with diabetes competes with the growing prevalence of diabetes to define the mortality trend. Further declines in diabetes mortality may be difficult to achieve if the growing prevalence of diabetes persists. The fact that the decline was greater in women may, in part, reflect that the prevalence of diabetes is not increasing as quickly in women during this period. Comparison of results from nationally representative household surveys demonstrates that the annual rate of increase in the prevalence of self-reported diabetes was 9 % in men while only 6.3 % in women from 1998 to 2013 [5,25]. The regional differences we observed in trends, particularly in the Northeast, where absolute declines in women were double the national average and in men approximately 60 % greater, suggest that actions by the SUS, the Brazilian National Health System, to reduce inequities are being effective in terms of the care of those with diabetes. It is worth noting that Alves et al., who investigated trends by state instead of region, found heterogeneity across states within the same region [23]. Possible reasons for the declines in mortality due to diabetes should be considered. Over this period the SUS expanded its coverage greatly, particularly in terms of primary care. In the 1990s a National Diabetes Plan worked especially to guarantee greater access to insulin. In 2001, the national Plan to Reorganize Care of Hypertension and Diabetes Mellitus was instituted, focused on redirecting the care of diabetes from the hospital to primary care [26]. The National Program of Pharmaceutical Provision for Hypertension and Diabetes was created in 2002. This Program, along with subsequent laws and regulations, has resulted in a progressively larger distribution of medicines and medical supplies, free of charge, to those with diabetes [27]. Mortality from the acute causes of diabetes has fallen 71 % from the beginning of the 1990s to 2010 [28]. As these acute causes of death are those most sensitive to access and availability of insulin and other medications, they most likely result from the above-mentioned actions as well as the increasing organization of emergency care facilities, transport, and hotline support systems [29]. Undoubtedly, the increased standard of living, the rise of the Brazilian middle class, decreasing poverty and efforts to eradicate severe poverty such as the cash transfer program Bolsa Família may have also played a difficult-to-estimate but important role in the decline [30]. Unfortunately, given that diabetes type was "unspecified" for 91 % of deaths, the data do not permit the description of declines for specific types of diabetes. Additionally, and also in part due to the above-mentioned reasons, mortality from chronic diseases in general and cardiovascular diseases in particular, the major causes of diabetes deaths, has fallen considerably [12]. Rates of smoking, a major risk factor for complications of diabetes, also fell considerably over this period. Improved care of diabetes has been postulated to explain documented increased survival among those with diabetes in various high-income countries, including Sweden, the UK and Taiwan [3]. In the US, findings from the National Health Interview Survey (1997-1998, 1999-2000, 2001-2002, and 2003-2004) for adults aged 18 years and older show that among diabetic adults, the CVD death rate declined by 40 % (95 % CI 23-54) and all-cause mortality declined by 23 % (10-35) between the earliest and latest samples. The excess CVD mortality rate associated with diabetes (i.e., excessive when compared with rates of nondiabetic adults) decreased by 60 % (from 5.8 to 2.3 CVD deaths per 1000) while the excess all-cause mortality rate declined by 44 % (from 10.8 to 6.1 deaths per 1000) [31]. Thus, our findings of decreasing mortality due to diabetes are likely to result from improved care of diabetes in Brazil over the last two decades. Yet, the deceleration observed more recently may indicate that further declines will only occur with further strengthening in the organization of care of those with diabetes. Moreover, primary prevention efforts, including population-oriented public health actions such as food and agricultural policies aimed at making healthy choices easier, are much needed to stem the increase in diabetes incidence. Strengths and limitations of our investigation merit comment. Among the strengths is the use of methodologies to correct for deficiencies in the Brazilian system of death registry which are more in consonance with the GBD project, most specifically the incorporation of a recent GBD approach to garbage code distribution. The GBD also is evolving in its methodologies, and future changes in the redistribution of garbage codes are anticipated.
Table 4 Effect of correction for underreporting of deaths and ill-defined causes of death on the number of deaths and age-adjusted mortality rates (/100,000)* due to diabetes in adults, by region and sex, SIM, Brazil, 1996
A limitation of this study is the fact that deaths coded in E10-E14, when considered within the broader framework of the multicausality of disease, are only part of the overall mortality that can be logically attributed to diabetes. Many deaths from complications which the epidemiologic literature suggests can be attributed to diabetes-heart disease, stroke, renal failure, and even cancer [32] and some infectious diseases such as tuberculosis- [33] will never formally be indicated in mortality registry systems as due to diabetes. Studies suggest that mortality attributable to diabetes could be from 50 % to as much as three times greater than that calculated using diabetes as the underlying cause of death [34][35][36][37][38]. Since we have only analyzed the underlying cause of death, without considering the remaining causes listed in part I of the death certificate, frequent direct causes such as myocardial infarction, stroke or pneumonia were not included in this report. Further, due to the process of coding, even if such direct causes of deaths were present on the death certificate, the diabetes ICD code chosen would most likely end in .9 ("without complications"), as no specific additional digit is available for the coding many of the important specific complications. Nevertheless, we believe that the ICD, inadequate as it is, still provides the basic information necessary to describe trends in diabetes mortality, the objective of our manuscript. Another limitation is related to the methods used to assess completeness of death registration using indirect demographic methods with some controversial assumptions like absence of migration, and constancy of incompleteness across all ages. These assumptions could potentially affect measures of completeness and the estimates of mortality rates [39]. In conclusion, our data suggest that standardized premature mortality due to diabetes, based on death certificate coding, has declined in Brazil over the 15 years from 1996 to 2011. The rate appears to have stabilized in the later years of this series, suggesting that the effect of the increasing prevalence of diabetes now threatens to reverse this trend by outweighing the gains made through better patient care. These data suggest that for the decline to continue, solutions must be found not only to improve diabetes care but also to prevent the current epidemic increase in the incidence of diabetes.
v3-fos-license
2024-04-18T06:17:42.635Z
2024-04-16T00:00:00.000
269186434
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsbiomaterials.4c00070", "pdf_hash": "3f2fdf19061d3c5687ae0782b3f28b0df9c881f1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41163", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "sha1": "e1e872555c7d4c1f48c1aa1ee50c9267e6c794d6", "year": 2024 }
pes2o/s2orc
Nanomechanical and Microstructural Characterization of Biocompatible Ti3Au Thin Films Grown on Glass and Ti6Al4V Substrates Ti–Au intermetallic-based material systems are being extensively studied as next-generation thin film coatings to extend the lifetime of implant devices. These coatings are being developed for application to the articulating surfaces of total joint implants and, therefore, must have excellent biocompatibility combined with superior mechanical hardness and wear resistance. However, these key characteristics of Ti–Au coatings are heavily dependent upon factors such as the surface properties and temperature of the underlying substrate during thin film deposition. In this work, Ti3Au thin films were deposited by magnetron sputtering on both glass and Ti6Al4V substrates at an ambient and elevated substrate temperature of 275 °C. These films were studied for their mechanical properties by the nanoindentation technique in both variable load and fixed load mode using a Berkovich tip. XRD patterns and cross-sectional SEM images detail the microstructure, while AFM images present the surface morphologies of these Ti3Au thin films. The biocompatibility potential of the films is assessed by cytotoxicity tests in L929 mouse fibroblast cells using Alamar blue assay, while leached ion concentrations in the film extracts are quantified using ICPOEMS. The standard deviation for hardness of films deposited on glass substrates is ∼4 times lower than that on Ti6Al4V substrates and is correlated with a corresponding increase in surface roughness from 2 nm for glass to 40 nm for Ti6Al4V substrates. Elevating substrate temperature leads to an increase in film hardness from 5.1 to 8.9 GPa and is related to the development of a superhard β phase of the Ti3Au intermetallic. The standard deviation of this peak mechanical hardness value is reduced by ∼3 times when measured in fixed load mode compared to the variable load mode due to the effect of nanoindentation tip penetration depth. All tested Ti–Au thin films also exhibit excellent biocompatibility against L929 fibroblast cells, as viability levels are above 95% and leached Ti, Al, V, and Au ion concentrations are below 0.1 ppm. Overall, this work demonstrates a novel Ti3Au thin film system with a unique combination of high hardness and excellent biocompatibility with potential to be developed into a new wear-resistant coating to extend the lifetime of articulating total joint implants. INTRODUCTION Titanium (Ti) and its alloys like Ti6Al4V are widely used in total joint implant applications, such as knee, hip, shoulder, and elbow joints, because of their excellent biocompatibility, corrosion resistance, strength to weight ratio, osseointegration, and low ion formation in aqueous media. 1−4 However, these alloys exhibit poor wear performance when subjected to repetitive articulating motion in load bearing joints, leading to the unwanted release of aluminum and vanadium particles, which are highly toxic and have been linked to numerous adverse health effects. 5,6 Increased failure of implants made from these alloys has led to an increase in reconstruction surgery at heavy financial cost. 5−8 Therefore, a suitable coating system, which can enhance the mechanical performance while maintaining the excellent biocompatibility of Ti, is being extensively researched. 9
Recently, Ti−Au intermetallics have been found to exhibit excellent biocompatibility combined with extremely high mechanical hardness with the emergence of the superhard β-Ti3Au intermetallic phase. 10 Svanidze et al. found that bulk samples of Ti−Au alloy formed by an arc melting process exhibit a monotonic increase in mechanical hardness with increasing gold (Au) concentration, reaching a peak value of 800 HV (∼7.85 GPa), when approaching a stoichiometric ratio of Ti3Au. 11 Au has a dense valence electron arrangement compared to other biocompatible elements, which leads to very high mass density, and when alloyed with Ti, this causes an increased bond strength, which in turn leads to higher hardness. 11 The Ti3Au intermetallic exists in two distinctive phases, denoted as alpha (α) and beta (β). 12 The α phase has a smaller unit cell (lattice parameter ∼4.1 Å) with Ti atoms arranged in 12-fold coordination, whereas the β phase has a larger unit cell (∼5.1 Å) with Ti atoms existing in 14-fold coordination, making it much denser. The higher density packing of the β phase presents a higher energy barrier for slipping of dislocations, thereby resulting in higher hardness. 10−14 These factors lead to enhanced hardness and a low coefficient of friction for the β-Ti3Au intermetallic, making it an ideal candidate as a wear-resistant coating over implants. Thin film depositions of Ti−Au and Ti3Au intermetallics have mostly been carried out on silicon (Si) substrates. Silicon was preferred as a substrate because the surface is extremely smooth and the properties of Si are well known, which allows the substrate background to be easily removed, for example, when analyzing thin films using X-ray characterization techniques or performing an AFM scan. 12,14 Recently, Karimi and Cattin achieved an elastic modulus of 201 GPa and a mechanical hardness of 12.5 GPa for Ti3Au thin films deposited on Si substrates. 12 However, to gain a realistic understanding of Ti3Au intermetallic coatings, it is critical that their performance is studied on real-world substrate materials like Ti6Al4V, which is used to manufacture total knee and hip prostheses. 15
In our previous preliminary work, 16 we demonstrated that Ti−Au thin films with superior mechanical performance and excellent biocompatibility can be achieved by carefully controlling the Ti and Au atomic ratio and the thermal activation process. In the current work, we study the effect of the underlying substrate type, its surface conditions, and its temperature on the mechanical performance and biocompatibility of Ti 3 Au thin film coatings sputter deposited on glass and Ti 6 Al 4 V substrates. Thin film samples on glass are used for accurate analysis of the elemental composition and microstructure pattern, while those deposited on Ti 6 Al 4 V substrates under the same conditions allow us to understand the potential of Ti 3 Au as a mechanically hard, biocompatible thin film coating. This work also explores the effect of the nanoindentation measurement technique employed to measure the mechanical properties of Ti 3 Au thin films, helping to isolate the substrate and surface size effects. Both variable load and fixed load protocols were applied to understand the measured mechanical properties. Therefore, this paper strives to fill the gap in understanding the effect of substrate type, temperature, and measurement technique on the combination of mechanical and biocompatibility properties of Ti 3 Au intermetallic thin film coatings with the potential to extend the lifetime of the articulating surfaces of total joint implants.

MATERIALS AND METHODS

2.1. Thin Film Deposition. Sputter deposition of Ti 3 Au thin films was performed by using a Moorefield NanoPVD deposition suite. The chamber was loaded with 2-inch diameter circular targets of Ti and Au of 99.999% purity supplied by Pi-Kem Limited, UK, with the Ti target connected to a DC source and the Au target to an RF source. Laboratory-grade soda lime glass slides and commercially procured Ti 6 Al 4 V strips measuring 76 × 26 mm and a thickness of 1 mm were used as substrates. The Ti 6 Al 4 V substrates were rigorously polished using SiC paper with grit values of 240, 320, 600, 1200, and 4000 to achieve a mirror-like surface finish with roughness values better than 40 nm, when measured using an Alicona Infinity Focus surface measurement system. The polished Ti 6 Al 4 V substrates were cut into 4 rectangular coupons, each measuring 25 mm × 19 mm, before being thoroughly cleaned, together with the glass substrates, using a DECON 90 surface cleaner in a 5:1 ratio with water, followed by an ultrasonic bath in DI water, IPA cleaning and acetone wiping, a second ultrasonic bath in DI water, and finally drying with a jet of nitrogen. The cleaned substrates were loaded onto the deposition plant substrate holder at a target to substrate distance of 100 mm and rotated at a constant speed of 5 rpm, and then the chamber was evacuated to a base pressure better than 5 × 10 −4 Pa. For the sputtering runs, a constant working pressure of 0.6 Pa was achieved by introducing 10 sccm of Ar gas into the chamber, and the DC to RF power ratio required to achieve a 3:1 ratio of Ti:Au was established. It is known that the β phase of Ti 3 Au crystallizes better at higher substrate temperature, 12 and therefore two sets of samples were deposited: one with the substrate temperature set to ambient (∼25 °C) and the second one with the substrate heater set to achieve 275 °C on the substrate surface.
2.2. Structural, Morphological, and Mechanical Characterizations. The crystal structure of the deposited Ti 3 Au films was characterized by the X-ray diffraction technique using a Rigaku Smartlab II diffractometer, employing Cu Kα radiation in a parallel beam configuration. The reflection patterns were collected between 2θ values of 10 and 80° with a step size of 0.01° and a scan rate of 4°/min. The peaks were indexed using the supplied database and cross-referenced with the files from the ICSD database. Surface and cross-sectional features of the deposited thin films were captured using a MIRA III scanning electron microscope (SEM) from TESCAN Systems, operating at 5 kV and a close working distance of 5 mm from the tip of the e-beam lens. An X-Max 150 energy dispersive X-ray (EDX) spectroscopy detector from Oxford Instruments, in-built within the SEM, was used to analyze the elemental composition of the deposited thin films. Surface scans were performed on a 3 μm 2 area of the thin films using a Nanoveeco Dimension 3000 atomic force microscope (AFM), and the scans were analyzed using Gwyddion software to measure the surface roughness and feature sizes. Nanoindentations were performed by a Hysitron TI900 triboindenter nanomechanical testing system, employing a 3-sided Berkovich diamond tip. Two sets of indentations were performed: one with a variable load and the other with a constant load. For the first set, the load was varied from 2000 to 500 μN in steps of 100 μN. For the second set, the load was kept at a constant value to achieve a total indentation depth of 10% of the thin film thickness under test. For each sample, 16 indents were made in a 4 × 4 pattern, with a 10 μm gap between each indent and a 10−10−10 s load−dwell−unload segment time. Following Oliver and Pharr's methodology, 17,18 the force−displacement curve was plotted and the unloading segment was analyzed to extract the mechanical hardness and elastic modulus values of the thin films. The average value of the 16 indents for each sample is presented with the standard deviation.
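As a concrete illustration of the Oliver–Pharr reduction described above, the short sketch below converts one unloading measurement into hardness and reduced modulus values. The peak load, peak depth, and contact stiffness used here are hypothetical, and an ideal Berkovich area function is assumed in place of a calibrated tip-area function, so this is only a schematic of the analysis, not the instrument's own routine.

```python
# Schematic Oliver-Pharr reduction of a single nanoindentation unloading
# measurement. All input numbers are hypothetical; a calibrated tip-area
# function would normally replace the ideal Berkovich area used here.
import math

def oliver_pharr(p_max_uN, h_max_nm, stiffness_uN_per_nm, epsilon=0.75, beta=1.05):
    """Return (hardness, reduced modulus) in GPa for a Berkovich tip."""
    # Contact depth: h_c = h_max - epsilon * P_max / S
    h_c = h_max_nm - epsilon * p_max_uN / stiffness_uN_per_nm
    # Ideal Berkovich projected contact area, A = 24.5 * h_c^2 (nm^2)
    area = 24.5 * h_c ** 2
    # 1 uN/nm^2 corresponds to 1000 GPa, hence the factor of 1e3 below
    hardness = (p_max_uN / area) * 1e3
    reduced_modulus = (math.sqrt(math.pi) / (2.0 * beta)) * (stiffness_uN_per_nm / math.sqrt(area)) * 1e3
    return hardness, reduced_modulus

# Hypothetical indent: 800 uN peak load, 78 nm depth at peak, unloading stiffness 45 uN/nm
H, Er = oliver_pharr(800.0, 78.0, 45.0)
print(round(H, 1), "GPa hardness,", round(Er, 1), "GPa reduced modulus")
```

With these invented inputs the sketch returns values of roughly 8 GPa and 120 GPa, i.e. the same order of magnitude as the film results discussed later, which is the only point of the example.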
Cytotoxicity and Biocompatibility Analyses. The biocompatibility of the deposited Ti 3 Au thin films was analyzed in accordance with the ISO 10993 standard by measuring their in vitro cytotoxicity and ion leaching potential. L929 cells (murine fibroblasts) were acquired from Deutsche Sammlung von Microorganismen and Zellkulturen (DSMZ − Braunschweig, Germany) and cultured in Dulbecco's Modified Eagle Medium (DMEM), high glucose, supplemented with 10% fetal bovine serum (FBS), 2 mM L-glutamine, 100 U/mL penicillin, and 100 μg/mL streptomycin. L929 cells were cultured under humidified conditions at 37 °C and 5% CO 2 , grown as monolayer cultures. When confluency reached 80−90%, cells were subcultured for a maximum of 20−25 passages, before new vials were used. Cell culture media and reagents [FBS, antibiotics, trypsin, L-glutamine, phosphate buffer saline (PBS)] were procured from Biosera (Kansas City, MO, USA). Resazurin sodium salt was obtained from Fluorochem (Derbyshire, UK), and cell culture plastic ware was supplied by Corning (NY, USA). Ti 3 Au coating extracts were prepared by immersing the thin film test coupons into 6-well plates containing 6 mL of DMEM culture media for 72 h in a humidified incubator at 37 °C and 5% CO 2 . A second set of extracts was created by incubating coupons for an additional 96 h (total 168 h), before the leached culture media were used for cytotoxicity tests against L929 cells. Similarly, extracts were also prepared from a blank polished Ti 6 Al 4 V substrate, as well as from a polished copper (Cu) substrate of similar size, used as negative and positive cytotoxicity controls, respectively. Light agitation at the beginning and the end of the leaching periods (72 and 168 h) was performed in the six-well plates containing the Ti 3 Au films, in order to efficiently obtain ion leaching, before their use in cytotoxicity experiments.

The cytotoxicity profile of the Ti 3 Au films on L929 mouse fibroblast cells was tested by using the Alamar blue assay. Specifically, L929 cells were seeded at a density of 2000 cells/well in 100 μL/well into 96-well plates and left overnight to attach. The following day, DMEM cell culture media were removed, and the cells were incubated with culture media containing extracts from the Ti 3 Au films, following either 72 or 168 h leaching periods, as previously described, for a total of 72 h. Complete DMEM media (Control), as well as leached media from the blank Ti 6 Al 4 V substrate, were used as negative controls, while Cu substrate leached extracts and 10% DMSO were used as positive control samples. At the end of the 72 h exposures, 10 μL of resazurin (1 mg/mL final concentration) was added to each well, and cells were incubated in a humidified incubator for 4 h at 37 °C and 5% CO 2 . Finally, using an absorbance plate reader (Labtech LT4500, UK), absorbance measurements were performed at 570 and 590 nm (reference wavelength); optical density was measured as the difference between the intensity measured at 570 versus 590 nm, while cell viability levels were calculated and expressed as a percentage (%) of untreated (BLANK, control) cells.
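The viability numbers reported later follow directly from this optical-density arithmetic. The minimal sketch below spells it out; the absorbance readings are invented for illustration, and only the A570 − A590 difference and the normalisation to the untreated control are taken from the protocol above.

```python
# Optical density and viability arithmetic of the Alamar blue protocol above.
# The absorbance readings are invented purely for illustration.

def optical_density(a570, a590):
    """Reported optical density: absorbance at 570 nm minus the 590 nm reference."""
    return a570 - a590

def viability_percent(sample_od, control_od):
    """Viability expressed as a percentage of the untreated control."""
    return 100.0 * sample_od / control_od

control_od = optical_density(0.92, 0.35)   # untreated L929 cells
film_od = optical_density(0.88, 0.36)      # hypothetical Ti3Au film extract
cu_od = optical_density(0.30, 0.38)        # cytotoxic Cu control (negative OD)

print(round(viability_percent(film_od, control_od), 1))  # ~91%: above the 70% noncytotoxic threshold
print(round(viability_percent(cu_od, control_od), 1))    # negative: no viable-cell signal
```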
The remaining quantities of extracts prepared from the Ti 3 Au films and Cu positive control substrates were tested for leached ion concentrations using a PerkinElmer Optima 8000 inductively coupled plasma optical emission mass spectrometer (ICP-OEMS). Standards were prepared for the range of 1−10 ppm to identify dissolved concentrations of Ti, Al, V, Cu, and Au ions leaching out from the underlying Ti 6 Al 4 V substrate, the Ti 3 Au thin films, or the Cu positive control.

Chemical and Structural Results.

The results from the elemental composition analysis and cross-sectional film thickness measurements are presented in Table 1. Thin films deposited at room temperature (S RT ) and at an elevated substrate temperature of 275 °C (S 275 °C) both exhibit a Ti:Au composition very close to the required 75:25 at% ratio. This composition is shown to be most ideal for the development of the β phase of the Ti 3 Au intermetallic. 12 The film thicknesses, measured from the cross-sectional images in Figure 3, show that the thin film deposited at room temperature registers a thickness of 533 nm compared to 676 nm for the film deposited at a substrate temperature of 275 °C.

The microstructure of the deposited Ti 3 Au films was studied by X-ray diffraction, and the resulting reflection patterns are presented in Figure 1. Figure 1a presents the diffraction patterns for thin films deposited on glass substrates with and without substrate heating, compared against standard peak positions for the β phase of the Ti 3 Au intermetallic (dashed blue line) from the ICSD (collection no. 58605). The thin film sample deposited without substrate heating, S RT (black line), is seen to represent a quasi-crystalline structure with a very broad peak spanning from 36 to 42°, centered at 37.8°. It is known from the ICSD (collection no. 58604) that the (111) plane of α-Ti 3 Au has its peak positioned at 37.5°, so we can assume that some strained Ti 3 Au intermetallic phase begins to emerge for this sample, but the energy associated with adatoms in the absence of any thermal process is not sufficient to form well-crystallized peaks. 22 However, for sample S 275 °C, deposited at an elevated substrate temperature of 275 °C, crystallization improves drastically, with very sharp peaks. The peaks located at 35.27°, 39.6°, and 74.57° align very well with the (200), (210), and (400) planes of β-Ti 3 Au, whereas the peak at 37.5° suggests the coexistence of the (111) plane of α-Ti 3 Au. The elevated substrate temperature increases the energy associated with adatoms arriving on the substrate surface and helps them to diffuse effectively, leading to better crystallization of the growing thin film. 23
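As a rough consistency check on the peak assignments quoted above, Bragg's law for a cubic cell, a = λ√(h² + k² + l²)/(2 sin θ), maps the reported 2θ positions back onto lattice parameters close to the ∼5.1 Å β cell and ∼4.1 Å α cell mentioned in the introduction. The sketch below assumes Cu Kα radiation (λ ≈ 1.5406 Å) and treats both phases as cubic; it is a back-of-envelope check, not part of the original analysis.

```python
# Back-of-envelope check of the XRD peak assignments using Bragg's law for a
# cubic cell: a = lambda * sqrt(h^2 + k^2 + l^2) / (2 * sin(theta)).
# Peak positions are taken from the text; the Cu K-alpha wavelength is assumed.
import math

WAVELENGTH_ANGSTROM = 1.5406  # Cu K-alpha (assumed)

def cubic_lattice_parameter(two_theta_deg, hkl):
    h, k, l = hkl
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_ANGSTROM * math.sqrt(h * h + k * k + l * l) / (2.0 * math.sin(theta))

print(round(cubic_lattice_parameter(35.27, (2, 0, 0)), 2))  # beta-Ti3Au (200): ~5.08 A
print(round(cubic_lattice_parameter(39.60, (2, 1, 0)), 2))  # beta-Ti3Au (210): ~5.09 A
print(round(cubic_lattice_parameter(37.50, (1, 1, 1)), 2))  # alpha-Ti3Au (111): ~4.15 A
```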
Figure 1b presents the X-ray characterization of Ti 3 Au thin films deposited on Ti 6 Al 4 V substrates, together with the background reflections expected from the underlying blank Ti 6 Al 4 V substrate (green line) and the expected peak positions for the β-Ti 3 Au intermetallic phase. By comparing the reflections from the bare Ti 6 Al 4 V substrate and the thin film grown without substrate heating, S RT , it can be clearly seen that this sample does not register any peaks, other than broadening of the peak at 38.5°, which originates from the Ti 6 Al 4 V substrate. This peak broadening again suggests that the thin films deposited on Ti 6 Al 4 V substrates without substrate heating also exhibit a quasi-crystalline nature. Similar to the results on glass substrates, the thin film samples deposited at an elevated substrate temperature of 275 °C (S 275 °C) exhibit very clear peak positions belonging to the α and β phases of the Ti 3 Au intermetallic, in addition to the reflection peak originating from the underlying Ti 6 Al 4 V substrate, showing that improved crystallization occurs with higher adatom energies.

Morphological Results.

Surface images of Ti 3 Au films deposited on glass substrates, with and without substrate heating, are presented in Figure 2a,b, respectively, along with their high magnification versions in the insets. The surface of the thin film sample deposited without additional substrate heating, S RT , has a smooth glass-like texture with a very fine random structure. This type of surface texture is typical of poorly crystallized microstructures and is in agreement with the very broad peak seen in the XRD pattern for this film (Figure 1a). 22 The presence of very fine and randomly distributed structures can be related to the broad XRD peak centered at 37.8°, and together these results support the emergence of the α phase of Ti 3 Au when thin films are deposited in the correct stoichiometry, even without substrate heating. However, Figure 2b shows that the thin film deposited on glass at elevated substrate temperature, S 275 °C, has well-organized, oval-shaped grains distributed uniformly throughout most of the film surface [better visualized in the higher magnification image in the Figure 2b inset]. This pattern of oval grains is broken by intermediate patches of glass-like texture, as seen before for sample S RT . The presence of oval-shaped grains on the surface of S 275 °C can be correlated with the emergence of the sharp XRD peaks representing β-Ti 3 Au seen for this film (Figure 1a), while the patches of featureless regions can be assigned to the sparsely distributed α phase of Ti 3 Au, which also appears in the XRD pattern as a single peak at 37.5°.
Figure 2c,d shows the surfaces of the Ti 3 Au film samples deposited on Ti 6 Al 4 V substrates with and without substrate temperature, respectively.A key difference in these images compared to their glass counterparts is the presence of large polishing grooves (indicated by red arrows in Figure 2c,d) across the surface of the Ti 6 Al 4 V substrates.Even though the surface was polished to a mirror finish with surface roughness values better than 40 nm, the Ti 6 Al 4 V substrates are still very rough when compared to glass, which has typical roughness values of less than 2 nm.Apart from these polishing grooves, the surface features of the thin films deposited on Ti 6 Al 4 V substrates look very similar to those on glass.The thin films deposited without substrate temperature, S RT , appear much smoother, lacking any uniformly distributed pattern, whereas samples deposited at higher substrate temperature, S 275 °C, depict oval-shaped grains distributed uniformly across the surface with some regions devoid of these shapes.These images together with the XRD results confirm that elevated substrate temperature aids the improved crystallization of β-Ti 3 Au on both glass and Ti 6 Al 4 V substrates. To gain better understanding of the microstructure, the Ti 3 Au thin films deposited on glass substrates were fractured and the exposed cross-sections were characterized using SEM (see Figure 3).The sample deposited at room temperature (S RT ) exhibits tapered columnar features extending through the partial film thickness (Figure 3a).Thornton's structural zone model (SZM) predicts such open-voided and tapered features to be resultant of low adatom mobility on the substrate surface in the absence of substrate heating and argues that such thin films will be amorphous in nature. 23,24−29 On the other hand, the cross-section of the thin film deposited at an elevated substrate temperature of 275 °C (S 275 °C) exhibits well-organized, dense, and broader columns with small dome-shaped surfaces (Figure 3b).Thornton's SZM predicts that when the substrate temperature is increased, it leads to higher diffusion of adatoms along the surface as well as along the grain boundaries, which reduces the intercolumnar space, giving a dense appearance. 22This enhanced surface diffusion of energetic particles promotes preferred orientation growth in the columns and leads to higher crystallinity, observed as the emergence of the dominant β phase of the Ti 3 Au intermetallic in the XRD patterns seen in Figure 1.Surface AFM scans of Ti 3 Au thin films deposited on glass substrates, with and without substrate heating, are presented in Figure 4a,b, respectively.The sample deposited without substrate temperature, S RT , shows a very fine-grained structure, with the tallest feature sizes of around 19 nm (Figure 4a).However, the sample deposited with an elevated substrate temperature, S 275 °C, registers a drastic increment in feature size to around 37 nm (Figure 4b).The surface roughness of the thin films, measured from the AFM scans in Figure 4, shows that sample S RT has a roughness average value of 1.7 ± 0.1 nm, whereas sample S 275 °C registers a 2-fold increase in surface roughness to 3.4 ± 0.1 nm.This increment in surface feature height and roughness presents a measurable effect of β-Ti 3 Au phase growth taking place with an increased substrate surface temperature. Mechanical Results. 
Load−displacement curves from nanoindentations made on Ti 3 Au thin films deposited on glass and Ti 6 Al 4 V substrates are presented in Figure 5a,b, respectively.To show the effect of surface roughness on mechanical testing, two examples (black and red curves) are presented from samples deposited at room temperature (S RT ) on each substrate type, and to compare the effect of β phase growth at elevated temperature (S 275 °C), one example (blue curve) is presented.For the sake of comparison, all of these examples are for nanoindentation performed at a peak load of 800 μN in the variable load mode.The loading and unloading segments are very smooth with no "stair step" disruption, which suggests the absence of the staircase phenomena, also known as displacement excursions. 30,31Such disruptions in indentation curves are normally associated with surface contamination encounter, phase transition, or oxide breakthrough events during the indentation process. 32If the discontinuities under the indenter do not separate, from the underlying film, no step features will appear as the film continues to support the indenter thereby preventing it from making sudden progress into the film. 33Depositing thin films at elevated substrate temperature rather than externally heat treating in an open furnace avoids formation of discontinuities like surface oxide layers while also providing better distribution of the β phase of Ti 3 Au, thereby achieving a smoother load− displacement curve when measuring mechanical properties using the nanoindentation technique. For indents performed on the smoother glass substrate, the loading rates of the two independent room temperature samples (S RT − black and red curves in Figure 5a) look identical and the only difference arises in their unloading curve, which also looks very similar except that their trajectories give rise to a slight variation in the final indentation depth of 51 nm (red curve) and 55 nm (black curve).On the other hand, the loading rate for the two room temperature samples deposited on Ti 6 Al 4 V substrates (S RT − black and red curves in Figure 5b) looks very different, even though the load specification for these indentations is identical.This variation arises because of the higher surface roughness of the underlying Ti 6 Al 4 V substrate, presenting a different topography of hills and valley-like features in the path of the approaching indenter tip and thereby affecting the loading rate in a different way each time an indent is made.This leads to a greater difference in the measured contact depth for the two samples, 64 nm (red curve) and 55 nm (black curve), and higher scatter in the mechanical results obtained.The thin film samples deposited at elevated substrate temperature, S 275 °C, show lower indentation depth at the same peak load, suggesting that the films are becoming harder to penetrate due to the development of the β phase of Ti 3 Au.The area under the load− displacement curve represents the work done during the load−dwell−unload cycle and accounts for energy lost due to plastic deformation, 18,34 and this mechanical hysteresis is known to decrease with heat treatment of Ti thin films due to development of crystalline phases, 18 resulting in harder and stiffer films.These observations are also reflected in the measured mechanical properties of these thin films in Figures 6 and 7. 
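The statement above that the area enclosed between the loading and unloading branches measures the energy lost to plastic deformation can be made concrete with a simple trapezoidal integration. The curves below are synthetic (a P ∝ h^1.5 loading branch and a steeper elastic recovery down to a 30 nm residual depth), not digitised data from Figure 5, so the resulting number is purely illustrative.

```python
# Energy lost to plastic deformation estimated as the area between synthetic
# loading and unloading branches of a load-displacement curve (units: uN*nm).

def trapezoid(y, x):
    """Plain trapezoidal rule."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i]) for i in range(len(x) - 1))

n = 200
h_load = [78.0 * i / (n - 1) for i in range(n)]                      # depth in nm
p_load = [800.0 * (h / 78.0) ** 1.5 for h in h_load]                 # loading branch, uN

h_unload = [30.0 + 48.0 * i / (n - 1) for i in range(n)]             # 30 nm residual depth
p_unload = [800.0 * ((h - 30.0) / 48.0) ** 2.0 for h in h_unload]    # steeper elastic recovery

total_work = trapezoid(p_load, h_load)      # work done during loading
recovered = trapezoid(p_unload, h_unload)   # elastic work recovered on unloading
plastic_work = total_work - recovered       # area enclosed by the hysteresis loop
print(round(plastic_work), "uN*nm lost to plastic deformation")
```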
The hardness values of Ti 3 Au thin films deposited on glass and Ti 6 Al 4 V substrates, with and without substrate heating, are presented in Figure 6.Each of these four film samples were tested using both variable and fixed load nanoindentation techniques.In variable load mode, 16 indents were made by varying the indentation load from 2000 to 500 μN in a 4 × 4 square pattern.For the fixed load method, a second set of indents were made in a similar 4 × 4 pattern, but the load was kept constant.The fixed load required to maintain the indentation depth at a value of 10% of the film thickness was determined for each sample. 35It can be seen from the variable load method results in Figure 6 that sample S RT deposited at room temperature on glass registers a hardness value of 4.8 ± 0.4 GPa, which increases to 8.9 ± 1.3 GPa for sample S 275 °C, deposited at an elevated substrate temperature of 275 °C.This increase in hardness could be assigned to emergence of the superhard β phase of the Ti 3 Au intermetallic due to the elevated thermal energy of adatoms. 12,14When measured with the fixed load technique, the same S RT and S 275 °C samples report similar hardness values of 5.1 ± 0.2 and 8.9 ± 0.4 GPa, respectively, but have significantly smaller deviation (error bars) when compared to the results from the variable load measurement method.On Ti 6 Al 4 V substrates, the hardness values of the S RT and S 275 °C films reduce slightly to 4.2 ± 0.8 and 7.3 ± 2.1 GPa, respectively, and the measurement deviation increases when compared to their glass counterparts.−39 While the thin films deposited at 275 °C are expected to report higher hardness due to better crystallization of the Ti 3 Au intermetallic, it is interesting to see that these samples also have a significantly larger spread of results compared to those deposited at room temperature, irrespective of substrate type or measurement technique.This rise in scatter could be explained by the combined effect from increasing surface roughness of the thin film at elevated substrate temperature, as seen from the AFM results (Figure 4) and the coexistence of two different phases of the Ti 3 Au intermetallic, as seen from the XRD results (Figure 1).The β phase of the Ti 3 Au intermetallic is known to exhibit higher hardness than its softer α phase, because of its denser unit cell arrangement, arising from 14-fold coordination of Ti atoms. 11,12This distinction between harder and softer phases of Ti 3 Au arises at higher temperatures and hence could explain the increase in the range of hardness measurements observed for thin films deposited with substrate heating. 
The reduced elastic modulus values of the Ti 3 Au films deposited with and without substrate heating on glass and Ti 6 Al 4 V substrates are presented in Figure 7.The quasicrystalline sample, S RT , deposited on glass reports an elastic modulus of 88 ± 5 GPa when measured with the variable load method, but with a fixed load, the same film gives a slightly higher value of 97 ± 3 GPa, with an observable reduction in the measurement scatter.For the samples deposited at elevated substrate temperature on glass, the value of elastic modulus increases to 113 ± 10 GPa in variable load mode due to development of the harder crystalline β-Ti 3 Au phase.The value remains above 100 GPa, but the error bars reduce by more than 5 times when measured with a constant load.This higher spread of results is also observed for samples deposited on Ti 6 Al 4 V substrates and, like the hardness results, can be correlated with higher substrate surface roughness, which will lead to an indenter size effect, causing larger scatter in results. 37,38But irrespective of substrate type, the results from both samples are more consistent around 97−101 GPa when measured with a fixed load.These values are much lower than those observed at 200 GPa in previous works for Ti−Au films deposited on Si-based substrates. 12,14It is known that the volume of elastic field interaction for nanoindentation tests extends much deeper than for hardness, and therefore, the values of elastic modulus are greatly affected by the underlying substrate, even when the indentation depth is maintained below 10% of the total film thickness, and this effect increases with the decrease in film thickness. 40,41Therefore, the lower elastic modulus of the substrates used in this work (Ti 6 Al 4 V ∼ 114 GPa, glass ∼73 GPa) explains the resulting lower elastic modulus of the Ti−Au thin films (∼113 GPa) when compared to the value of 200 GPa observed for these films deposited on Si-based substrates with higher inherent elastic modulus (Si ∼172 GPa). 40,41In the real world, Ti 6 Al 4 V is one of the key material systems utilized for the fabrication of artificial joint implants, and therefore, it is much more beneficial and practical to understand the behavior of superhard β-Ti 3 Au thin films deposited on this substrate system. 
The lower elastic modulus values of the Ti 3 Au coating material observed on Ti 6 Al 4 V therefore largely reflect the compliance of the underlying substrate rather than an intrinsic shortcoming of the coating.

Biocompatibility Results.

Figure 8a,b presents the absorbance measurements at 570 and 590 nm from L929 cells exposed to the control and film extracts after 72 and 168 h of leaching. For the known cytotoxic controls (Cu and 10% DMSO), the absorbance at 570 nm does not rise, because nonviable cells are unable to convert resazurin into the purple-colored resorufin of the assay. However, for the pure DMEM control and bare Ti substrate, which are known to be noncytotoxic, the 570 nm peak increases due to colorimetric conversion occurring within the viable L929 cells. A similar trend can be observed for the extracts prepared from the Ti 3 Au thin films (S RT and S 275 °C) deposited on Ti 6 Al 4 V substrates. The optical density values shown in Figure 8c very clearly validate this trend by plotting the difference between the absorbance measurements at both wavelengths. It is seen that the known cytotoxic controls (DMSO and Cu) exhibit negative optical density values for both 100% and 50% concentration extracts obtained from 72 and 168 h of extraction, while the Ti 3 Au thin films (S RT and S 275 °C) show positive values similar to those exhibited by the known noncytotoxic DMEM and Ti controls. Figure 8d shows the change in color of the extracts following 168 h of extraction. The extract from the Cu substrate appears green/bluish in color due to significant leaching of toxic Cu ions, while the extracts from the Ti substrate and the Ti 3 Au thin films (S RT and S 275 °C) show no noticeable change in color, suggesting that these materials do not leach appreciably into the surrounding extract medium.

Figure 9a shows viability levels of L929 mouse fibroblast cells following incubations with 72 and 168 h leached extracts from Ti 3 Au thin films deposited on Ti 6 Al 4 V substrates, compared against positive (Cu, 10% DMSO) and negative (Ti) controls. It can be seen that pure DMEM media, as well as Ti substrate leached extracts, have a safe cytotoxic profile, as viability levels were minimally affected, reaching values near or above 100%. In contrast, exposures of fibroblast cells to Cu substrate extracts and 10% DMSO both caused a dramatic decrease in L929 cell viability levels, suggesting that excessive leaching of Cu ions into the extract can be as harmful as known toxic concentrations of 10% DMSO. 43 On the other hand, all tested Ti 3 Au thin film extracts (S RT and S 275 °C), obtained from 72 and 168 h of the leaching/extraction procedure, have a safe cytotoxic profile. Specifically, in the case of the S RT samples, incubations with leached extracts led to only a slight decrease (approximately 20%) in L929 cell viability levels, even in the case of media obtained from a prolonged leaching period of 168 h. In this context, an even better biocompatible profile was observed for samples deposited at an elevated substrate temperature, S 275 °C. Specifically, cell viability levels were observed to be 86% following incubations with 72 h leached media/extracts, while a slight increase of viability levels to 92% was seen in exposures with 168 h leached film media. According to the ISO 10993 standard, extracts registering cell viability rates above 70% after a minimum of 24 h exposure against mouse cells can be considered noncytotoxic, indicating a potential biocompatible profile. Therefore, the above results from the Ti 3 Au thin films highlight their great potential to be safely used in biomedical applications. 44
ICP-OEMS tests were performed to measure leached ion concentrations in the Ti 3 Au thin film sample extracts but did not detect any significant elemental traces. In all the samples, the Ti concentration was found to be less than 0.1 ppm, whereas Au, Al, and V ion concentrations were below detection limits. The open void structure of the S RT sample provides more surface area for ions to leach out compared to the extremely dense columnar arrangement of the S 275 °C sample and could therefore explain the slight reduction in cell viability for thin films deposited without substrate heating. 45,46 In contrast, the Cu positive control had a leached Cu ion concentration greater than 112 ppm, and it is well known that Cu becomes cytotoxic above 10 ppm concentrations. 47,48 Moreover, we observed significant morphological modifications in L929 cells exposed to Cu substrate leached extract treatments; see Figure 9b. Specifically, Cu substrate extracts (168 h of leaching) caused shrinkage of L929 cells, dramatically reducing their confluency, thus indicating a strong cytotoxic effect. On the other hand, for incubations with Ti substrate and S RT and S 275 °C thin film extracts (168 h of leaching), no morphological changes were observed when compared to untreated (Control) L929 cells, confirming their potentially excellent biocompatible properties and safe cytotoxic profile.

CONCLUSION

This work investigated the combined mechanical and biocompatible performance potential of β-Ti 3 Au intermetallic thin films as a future coating system for the articulating surfaces of total joint implants. The Ti 3 Au thin films show a quasi-crystalline nature when deposited at room temperature, but with an increase in substrate temperature to 275 °C, a mixture of α and β phases of Ti 3 Au develops. This difference is reflected in their mechanical properties, with an increase in hardness from 4−5 GPa for room temperature samples to 7−8 GPa for samples deposited at elevated substrate temperature. The deviation in hardness results is found to be adversely affected by the increase in surface roughness of the underlying Ti 6 Al 4 V substrate and by the coexistence of softer α and harder β phases, and it can be reduced by preferential growth of the β phase through substrate heating during deposition. Varying the indentation load also leads to substantial scatter in the results, while using a fixed load optimized to reach an indentation depth of 10% of the film thickness improved the repeatability of the results. The Ti 3 Au thin films are also observed to be noncytotoxic, irrespective of the deposition temperature or substrate type, with L929 cell viability levels above 80% and leached ion concentration levels lower than 0.1 ppm, following 72 h of incubation with 168 h leached extracts. Overall, this work helps to understand the effect of varying substrate type and temperature on the combined mechanical behavior and biocompatibility potential of β-Ti 3 Au thin films. Our future work will focus on further assessing the in vitro and in vivo mechanical wear resistance and biocompatibility of this unique Ti−Au intermetallic thin film system to help pave the way for the development of a superhard biocompatible coating material to extend the lifetime of articulating total joint implants.

Data Availability Statement

The data sets used and analyzed during the current study are available from the corresponding author on reasonable request.

Figure 1. Diffraction patterns of Ti 3 Au thin films deposited on (a) glass and (b) a Ti 6 Al 4 V substrate.
Figure 2. Surface morphology of Ti 3 Au thin films deposited on glass substrate at (a) room temperature and (b) substrate temperature of 275 °C and on Ti 6 Al 4 V substrate at (c) room temperature and (d) substrate temperature of 275 °C [higher magnification image of each sample provided in the inset].
Figure 3. Cross-sectional imaging of Ti 3 Au thin films deposited on glass substrate at (a) room temperature and (b) substrate temperature of 275 °C.
Figure 4. AFM scans of Ti 3 Au thin films deposited on glass substrate at (a) room temperature and (b) substrate temperature of 275 °C.
Figure 5. Load−displacement curves of nanoindentations made with a 800 μN load on Ti 3 Au thin films deposited on (a) glass and (b) Ti 6 Al 4 V substrates at room temperature and substrate temperature of 275 °C. All indents are at a constant 10−10−10 s load−dwell−unload segment time.
Figure 6. Comparison of mechanical hardness of Ti 3 Au thin films deposited on glass and Ti 6 Al 4 V substrates at room temperature and substrate temperature of 275 °C, when measured with variable load and fixed load nanoindentation methods.
Figure 7. Comparison of elastic modulus of Ti 3 Au thin films deposited on glass and Ti 6 Al 4 V substrates at room temperature and substrate temperature of 275 °C, when measured in variable load and fixed load nanoindentation methods.
Figure 8. Absorbance measurements at 570 and 590 nm from L929 mouse fibroblasts exposed to film extracts for (a) 72 and (b) 168 h. (c) Optical density measured from the difference between the intensity of light absorbance at 570 and 590 nm. (d) Optical images of extracts from Cu and Ti substrates and S RT and S 275 °C thin film samples following 168 h of extraction in DMEM culture media.
Figure 9. (a) Viability levels of L929 mouse fibroblast cells, following incubations with leached extracts (72 and 168 h) from S RT and S 275 °C thin films deposited on Ti 6 Al 4 V substrates, compared against positive (Cu, 10% DMSO) and negative (Ti) controls. (b) Morphological changes of L929 cells following incubation with Cu substrate (positive control) and Ti substrate (negative control) and S RT and S 275 °C thin film samples, compared to untreated (control) L929 cells. Images were acquired using an inverted Kern microscope with an attached digital camera and 10× lens.
Table 1. The Film Thickness and Elemental Composition of Ti 3 Au Thin Film Samples
Optimal use of genetic markers in conservation programmes

Monte Carlo simulations were carried out in order to study the benefits of using molecular markers to minimize the homozygosity by descent in a conservation scheme of the Iberian pig. A selection criterion is introduced: the overall expected heterozygosity of the group of selected individuals. The method to implement this criterion depends on the type of information available. In the absence of molecular information, breeding animals are chosen that minimize the average group coancestry calculated from pedigree. If complete molecular information is known, the average group coancestry is calculated either from markers alone or by combining pedigree and genotypes with the markers. When a limited number of markers and alleles per marker are considered, the optimal criterion is the average group coancestry based on markers. Other alternatives, such as optimal within-family selection and frequency-dependent selection, are also analysed. © Inra/Elsevier, Paris

INTRODUCTION

Molecular markers are being advocated as a powerful tool for paternity exclusion and for the identification of distinct populations that need to be conserved [1]. Here we focus on a different application, namely the use of markers to delay the loss of genetic variability in a population of limited size. In a previous paper [12] we concluded that a conventional tactic, such as the restriction of the variance of family sizes, is the most important tool for maintaining genetic variability. In this context, frequency-dependent selection seems to be a more efficient criterion than selection for heterozygosity, but an expensive strategy with respect to the number of genotyped candidates and markers is required in order to obtain substantial benefits. For this reason, we have considered a new criterion of selection: the overall expected heterozygosity of the group of selected individuals. The implementation of this criterion depends on the type of information available, either from pedigree or from molecular markers. A new type of conventional tactic, optimal within-family selection (OWFS), recently proposed by Wang [14], is also considered.

SIMULATION

The breeding population consisted of N s = 8 sires and N d = 24 dams. Each dam produced three progeny of each sex. These 72 offspring of each sex were candidates for selection as breeders of the next generation. This nucleus mimicked the conservation programme carried out in the Guadyerbas strain of the Iberian pig [11]. The techniques of simulation of the genome, marker loci and frequency-dependent selection have been previously described [12]. Here, we introduced a new criterion, the average expected heterozygosity of the group of selected individuals, implemented by three different methods depending on the type of information available: a) average coancestry, including reciprocal and self-coancestries, calculated from pedigree (GCP); b) average coancestry for the L markers (GCM), whose minimization is equivalent to maximizing the marker-based expected heterozygosity, 1 − (1/L) Σ_k Σ_i p_ik², where p_ik is the frequency of allele i at marker k among the selected individuals; and c) average coancestry obtained by combining pedigree and marker information (GCPM). Finding the group of breeding animals that optimizes any of these criteria is a combinatorial problem that becomes cumbersome even for a small nucleus. It can be solved using integer mathematical programming techniques, whose computational cost would be feasible in most practical situations but not for simulation work, where the algorithm should be used repeatedly. For this reason we used a simulated annealing algorithm [10] that, although not assuring the optimal solution, was generally shown to exhibit a very good behaviour when dealing with similar problems [5, 8].
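A minimal sketch of the marker-based criterion and the simulated-annealing search is given below. It assumes a simple genotype coding (two alleles per individual and marker), ignores the sex and family-size constraints of the actual scheme, and uses an annealing schedule chosen only for illustration; it is not the implementation used in this study.

```python
# Sketch of the marker-based criterion and simulated-annealing search.
# Genotypes are coded as a dict: individual -> list of (allele, allele) tuples,
# one tuple per marker. All parameter values are illustrative only.
import math
import random
from collections import Counter

def expected_heterozygosity(group, genotypes, n_markers):
    """1 - (1/L) * sum_k sum_i p_ik^2, computed over the selected group."""
    total = 0.0
    for k in range(n_markers):
        counts = Counter()
        for ind in group:
            counts.update(genotypes[ind][k])          # two alleles per marker
        n_alleles = 2 * len(group)
        total += sum((c / n_alleles) ** 2 for c in counts.values())
    return 1.0 - total / n_markers

def select_group(candidates, genotypes, n_markers, group_size, n_steps=20000, t0=0.05):
    """Simulated annealing: maximise the group's expected heterozygosity,
    i.e. minimise the marker-based average group coancestry."""
    current = random.sample(candidates, group_size)
    score = expected_heterozygosity(current, genotypes, n_markers)
    best, best_score = list(current), score
    for step in range(n_steps):
        temp = t0 * (1.0 - step / n_steps) + 1e-9
        proposal = list(current)
        proposal[random.randrange(group_size)] = random.choice(
            [c for c in candidates if c not in current])   # swap one individual
        new_score = expected_heterozygosity(proposal, genotypes, n_markers)
        if new_score > score or random.random() < math.exp((new_score - score) / temp):
            current, score = proposal, new_score
            if score > best_score:
                best, best_score = list(current), score
    return best, best_score

# Toy example: 60 candidates, 6 markers with 4 alleles each, select 32 breeders.
random.seed(1)
candidates = list(range(60))
genotypes = {i: [(random.randint(0, 3), random.randint(0, 3)) for _ in range(6)]
             for i in candidates}
chosen, het = select_group(candidates, genotypes, n_markers=6, group_size=32)
print(round(het, 3))
```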
Besides the basic situation of no restriction on the family sizes, two types of restrictions were considered: a) within-family selection (WFS), where each dam family contributes one dam and each sire family contributes one sire to the next generation; and b) optimal within-family selection (OWFS): among the N d /N s dams mated with each sire, one is selected at random to contribute one son, another one to contribute two daughters, and the remaining (N d /N s ) − 2 contribute one daughter each [14]. The values of true genomic homozygosity by descent and inbreeding of evaluated individuals at each generation were calculated together with the expected genomic homozygosity of individuals selected from the previous generation, and averaged over 100 replicates. The various situations analysed were also compared according to their rate of homozygosity per generation, calculated from generation 6 to generation 15 as ΔHo = [Ho(t) − Ho(t − 1)]/[1 − Ho(t − 1)], where Ho(t) is the average homozygosity by descent of individuals in generation t. The rate of inbreeding was calculated in a similar way.

No molecular information or complete molecular information

Several cases were considered for two extreme situations: the absence of molecular information or the complete knowledge of the genome. The relative ranking of the methods was maintained for all generations, and the results of generation 15 are shown in table I. With no molecular information, the true homozygosity values were almost identical to those calculated from pedigrees. Optimal within-family selection [14] was substantially (about 15%) more efficient than classical within-family selection. The restrictions on family size distribution are unnecessary if the method of minimum average group coancestry of selected individuals (GCP) is used. The commonly accepted measure of genetic variability of a population is the expected heterozygosity [9] under Hardy-Weinberg equilibrium (1 − Σp²). In the absence of molecular information the average group coancestry measures the expected homozygosity by descent [4], and therefore the best method for choosing breeding animals should minimize the average group coancestry calculated from pedigree [2−4, 7]. If only full and half-sib relationships are considered, the criterion would lead to the optimal within-family selection method proposed by Wang [14]. When using complete molecular information for selection, the best method was still the same, although now the true coancestry for all of the genome was known. In this case, the inbreeding coefficient did not reflect the true homozygosity, and the discrepancy could have been considerable. Furthermore, the rate of advance in the true homozygosity, unlike the rate of inbreeding, does not attain an asymptotic value after a short number of generations but decreases continuously. The method of minimum average group coancestry using all the molecular information (GCM) reduced the rate of homozygosity by almost a half, although the algorithm utilized did not guarantee the attainment of the optimal solutions. The impact of imposing an additional restriction on family size was negligible. In a balanced structure, the minimization of average coancestry is mainly attained, as previously explained, by selecting individuals from different families. Frequency-dependent selection, very easy to apply, can also be efficient as a conventional tactic, although it is not theoretically justified and therefore lacks generality.
The results of frequency-dependent selection depended on family size restrictions. Without restrictions, the results were almost as bad as when the molecular information was ignored owing to an increasing tendency to co-select sibs [12]. But, after optimal family size restrictions were imposed, the method was as good as the group coancestry method, since the differences were not significant. Limited number of markers and alleles per marker The relative utility of the number of markers and alleles per marker is presented in table II, where values of the true genomic homozygosity and inbreeding are given for three situations: average group coancestry criterion (GCM), used either without restriction or with optimal family size restrictions, and frequency-dependent selection with optimal family size restrictions. The cases of complete or null marker information are also presented for comparison. As the number of markers and alleles per marker increased, the genome homozygosity attained at generation 15 decreased although it was not adequately reflected in the inbreeding coefficient. This also confirmed our previous finding [12] that the value of a marker is related to the number of alleles: two markers with ten alleles are as valuable as six markers with four alleles. The results also indicated that the use of the method of minimum average group coancestry (or expected heterozygosity) based only on ,molecular data without family restrictions was not a good criterion even with a huge amount of molecular information. The use of this method while applying the optimal restrictions on family sizes emerged from table II as a better criterion (10 % of advantage). Our results, not shown here, also confirmed that slight improvements in the conventional tactics could have an important impact on the maintenance of genetic variability. Thus, OWFS with three markers/chromosome and four alleles/marker was as efficient as WFS with ten markers/chromosome and four alleles/marker (14.80 of genome homozygosity at generation 15 in both cases). Finally, frequency-dependent selection with optimal family restriction, which was previously analysed in more detail (12!, provided good results, and was more easy to implement. Finally, table III shows a comparison of the values for genome homozygosity when using the method of minimizing average group coancestry for markers (GCM) together with restrictions on family sizes with the theoretically optimal method of minimizing average group coancestry based on marker information (GCPM). In order to diminish the high computing cost of the analysis of pedigree involved in the last method, the genome size has been reduced to just one chromosome of 100 cM. Due to this smaller genome size, selection was more efficient and the results of the method of the average group marker coancestry with optimal restrictions were now better than those shown in table II. Results shown in table III also indicated that the method of average group coancestry based on the markers was 20-30 % more efficient. This comparison was only strictly valid for the genome size considered, but it can be safely concluded that the last method could contribute substantially to the efficiency of a markerassisted conservation programme. Although the conclusions obtained through simulation probably have some generalities, it should be recognized that some theoretical developments on marker-assisted conservation are needed. 
In recent years, substantial work has been carried out on the joint prediction of inbreeding and genetic gain when selecting for a quantitative trait (see [15], for the latest development of the theory). However, predictions on the rate of advance of the true homozygosity by descent when the selected trait is the heterozygosity itself, measured either by molecular or pedigree information, is lacking. The use of an optimal method enhances the prospectives of the application of molecular markers in conservation programmes, although the future will depend critically on DNA extraction and genotyping costs. Microsatellite DNA markers have been considered until now as the most useful markers, especially when multiplex genotyping is used, but in the near future other DNA polymorphisms such as SNP could be the most adequate for routine scoring [6]. It is also interesting to emphasize that the adequate use of molecular tools requires increasingly sophisticated methods of Monte Carlo analysis of pedigree and more powerful methods of combinatorial optimization.
Physiologic and pharmacokinetic changes in pregnancy

Physiologic changes in pregnancy induce profound alterations to the pharmacokinetic properties of many medications. These changes affect distribution, absorption, metabolism, and excretion of drugs, and thus may impact their pharmacodynamic properties during pregnancy. Pregnant women undergo several adaptations in many organ systems. Some adaptations are secondary to hormonal changes in pregnancy, while others occur to support the gravid woman and her developing fetus. Some of the changes in maternal physiology during pregnancy include, for example, increased maternal fat and total body water, decreased plasma protein concentrations, especially albumin, increased maternal blood volume, cardiac output, and blood flow to the kidneys and uteroplacental unit, and decreased blood pressure. The maternal blood volume expansion occurs at a larger proportion than the increase in red blood cell mass, which results in physiologic anemia and hemodilution. Other physiologic changes include increased tidal volume, partially compensated respiratory alkalosis, delayed gastric emptying and gastrointestinal motility, and altered activity of hepatic drug metabolizing enzymes. Understanding these changes and their profound impact on the pharmacokinetic properties of drugs in pregnancy is essential to optimize maternal and fetal health.

INTRODUCTION

Prescription and over-the-counter medication use is common in pregnancy, with the average pregnant patient in the US and Canada using more than two drugs during the course of her pregnancy (Mitchell et al., 2001). One reason for this is that some women enter into pregnancy with pre-existing medical conditions, such as diabetes, hypertension, asthma, and others, that require pharmacotherapy; and for many others, gestational disorders (hyperemesis gravidarum, gestational diabetes, preterm labor) complicate their pregnancies and require treatment. Moreover, virtually all organ systems are affected by substantial anatomic and physiologic changes during pregnancy, with many of these changes beginning in early gestation. Many of these alterations significantly affect the pharmacokinetic (absorption, distribution, metabolism, and elimination) and pharmacodynamic properties of different therapeutic agents (Pacheco et al., 2013). Therefore, it becomes essential for clinicians and pharmacologists to understand these pregnancy adaptations in order to optimize pharmacotherapy in pregnancy and limit maternal morbidity from over- or under-treating pregnant women. The purpose of this review is to summarize some of the physiologic changes during pregnancy that may affect medication pharmacokinetics.

CARDIOVASCULAR SYSTEM

Pregnancy is associated with significant anatomic and physiologic remodeling of the cardiovascular system. Ventricular wall mass, myocardial contractility, and cardiac compliance increase (Rubler et al., 1977). Both heart rate and stroke volume increase in pregnancy, leading to a 30-50% increase in maternal cardiac output (CO) from 4 to 6 l/min (Figure 1; Clark et al., 1989). These changes occur primarily early in pregnancy, and 75% of the increase will occur by the end of the first trimester (Capeless and Clapp, 1991; Pacheco et al., 2013). CO plateaus between 28 and 32 weeks gestation, and then does not change significantly until delivery (Robson et al., 1989).
During the third trimester, the increase in heart rate becomes primarily responsible for maintaining the increase in CO (Pacheco et al., 2013). This increase in CO is preferential: uterine blood flow increases 10-fold (17% of total CO compared with 2% prepregnancy) and renal blood flow increases 50%, whereas there are minimal alterations to liver and brain blood flow (Frederiksen, 2001). In addition, when compared with nulliparous women, multiparous women have higher CO (5.6 vs. 5.2 l/min), higher stroke volume (73.5 vs. 70.5 mL), and higher heart rate (Turan et al., 2008). During labor and immediately after delivery, CO increases as a result of increased blood volume (300-500 mL) with each uterine contraction, and then secondarily to "auto-transfusion," or the redirection of blood from the uteroplacental unit back to the maternal circulation after delivery (Pacheco et al., 2013). As CO increases, pregnant women experience a significant decrease in both systemic and pulmonary vascular resistances (Clark et al., 1989). Secondary to the vasodilatory effects of progesterone, nitric oxide, and prostaglandins, systemic vascular resistance and blood pressure decrease early in pregnancy, reaching their lowest point at 20-24 weeks and leading to physiologic hypotension. Following this decrease, vascular resistances and secondarily blood pressure begin rising again, approaching the pre-pregnancy values by term (Clark et al., 1989; Seely and Ecker, 2011). This is especially important in patients with preexisting hypertension and who are on antihypertensive drugs (Pacheco et al., 2013; Table 1).

FIGURE 1 | Alterations in heart rate (HR, beats/min) and stroke volume (SV, mL) during pregnancy. The X-axis represents gestational ages in weeks. NP represents the non-pregnant state (figure adapted from Robson et al., 1989).

Starting at 6-8 weeks of gestation and peaking at 32 weeks, maternal blood volume increases by 40-50% above non-pregnant volumes (Hytten and Paintin, 1963). This, coupled with a drop in serum albumin concentration, leads to decreased serum colloid osmotic pressure and hemodilutional anemia. Because of the increased compliance of the right and left ventricles in pregnancy, the pulmonary occlusion and central venous pressures remain fixed (Bader et al., 1955). While the exact origin of the increased blood volume is not fully understood, the mechanism may be through nitric oxide mediated vasodilatation and increased arginine vasopressin production and mineralocorticoid activity, with water and sodium retention leading to hypervolemia (Winkel et al., 1980). The pregnancy-induced hypervolemia is thought to provide a survival advantage to the pregnant woman, protecting her from hemodynamic instability with the blood loss at the time of delivery (Carbillon et al., 2000; Pacheco et al., 2013). The increase in total body water, blood volume, and capillary hydrostatic pressure significantly increases the volume of distribution of hydrophilic substrates. Clinically, a larger volume of distribution could necessitate a higher initial and maintenance dose of hydrophilic drugs to obtain therapeutic plasma concentrations. Additionally, because of the decrease in serum albumin concentrations and other drug-binding proteins during pregnancy, drugs that are highly protein bound may display higher free levels due to decreased protein binding availability, and thus higher bioactivity.
For example, if a drug is highly (99%) bound to albumin in non-pregnant patients, a small drop in protein binding to 98% in pregnancy translates into a doubling of the drug's active fraction. Digoxin, midazolam, and phenytoin are examples of medications primarily bound to albumin (Pacheco et al., 2013).

RESPIRATORY SYSTEM

Due to the increase in estrogen concentrations in pregnancy, the respiratory system undergoes anatomic changes leading to increased vascularity and edema of the upper respiratory mucosa (Taylor, 1961). This may explain the increased prevalence of rhinitis and epistaxis during pregnancy. Although it is a theoretical risk and no studies have shown increased toxicity, inhaled medications, such as steroids used to treat asthma, may be more readily absorbed by pregnant patients (Pacheco et al., 2013). Pregnancy is associated with an increase in tidal volume by 30-50%, which starts early in the first trimester. While the respiratory rate is not different compared to the non-pregnant state, minute ventilation (the product of respiratory rate and tidal volume) is significantly increased, similarly by 30-50%. These changes are mainly driven by the increase in progesterone concentrations in pregnancy (Elkus and Popovich, 1992; McAuliffe et al., 2002). In addition, the diaphragm is pushed 4-5 cm upward due to the increased intra-abdominal pressure from the enlarging uterus and fluid third spacing. This leads to bibasilar alveolar collapse and basilar atelectasis, and both functional residual capacity and total lung capacity decrease by 10-20% (Baldwin et al., 1977; Tsai and De Leeuw, 1982). The decrease in functional residual capacity may predispose pregnant patients to hypoxemia during induction of general anesthesia. The vital capacity remains unchanged, as the decreased expiratory reserve volumes are accompanied by increased inspiratory reserve volumes (Baldwin et al., 1977; Pacheco et al., 2013). When evaluating blood gases in pregnancy, it is important to note that the arterial partial pressure of oxygen (PaO2) is normally increased to 101-105 mmHg and that of carbon dioxide (PaCO2) decreased to 28-31 mmHg. These changes are mainly driven by the increase in minute ventilation described above. The drop of PaCO2 in the maternal circulation creates a gradient between the PaCO2 of the mother and fetus, which allows CO2 to diffuse freely from the fetus, through the placenta, and into the mother, where it can be eliminated through the maternal lungs (Pacheco et al., 2013). In addition, maternal arterial blood pH is slightly increased to 7.4-7.45, consistent with a mild respiratory alkalosis. This alkalosis is partially corrected by increased renal excretion of bicarbonate, leading to a reduced serum bicarbonate level between 18 and 21 meq/L and reduced buffering capacity (Elkus and Popovich, 1992; Pacheco et al., 2013). This partially compensated respiratory alkalosis slightly shifts the oxy-hemoglobin dissociation curve rightward, thereby favoring dissociation of oxygen and facilitating its transfer across the placenta, but it also may affect protein binding of some drugs (Tsai and De Leeuw, 1982).

RENAL SYSTEM

The effects of progesterone and relaxin on smooth muscles are also seen in the urinary system, leading to dilation of the urinary collecting system with consequent urinary stasis, predisposing pregnant women to urinary tract infections (Rasmussen and Nielse, 1988).
This is more common on the right side secondary to dextrorotation of the pregnant uterus, and the right ovarian vein that crosses over the right ureter. Both renal blood flow and glomerular filtration rate (GFR) increase by 50%, as early as 14 weeks of pregnancy (Davison and Dunlop, 1984). The mechanisms behind the increase in GFR are probably secondary to vasodilation of afferent and efferent arterioles. The increase in GFR leads to decreased serum creatinine concentrations, so that when serum creatinine concentration is above 0.8 mg/dL during pregnancy, it may indicate an underlying renal dysfunction (Pacheco et al., 2013) The increase in renal clearance can have significant increase (20-65%) in the elimination rates of renally cleared medications leading to shorter half-lives. For example, the clearance of lithium, which used to treat bipolar disorder, is doubled during the third trimester of pregnancy compared with the nonpregnant state, leading to sub-therapeutic drug concentrations (Schou et al., 1973;Pacheco et al., 2013). Other drugs that are eliminated by the kidneys include ampicillin, cefuroxime, cepharadine, cefazolin, piperacillin, atenolol, digoxin, and many others (Anderson, 2005). The kidneys are also mainly involved in water and sodium osmoregulation. Vasodilatory prostaglandins, atrial natriuretic factor, and progesterone favor natriuresis; whereas aldosterone and estrogen favor sodium retention (Barron and Lindheimer, 1984). Although elevated GFR leads to additional sodium wasting, the higher level of aldosterone, which reabsorbs sodium in the distal nephron, offsets this wasting (Barron and Lindheimer, 1984). The resulting outcome is one of significant water and sodium retention during pregnancy, leading to cumulative retention of almost a gram of sodium, and a hefty increase in total body water by 6-8 l including up to 1.5 l in plasma volume and 3.5 l in the fetus, placenta, and amniotic fluid. This "dilutional effect" leads to mildly reduced serum sodium (concentration of 135-138 meq/L compared with 135-145 meq/L in non-pregnant women) as well as serum osmolarity (normal value in pregnancy ∼280 mOsm/L compared with 286-289 mOsm/L in non-pregnant women; Schou et al., 1973). Another consequence of this volume expansion is reduced in peak serum concentrations (Cmax) of many hydrophilic drugs, particularly if the drug has a relatively small volume of distribution. GASTROINTESTINAL SYSTEM In pregnancy, the rise in progesterone leads to delayed gastric emptying and prolonged small bowel transit time, by ∼30-50%. Increased gastric pressure, caused by delayed emptying as well as compression from the gravid uterus, along with reduced resting muscle tone of the lower esophageal sphincter, sets the stage for gastro-esophageal reflux during pregnancy (Cappell and Garcia, 1998). In addition, these changes alter bioavailability parameters like Cmax and time to maximum concentration (Tmax) of orally administered medications (Parry et al., 1970). The decrease in Cmax and increase in Tmax are especially concerning for medications that are taken as a single dose, because a rapid onset of action is typically desired for these medications (Dawes and Chowienczyk, 2001). Drug absorption is also decreased by nausea and vomiting early in pregnancy. This results in lower plasma drug concentrations. For this reason, patients with nausea and vomiting of pregnancy (NVP) are routinely advised to take their medications when nausea is minimal. 
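To make the pharmacokinetic relationships discussed above more concrete, the following minimal Python sketch works through three of them: the effect of reduced protein binding on the free drug fraction, the effect of an expanded volume of distribution on the peak concentration after a bolus dose, and the effect of increased renal clearance on the elimination half-life. The numbers are illustrative placeholders rather than measured patient data, and the one-compartment equations are textbook simplifications.

```python
import math

def free_fraction(bound_fraction: float) -> float:
    """Free (unbound) fraction of a drug given its protein-bound fraction."""
    return 1.0 - bound_fraction

def c_peak(dose_mg: float, vd_l: float) -> float:
    """Approximate peak plasma concentration (mg/L) after an IV bolus,
    one-compartment model: C0 = dose / Vd."""
    return dose_mg / vd_l

def half_life_h(vd_l: float, clearance_l_per_h: float) -> float:
    """Elimination half-life (h) from Vd and clearance: t1/2 = ln(2) * Vd / CL."""
    return math.log(2) * vd_l / clearance_l_per_h

# Protein binding: 99% bound outside pregnancy vs. 98% bound in pregnancy
# doubles the pharmacologically active (free) fraction, as stated in the text.
print(free_fraction(0.99), free_fraction(0.98))        # 0.01 -> 0.02

# Volume of distribution: an illustrative 100 mg bolus of a hydrophilic drug whose
# Vd expands from 40 L to 48 L with the extra body water -> lower peak level.
print(c_peak(100, 40), c_peak(100, 48))                # 2.5 mg/L -> ~2.08 mg/L

# Renal clearance: if clearance doubles (as reported for lithium) at constant Vd,
# the half-life is halved and plasma levels fall toward sub-therapeutic values.
print(half_life_h(40, 2.0), half_life_h(40, 4.0))      # ~13.9 h -> ~6.9 h
```

A doubled free fraction or a halved half-life does not by itself dictate a dose change, but it illustrates why the text recommends closer monitoring and dose adjustment of such drugs during pregnancy.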
Moreover, the increased prevalence of constipation and the use of opiate medications to ease pain during labor slow gastrointestinal motility and delay small intestine drug absorption. This may lead to elevated plasma drug levels postpartum (Clements et al., 1978). The increase in gastric pH may increase ionization of weak acids, reducing their absorption. In addition, drug-drug interactions become important, as antacids and iron may chelate co-administered drugs, which further decreases their already reduced absorption (Carter et al., 1981). The increase in estrogen in pregnancy leads to increased serum concentrations of cholesterol, ceruloplasmin, thyroid binding globulin, cortisol binding globulin, fibrinogen, and many other clotting factors (Lockitch, 1997). Serum alkaline phosphatase is elevated during pregnancy as it is also produced by the placenta, and its levels in pregnant women may be two to four times those of non-pregnant individuals, therefore limiting its clinical utility when liver function or enzymes are assayed (Lockitch, 1997; Pacheco et al., 2013). The rest of the liver function tests, such as serum transaminases (SGOT, SGPT), lactate dehydrogenase, bilirubin, and gamma-glutamyl transferase, are not affected (Lockitch, 1997). Drug metabolism is also altered in pregnancy, in part secondary to elevated sex hormones and changes in drug metabolizing enzymes, including those involved in phase I (reduction, oxidation, or hydrolysis) or phase II (glucuronidation, acetylation, methylation, and sulfation) metabolism (Evans and Relling, 1999). Cytochrome P450 (CYP450) represents a family of oxidative liver enzymes and is a major route of drug metabolism for many drugs. For example, CYP3A4 exhibits a broad substrate specificity that includes nifedipine, carbamazepine, midazolam, and the anti-retroviral drugs saquinavir, indinavir, lopinavir, and ritonavir, as well as many other drugs (Evans and Relling, 1999; Schwartz, 2003; Mattison and Zajicek, 2006). Because CYP3A4's abundance and activity increase in pregnancy, the clearance of its substrates is also increased, requiring dose adjustment (Little, 1999). Examples of changes in phase II metabolism include increased activity of the conjugating enzyme uridine 5′-diphospho-glucuronosyltransferase (UGT) 1A4, which leads to increased oral clearance of lamotrigine, one of its substrates (de Haan et al., 2004; Pacheco et al., 2013). HEMATOLOGIC AND COAGULATION SYSTEMS White blood cell (WBC) and red blood cell (RBC) counts increase during pregnancy. The first is thought to be secondary to bone marrow granulopoiesis, whereas the 30% increase in RBC mass (250-450 mL) is mainly driven by the increase in erythropoietin production. The higher WBC count can sometimes make the diagnosis of infection challenging; however, the increase in WBC is normally not associated with a significant increase in bands or other immature WBC forms (Pacheco et al., 2013). Despite the increase in RBC mass, and as previously described, plasma volume increases proportionally more (∼45%), which leads to the "physiologic anemia" of pregnancy. Anemia usually peaks early in the third trimester (30-32 weeks) and may become clinically significant in patients who are already anemic (iron deficiency, thalassemia, etc.) at entry to pregnancy (Pritchard, 1965; Peck and Arias, 1979).
This physiologic hemodilution may provide survival advantage to women during pregnancy and childbirth, since the less viscous blood improves uterine and intervillous perfusion, while the increased red cell mass, coupled with increased uterine blood flow, optimizes oxygen transport to the fetus, and at the same time the blood lost during delivery will be more dilute (Koller, 1982;Letsky, 1995;Pacheco et al., 2013). The increase in RBC mass is accompanied by increased in maternal demand of iron by an additional 500 mg during pregnancy. This is coupled with an additional 300 mg of iron that is transferred to the fetus and 200 mg that is required for normal daily iron losses, making the total iron requirement in pregnancy around 1 g (Pacheco et al., 2013). Pregnancy is a hypercoagulable state secondary to blood stasis as well as changes in the coagulation and fibrinolytic pathway such as increased plasma levels of clotting factors (VII,VIII,IX,X,XII), fibrinogen, and von Willebrand factor. Fibrinogen increases starting in the first trimester and peaks during the third trimester in anticipation of delivery. Prothrombin and factor V levels remain the same during pregnancy. Whereas, protein S decreases in pregnancy, protein C does not usually change and thus can be assayed if needed in pregnancy. Free antigen levels of the protein S above 30% in the second trimester and 24% in the third trimester are considered normal during pregnancy (Pacheco et al., 2013). Anti-thrombin III levels do not change, however, plasminogen activator levels are decreased and those of plasminogen activator inhibitor (PAI-1) levels increased by 2-3 fold, leading to suppressed fibrinolytic state in pregnancy. Platelet function and routine coagulation screen panels remain normal. This hypercoagulable state may offer a survival advantage by minimizing blood loss after delivery, but it also predisposes pregnant women to higher risks for thromboembolism (Hehhgren, 1996;Pacheco et al., 2013). ENDOCRINE SYSTEM Plasma iodide concentration decreases in pregnancy because of fetal use and increase in maternal clearance of iodide. This predisposes the thyroid gland to increase in size and volume in almost 15% of women. In addition to anatomic changes, the thyroid gland increases production of thyroid hormones during pregnancy. This is due to the up-regulation of thyroid binding globulin, which is the major thyroid hormone binding protein, by almost 150% from a pre-pregnancy concentration of 15-16 mg/L to 30-40 mg/L in mid-gestation. This massive increase is driven by the hyper-estrogenic milieu in pregnancy and reduced hepatic clearance. The net result is increase in total tetra-iodothyronin and tri-iodothyronin hormones (TT4 and TT3) in pregnancy. Despite the increase in total T4 and T3, the free forms of the hormones (fT4 and fT3) remain relatively stable or slightly decreased but remain within normal values and these patients are clinically euthyroid (Glinoer, 1997;Glinoer, 1999;Pacheco et al., 2013). The increased thyroid hormones production takes place mostly in the first half of gestation, plateauing around 20 weeks until term. Clinically, due to these changes, the use of total T4, total T3 and resin triiodothyronine uptake is not recommended to monitor thyroid hormone status in pregnancy as they will be increased (TT4, TT3) and decreased (rT3U), respectively. 
For patients with hypothyroidism and who require levothyroxine replacement in pregnancy, it is recommended that they increase their levothyroxine dose by 30% early in pregnancy, be monitored during pregnancy, and to decrease the dose in the postpartum period (Alexander et al., 2004). Thyroid stimulating hormone (TSH) decreases during the first half of pregnancy due to negative feedback from peripheral T3 and T4 secondary to thyroid gland stimulation by human chorionic gonadotropin (hCG). During the first half of pregnancy, a normal value of TSH is between 0.5-2.5 mIU/L (as compared to an upper limit of normal value for TSH of 5 mIU/L in the non-pregnant state). Other factors that affect thyroid hormones metabolism and levels in pregnancy include: (1) the increase in maternal renal iodine excretion (secondary to increase in GFR), (2) the higher maternal metabolic demands and rate during pregnancy, (3) the thyrotropic action of hCG which shares a similar α subunit with the TSH receptor and has a weak thyroid stimulating activity, (4) the increase in thyroid hormones transplacental transport to the fetus early in pregnancy, and (5) the increase in activity of placental type III 5-deiodinase (the enzymes that converts T4 to the inactive reverse T3; Glinoer, 1997;Glinoer, 1999;Pacheco et al., 2013). CONCLUSION Profound physiologic and anatomic changes occur in virtually every organ system during pregnancy. These have significant consequences on the pharmacokinetic and pharmacodynamic properties of various medications when used by pregnant women. Data are lacking on the implications of these changes on variety of therapeutic agents, and future research is desperately needed.
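The blood-gas values quoted in the respiratory section above can be cross-checked with the Henderson-Hasselbalch relationship for the bicarbonate buffer system. The short sketch below uses the standard constants (pKa 6.1 and a CO2 solubility coefficient of 0.03 mmol/L per mmHg); the PaCO2 and bicarbonate inputs are the mid-range figures quoted in the text, not new measurements.

```python
import math

def blood_ph(hco3_meq_l: float, paco2_mmhg: float) -> float:
    """Henderson-Hasselbalch for the bicarbonate buffer:
    pH = 6.1 + log10(HCO3- / (0.03 * PaCO2))."""
    return 6.1 + math.log10(hco3_meq_l / (0.03 * paco2_mmhg))

# Non-pregnant reference values: HCO3- ~24 meq/L, PaCO2 ~40 mmHg -> pH ~7.40
print(round(blood_ph(24, 40), 2))

# Pregnancy: PaCO2 ~30 mmHg with a renally compensated HCO3- of ~20 meq/L
# reproduces the mildly alkalotic pH of ~7.4-7.45 described in the text.
print(round(blood_ph(20, 30), 2))
```

The calculation simply confirms that the quoted values are internally consistent with a partially compensated respiratory alkalosis.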
v3-fos-license
2018-10-12T00:01:40.527Z
2016-11-16T00:00:00.000
19793991
{ "extfieldsofstudy": [ "Engineering" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.15436/2379-1705.17.1265", "pdf_hash": "203686495a6ef5861a1cd7779d4fc846ba3f391f", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41170", "s2fieldsofstudy": [ "Medicine" ], "sha1": "dba1b1eb28a80cd4b107f1ec82698a59e611c6cb", "year": 2017 }
pes2o/s2orc
Millennials and the Future of Dentistry Millennials are a larger population with a very different attitude than Baby Boomers, and should have a greater impact than Baby Boomers on traditional American society and on dentistry in particular. There are many reasons for their altered attitude, one of which is that their brains have changed morphologically from excess digital screen time at the moment when the frontal lobes were maturing. They will be very good in a dental office with computers, CAD/CAM, robots, the internet and the internet-of-everything, but will have difficulty thinking about complex problems, e.g. the complex medical problems of the aging population, how these affect dentistry, and how dental diseases affect the systemic health of the patient. Dental treatment is becoming more complex because of the aging population with more systemic illnesses, which uses more pharmaceuticals and requires more understanding on the part of the practitioner. They are also more sophisticated and value oriented than in the past. This complex medical understanding is further complicated by recent developments describing the relationships of periodontal disease and dental caries to systemic disease. Delivering high-quality, value-oriented treatment requires digital dental equipment and the talents of the Millennials; however, they do not like to think about complex diagnostic medical and dental problems. For a variety of reasons, dentistry may be at a turning point where it will be necessary to train some dentists to behave more like technicians of digital equipment, while others should be trained more like internal medicine physicians. The systemic illness aspect of dentistry and the more complicated medical problems of our population may require more education than is currently provided in most dental educational settings, while at the same time traditional dental education is simplifying and reducing clinical experiences. This may lead to a separation of dental care providers into 1. the "dentist-physician", who is relationship based with patients/clients and globally minded in their thinking, and 2. the "dentist-technician", who is non-relationship based and shallow-thinking minded, using more computers and robotics to perform dentistry. The analogy in the medical field would be in the arena of eye care: ophthalmologist (MD) and optometrist (OD). Dentists who train for oral surgery frequently also obtain an MD degree. Why not have dentist-physicians trained in internal medicine? This would be a conventional way to bridge the area of dental disease causing systemic illness and the aging population with complicated medical conditions requiring dental treatment. *Corresponding author: George E. White, Professor of Pediatric Dentistry, Nova Southeastern University, Health Professions Division, College of Dental Medicine, Post Graduate Pediatric Clinic, 819 East 26th Street, Fort Lauderdale, FL 33305, USA, Tel: 954-567-5650; Fax: 877-991-4957; E-mail: gwhite1@nova.edu Citation: White, G.E. Millennials and the Future of Dentistry. (2017) J Dent Oral Care 3(1): 14. Copyrights: © 2017 White, G.E. This is an Open access article distributed under the terms of the Creative Commons Attribution 4.0 International License. Received date: December 12, 2016; Accepted date: January 27, 2017; Published date: February 03, 2017. DOI: 10.15436/2379-1705.17.1265 Introduction Now is a critical period for American dentistry.
This topic can be discussed in four different areas, namely, people, providers, payment and policy [1] . This paper is concerned mostly with the people and providers. Diringer et al. [1] found that the population of the U.S. is getting older, more diverse, leading to different disease patterns, care seeking behavior, and have the ability to pay. While providers are increasing as more dentists are trained, but mounting debt load and changing demographics anew, altering the practice choices for new dentists. At the same time there are pressures for increasing expanded duty personnel to provide for prevention and restorative procedures, which is an area of concern for this paper. Is dentistry going to retain its professional status by training dentists to understand the medical and dental health relationships and treat patients accordingly or is the pathway downward to continue to a dentist-technician and/or to train an expanded duty personnel to a level where understanding is minimal and digital technical skills are the aim of the training?. With the increased demand for value in dental care spending, practices will need to become more efficient. This occurring in larger, multi-site practices, which are sometimes corporate. Health care reform and Medicaid expansion with an increasing emphasis on outcomes and cost effectiveness will encourage alternative models of dental care [1] . Public health models of delivery dental care will not be examined, but our attention will be focused on private and corporate delivery systems with moderate changes in dental education and more changes in the technical practice of dentistry. It is the thesis of this paper that the nature of the provider is changing with the Millennials and internet/digital age that will drive the dental education and lead to changes in the private domain. Meanwhile the public is changing and has more complicated medical status, which requires more understanding of the medical conditions and how they interface with dentistry. Additionally research has shown that periodontal disease directly and dental caries indirectly influence total health. This should also lead to changes in delivery of dental care and dental education. The Millennials As the age of information morphs into the age of internet of things and robotics, so are people morphing, namely the brains of the Millennials are changing from the intense interaction with computers. These factors and more will have a profound effect on dentistry. The Millennial generation has variously defined birth times between 1980 to 2000, are relatively unattached to organized politics and religion, linked by social media, burdened by debt, distrustful of people, in no rush to marry-and optimistic about the future. They are also America's most racially diverse generation. In all of these dimensions, they are different from today's older generations. And in many, they are also different from older adults back when they were the age Millennials are now. Pew Research Center surveys shows that half of Millennials (50%) now describe themselves as political independents and about three-in-ten (29%) say they are not affiliated with any religion. These are at or near the highest levels of political and religious disaffiliation recorded for any generation in the quarter century that the Pew Research Center has been polling on these topics [2] . Taylor et al. [3] found that Millennials surpassed Baby Boomers to become the largest living generation in the United States. 
By analyzing 2015 U.S Census data they found there were 75.4 million Millennials compared to 74.9 million Baby Boomers. Just as baby-boomers had a profound effects on American society due to their different mentality and large size, so too will the larger Millennial generation have profound effects on our society [4,5] . While 49% of Millennials state that the country's best years lie ahead, they are the first in the modern era to have high-er levels of student loan debt and unemployment [2] . Newer research shows that Millennials change jobs for the same reasons as other generations-namely, more money and a more innovative work environment. They look for versatility and flexibility in the workplace, and strive for a strong work-life balance in their jobs [6] and have similar career aspirations to other generations, valuing financial security and a diverse workplace just as much as their older colleagues [7] . Educational sociologist Andy Furlong described Millennials as optimistic, engaged, and team players [8] . Some more characteristics of Millennials [9] for those who may hire or have interactions with them: 1. They're earnest and optimistic. 2. They embrace the system. 3. They are pragmatic idealists, tinkerers more than dreamers, life hackers. 4. Their world is so flat that they have no leader, which is why revolutions from Occupy Wall Street to Tahrir Square have even less chance than previous rebellions. 5. They want constant approval. 6. They have massive fear of missing out and have an acronym for everything (including FOMO). • They don't identify with big institutions. 7. They want new experiences, which are more important to them than material goods. 8. They are cool and reserved and not all that passionate. Brain Changes from Excess Screen Time An important change in the millennial generation is the excessive digital screen time, which has changed their brain anatomically and functionally as discussed below and may have facilitated their unique characteristics. Dossey [10] has stated that during the past twenty years a digital sea change has affected our world. Digital devices have changed the way we live and especially the way we work in our professions. As dentists, we are able to work with far greater accuracy and precision than ever before; we would be foolish not to embrace these advances. But, as is often the case with rapid cultural changes, we need to be aware of the possibility of unintended consequences that may accompany this revolution. Sound scientific studies are beginning to warn of the psychological and physiological problems of overuse of digital devices in our daily lives. We should remember that these devices are neutral. It is up to each of us to use them in ways that enhance patient care. Loh and Kanai [11] have stated that the Internet environment has profoundly transformed our thoughts and behaviors. Growing up with Internet technologies, "Digital Natives" gravitate toward "shallow" information processing behaviors, characterized by rapid attention shifting and reduced deliberations. They engage in increased multitasking behaviors that are linked to increased distractibility and poor executive control abilities. Digital natives also exhibit higher prevalence of internet-related addictive behaviors that reflect altered reward-processing and self-control mechanisms. Recent neuroimaging investigations have suggested associations between these internet-related cognitive impacts and structural changes in the brain. 
Taken together, studies show that internet addiction is associated with structural and functional changes in brain regions involving emotional processing and executive attention [12]. In short, excessive screen time appears to impair brain structure and function. Much of the damage occurs in the frontal lobe of the brain, which undergoes massive changes from puberty until the mid-twenties. Frontal lobe development, in turn, largely determines success in every area of life, from sense of well-being to academic or career success to relationship skills [13]. Others would agree with these changes [14][15][16]. Park et al. [14] stated that internet use disorder is associated with structural or functional impairment in the orbitofrontal cortex, dorsolateral prefrontal cortex, anterior cingulate cortex, and posterior cingulate cortex. These regions are associated with the processes of reward, motivation, memory, and cognitive control. Early neurobiological research results in this area indicated that internet use disorder shares many similarities with substance use disorders, including, to a certain extent, a shared pathophysiology. Changes in view of dentistry: dental caries and periodontal disease affect systemic health Traditionally, the mouth was compartmentalized from the rest of the body and the relationship of oral diseases to systemic health was considered minimal. The understanding of the two major oral diseases, periodontal disease and dental caries, is evolving from an etiopathologic view to our current concepts [17]. Historically, understanding of periodontal disease has been seen in three phases: the etiopathologic (host-parasite) era, the risk factor era, and the periodontal disease-systemic disease era. The last era is seen as a two-way mechanism, as periodontal disease affects the body and the body can affect periodontal disease [17]. Periodontal disease and diabetes Diabetic patients [18][19][20][21][22] are more likely to develop periodontal disease, which in turn can increase blood sugar and diabetic complications. People with diabetes are more likely to have periodontal disease than people without diabetes, probably because people with diabetes are more susceptible to contracting infections. In fact, periodontal disease is often considered a complication of diabetes. Those people who do not have their diabetes under control are especially at risk. Research has suggested that the relationship between diabetes and periodontal disease goes both ways: periodontal disease may make it more difficult for people who have diabetes to control their blood sugar. Severe periodontal disease can increase blood sugar, contributing to increased periods of time when the body functions with a high blood sugar. This puts people with diabetes at increased risk for diabetic complications. Stroke Additional studies [18] have pointed to a relationship between periodontal disease and stroke. In one study that looked at the causal relationship of oral infection as a risk factor for stroke, people diagnosed with acute cerebrovascular ischemia were found to be more likely to have an oral infection when compared with those in the control group. Heart disease Several studies have shown that periodontal disease is associated with heart disease [18,23]. While a cause-and-effect relationship has not yet been proven, research has indicated that periodontal disease increases the risk of heart disease.
Scientists believe that inflammation caused by periodontal disease may be responsible for the association. Periodontal disease can also exacerbate existing heart conditions. Patients at risk for infective endocarditic may require antibiotics prior to dental procedures. Your periodontist and cardiologist will be able to determine if your heart condition requires use of antibiotics prior to dental procedures. Osteoporosis Researchers have suggested that a link between osteoporosis and bone loss in the jaw [18] . Studies suggest that osteoporosis may lead to tooth loss because the density of the bone that supports the teeth may be decreased, which means the teeth no longer have a solid foundation. Respiratory disease Research has found that bacteria that grow in the oral cavity can be aspirated into the lungs to cause respiratory diseases such as pneumonia, especially in people with periodontal disease [18] . Cancer Researchers found that men with gum disease were 49% more likely to develop kidney cancer, 54% more likely to develop pancreatic cancer, and 30% more likely to develop blood cancers [18] . Dental Caries and Systemic Disease Caries are frequently a sign of excess sugar intake and this can be related to systemic disease. The clinician should broaden their thinking to include the possibility that excess sugar intake can cause systemic disease from atherosclerosis, peripheral vascular disease, coronary heart disease, heart attack, stroke, type 2 diabetes and kidney disease. Excess sugar damages the body in the following manner, e.g. overloads and damages the liver, causes weight gain, creates metabolic syndrome, increases uric acid levels which is a risk factor for heart and kidney disease [24,25] . The changes in characteristics in the Millennial generation and the changes in view of oral health and systemic disease create a dissonance. Therefore additional education for the cognitive approach to health with the aid of computers will be a more likely pathway into the future. Changes in future equipment for dentistry Smart equipment; example sterilizers, chairs, CAD/ CAM and almost everything else will be able to diagnose and report issues back to the manufacturer and will rely less on human intervention for maintenance and proper function. Currently home appliances and other home systems are featuring Smart attributes. Currently we have digital dentistry, dental technology, dental radiography including 3D imaging, CAD/CAM cone beam, which are all computer based. Lasers are currently quite useful in dentistry and will likely be paired with computers into a robotic mechanism that will result in more precise preparations and soft tissue surgery. The offices of today are largely digital based and include digital records, scheduling, accounting, marketing, inventory and ordering of supplies, payroll, etc. All of which illustrates, how the digital office will require a "digital" brain to interact with it. All of this does not include the human touch and insight of dentistry. Changes in future treatments for dentistry -nanotechnology Aeran, et al. [26] state that nanotechnology creates in-credibly useful structures from individual atoms or molecules, which provides a new alternative and a possibly superior approach for the identification of oral health related problems and also in designing of more biocompatible dental materials with better properties and anticaries potential. In the year 2000, the term and maybe the field of nanodentistry were born. 
As nanomedicine advanced, dentistry also started evolving in the field of nanotechnology. It is envisaged that nanotechnology will affect the fields of diagnosis, materials, restorative dentistry and surgery. The exciting new branches nanorobotics, nanodiagnosis, nanomaterials, and nanosurgery and nanodrugs would profoundly impact clinical dentistry in the notso-distant future [27] . Modern dentistry has a goal to prevent rather than treat biofilm dependent oral diseases, e.g. dental caries and endodontic and periodontal diseases. Nanotechnology offers new approaches for preventive therapies in oral diseases, particularly dental caries and periodontal diseases. Controlling dental caries can be gained by inhibiting the bacterial action, reversing demineralization process and promoting remineralization. Nanotechnology offers means to these ends through: antibacterial nanotechnology, biomimetric remineralization, i.e. reversing an incipient caries, biomimetric remineralization of recurrent decay. Types of Dentists The key element in describing the future practice of dentistry is ownership. Ownership status determines the dentist's freedom to determine the course of clinical treatment for the patient. Recent graduates have high levels of educational debt, a reduced educational experience, and a dearth of alternative career choices. Corporate entities have more access to capital resources to purchase practices which are coming to market from the baby-boomers, and have the advantage of pricing. The rising cost of services creates an environment where corporate entities can cost compete with traditional practices in a variety of locations [28] . These corporate practices are and will be digitally based and require the services of more technician minded dentists. Increasingly, dental patients will be older, have more complex medical issues, and take routine medications. To treat these patients properly, dentists will need to have extensive knowledge of the relevant clinical sciences, including the foundational basic and medical services. Rather than emphasizing the training of dental care providers with an abbreviated educational experience, we should consider more extensive training to meet the more complex needs of the dental patient of the future [28] . In summary the "technician dentist" has large debt and is in need of income upon graduation, has low clinical experience, has a "Millennial" set, will be comfortable using digital equipment, which they in all probability cannot afford to purchase, and will likely become employees of corporate dentistry. The "physician-minded" dentist would be in the position of integrating the medical health with the dental health of the patient to create the treatment plan and monitor the course of health, which either the "physician-minded dentist" or the "technician-minded" dentist would perform with digital, robotic and nanotechnology.
v3-fos-license
2018-04-27T04:32:21.546Z
2018-01-01T00:00:00.000
4892659
{ "extfieldsofstudy": [ "Medicine", "Materials Science" ], "oa_license": "CCBY", "oa_status": "BRONZE", "oa_url": "https://doi.org/10.4317/jced.54300", "pdf_hash": "1664153478738c92201d56734e45922806f73de9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41172", "s2fieldsofstudy": [ "Medicine" ], "sha1": "1664153478738c92201d56734e45922806f73de9", "year": 2018 }
pes2o/s2orc
Impact of different rectangular wires on torsional expression of different sizes of buccal tube Background Torsion in rectangular wires is an essential part of the corrections made in the finishing stage of treatment. Moreover, the greatest amounts of torque are applied in the molar areas. A clinically effective moment is between 5 and 20 Nmm. In this study, we evaluated the impact of different tube sizes and different dimensions of wires with different moduli of elasticity on the amount of torsional bond strength of molar tubes. Material and Methods 60 human impacted molar teeth were collected. A buccal tube was bonded on the buccal surface of all the samples using light-cured adhesive resin. After that, the teeth were mounted in a hard acrylic block. According to the size of the buccal tube and the rectangular wires to be tested, 4 groups were designed. Torsional force was applied by a universal testing machine. The torque angle at the 5 Nmm and 20 Nmm points was calculated; that is, how many degrees of torque are required to reach the maximum 20 Nmm moment from the minimum 5 Nmm. One-way ANOVA was used to compare the torque angle in all of the groups. Results The least amount of clinically significant angle was 2.2° in the 0.017×0.025 SS and the largest amount was 23.7° in the 0.017×0.025 TMA, both in the 0.018×0.025 slot molar tube. This angle was 19.9° and 13.6° for the 0.019×0.025 SS and 0.019×0.025 TMA archwires in the 0.022×0.028 molar tube. Conclusions The 0.017×0.025 SS archwire in the 0.018×0.025 molar tube had the lowest clinically significant angle. The largest amount was seen in the group with 0.017×0.025 TMA in the 0.018×0.025 slot molar tube. Key words: Torsional efficacy, rectangular wires, buccal tubes, torque angle. Introduction To obtain a correct axial inclination at the final stages of fixed orthodontic treatment, controlled root movement is required, which is referred to as third-order movement. A twist of an edgewise wire in an attachment slot provides the required torsional load for root uprighting (1). The couple force that produces rotation of teeth in the bucco-lingual direction is called torque. Torque can be observed from a mechanical or clinical point of view. Mechanically, it is the rotation of a structure around its long axis. Clinically, it is the bucco-lingual crown/root inclination, i.e., any rotation perpendicular to the long axis (2). As defined in previous studies, for the purpose of this study torque is defined as the physical couple applied to the molar tube, measured in Nmm; torque angle is the angle to which the wire is twisted (degrees); and torque expression is the torque at any given angle (3)(4)(5). Torque expression is the result of many interacting factors. Attachment design, engagement angle of wire and slot, mode of ligation (6), attachment deformation, mechanical properties of the wire (6,7), and wire edge beveling (8)(9)(10) are reported as factors impacting torque expression. Most fixed orthodontic treatments are accomplished with less than full-dimension wires. This means there is a lack of cohesive contact between the archwire and the attachment: when an undersized wire is inserted, the wire can rotate in the slot of the attachment. This angle of freedom is called play, and it increases with the difference in size between the slot and the wire (11). In the range of this play, no third-order movement happens (a simple geometric illustration of this clearance follows below).
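Since play is defined purely geometrically here, a small sketch can make the idea concrete. It assumes a sharp-cornered rectangular wire of thickness t and width w rotating inside a slot of height h, and it ignores wire edge beveling and manufacturing tolerances, both of which increase the real play as noted above; it is an idealized illustration, not the calculation used in this study. Contact occurs when the rotated wire's projected height w·sin(a) + t·cos(a) reaches h.

```python
import math

def nominal_play_deg(t_in: float, w_in: float, h_in: float) -> float:
    """Idealized third-order play (degrees) of a sharp-cornered rectangular wire
    (thickness t, width w) in a slot of height h: solve w*sin(a) + t*cos(a) = h."""
    diag = math.hypot(t_in, w_in)
    if h_in >= diag:
        return 90.0  # slot taller than the wire diagonal: no torque transfer at all
    return math.degrees(math.asin(h_in / diag) - math.atan2(t_in, w_in))

# Wire/slot combinations corresponding to the experimental groups (inches).
for (t, w), slot in [((0.017, 0.025), 0.018), ((0.019, 0.025), 0.022)]:
    print(f"{t}x{w} wire in {slot} slot: ~{nominal_play_deg(t, w, slot):.1f} deg of nominal play")
```

With nominal dimensions this gives roughly 2° of play for a 0.017×0.025 wire in a 0.018 slot and roughly 7° for a 0.019×0.025 wire in a 0.022 slot; real archwire-slot combinations show more play because of edge rounding and oversize tolerances.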
Burstone defined the clinically effective moment as between 5 and 20 Nmm, which means that no root movement happens under 5 Nmm and that exceeding 20 Nmm may deteriorate the periodontium (12). The effective size of the slot is the most important factor influencing the biomechanics during orthodontic treatment. The appliance introduced by Angle had a slot height of 0.022 inch; however, with the introduction of smaller archwires, attachments with a 0.018 inch slot became popular. This slot size was threaded with working archwires of 0.017×0.025 and 0.018×0.025 as full-thickness archwires (13). Thereafter, Roth reintroduced the 0.022 slot size with the straight wire technique (14). At the same time, archwires started to evolve, followed by the introduction of the beta-titanium (TMA) archwire by the Ormco Corporation, with an elasticity between steel and NiTi (15). As stated, torsion in rectangular wires is an essential part of the corrections made in the finishing stage of treatment. Moreover, the greatest amounts of torque are applied in the molar areas. As a result, in this study we decided to evaluate the impact of different tube sizes and different dimensions of wires with different moduli of elasticity on the amount of torsional bond strength of molar tubes. The maximum torque that can be applied prior to debonding of the buccal tube was also determined. Material and Methods -Specimen preparation: First, 60 impacted third molars, which had been extracted due to space deficiency diagnosed in the posterior segment analysis of the mandibular arch, were collected. All specimens were checked to have an intact buccal surface without any cracks, fractures, voids, or enamel developmental defects. They were stored in 0.1% thymol solution to control bacterial growth. Before bonding, the buccal surfaces of all teeth were cleaned with pumice and rinsed with distilled water. A buccal tube (American Orthodontics, USA) was bonded on the buccal surface of all the samples under controlled temperature (37°C) and relative humidity (54 ± 5%), using the following procedures as recommended by the manufacturer's guidelines. The buccal enamel surface in the middle third of the occlusogingival and mesiodistal dimensions was etched for 30 seconds using phosphoric acid (3M, Monrovia, USA) and rinsed for 20 seconds. After that, the etched enamel was dried with oil- and moisture-free compressed air. Then Transbond XT primer (3M, Monrovia, USA) was applied using a microbrush. Tubes were bonded using light-cured adhesive resin (Transbond XT, 3M, Monrovia, USA). The base of the buccal tube was coated with a uniform layer of Transbond XT light-cure adhesive resin. To position all the tubes in the same position on the buccal surfaces, an orthodontic gauge was used, and excess resin was removed with a dental explorer. The position of the buccal tubes was reexamined, and each margin of the tube bases was cured for 20 seconds with an Optilux visible light curing unit (Kerr Corp., Orange, Calif, USA). Before each bonding, the curing light was tested with a curing radiometer (Kerr Corp., Orange, Calif, USA) for a minimum intensity of 400 mW/cm2. After bonding of the buccal tubes, the teeth were mounted in hard acrylic blocks (Pars Acryle, Iran). For mounting, a putty index was prepared and the teeth were placed in the acrylic blocks using a surveyor (Marathon, South Korea) in such a way that the buccal tubes were positioned at a right angle to the horizon.
This eliminated any pre-adjusted torque in the buccal tubes that could be caused by malposition of the teeth in the acrylic blocks. According to the size of the buccal tube and the rectangular wires to be tested, 4 groups were designed. Group 1: buccal tube size of 0.018 × 0.025 inch and tested wire of 0.017 × 0.025 inch SS. Group 2: buccal tube size of 0.018 × 0.025 inch and tested wire of 0.017 × 0.025 inch TMA. Group 3: buccal tube size of 0.022 × 0.028 inch and inserted wire of 0.019 × 0.025 inch SS. Group 4: buccal tube size of 0.022 × 0.028 inch and inserted wire of 0.019 × 0.025 inch TMA. -Testing apparatus: A testing apparatus was designed to fix the orientation of the buccal tubes relative to the rectangular archwires in all three planes. The apparatus mimicked rectangular archwire torquing. A holder was placed at the middle of this apparatus to hold the specimen blocks. Supporting posts were constructed to mount a crossbar so that the wire could twist without any displacement in other directions. A string was tied around a drum on the torque axis on one side of the crossbars. The other end of the string was fastened to a 100-N load cell in the universal testing machine (Zwick/Roell, Z020, Germany); the crosshead speed was 0.5 mm/min (16,17). The wire was inserted in the buccal tube of each sample and grasped on both sides by the crossbars. The wires were aligned with the buccal tubes so that no pre-existing torque or angulation was present. For each sample, the displacement (mm) at 5 Nmm and at 20 Nmm of moment was recorded by the testing machine (Zwick/Roell, Z020, Germany) (4). The torque angle at the 5 Nmm and 20 Nmm points was calculated by converting the linear displacement of the string (in millimeters) needed to twist the wire into rotational motion (in degrees), and for each single sample in the different combinations of wire and tube size, the clinically significant torque angle was calculated as the difference between the torque angle at 20 Nmm and the torque angle at 5 Nmm; that is, how many degrees of torque were required to reach the maximum 20 Nmm moment from the minimum 5 Nmm (an illustrative sketch of this conversion is given after the Conclusions). This torque was expressed either clockwise or counterclockwise. Results The means and standard deviations (SD) of the angles at which the different combinations of archwires and tube slot sizes showed couples of 5 and 20 Nmm are presented in Table 1. The final column of the table demonstrates the clinically significant couple interval for each combination, i.e., the amount of torque angle needed to reach the maximum 20 Nmm couple from the minimum 5 Nmm, either clockwise or counterclockwise. Comparison of the different combinations of tube sizes and archwires showed a clinically significant difference in the amount of torque angle needed to reach 5 Nmm and 20 Nmm, and also in the amount of clinically significant torque angle among the groups. The least amount of clinically significant angle (2.2°) was seen in the 0.017×0.025-inch SS archwire in the 0.018-inch slot molar tube. Although the largest amount of couple at 20 Nmm, which is the threshold of the clinically acceptable range, was seen in the 0.017×0.025-inch TMA archwire in the 0.018×0.025-inch slot molar tubes, it was not statistically significant in comparison with the other two groups of 0.019×0.025-inch archwires in the 0.022×0.028-inch slot size with different material properties (P>0.05). Discussion In clinical practice during the finishing stage of fixed orthodontic treatment, mostly the evaluated configurations would be used. Heavier archwires such as 0.021×0.025-inch are rarely used in practice due to the hazardous forces generated with regard to root resorption (18).
Teeth that are under moments of higher magnitude for a longer duration are predisposed to a higher degree of root resorption. Although the maximal torque threshold is hard to define, Burstone defined the 20 Nmm moment as the maximal limit, above which damage to the periodontal tissue is likely (12). Moreover, the range between 5 Nmm and 20 Nmm was determined as the clinically efficacious moments. The etiology of root resorption is multifactorial, and mechanical factors cannot explain all clinical situations. Table 2: Comparison between groups based on the clinically significant couple. Table 3: Comparison between groups at 5 Nmm torque. (In these tables, P ≤ 0.001 is shown as *, 0.001 < P ≤ 0.01 as ♣, 0.01 < P ≤ 0.05 as ♦, and P > 0.05 as N.S.) Even at any specific moment magnitude, the location of the center of rotation influences the pressure area on the periodontium. It should also be taken into consideration that the relationship between the center of rotation and the center of resistance may change over time, which is the rule rather than the exception. Bearing the above in mind, the differences in clinically significant torque angle detected among the different configurations are caused by the differences in wire material and cross section. The clinically significant torque angle of the 0.017×0.025 SS archwire in the 0.018-inch system tube was the lowest. This low torque angle is almost impossible to apply precisely in the clinic. This result is consistent with the result reported by Sifakakis et al., in which the greatest mean moment of 14.2 Nmm was found for the 0.017×0.025 SS archwire in 0.018-inch brackets at 15° of buccal root torque (19). The elastic properties of 0.019×0.025-inch TMA are identical to those of an 0.018-inch SS wire, and TMA has almost half the stiffness of steel wires of the same cross section. Both play and applied torque are affected by the attachment and archwire features. Though the material used to construct the archwire is not a factor affecting the amount of play, it is an important factor in terms of applied torque. For bracket-type attachments, Archambault et al. reported that at any angle of torque, a 0.019×0.025 SS archwire expresses a couple 1.5 to 2 times greater than TMA (5). For the molar tubes with the twist of the wire from the mesial side, as tested in the present study, the amount of clinically significant torque angle was about 1.5 times greater for SS than for TMA in the 0.022-inch system. On the other hand, with less play in the configuration of 0.017×0.025 in the 0.018-inch molar tube, the difference between the two materials increased to more than 10-fold. Moreover, in a recent investigation by Mager et al., the plastic deformation of the slot was negligible for couples of less than 26 to 38 Nmm (18). In the present study, the moment never exceeded 20 Nmm, so plastic deformation of the metal walls could not deteriorate the results. Since wear of the interior walls of the tubes due to repeated testing could influence the results (20,21), a new archwire and molar tube were assembled for each configuration in this study. The differential effect of the interbracket distance was almost eliminated because the crosshead length was identical in all samples, and it was assumed to be as close as possible to the clinical situation.
On the other hand, there are no significant effects of length on torsion, and changes in length are not an exponential factor as they are in bending (22). In previous studies on this subject, torque expression was evaluated in bracket slots, and the type of ligation of the wire in the slot could have an impact on the results (12). In the present study, the torque efficacy of different wires was evaluated in molar tubes. Since there is only insertion and no ligation, this factor has been eliminated; this configuration was chosen because of the importance of torque in the posterior segment and because there was no previous study on this subject. It has been found that, after elimination of the play, the torque moment was significantly higher with wire ligation than with elastic ligation at torsion levels of 40°. However, these differences were not observed with fully engaged archwires. The indisputable drawback of elastic ligation is the rapid force degradation, which can be 50% in the first 24 hours and can lead to incomplete engagement of the wire in the slot. In preadjusted systems, the typical torque prescription is less than 25 degrees for the molar tube, and the use of straight 0.019×0.025-inch TMA and SS wires in 0.022-inch systems in the finishing stage would remain within the limit of acceptable couple forces with regard to the periodontium. For 0.018-inch systems, the 0.017×0.025-inch SS wire would exceed the safe zone after 2° of torque. In a molar tube this force would remain until tooth movement occurs, and no degradation due to elastic ligation is present; in preadjusted systems with prescribed torques that mostly exceed 2° in the molar areas, insertion of 0.017×0.025-inch SS and heavier SS wires in molar tubes could therefore be damaging (3). Although TMA wires, with their reduced torque/angular deflection ratio in comparison to SS wires, are less effective in transmitting the desired torque moments to the attachments, their range of possible activation is more fail-safe in comparison to SS wires, especially in 0.018-inch systems. In previous studies on torque expression, different configurations of brackets, including various types of self-ligating brackets, were compared with conventional ones (19). In these studies, the mechanical properties of the materials used to construct the attachments, such as the modulus of elasticity and the roughness of the slot walls, had an impact on the amount of torque expression (19). In this study, this factor was eliminated by using the same brand in all samples, and only the types of wire were the investigated factors. However, it is proposed that these properties be investigated as influencing factors in future studies on this subject. Another important factor in the routine clinical situation is the vertical position of the attachments, since, as previously demonstrated, a 3 mm shift in vertical position can change the torque angle by about 15 degrees (23). This factor has less impact in molar tubes because, most of the time, at least one side of the tube is the free end of the wire, in contrast to brackets attached to other teeth in fixed orthodontic systems. This element can also be investigated in future studies, especially since finishing steps using rectangular TMA wires occur often. A reduction of 10 degrees in the amount of torque may cancel out the prescribed torque of a preadjusted attachment, or the torque produced by these elements may act in the opposite direction (6).
The present study focused on loading expression, as have many previous studies. Considering the fact that the unloading torque characteristics are what result in root movement, we propose that these characteristics also be investigated in future studies. It is important to consider that many factors are involved in the final inclination and position of teeth in the finishing stages of orthodontic treatment. Torsion magnitude is only one of many mechanical factors, which also include the thickness of the wire, the position of the bracket and tooth, slot sizes, attachment composition, the manufacturing tolerances and processes of wires and attachments, as well as intraoral aging (24,25). This study is only a simplified representation of what occurs in the oral cavity. In addition to all these mechanical factors, the position of the center of resistance, the center of rotation, root length, and alveolar bone height can dictate the final inclination of a tooth. Conclusions In conclusion, the combination of the 0.017×0.025-inch SS archwire in the 0.018×0.025-inch molar tube had the lowest clinically significant angle between 5 Nmm and 20 Nmm of applied torsional force among the four groups. This low torque angle is almost impossible to apply precisely in the clinic. The largest amount of clinically significant torque angle was seen in the group with 0.017×0.025-inch TMA in the 0.018×0.025-inch slot molar tube, but there was no significant difference from the other two groups of 0.019×0.025-inch archwires in the 0.022×0.028-inch slot size with different material properties.
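As a companion to the testing procedure described in the Material and Methods, the sketch below shows how a linear string/crosshead displacement can be converted into a wire twist angle via the drum radius, and how the clinically significant angle is then obtained as the difference between the angles recorded at 20 Nmm and 5 Nmm. The drum radius and displacement readings used here are hypothetical placeholders; the paper does not report these raw values.

```python
import math

def twist_angle_deg(displacement_mm: float, drum_radius_mm: float) -> float:
    """Arc length pulled off the drum divided by its radius gives the rotation
    in radians; convert to degrees."""
    return math.degrees(displacement_mm / drum_radius_mm)

def clinically_significant_angle(disp_at_5nmm_mm: float,
                                 disp_at_20nmm_mm: float,
                                 drum_radius_mm: float) -> float:
    """Angle swept between the 5 Nmm and 20 Nmm moment readings."""
    return (twist_angle_deg(disp_at_20nmm_mm, drum_radius_mm)
            - twist_angle_deg(disp_at_5nmm_mm, drum_radius_mm))

# Hypothetical example: a 10 mm drum radius and string displacements of
# 0.6 mm (at 5 Nmm) and 2.1 mm (at 20 Nmm) give an interval of ~8.6 degrees.
print(round(clinically_significant_angle(0.6, 2.1, 10.0), 1))
```

The same difference can equivalently be read directly from the recorded angle-moment curve for each specimen; the point of the sketch is only the unit conversion from millimeters of travel to degrees of twist.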
v3-fos-license
2020-09-26T13:06:07.005Z
2020-09-25T00:00:00.000
221910548
{ "extfieldsofstudy": [ "Environmental Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fenrg.2020.00191/pdf", "pdf_hash": "ac9bb27e6985e900b776e0382a75378abb66736d", "pdf_src": "Frontier", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41175", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "sha1": "ac9bb27e6985e900b776e0382a75378abb66736d", "year": 2020 }
pes2o/s2orc
Review of Power-to-X Demonstration Projects in Europe At the heart of most Power-to-X (PtX) concepts is the utilization of renewable electricity to produce hydrogen through the electrolysis of water. This hydrogen can be used directly as a final energy carrier or it can be converted into, for example, methane, synthesis gas, liquid fuels, electricity, or chemicals. Technical demonstration and systems integration are of major importance for integrating PtX into energy systems. As of June 2020, a total of 220 PtX research and demonstration projects in Europe have either been realized, completed, or are currently being planned. The central aim of this review is to identify and assess relevant projects in terms of their year of commissioning, location, electricity and carbon dioxide sources, applied technologies for electrolysis, capacity, type of hydrogen post-processing, and the targeted field of application. The latter aspect has changed over the years. At first, the targeted field of application was fuel production, for example for hydrogen buses, combined heat and power generation, and subsequent injection into the natural gas grid. Today, alongside fuel production, industrial applications are also important. Synthetic gaseous fuels are the focus of fuel production, while liquid fuel production is severely under-represented. Solid oxide electrolyzer cells (SOECs) represent a very small proportion of projects compared to polymer electrolyte membranes (PEMs) and alkaline electrolyzers. This is also reflected by the difference in installed capacities. While alkaline electrolyzers are installed with capacities between 50 and 5000 kW (2019/20) and PEM electrolyzers between 100 and 6000 kW, SOECs have a capacity of 150 kW. France and Germany are undertaking the biggest efforts to develop PtX technologies compared to other European countries. On the whole, however, activities have progressed at a considerably faster rate than had been predicted just a couple of years ago. INTRODUCTION Future energy systems with high shares of renewable energies and aims to achieve the goals set out in the Paris Agreement will place a high demand on energy storage systems. Furthermore, electricity will be increasingly used in the heat, transport, and industry sectors, i.e., sector coupling (e.g., Ram et al., 2019), which will -to some extent -require transformation into other energy forms. Electricity can be used directly in other sectors, for example with battery electric vehicles, or it can be processed into other energy carriers that are more versatile in their use and can be better stored. Such concepts are known as Power-to-X (PtX), since electrical energy is transformed into different products. A key stage of this concept is the production of hydrogen by water splitting in an electrolyzer. Often, hydrogen or further processed methane are the final products. These concepts are referred to as Power-to-Gas (PtG), a name often used synonymously with all PtX applications. In Figure 1, an overview of basic PtX process chains is given. The focal point of most pathways is the electrolysis process to produce hydrogen. The required electricity often comes from variable renewable energy (VRE) generation, for example wind or photovoltaics, either directly or in the form of certificates (e.g., Pearce, 2015;Bauer, 2016;BIGH2IT, 2017;Büssers, 2019), and less frequently from the grid. 
The use of grid electricity, however, often contradicts the original idea behind PtX -to use and store renewable energy (RE) -as electricity produced from fossil fuels still makes up a considerable share of most national grids. Projects using grid electricity either focus on hydrogen production or processing (Moser et al., 2018), or they aim to provide peak shaving electricity to the grid (Hänel et al., 2019). If further processing is emphasized, carbon dioxide is a necessary feedstock to process hydrogen into other energy carriers or industrial products such as methane or chemicals (Hendriksen, 2015;MefCO2, 2019). To foster a climate-friendly energy system, non-fossil fuel carbon dioxide sources should, of course, be favored. However, in some research projects, fossil carbon is used due to it being easily accessible from existing test facilities for carbon capture and use (CCU; Moser et al., 2018). In future, fossil fuel-based power plants will be less, or not at all, available. The knowledge gained from such projects, however, can be used for other power plants, such as municipal waste treatment (MWT). Other carbon dioxide sources include industrial processes [(CCU P2C Salzbergen) BMWi, 2019], the anaerobic digestion of biomass (e.g., Rubio et al., 2016;Sveinbjörnsson and Münster, 2017), direct air capture (DAC; BMBF, 2018), and other biogenic sources. Alongside methanation, other options for hydrogen-based fuels are methanol, Fischer-Tropsch diesel (BMBF, 2018), or dimethyl ether (DME; Moser et al., 2018). Another possibility is the production of synthesis gas, a mixture of hydrogen and carbon monoxide, in a reversed water-gas shift reaction or co-electrolysis (Andika et al., 2018;Wang et al., 2019). This technology splits water electrochemically and simultaneously produces synthesis gas from the hydrogen and the added carbon dioxide in a single process. There is a diverse range of applications for hydrogen or hydrogen-based products. Hydrogen and fuels can be used in mobility applications (HyFLEET:Cute, 2009), for reelectrification in combined heat and power (CHP) plants (Exytron, 2019), or in industrial applications, for example refineries (H&R, 2017) or steel production [H2Stahl (BMWi, 2019]. Furthermore, hydrogen can substitute fossil fuel-based feedstocks in the chemical industry [(CCU P2C Salzbergen) BMWi, 2019]. Another way to use electricity across multiple sectors is through direct conversion into heat, which is a standard application in many cases, for example heat pumps. In industrial applications, an increasing number of electrode boilers have been installed over the last few years. However, an analysis of this kind of technology is beyond the scope of this article. As the number of PtX projects has increased, so too has the number of reviews. A first overview of PtX projects was published by Gahleitner (2013). It focused on global projects from laboratory scale to demonstration plants. Gahleitner identified 64 projects, 48 of which included a detailed assessment. Bailera et al. (2017) analyzed lab, pilot, and demonstration projects on a global scale. They identified 66 projects, highlighting 23 of them and focusing on catalytic methanation. A first overview focusing on Europe was given in an earlier publication by Wulf et al. (2018). We explicitly excluded lab projects, and still found 128 projects in 16 countries. One year later, Thema et al. 
(2019) published a review with 153 projects from 22 different countries on a global scale, also including lab projects and older projects dating back to 1988. They also included a cost and capacity projection for installed electrolyzers until 2050 as well as geospatial data. Although the authors included an analysis of the countries involved, there was no detailed discussion on this subject. Based on Task 38 of the International Energy Agency's (IEA) Hydrogen Technology Collaboration Programme, Chehade et al. (2019) performed a similar analysis and identified 192 PtX demonstration projects in 32 countries. The authors focused on the different fields of applications and objectives (economic or non-economic) and various hydrogen storage technologies, and also assessed the efficiency of the electrolyzers. They did not include projects that have only been projected or announced. This confirms the conditions for previous years, but does not reflect current or even future perspectives. Although these articles offer a good overview of the development of PtX technologies, they stopped collecting data in early 2018 (Chehade et al., 2019) and late 2018 (Thema et al., 2019), respectively. However, in 2019 and at the beginning of 2020, several multi-MW projects were announced, which offer a perspective for the future. Lab-scale projects as well as projects initiated before 2000 have been excluded from this article to focus on recent developments. Furthermore, all previous reviews show that Europe has been the leading region for these technology concepts for several years now. Therefore, only European projects are considered here. We also discuss qualitative trends in terms of certain countries or technological features using examples of different projects. This overview clearly shows how the scope of a review can influence the results. Chehade et al. (2019) provided a review on a global scale, which also included lab-scale projects. FIGURE 1 | Overview of Power-to-X process chains based on hydrogen. CO 2 , carbon dioxide; H 2 , hydrogen; H 2 O, water; CHP, combined heat and power; OME, polyoxymethylene dimethyl ether. For the years 2010-2014, they identified 78 eligible projects, while only 37 are taken into account here. As 80% of the projects they identified are European, the consideration of lab-scale projects can be seen as the major difference. METHODOLOGY Power-to-X projects were identified through extensive internet research. Many sources took the form of press releases from companies announcing a new project. Frequently used publications included BWK Das Energie-Fachmagazin, the German Hydrogen and Fuel-Cell Association (DWV) Mitteilungen (membership magazine), the project database of the German Energy Agency (dena, 2020), and the German Technical and Scientific Association for Gas and Water (DVGW, 2019). Non-German literature reporting frequently about new PtX projects is -to the best of our knowledge -not available. While these sources provided an initial indication of a new project, we also sought out announcements of each project in English. The same applies for publications in French, Danish, and other European languages, although this was not always available. To qualify as a project for this publication, the project must be located in Europe, have been initiated after the year 2000, and have a technology readiness level of five or higher (EU, 2014). The other review articles mentioned in the introduction were used to validate the data before 2018. 
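As a minimal illustration of this screening step, the sketch below encodes each candidate project as a simple record and applies the three inclusion criteria (located in Europe, initiated after 2000, technology readiness level of at least five). The field names, the country list, and the example entry are illustrative assumptions rather than the actual database used for this review; the fields simply anticipate the categories described next.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PtXProject:
    # Illustrative fields only; the review's underlying spreadsheet may be organized differently.
    name: str
    country: str
    commissioning_year: Optional[int]   # None if not yet announced
    trl: Optional[int]                  # technology readiness level (EU, 2014)
    electrolyzer_type: Optional[str]    # "PEM", "alkaline", "SOEC", or None if unspecified
    capacity_kw: Optional[float]        # installed electrolyzer capacity in kW

# Subset of countries, for illustration only.
EUROPEAN_COUNTRIES = {"Germany", "France", "Denmark", "Netherlands", "Belgium",
                      "United Kingdom", "Norway", "Iceland", "Spain", "Austria"}

def is_eligible(project: PtXProject) -> bool:
    """Apply the three inclusion criteria used in this review."""
    in_europe = project.country in EUROPEAN_COUNTRIES
    initiated_after_2000 = project.commissioning_year is None or project.commissioning_year > 2000
    demonstration_scale = project.trl is not None and project.trl >= 5  # excludes lab-scale work
    return in_europe and initiated_after_2000 and demonstration_scale

# Hypothetical entry; the TRL value is assumed.
print(is_eligible(PtXProject("Energiepark Mainz", "Germany", 2015, 8, "PEM", 6000.0)))  # True
```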
The analyzed topics can be arranged into three categories: • General information: location (country base), year of commissioning (electrolyzer), out of operation (yes/no). • Technical specifications: power and carbon dioxide supply, type of electrolyzer, capacity of electrolyzer, type of hydrogen processing (e.g., catalytic methanation). • Field of application: gaseous or liquid fuels, industrial application, heat and power generation, blending into the natural gas grid. RESULTS AND DISCUSSION For the analysis, 220 projects that meet the set criteria were identified by June 2020. Twenty different countries are currently undertaking PtX projects, with an increasing number becoming interested in these technological concepts. A complete list of these projects can be found in the Supplementary Material. This section is structured into three sections according to the categories mentioned above. The first section provides an overview of the historical development of PtX in the different countries and discusses how they use different strategies. The second section features a discussion of current and planned installed electrolyzer technologies as well as their capacities. The third and final section takes a look at the design of the X phase and which electrolyzer technology is used for what purpose. General Information From Figure 2, it can be seen that 2018 was the year with the most commissioned projects so far in Europe. In the years to come, fewer projects will be initiated, but the data also shows that installed capacity is still growing rapidly. It also seems that PtX development is following a wave-like pattern with peaks in 2015, 2018, and 2020. Experience has shown, however, that some of the projects scheduled for 2020 will be delayed to 2021 due to technical difficulties and delayed approvals as well as the special circumstances surrounding the global COVID-19 crisis. The year 2024 is also expected to stand out, due to a situation specific to Germany. It had been assumed that the regulatory sandboxes (Reallabore) funded by the German Federal Ministry for Economic Affairs and Energy 1 that can be classified as PtX projects (ten out of 20) would be commissioned in 2024, if they were yet to have published a commissioning year. Furthermore, the three HyPerformer projects (NOW, 2019) are expected to begin in 2025, which is also a conservative estimation since these projects received their notification of funding in December 2019. It has been generally assumed that commissioning takes place in the penultimate year of the project, based on the experience of earlier projects. Twenty European countries are engaged in PtX projects. Furthermore, one pan-European project including the Netherlands, Denmark, and Germany is currently being planned, in which hydrogen is to be produced offshore in the North Sea right next to wind parks with connections to bordering countries (NSWPH, 2019). The country with the most PtX projects is Germany, representing 44% of all identified projects. In the first few years, PtX projects were developed in several different countries. Since 2011, however, Germany has started to increase its interest in this technology, with at least four new projects per year and 13 new projects in 2020. The interest of different countries in PtX projects has been growing constantly over the last few years (see Figure 3). However, in the years to come, the only new countries to launch PtX projects will be Hungary and Slovenia. 
Figure 4 shows the installed capacities of European countries in the last five years and the next five years. As expected, due to its large number of projects (Figure 2), Germany has had the largest installed capacity over the last five years and this is set to increase significantly in the next five years. Demonstration projects have also been realized in several other countries. However, it appears that in the future, these technologies will be implemented by fewer countries but with a higher intensity. Germany, France, the Netherlands, Belgium, and Denmark stand out in particular. Over the next five years, 494 MW of installed electrolyzer capacity is scheduled in Germany. In addition, there are plans for seven more projects, which have not yet specified their electrolyzer capacity, but which will likely also be on the multi-MW scale. In France, 514 MW are projected to be installed in this time period. It is also worth mentioning that in France, this capacity will largely be achieved by two projects alone. The developer H2V PRODUCT (Meillaud, 2019;H2V, 2020) is set to install 500 MW of capacity. In contrast, seven hydrogen projects for fuel provision (hydrogen refueling stations) funded by ADEME (Agence de l'environnement et de la maîtrise de l'énergie) are rather minor (around 1 MW) (FuelCellsWorks, 2020a). In Germany, 28 projects are scheduled for this period. Here, the main drivers of installed capacity are the aforementioned regulatory sandboxes [6 of the 10 projects will have a capacity of 220 MW (BMWi, 2019]). Surprisingly, the United Kingdom has only announced one new PtX project (ITM, 2020a), despite having been relatively active in the past. Furthermore, the United Kingdom's Committee on Climate Change has called for an increased national effort when it comes to using hydrogen in industry and other sectors (Stark et al., 2019). One reason for this might be that in the United Kingdom, hydrogen production from steam methane reforming, which is connected to carbon capture and storage/use in the long term, is being discussed (Bottrell Hayward, 2020). Belgium is an example of a country that has shown little interest in this kind of technology (one project with only 130 kW installed capacity), but is now announcing relatively major projects (one with a capacity of 25 MW and another with a capacity of 50 MW). In Spain, a similar development can be observed. In the 2000s, small projects were developed, whereas the target is now for capacities on the multi-MW scale. Eastern European countries rarely invest in PtX projects. Only Poland, Estonia, and Latvia currently have active projects. Latvia and Estonia are following the same pathway as the countries involved in the Clean Urban Transport for Europe (CUTE) project (Binder et al., 2006) in 2003, initiating their hydrogen activities with an EU project [H2Nodes (FuelCellsWorks, 2020b)] for fuel cell buses. However, the EU has established a new funding scheme -Important Project of Common European Interest (IPCEI)that aims to support countries that are not yet active in PtX development. Furthermore, an IPCEI on hydrogen is currently under development and aims to close the gap between research and development projects and commercialization (Hydrogen for Climate Action, 2020). The first projects should be approved by the end of 2020. 
The central idea behind many of these projects is to produce hydrogen (further processing is not yet planned) in sunny and windy regions, to use some of the hydrogen for local mobility applications, and to export the rest of the hydrogen to other countries. If available, data about the decommissioning of the projects were also gathered (see Supplementary Material). However, such data are hard to come by and too incomplete to allow for a meaningful analysis. In some R&D projects, even the installed technologies are passed on to a follow-up project, for example MefCO2 and FReSMe (2017).
Technical Specifications
The analyzed technical specifications include the power and carbon dioxide supply, electrolyzer types and capacities, as well as technologies for hydrogen processing.
Energy Sources
Energy sources have a major impact on PtX. One argument for PtX is the storage of intermittent VRE generation. Almost half (105) of the projects consider a supply by direct RE technologies, such as wind, photovoltaics, geothermal, or hydropower plants. There is no clear trend as to which renewable technology is preferred, neither in connection with specific electrolyzer types, nor with capacity sizes, nor with countries. Twelve projects describe their PtX benefit as being able to store surplus energy from renewables, which would otherwise be curtailed. This line of argumentation can mainly be seen with German projects. It remains uncertain what the real amount of surplus energy is and what its availability will look like in the future, but the key message is to use RE sources. This is especially true for projects in countries with a high share of electricity produced from fossil fuels as well as all planned projects that rely on certificates (Hulshof et al., 2019). However, almost 13% (28) of the projects do not include a dedicated renewable energy source. They either use electricity from the national grid (whose share of renewables might be quite low) or do not specify the energy source. In particular, during the last decade, Germany, Denmark, and the United Kingdom have had several projects in which the demonstrator was connected to the grid. One project explicitly focuses on the peak shaving of the national grid (Hänel et al., 2019). Most other projects focus on the feasibility of hydrogen production or processing for different applications, rather than demand side management or the demonstration of VRE electricity storage potential. Figure 5 shows the development of different energy source options, making a distinction between direct RE technologies, certificates for RE electricity, surplus RE electricity, and the national grid (not specified sources are included here for simplification). Considering the countries with the highest number of projects (Germany, France, Denmark, and the United Kingdom), no preference for a specific type of electricity supply can be identified. However, there is a clear trend toward including RE sources in projects.
Carbon Dioxide Source
About one third of the projects (70) process hydrogen into other gases, liquid fuels, or chemicals (see section "Hydrogen Processing"), predominantly in Germany (38). The capture of the required carbon dioxide is also included in the PtX project in the majority of cases (60). The sources of the carbon dioxide, however, vary from project to project. To underline the notion of non-fossil carbon sources, the majority (27) of projects obtain carbon dioxide from biogas or biomass plants. In Iceland, the country-specific option of geothermal carbon dioxide is used.
A small number of projects (seven) obtain carbon dioxide from nearby industry sites or even lignite power plants. These large point sources already have capture technologies installed and are seeking utilization options (CCU), since storage options (CCS) are not currently available for these sources. This might be a temporary solution, as large industrial carbon dioxide emitters, such as the steel industry in the case of Sweden, and especially the electricity generation industry will have to change to a low carbon future. However, right now, they can provide carbon dioxide in considerable amounts [e.g., 7.2 t CO 2 /day (Moser et al., 2018)]. These projects provide an insight into the handling and purification requirements of future industrial carbon dioxide sources, which will still exist due to process-related reactions, such as for the cement industry. Projects from the Exytron Group (Schirmer, 2020) require one filling of carbon dioxide from an external source, with carbon dioxide then being captured and recycled from CHP plants using synthetic methane produced through PtX. In addition, carbon dioxide emissions will be emitted in the future during wastewater treatment and waste incineration. Therefore, nine projects include such facilities as a carbon dioxide source. An industry-independent source is provided by DAC. Ambient air generally flows through a filter where either adsorption, absorption, or mineralization removes carbon dioxide from air. Due to the very low carbon dioxide content of air and its resulting high energy demand (heat and electricity), this concept has proven controversial (Fasihi et al., 2019). Nevertheless, this Climeworks AG technology is included in seven out of eight projects, often in combination with solid oxide electrolyzer cell (SOEC) electrolyzers using the synergy of thermal integration in the concept. Electrolyzer Type and Capacity Alkaline electrolyzers have been used in previous projects and will continue to be used in future projects, thus indicating the constant development of this technology. Since 2015, polymer electrolyte membrane (PEM) electrolyzers have gained high shares of the market due to their good partial load range and dynamic behavior. Four projects aim to compare these two technologies and integrate both into their system design. Several attempts have been made to use SOECs in PtX projects, but this technology remains at a much lower technology readiness level. A more uncommon technology is the alkaline solid polymer electrolyte electrolyzer, which is a hybrid between a PEM and an alkaline electrolyzer. This technology was used in three projects. However, the installed capacities are rather small and future demonstration projects using this technology have yet to be announced. From 2022 onward, the share of projects with no dedicated electrolyzer technology is set to increase, which is understandable, since it has not yet been decided which technology is used. Unfortunately, over the past few years, the electrolyzer technology used has not been specified, which results in a total of 24% of the projects with no information about the technology. The cumulative installed capacity (see Figure 6 2 ) shows a constant increase with a noticeably higher rise from 2021 onward. Until the end of 2020, 93 MW of electrolyzer capacity is planned to be installed. 
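The cumulative figures quoted here, and shown in Figure 6, are straightforward to reproduce from the project list: because shut-downs are rarely announced, capacities are simply accumulated over commissioning years and kept separate per electrolyzer technology. The sketch below illustrates that bookkeeping; the input records loosely mirror the 6 MW projects named in the next paragraph plus one invented entry, so they are examples rather than the full dataset.

```python
from collections import defaultdict

# (commissioning_year, technology, capacity_kw); example records only,
# loosely based on projects named in the text plus one invented entry.
projects = [
    (2013, "alkaline", 6000),   # Audi e-gas
    (2015, "PEM", 6000),        # Energiepark Mainz
    (2019, "PEM", 6000),        # H2Future
    (2020, "alkaline", 2000),   # invented placeholder
]

def cumulative_capacity(records, last_year):
    """Cumulative installed capacity per technology in MW.
    Decommissioning is ignored, because shut-downs are rarely reported."""
    added = defaultdict(lambda: defaultdict(float))
    for year, tech, kw in records:
        added[tech][year] += kw
    series = {}
    for tech, by_year in added.items():
        running, points = 0.0, []
        for year in range(min(by_year), last_year + 1):
            running += by_year.get(year, 0.0)
            points.append((year, running / 1000.0))
        series[tech] = points
    return series

for tech, points in cumulative_capacity(projects, 2020).items():
    print(tech, points[-1])   # cumulative MW at the end of the horizon
```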
(Note on Figure 6: many projects do not announce when they have shut down; for this reason, no information about active projects is available and capacities are cumulative. Projects using PEM and alkaline electrolyzers are counted separately according to the technology used; due to the small number of projects using alkaline SPE, they have been excluded from the diagram.) The biggest projects (all with 6 MW of capacity) are the Audi e-gas project (commissioned 2013, alkaline electrolyzer) (Köbler, 2013) and the Energiepark Mainz (commissioned 2015, PEM electrolyzer) (Energiepark Mainz, 2016), both in Germany, as well as the new H2Future project (commissioned 2019, PEM electrolyzer) in Austria (voestalpine, 2019). The expected capacities for the upcoming years are also presented. They are all based on project announcements, many of which have secured funding. Until 2025, the biggest projects will have a capacity of 100 MW. Four projects fall under this category: Element Eins (scheduled for 2022) (E1, 2019) and hybridge (2019) (scheduled for 2023), both in Germany, as well as H2V59 (also scheduled for 2022) (H2V, 2020) and the second H2V PRODUCT project with 400 MW of capacity (planned for 2022/23) (Meillaud, 2019), both in France. From 2026 onward, even bigger projects have been announced, increasing the total installed capacity for electrolyzers to 1.8 GW. The largest single project is HyGreen Provence 2 with 435 MW of installed capacity planned in the final stage by 2030 in France (Le Hen, 2019; Saveuse, 2020). Thema et al. (2019) predicted exponential growth of cumulative installed capacity. Based on the published project data, this appears to be a considerable underestimation, even for the near future. For 2026, they predicted roughly 300 MW of installed capacity worldwide. The research presented here, however, shows that in Europe alone, 1410 MW is expected to be installed by 2026. The main drivers behind this acceleration of growth are the publicly funded projects in many sectors in Germany and several high-investment industry projects in France, for example H2V PRODUCT (Le Hen, 2019). A closer look at the years between 2012 and 2020 with regard to installed capacity and the electrolyzer technology used is also shown in Figure 6. It demonstrates the growing importance of PEM technology for hydrogen production. Not only is the number of projects utilizing PEM electrolyzers constantly growing (Figure 7), but so too are the installed capacities. 2019 was the first year in which more PEM electrolyzer capacity was installed than alkaline electrolyzer capacity (cumulatively). However, alkaline electrolyzers will play an important role again. For example, the 100 and 400 MW PtX projects in France, which are part of the H2V PRODUCT project, will be equipped with alkaline electrolyzers (HydrogenPro, 2019). Due to the level of technological development, only a low level of capacity has been installed for SOECs over the last few years. This technology still needs to demonstrate its usability beyond niche applications. The MultiPLHY project aims to install a 2.6 MW SOEC electrolyzer in a biorefinery (European Commission, 2020). In the same year (2023), an industrial project with a 20 MW electrolyzer capacity is set to be installed in Herøya, Norway, to produce jet fuel using the Fischer-Tropsch process (Norsk e-Fuel, 2020). This would be a much faster technological development from the multi-MW scale to greater than 10 MW than was the case for PEM or alkaline electrolyzers.
For alkaline electrolyzers, it took ten years from the first demonstration projects to achieve a multi-MW scale and another eight years to reach 10 MW. For PEM, it took seven years to achieve the first multi-MW scale and another five years to reach 10 MW. For SOECs, the first step of technological development took nine years -in contrast to PEM and alkaline electrolyzers -the second step, however, is expected to follow instantly. In Figure 7, the distribution of electrolyzer capacities is depicted. The smallest class of electrolyzers (below 5 kW) is used extremely rarely, because they are too small for demonstration projects and are only considered for laboratory use. For electrolyzers below 100 kW, alkaline electrolyzers tend to be used. This is because of earlier projects, during which small capacities were installed and the preferred technology was alkaline electrolysis due to its higher maturity back then. On the other hand, the Exytron projects (Schirmer, 2020) -most of which have a capacity of 100 kW or below -will all use alkaline electrolyzers, stating the cost advantages of this technology due to its higher maturity. At present, SOECs are predominantly installed at a capacity between 100 and 500 kW, although they are less developed than PEM and alkaline electrolyzers. A trend toward smaller capacities might have been expected due to the lower level of technological development. Electrolyzers with a capacity between 0.5 and 1.0 MW are rarely used. Electrolyzer developers decided to opt directly for a size bigger than 1 MW. For sizes above 1 MW, more PEM electrolyzers are installed than alkaline electrolyzers. A relatively high share of projects, which have not yet defined their electrolyzer technology, are set to install multi-MW electrolyzers. Furthermore, a significant number of projects have not yet made a decision on electrolyzer type or capacity. Hydrogen Processing At present, only around one third of the projects are processing hydrogen into other fuels and products (see Figure 8). If hydrogen is treated further, mainly methane is produced that can be easily injected into the natural gas grid. Methanation can be realized in a catalytic and biological way. Biological methanation, for example, can be used if biogas or sewage gas needs to be upgraded to biomethane by injecting hydrogen into the biogas. A good example of the holistic use of PtX is its application in wastewater treatment plants. In the Swisspower Hybridkraftwerk project (Viessmann, 2019), hydrogen is used to enhance the methane content in the sewage gas. In another PtX project -LocalHy in Germany -the additional oxygen produced is used directly for wastewater treatment (localhy, 2019). Denmark is another country with several biological methanation projects. Catalytic methanation shows higher efficiencies, but it is also more complex from a technical perspective. However, the Exytron projects -seven projects with CHP production in residential buildings -show that catalytic methanation is on its way toward commercialization (Schirmer, 2020). Although the number of projects suggest a balance between catalytic and biological methanation, catalytic methanation is more commonly used in bigger projects, as Thema et al. (2019) have also stated (twice as much capacity for catalytic methanation). The trend becomes even more apparent when considering very recent projects. In the foreseeable future, 19 MW of installed electrolyzer capacity will be connected to biological methanation, and 122 MW to catalytic methanation. 
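To give a rough sense of what such electrolyzer capacities mean in terms of product, the following back-of-the-envelope sketch converts capacity into annual hydrogen and methane output via the Sabatier reaction (CO2 + 4 H2 -> CH4 + 2 H2O). The full-load hours, specific electricity demand, and conversion efficiency are assumed round numbers for illustration; they are not taken from the reviewed projects.

```python
def methane_from_electrolyzer(capacity_mw,
                              full_load_hours=4000.0,       # assumed operating hours per year
                              kwh_per_kg_h2=52.0,           # assumed electrolyzer demand incl. auxiliaries
                              methanation_conversion=0.98):  # assumed H2 conversion in the reactor
    """Rough annual hydrogen and methane output for a PtG chain (Sabatier: CO2 + 4 H2 -> CH4 + 2 H2O)."""
    h2_kg = capacity_mw * 1000.0 * full_load_hours / kwh_per_kg_h2
    # Stoichiometry: 4 mol H2 (4 * 2.016 g) yield 1 mol CH4 (16.04 g)
    ch4_kg = h2_kg * methanation_conversion * 16.04 / (4 * 2.016)
    return h2_kg / 1000.0, ch4_kg / 1000.0   # tonnes per year

for mw, label in [(19, "biological methanation"), (122, "catalytic methanation")]:
    h2_t, ch4_t = methane_from_electrolyzer(mw)
    print(f"{mw} MW -> ~{h2_t:,.0f} t H2/a, ~{ch4_t:,.0f} t CH4/a ({label})")
```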
Furthermore, 100 MW alone will be installed in the efossil project (hybridge, 2019) in Germany. Only a few projects are attempting to develop technologies for liquid fuel production. Methanol is the product most likely to be targeted. The George Olah Plant in Iceland already proved in 2011 that this is a technically feasible option.
FIGURE 8 | Process steps of Power-to-X projects in Europe with a focus on methanation technologies. n.s., not specified.
The second most used technology is the Fischer-Tropsch process to produce mainly diesel or jet fuel (five projects) and other carbon-based co-products. They all use SOECs for hydrogen production. The four smaller projects are all located in Germany, but the most recently announced and largest project is situated in Norway (Norsk e-Fuel, 2020) due to the high availability of electricity from renewable sources. Other products include DME (Moser et al., 2018) and industrial products like waxes (Karki, 2018) or formic acid (Bär, 2014). However, no further projects are currently planned that go beyond methanol or Fischer-Tropsch fuels. Since liquid fuels based on electricity will have to play a major role in future energy systems (e.g., Ram et al., 2019), greater efforts are needed to develop these technologies.
Fields of Application
Fields of application include the blending of the produced gas - mainly hydrogen or methane - into the national gas grids; the production of fuels for mobility applications, for example hydrogen in fuel cell electric vehicles, or methane, methanol, or Fischer-Tropsch fuels in internal combustion engines; the use of the produced gases in CHP plants; and the use of the gases in industry, for example in refineries or steel plants. For some projects, no such purpose was detected. Although projects whose main field of application was fuel production were already being developed in the early 2000s, the dominance of such projects is a rather recent trend. In our previous article from 2018 (Wulf et al., 2018), blending gases into the natural gas grid was the most common form of application. In that article, we mentioned that the interest in industrial applications was growing, a trend which has proven to be true. Although no further CHP projects are scheduled for after 2023, this does not mean they will no longer be implemented. If the Exytron projects and the Vårgårda housing project prove to be successful, similar projects will arise. However, these projects have rather small installed electrolyzer capacities (below 500 kW) and are easily overlooked. As can be seen in Figure 9, fuel production is the most common field of application of PtX in Europe, with a 37% share of all projects. In some countries, there is a preference for certain applications. This is most apparent in the United Kingdom, for instance, where fuels are produced in the majority (67%) of the projects. The main driver behind this trend in the United Kingdom is ITM Power (ITM, 2020b), a company which produces electrolyzers as well as owning and operating several hydrogen refueling stations in the United Kingdom and France. In Germany, the greatest number of projects are also in the field of fuel production (32%). However, significant numbers of projects are found in all fields of application, and compared to other countries, industry-based PtX projects are of higher interest.
The trend toward industrial PtX applications is also likely to increase, as one of the aims of Germany's National Hydrogen Strategy (Die Bundesregierung, 2020) is to foster industrial applications. Furthermore, in the field of mobility, aviation, shipping, and heavy-duty vehicles are more likely to be funded than individual mobility. Denmark is another country with a clear preference for a certain technological purpose. In Denmark, 57% (8) of all projects are blending the produced gas into the natural gas grid. A methanation plant is used in seven out of the eight Danish projects, primarily biological methanation. However, these projects were all commissioned in the past; in the future, Danish projects will also focus on industrial applications and fuel production. In the Netherlands, the focus is on blending and industrial projects. The development of industrial applications is a rather recent trend of the 2020s. Based on the number of projects in France, the most common field of application is fuels. The seven recently approved ADEME projects (see section "General Information") will contribute significantly to this development. Based on the installed capacities, fuels are also important owing to the HyGreen Provence projects (Le Hen, 2019). Furthermore, future projects increasingly combine multiple fields of application. Certain types of electrolyzer are preferred for different fields of application (see Figure 9). For CHP purposes, an alkaline electrolyzer is used in almost 50% of the projects, whereas for industrial applications, a PEM electrolyzer is used in 47% of the projects. However, both industrial applications and PEM electrolyzers have increased significantly in recent years (Figure 10), which explains the correlation between these two parameters. The trend toward the renewed usage of alkaline electrolyzers in the upcoming years is mainly driven by the CHP-oriented Exytron projects (Schirmer, 2020). They all use alkaline electrolyzers, since these are more technically mature and less expensive. Furthermore, this is one of the few cases where customers are already starting to see economic viability (Schirmer, 2020). As SOEC technology is less mature than PEM and alkaline electrolyzers, it is not surprising that this type of electrolyzer has yet to find a preferred field of application. Many projects planned for the future have not yet specified the type of electrolyzer used, which leads to the assumption that there is no strong connection between electrolyzer technology and fields of application. However, this line of argumentation is contradicted by the fact that some companies, such as Exytron, are using alkaline electrolyzers for CHP, while Sunfire is using SOECs and Fischer-Tropsch synthesis for fuel production in numerous projects.
FIGURE 10 | Historical development of Power-to-X projects with regard to fields of application. CHP, combined heat and power; n.s., not specified.
As mentioned above, CHP projects most often have installed electrolyzer capacities below 500 kW. No such correlation can be drawn for fuel production, however, since fuel production might refer to on-site hydrogen production at a single hydrogen refueling station (e.g., Løkke and Simonsen, 2017) or to centralized e-fuel production (e.g., Thomsen, 2019).
CONCLUSION
This analysis has shown that the development of PtX technologies is progressing quickly and will continue to do so in the near future. The planning and commissioning of PtX projects is expanding at a rapid rate. A new project is announced almost every week.
This review therefore provides merely a snapshot of this development. Although the maximum number of commissioned plants was already reached in 2018 and fewer projects will be initiated in the upcoming years, installed electrolyzer capacities are getting larger and larger. This indicates that a consolidation is taking place, as fewer projects are closer to commercialization. The development of PEM and alkaline electrolyzer technologies has been good and these technologies are used very often, although there seems to be an apparent preference for the more mature alkaline technology in the future. Solid oxide electrolyzer cells are catching up in their technological development with multi-MW projects. However, the development of commercial applications is limited to one company (Sunfire), whereas several companies are involved in the development of PEM and alkaline electrolyzers. Methanation is used in many applications and has proven its feasibility for hydrogen processing. The choice between biological and catalytical methanation seems to be more a question of the project's aim rather than one of its technical maturity. Only a handful of projects are focusing on the production of liquid fuels, despite the fact that such fuels will be crucial for defossilized energy systems (Ram et al., 2019;Bauer and Sterner, 2020). Greater effort needs to be made in terms of liquid fuel production. The different technological developments of PtX technologies gives reason to believe that in the future we will see a division of major projects fostering technologies on the edge of commercialization. However, small projects will focus on technological development rather than large-scale implementation. This might also include the valorization of coproducts, in particular oxygen. Very little effort has been made in terms of the use of oxygen, for example in wastewater treatment plants and innovative heat integration strategies. Most of the discussed projects are dependent on public funding. However, the different technologies are getting closer to commercialization. This is also underlined by the introduction of the IPCEI on hydrogen. This should allow new countries, for example Portugal and eastern European countries, to participate in PtX projects. Furthermore, these projects will ensure the installation of sufficient capacities of RE, mainly wind and photovoltaics. The roll-out of new RE generation facilities is a prerequisite for many countries to enable the nationwide use of PtX technologies for the defossilization and decarbonization of the future economy; whether PtX is directly coupled with the generation of electricity or the use of RE sources is ensured by certificates. Although 220 projects in 20 different countries have been identified in Europe, a clear regional focus has been established with France and Germany as the leading countries. Both countries plan to install around 500 MW of capacity by 2025. In Germany, this capacity will be reached through many different projects with various purposes and motives. In France, however, it is an altogether more concentrated effort involving one company. With PtX technologies still in the precommercialization stage, the diversified strategy with distributed risks appears to be the more sustainable one. Other very active countries are Denmark and the Netherlands. Both countries border on the North Sea, where the large potential for offshore and onshore wind can guarantee the efficient production of hydrogen and other PtX products. 
AUTHOR CONTRIBUTIONS CW conducted the research and undertook most of the writing. PZ helped to conceptualize the manuscript, as well as support the analysis, and give a critical review. AS helped with the research and gave a critical review. All authors contributed to the article and approved the submitted version. FUNDING This work was funded by the Helmholtz Association of German Research Centres.
Ambipolar ferromagnetism by electrostatic doping of a manganite Complex-oxide materials exhibit physical properties that involve the interplay of charge and spin degrees of freedom. However, an ambipolar oxide that is able to exhibit both electron-doped and hole-doped ferromagnetism in the same material has proved elusive. Here we report ambipolar ferromagnetism in LaMnO3, with electron–hole asymmetry of the ferromagnetic order. Starting from an undoped atomically thin LaMnO3 film, we electrostatically dope the material with electrons or holes according to the polarity of a voltage applied across an ionic liquid gate. Magnetotransport characterization reveals that an increase of either electron-doping or hole-doping induced ferromagnetic order in this antiferromagnetic compound, and leads to an insulator-to-metal transition with colossal magnetoresistance showing electron–hole asymmetry. These findings are supported by density functional theory calculations, showing that strengthening of the inter-plane ferromagnetic exchange interaction is the origin of the ambipolar ferromagnetism. The result raises the prospect of exploiting ambipolar magnetic functionality in strongly correlated electron systems. Supplementary Note 1. Ionic liquid gating In our study, the gate voltage, VG, was applied at room temperature, and the voltage remains during the low-temperature measurement. Sheet resistance (RS) measurement is performed during the cooling process from 300 to 2 K and the desired gate voltage was maintained throughout the whole process. The same gating procedures are maintained in all measurements. Before characterizing our samples, a leakage current test was performed for every sample, with purposes of selecting functional devices and reducing leakage current. The leakage current test was always conducted using two sequential processes, namely a cooling process with VG = -1 V applied from 300 to 180 K and a warming process without VG from 180 to 300 K. The leakage current remains in the nA range for good samples. It shall be noted that the leakage current is typically reduced after the two processes, possibly benefited from crystallization of water moisture in the ionic liquid. We have completed a comprehensive characterization of the device stabilization and relaxation in a vacuum environment of 10 -4 Torr at 300 K. Supplementary Figure 1 shows the temporal changes in the RS when a constant gate voltage is applied at 300 K. With the VG applied, the RS reaches a nearly constant value after 30 min. Subsequently, the gate voltage is settled to zero, the RS gradually returns to the initial value of RS after a few hundred minutes at VG = 0. Finally, RS reaches a stable value that remains approximately constant, demonstrating that the electrostatic gating is the dominant effect. Figure 1. Temporal changes in the sheet resistance (RS) of an ionic liquid/3 unit cell (uc) LaMnO3/SrTiO3 (LMO/STO) device. The RS as a function of time on applying a positive/negative voltage for 30 min and then setting the gate voltage to zero. All measurement were carried out inside a high vacuum chamber at a pressure of 10 -4 Torr. Supplementary Note 2. Electrical and morphology characterization before and after the gating We conducted electrical and atomic force microscope measurements on 3 unit cell (uc) LaMnO3 (LMO) samples before and after the gating measurements. 
Supplementary Figure 2a shows the RS as a function of temperature of an as-grown LMO film before adding the ionic liquid, exhibiting semiconducting behaviour. Supplementary Figure 2b shows a typical atomic force microscope image of the surface of the as-grown 3 uc LMO. The surfaces of the samples are atomically flat, with a root-mean-square (RMS) surface roughness of ~0.16 nm. Subsequently, the temperature-dependent RS of the LMO thin film was measured with positive and negative 3 V VG applied across the ionic liquid at 300 K. Supplementary Figure 2c shows that -3 V induces a clear metal-to-insulator transition, and the RS recovers its original state after the gate voltage is removed. Then, a +3 V VG was applied and the device showed consistent transport phenomena. In the case of structural defects, such as cationic defects or oxygen vacancies, the RS would not immediately revert to the original state, due to the extremely low mobility of the defects around or below room temperature and their dramatic impact on RS.
Supplementary Note 4. Cooling and warming runs for the resistance measurements
In manganites, competing phases, namely the ferromagnetic conducting phase (FM-C) and the charge-ordered insulating phase (CO-I), may coexist at the mesoscopic length scale. Typically, the presence of thermal hysteresis serves as evidence of phase separation in manganites, and its absence may suggest a single-phase magnetic state in our samples. In addition, studying phase separation from first principles would be very valuable. However, the task of taking into account non-uniform doping and local structural distortions in first-principles calculations, which also need to include electron-electron correlation effects, to explore competing phases in strongly correlated oxide materials is enormously complicated. Thus far, there is no clear understanding of how even to approach this problem. The only attempts are analyses based on model Hamiltonians 4,5, which were used to map out the spatial distribution and size of the competing phases on the mesoscopic scale. Therefore, more work is needed to understand the possibility of competing phases in our samples, which we cannot completely exclude.
Supplementary Figure 4. Cooling and warming measurement of resistance for both electron- and hole-doped LaMnO3. The electron- and hole-doping were realized by applying +3 and -3 V VG across the ionic liquid.
Supplementary Note 5. Hall effect measurement
The Hall effect measurement was performed at different temperatures in a magnetic field of up to 9 Tesla. The Hall conductivity (σxy) is given by the expression σxy = ρxy/ρxx², where ρxy and ρxx are the transverse and longitudinal resistivity, respectively. The σxy of a CMR material can usually be expressed as σxy = σxy^OHE + σxy^AHE, where σxy^OHE and σxy^AHE are the ordinary and anomalous Hall conductivity, respectively. At room temperature, 300 K, the slope of ρxy is dominated by the anomalous Hall effect (AHE) contribution. Hence, the sign of ρxy at 300 K can indicate a reversed type of carrier for both the electron- and hole-doping gating cases 6,7. It is therefore inappropriate to infer the sign and density of the carriers at room temperature. At low temperature, ρxy has a measurable and stronger ordinary Hall effect (OHE) contribution at high field. The OHE coefficient can be written as RH = 1/(n(p)e), where RH is the ordinary Hall coefficient, e is the elementary charge, and n (p) is the carrier density for electrons (holes) (see Fig. 2d of the main text). The AHE, in turn, can arise from different mechanisms, which dominate in different regimes of the longitudinal conductivity (σxx).
As a function of σxx for diverse materials, the three broad regimes are (i) a high-conductivity regime [σxx > 10^6 (Ω cm)^-1], in which the AHE is dominated by skew scattering and the normal Hall effect can be visible; (ii) an intermediate regime [10^4 (Ω cm)^-1 < σxx < 10^6 (Ω cm)^-1], where the AHE is independent of σxx due to the intrinsic contribution; and (iii) a bad-metal regime [σxx < 10^4 (Ω cm)^-1], where the AHE changes dramatically with changing σxx. In particular, the intrinsic mechanism occurs in magnetic materials with strong spin-orbit coupling, such as oxides and diluted magnetic semiconductors (DMSs). Manganites, including the gated 3 uc LMO, have a σxx between 10^-1 and 10^4 (Ω cm)^-1 and are therefore in the bad-metal regime. Unfortunately, direct measurement of magnetism in LMO is prohibited by two major technical challenges, namely (a) the extremely weak signal from the limited volume of our 5 uc-thick, 300 µm-long and 50 µm-wide LMO, and (b) spurious signals from the applied current, the gold electrodes, and contamination in the ionic liquid.
Supplementary Figure 5. ρxy of (a) hole- and (b) electron-doped LMO under various gate voltages at 2 K.
Supplementary Note 6. First-principles calculations
Theoretical modelling of the orthorhombic Pbnm LMO was performed using density functional theory, the projected augmented wave method, and PBEsol pseudopotentials 8, as implemented in the Vienna ab initio simulation package 9. Correlation effects beyond the generalized gradient approximation (GGA) were treated at a semi-empirical GGA+U level within a rotationally invariant formalism with U = 5 eV on the Mn 3d-orbitals 10. The exchange constants were obtained by mapping the total energies of five magnetic configurations onto a Heisenberg model with couplings Jab1 and Jab2, the first and the second nearest-neighbor intra-plane exchange interactions, and Jc1 and Jc2, the first and the second nearest-neighbor inter-plane exchange interactions. Taking into account a reference energy, this system of five linear equations contains five unknowns and can be uniquely solved with respect to the exchange constants. In a layered system, it is convenient to express the exchange in terms of the total intra-plane exchange Jab and the total inter-plane exchange Jc. Considering the number of the first and the second nearest neighbors, the total intra-plane exchange coupling is given by Jab = 4Jab1 + 2Jab2. Similarly, the total inter-plane exchange Jc is obtained by weighting Jc1 and Jc2 with the corresponding numbers of inter-plane neighbors. Supplementary Figure 6h shows the calculated nearest-neighbor exchange parameters Jab1, Jab2, Jc1, and Jc2 as a function of doping. As seen in the figure, three of the couplings are positive (ferromagnetic), while one of the second nearest-neighbor couplings is negative (antiferromagnetic). With doping, the first nearest-neighbor couplings Jab1 and Jc1 increase, one of the second nearest-neighbor couplings remains nearly constant, and the other decreases. The overall intra-plane exchange is dominated by the first nearest neighbors, which enter with twice the weight of the second nearest neighbors, and this leads to FM ordering in the plane. The orbital ordering of biaxially strained LMO was found to be similar to that known for bulk LMO. Supplementary Figure 6f shows the charge density contour of LMO in the a-b plane, which reveals a checker-board-type orbital ordering in that plane, similar to that found previously for bulk LMO 12. This ordering is maintained when the system is homogeneously doped.
Supplementary Note 7. Structural characterization
First, we imaged 3 uc LMO deposited on an STO substrate. Supplementary Figure 7a shows a high-angle annular dark field (HAADF) image of the sample along the [010] zone axis. Supplementary Figure 7b shows the EELS of the uncapped 3 uc LMO. In the ADF image, the contrast of the outermost unit cell of LMO becomes gradually blurry.
But, the EELS signal from the outermost unit cells of LMO is still visible. This indicates that the outermost unit cell of LMO becomes partially amorphous possibly due to the damage by the focused ion beam during the STEM sample preparation. Therefore, in order to correctly characterize the 3 uc LMO and avoid the damage during the STEM sample preparation, we added a STO capping layer to protect the surface of LMO and performed STEM-EELS using the same conditions (see Fig. 1 of Main Text).
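As a compact illustration of the Hall analysis outlined in Supplementary Note 5, the sketch below computes the Hall conductivity σxy = ρxy/ρxx² and extracts a carrier density from the high-field slope of ρxy using RH = 1/(n e). The input curves are synthetic placeholders with an assumed shape, not the measured data of the gated LMO devices.

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge in C

def hall_analysis(B, rho_xy, rho_xx, high_field_cut=5.0):
    """sigma_xy = rho_xy / rho_xx**2 (as defined in Supplementary Note 5) and the
    carrier density n = 1/(|R_H| e) from the high-field slope of rho_xy(B).
    Inputs are in SI units (T and ohm*m)."""
    sigma_xy = rho_xy / rho_xx**2
    # Use the positive high-field branch, where the AHE has saturated and only adds a constant offset.
    mask = B >= high_field_cut
    R_H = np.polyfit(B[mask], rho_xy[mask], 1)[0]   # slope in ohm*m/T = m^3/C
    n = 1.0 / (abs(R_H) * E_CHARGE)                 # carriers per m^3; the sign of R_H gives the carrier type
    return sigma_xy, R_H, n

# Synthetic example: linear OHE plus a saturating AHE-like term (assumed shape and magnitudes).
B = np.linspace(-9.0, 9.0, 181)
rho_xx = np.full_like(B, 1.0e-4)                 # assumed longitudinal resistivity, ohm*m
rho_xy = -5.0e-10 * B + 2.0e-8 * np.tanh(B)      # assumed transverse resistivity, ohm*m
sigma_xy, R_H, n = hall_analysis(B, rho_xy, rho_xx)
carrier = "electron-like" if R_H < 0 else "hole-like"
print(f"R_H = {R_H:.2e} m^3/C, n ~ {n:.2e} m^-3 ({carrier})")
```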
Forced Gifts: The Burden of Being a Friend In many developing countries, gift expenses account for a substantial share of total household expenditures. As incomes rise, gift expenses are escalating in several developing countries. We develop a theoretical model to demonstrate how (unequal) income growth may trigger “gift competition” and drive up the financial burden associated with gift exchange. We use unique census-type panel data from rural China to test our model predictions and demonstrate that (1) the value of gifts responds to the average gift in the community, (2) the escalation of gift giving may have adverse welfare implications (especially for the poor), and (3) escalating gift expenses crowd out expenditures on other consumption items. INTRODUCTION Gift giving is ubiquitous in many developing countries, and gift expenses constitute a large share of living expenditures for many households. This is particularly true in rural areas, where limited opportunities to obtain formal insurance imply that individuals rely on tight-knit social networks to cope with adverse shocks. Gift giving plays an important role in signaling friendship, sustaining networks, and facilitating reciprocity (Levine 1998). According to Mauss (2011, 2-3), gift giving, altruism, and reciprocity are the "human rocks on which societies are built." However, in several developing countries gift expenses are increasing rapidly, perhaps to the point where they become a burden on household budgets. In rural China, the gift-income ratio is now as high as 10 percent and continues to rise (Yan 1996;Chen, Kanbur, and Zhang 2011). According to our own data, presented in detail below, the average gift expenditure in the sampled villages grew by 40 percent per year between 2004 and 2011. During this period, annual nominal income grew by "only" 20 percent. Anecdotal evidence suggests that people increase their gift expenses in an effort to "keep up with the Joneses." Some households borrow to finance gift giving, or tighten their belts-reducing expenditures on other consumption items. Some even engage in risky behavior such as selling blood for the purpose of extending gifts (Chen and Zhang 2012). Escalating gift giving is also observed in other countries, including countries in West Africa and Southeast Asia. For instance, in Ghana, friends and inlaws demonstrate sorrow and respect for the deceased by giving (increasingly) generous gifts during funerals. Such gift giving can be semipublic, because gifts are usually announced to the public and written down (Witte 2003;Boni 2010;Wang 2016). Why do people engage in such extensive gift giving? Sufficiently generous (costly) gifts can be interpreted as a sign of altruism or friendship, cementing (reciprocal) relationships between specific individuals (Levine 1998). 1 Generous gifts can also be a signal of public-spiritedness to all members of the local community, translating into high status (Hopkins and Kornienko 2004;Brown, Bulte, and Zhang 2011;Hopkins 2014), which may be useful in the pursuit of private objectives (Postlewaite 1998). We are agnostic about the exact motives dominating the decision to engage in generous gift giving-personalized signaling or signaling to the community at large. Presumably, both motives matter for most individuals, but from the perspective of our theory the underlying motivation is unimportant. What matters is that most of the literature on "impure altruism" or "competitive altruism" assumes that income is observable. 
When income is observable, the appropriate (or reference) value of gifts given presumably varies with the givers' own income. Households only need to assure that their gifts meet the reference level appropriate for their income or capacity to give, rather than caring about others' gift expenses. Such models do not necessarily predict gift escalation. In this paper we develop and test an alternative theory to explain escalating gift giving, and study its implications for welfare and inequality. We propose that in the absence of information about the exact income of gift givers, people may evaluate generosity by comparing the value of specific gifts to the average value of gifts given in the community. Usually in regions where gift giving escalates, the average gift value is readily observable and provides a natural reference point. To build reciprocal relationships, therefore, one has to give at least as much as others-preferably more. Such generous gifts bid up the average value of gifts in the reference population. To maintain the status of "friend," people have to "keep up" and also give more. We show that the welfare implications of this process vary with income levels. Specifically, the poor are made worse off. Model predictions are applied to data from rural China, where the unequal inflow of remittances and government transfers provides a particularly rich testing ground to explore the dynamics of social contests. We collected detailed census data on gift giving, incomes, and subjective welfare. Moreover, reference populations are identified in so-called natural villages (Knight, Song, and Gunatilaka 2009;Mangyo and Park 2011). We find considerable support for our theory of gift giving as a competitive and inequality-enhancing process. This paper contributes to several strands of literature. It speaks to the literature on informal insurance and risk sharing. Although we are not the first to suggest that sharing in informal networks may have a "dark side," we provide an alternative mechanism to explain how informal solidarity networks supported by gift giving may adversely affect household welfare. 2 Our paper is also related to the literature on status. Focusing on rural China, Brown, Bulte, and Zhang (2011) demonstrate that people care about their rank in local society and engage in conspicuous consumption to acquire and protect status. Status races also involve negative externalities, likely resulting in Red Queen-type outcomes where "it takes all the running you can do, to keep in the same place" (see Hopkins and Kornienko [2004] for a theoretical treatment). Even more closely related to our paper is the small literature on gift behavior and impure altruism. Levine (1998) and Hopkins (2014) highlight the benefits of signaling altruism in a generalized setup. Inspired by the models, we develop a theoretical framework, and our empirical findings help cast light on rising levels of gift expenses and the distributional consequences of this trend. The remainder of the paper is organized as follows. In section 2 we sketch the background and develop a simple theoretical model. We also derive several testable hypotheses. Section 3 introduces the data, and section 4 outlines our empirical strategy. We present our estimation results in section 5. Section 6 concludes. 
2 An emerging literature points to the potential adverse incentive effects of gift giving and informal exchange in kin networks (Di Falco and Bulte, 2013;Baland, Somanathan and Wahhaj, 2013;Malmendier, Velde and Weber, 2014;Ozier and Jakiela, 2015). A so-called moral economy of sharing, or forced solidarity, encourages free riding and attenuates incentives to accumulate or self-protect against (idiosyncratic) shocks. CONCEPTUAL FRAMEWORK Institutional Background Across the developing world, people rely heavily on nonmarket transactions such as reciprocal exchange due to ineffective contract enforcement and imperfect markets. Strong social connections are necessary for risk sharing and favor exchange. In rural China, for example, guanxi plays an essential role in every area of life-from harvesting, to house building, to personal financing (Yan 1996). To cultivate and strengthen these connections, people spend heavily on gift giving, which signals the social distance between gift givers and recipients (Yan 1996;Gold, Guthrie, and Wank 2002;Wang 2016). So, gift giving in China serves not only as a kind of "quasi-credit" or insurance (Fafchamps, 1999;Chiappori et al. 2014), but also as a signal of social distance or generosity. The values of gifts are usually observable in China. Particularly at wedding ceremonies or funerals, gifts in kind are presented to all participants, while cash gifts are recorded in a "gift-list book" (see Yan 1996). Similar customs are found in other countries, for example, in Ghana, where gift values are not only recorded but also announced at funerals (Witte 2003). A Simple Model of Gift Competition In this section we develop a simple model of competitive gift exchange, inspired by Hopkins (2014). In our model, the benefits of friendship are due to network linkages, which may have instrumental value (for example, mutual insurance or sharing, or assistance in times of need). We do not model any direct utility gains or intrinsic value associated with engaging with friends, or from knowing they do well (altruism). Obviously, such sentiments are important as well, and indeed help explain the signaling value of gifts. In Levine (1998) and Hopkins (2014), gift givers do not need to compete with each other because incomes are assumed to be observable. By relating the value of the gift to a giver's income, the recipient can infer the giver's "type." However, income levels are not always perfectly observable-not even in rural areas in China, where more and more households have come to rely on off-farm income or remittances in recent years. 3 The absence of information about the giver's income forces the recipient to use an alternative reference level to assess whether a gift was sufficiently costly. A natural candidate is the average gift value in the local community. Specifically, agents are more likely to be perceived as true friends if their gifts are more valuable than the gifts given by others. 4 Our model of gift competition and its welfare implications are based on this practical solution. To simplify the modeling, we refrain from explicitly modeling the benefits of strong network relationswhich may be derived from a range of exchange services (for example, sharing, participation in labor teams, and so on). Instead, we develop a reduced-form shortcut, capturing the beneficial impact of generous gift giving on utility directly. Assume a community consists of N agents and that each agent allocates his or her income to consumption goods ( x ) and gifts ( y ). 
Also assume that the function v(·) maps gift expenses into a stream of benefits associated with reciprocal relationships (Levine 1998; Hopkins 2014). The net benefits of such network links for individual i in the community, v(·), are determined by two variables: own gift expenses, y_i, and a reference gift level, ȳ. As y_i increases, an agent is more likely to be taken as a friend. In contrast, higher values of ȳ raise the bar and reduce the benefits of gift giving. Hence, ∂v/∂y_i > 0 and ∂v/∂ȳ < 0. One specification that captures these assumptions writes v as an increasing function of how own gifts compare with the gifts of others, where G = Σ_j y_j denotes the total value of gifts in the community. If people use the average gift as the reference level, ȳ is given by ȳ = (G − y_i)/(N − 1). Next, θ is a parameter capturing the sensitivity of reciprocity with respect to how one's own gifts compare with the gifts of others. It is assumed to be sufficiently small so that the net benefits of reciprocity are always positive. We further assume that all agents have the following utility function, which can be decomposed into two elements: the first term on the right-hand side represents the utility from consuming goods (purchased, x_i, or received as gifts), and the second is the benefit of reciprocity, v(·). We assume gifts are equally distributed among members of the community, so that (G − y_i)/(N − 1) represents the gifts received by agent i from others. We further assume 0 < γ + β < 1, ensuring diminishing marginal utility.5 Following Hopkins and Kornienko (2004) and Hwang, Reed, and Hubbard (1992), the reciprocity term v(·) enters multiplicatively into the preference equation (3). Our utilitarian perspective on friendship assumes that the benefits from network relations vary with own consumption (which determines the degree to which interhousehold sharing is feasible). Gift exchange can be costly for several reasons. There may be "deadweight losses" associated with gifts (Waldfogel 1993), caused by a mismatch between the preferences of the giver and recipient.6 For example, in the context of rural China, welfare losses will occur because many in-kind gifts are overly "luxurious": gift givers buy expensive food and commodities that are different from the items that they would purchase for their own consumption. The "expressive" function of gift giving implies that the cost of the gift is usually higher to the giver than the level of utility it generates for the recipient.7 The wedge between the value of the good and the cost to the giver implies the following budget constraint: x_i + p y_i = z_i (4), where z_i is household income and p is the relative price of gifts, which is usually higher than 1.
5 Note that we assume agents have zero utility if they do not engage in gift giving: y_i = 0 implies v_i = 0 and U_i = 0. In other words, it is always a dominant strategy to defend reciprocal relationships, so we do not formally discuss the case of corner solutions. Of course, alternative specifications are possible (Hopkins and Kornienko 2004). We acknowledge that the utility will be greater than zero even under "autarky." Hence, in reality people may "opt out" if gift expenses are too burdensome.
6 In another social and cultural context, Waldfogel (1993) finds that the average recipient values gifts at 87 percent of the cost to the giver.
7 Additional welfare losses may be due to the intertemporal dimension of gift exchange. Gifts are given at wedding ceremonies, which occur infrequently, so individual households may be "net givers" for a long period, which may result in liquidity problems. Escalating gift values also imply that early recipients may have to return more than they received, so there will be arbitrary winners and losers.
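For concreteness, one parameterization consistent with the assumptions in this section is sketched below in LaTeX; it is an illustrative reconstruction rather than the authors' exact equations (1)-(3), and any functional form with the same sign properties would serve.

\[
v(y_i,\bar{y}) = 1 + \theta\,(y_i - \bar{y}), \qquad
\bar{y} = \frac{G - y_i}{N-1}, \qquad
G = \sum_{j=1}^{N} y_j ,
\]
\[
U_i = \left( x_i + \frac{G - y_i}{N-1} \right)^{\beta} v(y_i,\bar{y})^{\gamma},
\qquad 0 < \gamma + \beta < 1 .
\]

Only the qualitative properties matter for what follows: v rises in own gifts, falls in the reference level, and multiplies a concave consumption term.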
Many gifts are in-kind, but selling gifts for cash is limited due to social norms and related reasons (Vaillant et al. 2014).
Individual Analysis
Suppose agents maximize utility by allocating their income to a bundle of consumption goods and gifts, taking others' gift expenses as given. Given utility function (3) and budget constraint (4), the optimal amount allocated to gifts by household i is given by expression (5). According to (5), gift expenses are determined by the own income level, z_i, and others' gift expenses, ȳ. It is easy to demonstrate that ∂y_i/∂ȳ > 0: as the value of the average gift increases, household i will give more expensive gifts to retain the signaling value of its gifts; it becomes increasingly expensive to signal altruism. Gift expenses squeeze spending on other consumption goods. Offsetting these costs, to some extent, is the enhanced inflow of gifts, that is, the increase in (G − y_i)/(N − 1), which will make the household better off. As long as θ or the price wedge p is sufficiently large, however, gift competition results in a negative net welfare effect. In the Appendix we show that for θp > 1 the net welfare effect is negative, since the extra gifts received cannot compensate for the welfare loss due to the costs associated with gifts given. In our setup the relative price of gifts, p > 1, is crucial for negative welfare impacts. The negative welfare effect of gift competition is heterogeneous across income groups, with rich people suffering less than the poor. Specifically, we prove in the Appendix that, for γ + β < 1, the welfare loss is decreasing in income. The heterogeneous welfare effect is due to the concavity of the utility function. For γ + β < 1 and p > 1, extra gift spending squeezes consumption of other goods, which is particularly costly for the poor (who have a higher marginal utility of consumption).8 In other words, the assumption of diminishing marginal utility amplifies the negative (net) impact of gift competition on the welfare of the poor.
Equilibrium Analysis and Income Shocks
In the Nash equilibrium, every household's gift level in (5) is a best response to the gifts chosen by all other households. We now highlight the role of income growth in triggering gift competition and explore the dynamics and welfare implications in equilibrium. Suppose that agent j suddenly (and exogenously) has a higher income. According to (5), an increase in z_j may affect y_i via two channels: an income effect and an equilibrium effect. Consider first the case where agent i benefits from the income shock (so that j = i). In this case, the agent allocates more to both consumption and gifts; from (5) and (9) we know that y_i increases with z_i. Extra giving by agent i raises the average value of gifts in the local community, thereby inviting gift competition and driving up the gift expenses incurred by others. In turn, this causes agent i to spend even more on gifts. So both the income and the equilibrium effect are positive as own income increases. Next, consider the case in which the agent does not benefit from the income shock himself or herself (j ≠ i). In the absence of an income effect, the gift giving of the agent is still affected by the equilibrium effect: agents who do not benefit from the income gain also need to spend more on gifts to retain the credibility of their signal and the benefits of reciprocal exchange.
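This best-response logic can be illustrated numerically. The Python sketch below uses the illustrative functional forms from the block above; the parameter values and household setup are assumptions chosen only so that an interior equilibrium exists, not the authors' calibration.

import numpy as np
from scipy.optimize import minimize_scalar

beta, gamma, theta, p, N = 0.5, 0.5, 1.5, 1.5, 20   # illustrative parameters only

def utility(y, z, ybar, gifts_in):
    x = z - p * y                                    # consumption after paying for gifts
    recip = max(1e-9, 1.0 + theta * (y - ybar))      # assumed reciprocity term v(y, ybar)
    return (x + gifts_in) ** beta * recip ** gamma

def best_response(z, ybar, gifts_in):
    res = minimize_scalar(lambda y: -utility(y, z, ybar, gifts_in),
                          bounds=(1e-6, z / p - 1e-6), method="bounded")
    return res.x

def nash_gifts(incomes, iters=200):
    y = np.full(len(incomes), 0.1)
    for _ in range(iters):                           # iterate best responses to a fixed point
        G = y.sum()
        y = np.array([best_response(z, (G - yi) / (N - 1), (G - yi) / (N - 1))
                      for z, yi in zip(incomes, y)])
    return y

def eq_utilities(incomes, y):
    G = y.sum()
    return np.array([utility(yi, z, (G - yi) / (N - 1), (G - yi) / (N - 1))
                     for z, yi in zip(incomes, y)])

incomes = np.array([1.0] * 15 + [2.0] * 5)           # 15 poor and 5 rich households
y0 = nash_gifts(incomes)
shocked = incomes.copy()
shocked[-1] += 2.0                                   # exogenous income shock to one rich household
y1 = nash_gifts(shocked)
print("mean gift before / after shock:",
      round(float(y0.mean()), 3), round(float(y1.mean()), 3))      # mean gift rises
print("poor non-beneficiary utility before / after:",
      round(float(eq_utilities(incomes, y0)[0]), 4),
      round(float(eq_utilities(shocked, y1)[0]), 4))               # slightly lower after the shock

Under these assumed parameters, a shock to a single household raises every household's equilibrium gift and lowers the utility of poor households that received no extra income, which is the pattern formalized in the propositions below.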
Thus, we obtain the following proposition (the full proof is provided in the Appendix). Proposition 1: As the income of agent j rises, all agents will increase their level of gift expenses. In the Nash equilibrium, the welfare of households that were not subject to an income shock themselves deteriorates due to intensified gift competition. As shown in the Appendix, for θp > 1 the welfare of these households declines. Moreover, in equilibrium the negative welfare effect of gift competition varies across income groups: for γ + β < 1, poorer households suffer a larger loss. The proof is again in the Appendix. The welfare implications regarding gift competition are summarized as follows. Proposition 2: As agent i's income rises, gift competition intensifies, resulting in a greater welfare loss for low-income households than for high-income households. This proposition explains why gift expenses may grow faster than income. Even if only a few households experience positive income shocks, all households must increase their level of gift expenditure. In the following empirical analyses, we will test Propositions 1 and 2.
DATA AND BACKGROUND
We seek to test key predictions of the theoretical model, drawing on census-type panel data collected in three administrative villages in Guizhou Province, China.9 The dataset contains three waves of household data collected in three administrative villages, consisting of 26 so-called natural villages, in 2006, 2009, and 2011. The natural villages are both geographically isolated and ethnically diversified. Ethnic minorities (mainly Miao, Buyi, Gelao, and Yi) comprise about 20 percent of the population. Local residents know each other well, and the kinship and friendship networks of most residents are confined primarily to these natural villages. The three survey waves cover more than 800 households and include detailed information on household demographics, income, consumption, and interhousehold transfers. A noteworthy feature of our dataset is the records on gift expenses. Compared to the information in other databases, the records on gift exchange are quite detailed, covering every household in the 26 villages. Variables include not only total gift expenses but also the number of times gifts were given, the value of the highest single gift, and amounts spent on social events. Table 3.2 reports summary statistics of some key variables, including measures of life satisfaction and average gift expenses. Average nominal per capita gift expenses increased by around 40 percent annually from 2006 to 2011. We further divide the households into three distinct groups: high-income (top 25 percent of households per natural village, ranked by income), middle-income (middle 50 percent), and low-income households (bottom 25 percent). The annual growth rate is higher for the low-income and middle-income groups than for the high-income group, which sees a growth rate of less than 30 percent. Table 3.2 also reports the measures of compensation given to households for land expropriation by the government. We will use these variables as instruments to tease out causal effects (see below). In the process of urbanization, some farmland was expropriated by the government. This represents an exogenous shock to the incomes of affected households (but not all households), which we will exploit in our empirical study. Next, we outline how welfare and gift competition are measured. The database contains a survey-based measure of subjective welfare.
Note that standard variables, like consumption or income, cannot be applied to gauge the total welfare effects, as some of the benefits associated with reciprocal relationships would not be reflected. For example, many favors exchanged in daily life are intangible. We assume such benefits are better captured by a measure of subjective well-being. Rather than using a standard index of life satisfaction ("level"), we use changes in life satisfaction over time, facilitating both interhousehold comparisons as well as controlling for household fixed effects. Our index has a three-category scale: an outcome of 1 indicates subjective welfare deteriorated over time, an outcome of 2 indicates welfare more or less stayed the same, and an outcome of 3 indicates that welfare increased across survey waves.10 Specifically, DHp_i ∈ {1, 2, 3}, where DHp_i is our index of change in life satisfaction. Casual inspection of the descriptive statistics suggests that despite fast income growth, subjective welfare (life satisfaction) has declined. We argue that rapidly growing gift expenses are one potential factor contributing to this puzzle. To measure the intensity of gift competition, we assume that information about gift expenses is generally quite complete within a natural village (Yan 1996; Mangyo and Park 2011). A household therefore takes other villagers in the same natural village as a reference group for gift giving. So we use the average value of other villagers' gift expenses, G_{-i}, as a proxy for the intensity of gift competition for villager i. The measure is consistent with our model. Specifically, G_{-i} = (G − Y_i)/(N − 1), where G is the total gift expenses in a natural village, Y_i is agent i's own gift expenses, and N is the number of households in the village. The gift variables are summarized in Table 3.2.
IDENTIFICATION STRATEGY
The hypotheses we seek to test follow from the theoretical model. We first explore the evidence of gift competition by testing the relationship between average gifts in the village and gift giving by individual households. Based on Proposition 1 we postulate the following: Hypothesis 1: People will increase their gift expenses as the average level of gift expenses by other villagers in the natural village rises (or when the intensity of gift competition goes up). To test this hypothesis, we first estimate a fixed effects regression model explaining variation in gift giving (Model I), in which the coefficient α_1 measures the response of (poor) households to changes in the average gift level (which we expect to be positive), while the coefficients α_2 and α_3 pick up heterogeneous treatment effects for the middle- and high-income groups. Our empirical strategy follows previous studies on peer effects, which usually rely on linear-in-means models. This implies that the so-called reflection problem must be addressed, which we seek to do by introducing a vector of peers' characteristics in the regression model. Another concern is reverse causation. To deal with concerns about endogeneity, we leverage location-specific variation in income gains due to the land expropriation policy as a source of exogenous variation in an instrumental variable (IV) setup. In the surveyed area, some households' land was expropriated by the government in 2010. As compensation, the government subsidized households that lost land by, on average, more than 15,000 RMB. This amounted to an exogenous income shock for about 30 percent of our respondents, which we will leverage to tease out causal effects.
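As a concrete, hedged sketch of how Model I can be taken to the data, the Python snippet below uses statsmodels; df and all variable names (gift, avg_gift_others, mid, high, income, hh_id, village, year) are placeholders for the household-year panel rather than the authors' variable names, and the full set of household and peer controls is abbreviated.

import statsmodels.formula.api as smf

# df: household-year panel (placeholder). 'gift' = log per capita gift expense,
# 'avg_gift_others' = mean gift of other households in the same natural village,
# 'mid'/'high' = income-group dummies. C(hh_id) absorbs household fixed effects.
model_I = smf.ols(
    "gift ~ avg_gift_others + avg_gift_others:mid + avg_gift_others:high"
    " + income + C(year) + C(hh_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["village"]})   # cluster at natural village
print(model_I.summary())

With several hundred households, absorbing C(hh_id) through dummies is slow; demeaning within households (or a dedicated panel fixed-effects estimator) yields the same point estimates.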
In a series of IV estimations we will focus on households in land acquisition villages that received no subsidies themselves, and explore how their gift giving changed as a result of a one-time income shock to their peers. Specifically, if the increase in int G − is driven by land acquisition subsidies (transfers), we can identify the fraction of the variation that is exogenous-not affected by int Y . We therefore use Mean of land expropriation subsidy of other households in same natural village and Share of households subsidized by land expropriation subsidy in same natural village as instrumental variables for int G − and estimate the model using two-stage least squares (2SLS). 11 This estimation provides empirical evidence to test Hypothesis 1. The instrument is exogenous because whether the subsidy is given depends only on the location of one's fields. We believe there is no reason to expect that gift giving by recipients is affected by the subsidies, other than via the enhanced gift giving of the recipients. Hence, we assume (but will verify) that the exclusion restriction is satisfied. We next probe the (heterogeneous) welfare effect of gift competition. Based on Proposition 2, we formulate the following testable hypothesis: Hypothesis 2: When gift competition intensifies, low-income households will be worse off than highincome households. To test this hypothesis, we follow Clark and Senik (2015) and explore the determinants of subjective well-being in China. In contrast to Clark and Senik, who use a linear regression, we employ an ordered probit model (also see Fafchamps and Shilpi [2008]) specified as follows (Model II): where int DHp * is the (unobservable) change in utility of household i in village n in year t ; the other variables are as defined above. We again cluster standard errors at the natural village level. We assume int DHp * is related to the observable ordinal variable int DHp in the following way: The main independent variable is int G − , which measures the intensity of gift competition for household i . To control for time-invariant omitted variables, we introduce a fixed effect term into the estimation. By introducing interaction terms of int G − and the income group dummies, we estimate welfare effects for each income group separately. Our theory predicts that 1 α is negative, as poor people (especially) will be worse off. The coefficients associated with the interaction terms are positive if adverse welfare effects for richer cohorts are attenuated. As a robustness check, we also split the sample, based on income groups, and estimate Model II without interaction terms. Since endogeneity might also be a concern for Model II, we again use the land expropriation subsidy variables as instruments. Of course, it is not evident that the exclusion restriction is satisfied for this model, because the income shock may affect subjective welfare through channels other than gift expenses (Stevenson and Wolfers 2008). As a robustness check, we also estimate the welfare effects of gift competition for the subsample of households that did not benefit from the subsidy themselves. We next consider the channel or mechanism via which gift competition affects welfare. According to our theory, escalating gift expenses reduce welfare of (especially) poor people by squeezing consumption of other goods. Therefore, we explore how gift giving affects expenditures on consumption items-the (potentially heterogeneous) squeeze effect. 
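The IV step described above can be made concrete with a minimal two-stage sketch, again with placeholder variable names; in practice a packaged IV estimator (for example, IV2SLS from the linearmodels package) should be preferred so that the second-stage standard errors account for the generated regressor.

import statsmodels.formula.api as smf

# Stage 1: project the endogenous peer-gift variable on the two land-expropriation
# instruments plus the exogenous controls.
stage1 = smf.ols(
    "avg_gift_others ~ mean_subsidy_others + share_subsidized_others"
    " + income + C(year)",
    data=df,
).fit()
df["avg_gift_others_hat"] = stage1.fittedvalues

# Stage 2: replace the endogenous regressor with its first-stage fitted values.
stage2 = smf.ols(
    "gift ~ avg_gift_others_hat + income + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["village"]})
print(stage2.params["avg_gift_others_hat"])   # 2SLS point estimate of the peer effect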
Hypothesis 3: When gift competition intensifies, gift expenses will squeeze consumption of other types of goods, especially for the low-income group. To explore the existence of this squeeze effect, we estimate the following specification for each income group separately: In equation (20), int SC is the share of expenditures for certain consumption items in total expenditures of household i in village n at time t . We consider two categories of consumption goods: food spending and education expenditures. int SG represents the share of gift expenses in total expenditures, and X and in δ are as defined above. We again cluster standard errors at the natural village level. Our theory predicts that the squeeze effect will be particularly serious for low-income households, so we expect 1 α to be negative and significant for the low-income subsample. The effects for the middle-and high-income groups will be less significant. As a robustness check, and to attenuate concerns about endogeneity, we also estimate IV models using the land acquisition variables as instruments. High G − × ∆ are both negative and are significant for the high-income group at the 5 percent level. We control for changes in income rank, own income, and other household characteristics in Column (2). To reduce the concerns about the "reflection problem" we also introduce peers' characteristics. The coefficient for int G − ∆ reduces to 0.507 but remains significant at the 10 percent level. ESTIMATION RESULTS Next, we estimate the effect of average gift giving on own gift giving for the three income groups separately 12 . This approach is slightly more general than the model with interaction terms, as it does not "force" the coefficients of the control variables to be identical across income groups. The results are reported in Columns (3)-(5). For all income groups we find that the coefficients of interest are positive and significant. The coefficient increases slightly, from 0.522 (significant at the 10 percent level) to 0.568 (significant at the 5 percent level), when we compare the estimation results for the low-and middleincome groups, but it shrinks to 0.433 (significant at the 10 percent level) for the high-income group. 13 It appears that all income groups respond to rising average gift values by increasing the value of their own gifts. Households have to spend more on gifts as long as the average gift value goes up-even if they do not receive more gifts themselves. When we estimate a 2SLS model, all income groups strongly respond to the change in average gift levels. Second-stage regression results are reported in Columns (6)-(11). The results for the pooled sample of subsidy beneficiaries and non-beneficiaries are summarized in Columns (6)-(8). The coefficient for the low-income group now is 1.434, significant at the 5 percent level. Coefficients for the middle-and high-income groups are 1.263 and 1.509, respectively, and both are significant at least at the 5 percent level. 14 Test statistics provide support for our instrumental variables. First-stage results of the IV model are summarized in Appendix, and all the estimates comfortably pass a standard overidentification test (Sargan test), with the usual caveat about the assumptions behind and power of such tests. We consider only nonbeneficiaries in Columns (9)-(11), suggesting that people follow the rising trend of gift spending even if they do not have extra income themselves. 
For the low-income group, the coefficient is 1.386 and significant at the 10 percent level, so a one standard deviation increase in the average gift expenses of other households will increase the poor's own spending on gifts by more than 50 percent, clearly a nontrivial outcome. Hence, after controlling for endogeneity we still find strong evidence of gift escalation.15 Overall, these estimation results support Hypothesis 1.
12 Households may change their income group over the years. To keep a balanced panel for each income group, we consider a household to belong to an income group as long as it is in that income group in any survey year.
13 A t-test shows that the coefficients are not significantly different between the low- and middle-income groups, but the coefficient for the high-income group is significantly smaller than those for the other two groups.
14 A t-test shows that the coefficients for all three income groups are significantly different from each other.
15 It is interesting to observe that especially the coefficients for the high-income group have increased. We speculate that this may be because the share of beneficiaries of the subsidy scheme is disproportionally large among this group.
(Notes to the gift-giving regression table, Columns (1)-(11): regressions control for household-head and family characteristics (education and age of the household head, numbers of wedding ceremonies and funerals in the family, share of registered permanent residents, shares of members with local odd jobs or working outside the county) and for peers' mean characteristics such as age, ethnicity, gender, political status, and number of unmarried sons; the bottom 25% of households form the low-income group, the top 25% the high-income group, and the remaining 50% the middle-income group; in Columns (6)-(10) the mean land expropriation subsidy of other households and the share of subsidized households in the same natural village instrument the mean log per capita gift expense of other households; Columns (1)-(8) use both subsidy beneficiaries and nonbeneficiaries, Columns (6)-(8) focus on villages with land expropriation, and Columns (9)-(11) use only nonbeneficiaries; standard errors are clustered at the natural-village level; *** p < 0.01, ** p < 0.05, * p < 0.1; FE = fixed effect, RE = random effect, AIC = Akaike information criterion.)
We next consider the relation between gift giving and subjective well-being. Figure 5.1 presents unconditional associations between average gift expenses and life satisfaction at the natural village level. The left panel shows the case of low-income households. Their subjective well-being is lower in villages with higher gift expenses. For middle-income households, displayed in the middle panel, the negative correlation is still significant but of smaller magnitude. The association between gift expenses and subjective welfare is very weak for the high-income group, as shown in the right panel. Patterns in these data are consistent with the predictions of our model. In Table 5.2, we present the estimation results of Model II for the different income groups. Column (1) documents a negative correlation for the pooled sample. In Column (2), with interaction terms, the estimate of α_1 is also negative and significant at the 1 percent level (-1.855).
The interaction terms of ∆G_{-i,nt} with the middle- and high-income dummies are positive but not precisely estimated and insignificant. These results suggest that we cannot reject the hypothesis that middle- and high-income groups also suffer from gift escalation. The income cohort-specific regression results provide a similar pattern. As shown in Columns (3)-(5), the estimated coefficients for the low-income and middle-income groups are negative and significant at the 1 percent level. By contrast, the coefficient for the high-income group is insignificant (but of similar magnitude). This suggests that there is more variation in the behavioral response among richer households, or that not everybody experiences a similar welfare loss.
(Notes to Table 5.2: an ordered probit panel-data (fixed effects) model is employed; income groups are the within-village bottom 25%, middle 50%, and top 25%; controls include a male household head dummy, marital status, education level, age of the household head in 2006 (initial age), self-reported health status, shares of cadres, members with chronic disease, children under age 6, and party members in the family, and the numbers of wedding ceremonies and funerals in the natural village and in the family; standard errors are clustered at the natural-village level; *** p < 0.01, ** p < 0.05, * p < 0.1; FE = fixed effect, RE = random effect, AIC = Akaike information criterion.)
As shown in Figure 5.2, the probability that people are worse off is defined by the area to the left of ε_1, equal to Φ(ε_1 − x'β), where x'β denotes the linear index in Model II; the probability that people are better off is captured by the region to the right of ε_2, which equals 1 − Φ(ε_2 − x'β). As gift competition intensifies, the thresholds net of the latent index move from ε_1 and ε_2 to ε_1′ and ε_2′, so the probability of becoming "worse off" increases, while the probability of becoming "better off" decreases. Based on the results presented in Column (3), with other things equal, the annual growth rate of gift expenses decreases the probability of becoming "better off" by 2 percent on average. Welfare improvements due to income growth are to some extent offset by gift escalation. The cumulative welfare impact of gift competition exceeds 10 percent during the survey period (2006-2011).
(Figure 5.2. Marginal effect of gift competition on life satisfaction. Source: drawn by authors.)
To address concerns about the endogeneity of the gift variable, we present IV results in Table 5.3 for each income group separately. As shown in Columns (1)-(3), we obtain regression results that are qualitatively similar to the ordered probit results above: significantly negative effects for low-income and middle-income households, and no significant effect for the high-income group. The coefficient of the high-income group is not only imprecisely estimated but also some 40 percent smaller in magnitude than the coefficient of the low-income group. We next focus separately on the welfare of the subsample of nonbeneficiaries of the subsidy. Columns (4)-(6) present the results for these households. Consistent with intuition, the estimated coefficients across all income groups are larger and more significant. For low-income households, the coefficient becomes -2.056. In Table 5.4 we probe the mechanism that explains the negative effect of escalating gifts on welfare. Escalating gifts should squeeze expenditures on other consumption items.
We consider two different expenditure categories and examine how spending in these categories relates to gift giving for the three income groups separately. In Columns (1)-(3) we focus on food consumption, and in Columns (4)-(6) we consider education expenditures. For both food and education expenditures, the estimated coefficients are negative. While the effects on education are significant for all income groups, only the low-income households tighten their belts by spending less on foodstuffs as well. The estimated coefficient for the low-income households is more than 2.5 times as large as the coefficient of the richer cohorts. As gift expenses increase by more than 40 percent per year, low-income people have to reduce the share of food expenditures by around 3 percent annually. 16 This clearly indicates a squeeze effect, especially for the poor, which is consistent with our theory. In Table 5.5 we report the results of the 2SLS models in which we instrument for gift giving. The Sargan test suggests that we cannot reject the assumption of exclusion restriction being satisfied. The squeeze effect for food and education now emerges for the low-income group only, and the estimated coefficient is larger than before. However, the share of education spending is not significantly affected for the low-income group. The coefficient is large and negative but imprecisely estimated. We now find that the middle-income group is adversely affected by gift escalation. 17 Overall, these regression results provide considerable support for our hypotheses. Asymmetric income shocks allow beneficiaries to raise their gift-giving level, which (due to the externality implied by the signaling process) triggers an escalating process of gift competition throughout the local community-squeezing the consumption of especially the poor, who have few options but to raise their level of gift giving to preserve their social relations. Finally, to check the robustness of our estimation results, we specified and estimated several alternative models. Specifically, we (1) changed the grouping method (that is, we divide the full sample into four income groups), (2) replaced the log form of gift expenses and income with a level term, and (3) narrowed our focus to a smaller but balanced sample of only 379 households (for three years). The results of these regression models are consistent with the results reported and discussed above (details are available on request).
CONCLUSION
Gift giving allows people to invest in reciprocal relationships or signal altruism to the community more widely. In various parts of the world, gift giving appears to be escalating-and absorbing an increasing share of total income (or expenditures). We demonstrate a potential dark side of escalating gift exchange. In particular, if households base their appreciation of specific gifts on how these gifts compare to the value of the average gift in the community, then extending a generous gift implies a negative externality for others in the community.
Generous gifts increase the value of the average gift in the community, raising the bar for future gift givers to signal generosity or altruism. We develop a model based on the assumptions that households relate gifts received to the value of the average gift in their community, and that people have few alternative options (but signaling true friendship or altruism) to gain access to valued social services. Such services might eventuate via specific interhousehold relationships (guanxi). As gift levels escalate, especially the poor will have no option but to "keep up with the Joneses," such that expenditures on other items or services will be squeezed. People, particularly the poor, may be worse off as gift competition intensifies-their subjective welfare level deteriorates. This is the burden of friendship. We test these theoretical predictions using census-type panel data from 26 natural villages in rural China and generally obtain support for our predictions. We find that (especially poor) households will respond to escalating gift giving by spending more on gifts themselves, reducing the share of income that can be spent on food and education. This process lowers their subjective well-being. Our model of "forced altruism" deviates significantly from existing models of "impure altruism" (for example , Andreoni 1989;1990). Our findings suggest that potential efficiency gains and distributional benefits might be achieved if the imperative of investing in reciprocal relationships via gifts were challenged by alternative institutions. For example, the expansion of financial services, including credit and insurance, could relax the need to invest in reciprocal relations. The outcome will be levels of gift giving that are dictated by altruism rather than the signaling value of transfers. We leave the exploration of the interactions between such formal institutional innovations and guanxi for future research. Welfare Loss of Gift Competition To prove the propositions, we first check the first-order derivative of From this, we get ( ) As long as p 1 > θ , the following holds: Note that p 1 > θ is a sufficient condition, rather than a necessary one. Heterogeneity of Welfare Loss When we take the optimal i y back to the solution of , we can prove that the negative welfare effect of gift competition is heterogeneous over income groups: compared with rich people, poor people will suffer more from gift competition. We can illustrate the idea by proving 2 > 0 As long as 1 ≤ + β γ , we find that: Proof of Proposition 1 The proposition shows that in equilibrium, people increase their gift expenses as the average gift expenses rise. Given the equilibrium condition, We obtain G in equilibrium, which is a function of total income in the community. When others' gift expenses rise as income increases, one must follow and spend more on gifts. This proves the existence of "forced" gifts. Q.E.D Proof of Proposition 2 In equilibrium, the loss of utility is smaller for rich people than poor people, by the same mechanism as illustrated in the individual analysis. Bringing G in equilibrium back to the utility function, we have so that given the increase in G , which is driven by m ,
v3-fos-license
2016-05-04T20:20:58.661Z
2015-05-01T00:00:00.000
2095759
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1005065&type=printable", "pdf_hash": "3592d83c691d6c0e8827983f7148c39f655f3f89", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41178", "s2fieldsofstudy": [ "Biology" ], "sha1": "3592d83c691d6c0e8827983f7148c39f655f3f89", "year": 2015 }
pes2o/s2orc
IL-27 Signaling Is Crucial for Survival of Mice Infected with African Trypanosomes via Preventing Lethal Effects of CD4+ T Cells and IFN-γ African trypanosomes are extracellular protozoan parasites causing a chronic debilitating disease associated with a persistent inflammatory response. Maintaining the balance of the inflammatory response via downregulation of activation of M1-type myeloid cells was previously shown to be crucial to allow prolonged survival. Here we demonstrate that infection with African trypanosomes of IL-27 receptor-deficient (IL-27R-/-) mice results in severe liver immunopathology and dramatically reduced survival as compared to wild-type mice. This coincides with the development of an exacerbated Th1-mediated immune response with overactivation of CD4+ T cells and strongly enhanced production of inflammatory cytokines including IFN-γ. What is important is that IL-10 production was not impaired in infected IL-27R-/- mice. Depletion of CD4+ T cells in infected IL-27R-/- mice resulted in a dramatically reduced production of IFN-γ, preventing the early mortality of infected IL-27R-/- mice. This was accompanied by a significantly reduced inflammatory response and a major amelioration of liver pathology. These results could be mimicked by treating IL-27R-/- mice with a neutralizing anti-IFN-γ antibody. Thus, our data identify IL-27 signaling as a novel pathway to prevent early mortality via inhibiting hyperactivation of CD4+ Th1 cells and their excessive secretion of IFN-γ during infection with African trypanosomes. These data are the first to demonstrate the essential role of IL-27 signaling in regulating immune responses to extracellular protozoan infections. Introduction African trypanosomiasis is a vector-borne parasitic disease of medical and veterinary importance. It is estimated that 170,000 people contract the disease every year, and that approximately 70 million people mainly in sub-Saharan Africa are at the risk of contracting the disease [1,2]. In addition, this disease severely limits the agricultural development by affecting domestic animals in the area [2]. The causative agents of this disease are various species of genus of Trypanosoma, which are extracellular protozoan parasites equipped with a flagellum that emerges from the flagellar pocket and provides the parasite with its motility [2]. Upon the bite of the mammalian host by a trypanosome-infected tsetse fly, the parasites enter the blood circulation via lymph vessels and can multiply in the bloodstream and interstitial fluids of the host [3,4]. The parasites have evolved very sophisticated evasion mechanisms to survive in the chronically infected host [3][4][5], causing a serious disease that is often fatal without treatment [1,2]. Due to practical and ethical reasons, mouse models have become an alternative and proven to be a cornerstone for studying African trypanosomiasis of humans and domestic animals [6]. Most of studies have been performed with T. brucei and T. congolense parasites [3,6]. Based on mouse models, although the parasites circulate in the blood stream, the liver is the major place for clearance of the parasites [7][8][9]. Recent studies demonstrated that Kupffer cells efficiently engulf trypanosomes, which is mediated by both IgM and IgG antibodies specific to the parasites [10][11][12]. 
IFN-γ, mainly secreted by VSG-specific CD4 + T cells [13][14][15] following activation by dendritic cells [16,17], has been shown to mediate protection during African trypanosomiasis [13,15,[18][19][20]. Proinflammatory cytokines such as IL-12, TNF-α, as well as iNOS produced by M1-type myeloid cells are also critical for host resistance to African trypanosomes [15,[21][22][23][24][25]. However, excessive secretions of these inflammatory cytokines by hyperactivated myeloid cells and T cells lead to liver pathology and shorten the survival of infected mice [11,22,[26][27][28][29]. In this respect, IL-10 has been found to be essential for maintenance of the immunological balance between protective and pathological immune responses during African trypanosomiasis [11,20,22,26,27]. Importantly, the role of IL-10 as an antiinflammatory agent has been more recently confirmed in cattle, primate and human infections with African trypanosomes [30][31][32]. It remains unknown whether, in addition to IL-10 signaling, another pathway that maintains this immunological balance exists. IL-27, a recently identified cytokine produced primarily by macrophages and dendritic cells, is a member of the IL-12 super-family [33]. The IL-27 receptor (IL-27R) complex consists of the specific IL-27Rα subunit (WSX-1) and the IL-6R subunit (gp130), and is expressed on numerous subsets of leukocytes including CD4 + T cells, CD8 + T cells, NK cells, monocytes, Langerhans cells, and dendritic cells [34]. Earlier studies have demonstrated that IL-27, as a proinflammatory cytokine, drives naïve T cells to differentiate into Th1 cells [35][36][37]. More recent studies have suggested that IL-27 also has the function to inhibit immunopathology via downregulation of active CD4 + T cells during infections, particularly with intracellular protozoan parasites [38][39][40][41][42]. However, the precise mechanism of CD4 + T cell-mediated immunopathogenesis in the absence of IL-27 signaling still remains incompletely understood. In addition, it is not clear so far whether IL-27 plays an important role in regulation of the immune responses during infections with extracellular protozoan parasites such as African trypanosomes. Based on previous data showing that a subset of highly activated pathological CD4 + T cells produces excessive IFN-γ, and leads to immunopathology and early mortality of mice infected with T. congolense [11,28,29], we formulate a hypothesis that IL-27 signaling is, besides IL-10 signaling, another novel pathway that prevents the immunopathology and early mortality via down-regulation of the hyperactivity of CD4 + T cells and their excessive secretion of IFN-γ during experimental Africa trypanosomiasis. With this in mind, we examine in this study how IL-27 signaling regulates the immune responses in mice infected with African trypanosomes. IL-27 signaling is crucial for survival of mice infected with T. congolense To evaluate the role of IL-27 signaling during African trypanosomiasis, we first determined whether infection led to increased expression of this cytokine or its receptor. Wild-type C57BL/6 mice were infected with T. congolense, a species of African trypanosomes which are unable to leave the circulation and only live in blood vessels, causing fatal disease in cattle [4]. The mice were euthanized at day 0, 7, and 10 after infection, as parasitemia usually peaked on day 6-7 after infection [15,29]. 
As the liver is the major organ for clearance of the parasites [7][8][9][11], the liver was collected for measurement of mRNA levels of IL-27 and its receptor using real-time quantitative RT-PCR. mRNA levels of both subunits of IL-27 (IL-27p28 and EBI3) were upregulated in the liver of mice at day 7 and 10 after infection, compared to uninfected mice (Fig 1A). In contrast, mRNA levels of the IL-27 receptor (WSX-1) were not affected by the infection (Fig 1A). Next, we infected IL-27R-/- (WSX-1-/-) and wild-type mice with T. congolense to assess whether IL-27 signaling affected the disease progression. Similar to infected wild-type mice, infected IL-27R-/- mice could control the first wave of parasitemia (Fig 1B). However, IL-27R-/- mice succumbed to the infection on day 12 to 20 after infection, with a mean survival time of 14.5 days (Fig 1C). In contrast, infected wild-type mice survived until day 67 to 138 after infection, with a mean survival time of 123 days (Fig 1C). Compared to infected wild-type mice, the infected IL-27R-/- mice thus had a significantly shorter survival (p<0.01). These data demonstrated that IL-27 signaling is required for survival of mice infected with T. congolense.
(Fig 1 legend: (A) mRNA expression levels of IL-27p28, EBI3 and WSX-1 in the liver of wild-type mice infected with T. congolense on day 7 and 10 versus day 0 (uninfected). (B) Parasitemia of IL-27R-/- (WSX-1-/-) and wild-type mice (n = 6-9) infected with T. congolense. (C) Survival of IL-27R-/- and wild-type mice (n = 6-9) infected with T. congolense. Data are presented as the mean ± SEM. The results presented are representative of 3 separate experiments.)
Deficiency of IL-27 signaling results in enhanced systemic inflammatory responses in mice infected with T. congolense
The above results demonstrated that absence of IL-27 signaling led to earlier mortality of mice infected with African trypanosomes. As uncontrolled inflammation causes early mortality of mice infected with African trypanosomes [3,4], we next examined the plasma levels of inflammatory cytokines and their secretion by cultured spleen cells. As shown in Fig 2A, significantly higher amounts of IFN-γ, IL-12p40, and TNF-α were detected in the plasma of IL-27R-/- mice infected with T. congolense, compared to infected wild-type mice, on day 7 and 10 after infection (p<0.01). Although the plasma level of IFN-γ in IL-27R-/- mice decreased on day 10 after infection, probably due to clearance of the first wave of parasitemia, it was still significantly higher than that of the infected wild-type mice (p<0.01, Fig 2A). To evaluate the secretion of cytokines by spleen cells, spleen cells were collected from IL-27R-/- and wild-type mice on day 7 and 10 after infection with T. congolense, and cultured in vitro for 48 h. The production of IFN-γ, IL-12p40, and TNF-α by spleen cells was significantly elevated in infected IL-27R-/- mice, compared to infected wild-type mice (p<0.01 or <0.05, Fig 2B). As recent studies have shown that IL-27 mainly regulates CD4+ T cell activation during infection with intracellular pathogens [38][39][40][41][42], we further evaluated IFN-γ-producing CD4+ T cells in the spleen cultures using flow cytometry. A limited and similar percentage and absolute number of CD4+ T cells from uninfected wild-type and IL-27R-/- mice produced IFN-γ after 12 h stimulation with Cell Stimulation Cocktail (containing PMA, ionomycin, and protein transport inhibitors).
However, by 7 and 10 days post infection both the percentage and the absolute number of IFN-producing CD4 + T cells were significantly enhanced in IL-27R -/mice when compared to wild-type cohorts (Fig 2C). IL-27R -/mice develop severe liver pathology during infection with T. congolense We and others have previously shown that excessive systemic inflammatory responses of mice infected with African trypanosomes are associated with severe liver damage [11,22,43,44]. In addition, the liver is the primary organ of trypanosome clearance [7,9,11]. Therefore, we next evaluated effects of IL-27 signaling on liver pathology during the course of infection with the parasites. IL-27R -/mice, but not wild-type mice, showed extensive pale geographic areas highly suggestive of necrosis on day 10 after infection with T. congolense ( Fig 3A). Microscopic examination of the liver of infected IL-27R -/mice revealed many large areas with loss of hepatocyte cellular architecture and an infiltration of inflammatory cells (Fig 3B). By contrast, these pathological changes were not observed in the liver of infected wild-type mice (Fig 3B). To further characterize the liver pathology, we measured the serum activities of alanine aminotransferase (ALT) of mice during T. congolense infection. As shown in Fig 3C, IL-27R -/mice had significantly higher serum activities of ALT than wild-type mice on both day 7 and day 10 after infection (p<0.05), indicating death of hepatocytes and release of cytosolic enzymes. These results demonstrated that IL-27 signaling played a major role in prevention of the liver pathology that was associated with enhanced systemic inflammatory responses. 4. Early mortality of IL-27R -/mice infected with T. congolense is not due to impaired IL-10 production severe liver pathology [11,22,27]. In this regard, IL-27 has been shown to drive CD4 + T cells to produce IL-10 for downregulation of inflammation [45][46][47]. The similarity of the cytokine profile and liver pathology of infected mice in the absence of IL-27 signaling and IL-10 signaling [11,20] prompted us to examine whether IL-27 signaling prevented early mortality of mice infected with African trypanosomes via IL-10. We first compared the disease progression in the absence of IL-27 signaling with that in the absence of IL-10 signaling. T. congolense-infected IL-27R -/mice and wild-type mice showed similar parasitemia and a significantly reduced survival after administration of anti-IL-10 receptor (IL-10R) mAb (p<0.01, Fig 4A). Strikingly, infected wild-type mice treated with anti-IL-10R mAb survived significantly shorter than infected IL-27R -/mice (p<0.01, Fig 4A), suggesting that IL-27 and IL-10 may independently regulate inflammatory responses during African trypanosomiasis. Next we compared the IL-10 levels in plasma, and supernatant fluids of cultured spleen cells or liver leukocytes between Essential Role of IL-27 in African Trypanosomiasis IL-27R -/and wild-type mice infected with T. congolense. There was no significant difference in IL-10 production in plasma and supernatant fluids of the cultures between IL-27R -/and wildtype mice on day 7 after infection ( Fig 4B). 
Surprisingly, IL-27R -/mice even showed significantly higher amounts of IL-10 in both plasma (up to 14 folds) and supernatant fluids of cultured spleen cells or liver leukocytes on day 10 after infection (p<0.01 or <0.05, Fig 4B), demonstrating that secretion of IL-10 was strengthened, rather than impaired in IL-27R -/mice infected with African trypanosomes, probably due to deficiency of the immune regulation mediated by IL-27 signaling in those infected IL-27R -/mice. Taken together, these data suggested that early mortality of IL-27R -/mice infected with African trypanosomes was not due to impaired IL-10 production. Enhanced CD4 + T cell responses and elevated secretions of inflammatory cytokines in the liver of IL-27R -/mice infected with T. congolense Because early mortality of IL-27R -/mice infected with African trypanosomes was associated with severe liver pathology without impaired secretion of IL-10 as shown above and because IL-27 has been shown to mainly regulate T cell, particularly CD4 + T cell activation during infection with intracellular pathogens [38][39][40][41][42], we next characterized CD4 + T cell responses in the liver of IL-27R -/mice during infection with T. congolense. We found that the frequency and the absolute number of activated hepatic CD4 + T cells (CD44 hi CD62L low ) were significantly higher in IL-27R -/mice infected with T. congolense, compared to infected wild-type mice (p<0.01, Fig 5A). The production of IFN-γ, IL-12p40, and TNF-α by cultured liver leukocytes from infected IL-27R -/mice was significantly higher than production of these cytokines by liver leukocytes from infected wild-type mice (p<0.001, <0.01 or <0.05, Fig 5B). In particular, the production of IFN-γ was enhanced by 4-8 folds in the liver leukocyte cultures of infected IL-27R -/mice ( Fig 5B). Thus, we further evaluated the activation of liver CD4 + T cells by examining their secretions of IFN-γ using single cell analysis. A small and similar percentage and absolute number of CD4 + T cells from uninfected wild-type and IL-27R -/mice secreted IFN-γ after 12 h stimulation with Cell Stimulation Cocktail (containing PMA, ionomycin, and protein transport inhibitors). In contrast, by day 7 and 10 post infection significantly higher percentage and absolute number of IFN-γ-producing CD4 + T cells were detected in IL-27R -/mice as compared to wild-type cohorts ( Fig 5C). Collectively, these data suggested that the early mortality of IL-27R -/mice infected with African trypanosomes was associated with exacerbated Th1-mediated immune responses with overactivation of CD4 + T cells. 6. CD4 + , but not CD8 + , T cells mediate the early mortality of IL-27R -/mice infected with T. congolense As shown above, CD4 + T cells were excessively activated in the liver of IL-27R -/mice infected with African trypanosomes, raising the possibility that the early mortality of infected IL-27R -/mice was a consequence of a CD4 + T cell-dependent immune-mediated pathology. To test this, IL-27R -/mice infected with T. congolense were treated with depleting anti-mouse CD4 mAb, anti-mouse CD8 mAb, or rat IgG as control; and the course of infection, immune responses, and severity of liver damage were assessed. As shown in S1 Fig, administration of the antibodies efficiently depleted CD4 + T cells or CD8 + T cells in the spleen and liver of the infected mice. 
Infected mice from all three groups could effectively control the first wave of parasitemia, although depletion of CD4 + T cells resulted in a significantly higher parasitemia at some time points of infection (p<0.01 or <0.05, Fig 6A). Strikingly, infected IL-27R -/mice treated with anti-CD4 mAb had two fold increase of survival compared to infected IL-27R -/mice treated with rat IgG (p<0.01, Fig 6A). In contrast, depletion of CD8 + T cells did not affect the survival of infected IL-27R -/mice ( Fig 6A). These results demonstrated that IL-27 signaling had a crucial role in dampening CD4 + T cell activation in experimental T. congolense infection in mice, allowing for prolonged survival. We next evaluated the effect of CD4 + T cells on weight loss and liver pathology of IL-27R -/mice infected with T. congolense. Infected IL-27R -/mice treated with anti-CD4 mAb had significantly less weight loss at the later stage of infection, compared to infected IL-27R -/mice treated with rat IgG or anti-CD8 mAb (p<0.01; S2A Fig). Importantly, infected IL-27R -/mice treated with rat IgG or anti-CD8 mAb exhibited many large areas with loss of hepatocyte cellular architecture in the liver, whereas these pathological changes were hardly seen in the liver of infected IL-27R -/mice treated with anti-CD4 mAb (S2B Fig). In addition, depletion of CD4 + , but not CD8 + , T cells significantly reduced the serum activities of ALT in IL-27R -/mice infected with T. congolense (p<0.05, Fig 6B). These data suggested that CD4 + T cells played a central role in the development of liver pathology in experimental T. congolense infection, and that IL-27 was crucial for dampening this CD4 + T cell-mediated pathology. We further characterized the contributions of CD4 + T cells to secretion of cytokines in IL-27R -/mice infected with T. congolense. Depletion of CD4 + , but not CD8 + , T cells significantly reduced plasma levels of IFN-γ and TNF-α in infected IL-27R -/mice (p<0.001 or <0.05), although the reduction of IL-12p40 did not reach statistical significance (Fig 6C). In addition, Having demonstrating that IL-27 is crucial for dampening trypanosomiasis-associated CD4 + T cell activation, needed for prolonged survival, we next addressed the mechanism of CD4 + T cell-mediated mortality of infected IL-27R -/mice. Because the production of IFN-γ, and the frequency and the absolute number of IFN-γ-producing cells were enhanced in infected IL-27R -/mice compared to infected wild-type mice (Fig 2 and Fig 5), and also because depletion of CD4 + T cells dramatically reduced the IFN-γ production (Fig 6; S2 Fig), we examined whether the early mortality of infected IL-27R -/mice was directly attributed to the overproduction of IFN-γ. IL-27R -/mice infected with T. congolense were treated with neutralizing anti-IFN-γ mAb or rat IgG as a control. Although administration of anti-IFN-γ mAb led to doubled parasitemia in infected IL-27R -/mice at the peak on day 7 after infection (P<0.05), the infected IL-27R -/mice treated with anti-IFN-γ mAb efficiently controlled the first wave of parasitemia as infected control mice did (Fig 7A). Importantly, administration of anti-IFN-γ mAb significantly enhanced the survival of infected IL-27R -/mice (p<0.01; Fig 7A), demonstrating that high levels of IFN-γ accelerated the mortality of IL-27R -/mice infected with African trypanosomes. We next assessed the effects of IFN-γ neutralization on weight loss and liver pathology of IL-27R -/mice infected with T. congolense. 
Infected IL-27R -/mice treated with anti IFN-γ mAb had significantly less weight loss than infected IL-27R -/mice treated with rat-IgG on the late stage of infection (p<0.01, S3A Fig). Importantly, infected IL-27R -/mice treated with anti-IFN-γ did not exhibit areas with loss of hepatocyte cellular architecture in the liver whereas these pathological changes were observed in the liver of infected IL-27R -/mice treated with rat IgG (S3B Fig). Moreover, neutralization of IFN-γ significantly reduced the serum activities of ALT in infected IL-27R -/mice (p<0.01, Fig 7B). These data suggested that IFN-γ played a critical role in the development of liver pathology in IL-27R -/mice infected with African trypanosomes. We finally examined cytokine responses of infected IL-27R -/mice treated with anti-IFN-γ mAb. IFN-γ was almost undetectable in the plasma of IL-27R -/mice treated with anti-IFN-γ, suggesting the neutralization was successful (p<0.01, Fig 7C). Plasma levels of IL-12p40 and TNF-α were dramatically reduced in infected IL-27R -/mice treated with anti-IFN-γ mAb, compared to infected IL-27R -/mice treated with rat IgG (p<0.01, Fig 7C). Neutralization of IFN-γ also significantly reduced the production of IL-12p40 and TNF-α by cultured spleen cells (p<0.01, or <0.05, S3C Fig). Thus, the results indicated that IFN-γ was critically involved in the enhanced inflammatory responses in IL-27R -/mice infected with African trypanosomes. Essential role of IL-27 signaling in preventing lethal effect of CD4 + T cells in mice infected with T. brucei We finally characterized the role of IL-27 signaling in regulation of immune responses during T. brucei infection. In contrast to T. congolense, T. brucei species have the ability to penetrate the walls of capillaries, invade interstitial tissues, including the brain tissues, thus serving as a model of human African trypanosomiasis [48,49]. T. brucei infection also upregulated the mRNA expressions of IL-27p28 and EBI3, but not IL-27R -/in the liver of mice (Fig 8A). IL-27R -/mice infected with T. brucei efficiently controlled the first wave of parasitemia as infected wild-type did, but survived significantly shorter than infected wild-type mice (15 days vs. 32 days, p<0.01, Fig 8B), demonstrating an essential role of IL-27 signaling in prevention of the early mortality of mice infected with T. brucei. IL-27R -/mice infected with T. brucei also showed enhanced IFN-γ production in plasma and supernatant fluids of spleen cultures, as well as enhanced serum activities of ALT, compared to infected wild-type mice (p<0.01 or <0.05, Fig 8C). Importantly, depletion of CD4 + , but not CD8 + , T cells enhanced the survival of IL-27R -/mice infected with T. brucei by 3 folds (p<0.01, Fig 8D). Thus, IL-27 signaling is also required for survival of mice via preventing excessive Th1 immune responses during T. brucei infection. Discussion Successful clearance of African trypanosomes in the bloodstream requires induction of inflammatory immune responses; however, failure to control this inflammation leads to immunemediated pathology [4,50]. IL-10 signaling has been previously suggested to be involved in maintaining this immunological balance in African trypanosomiasis [11,20]. In the current study, we have identified IL-27 signaling as a novel pathway to maintain this immunological balance in African trypanosomiasis. Our data are the first to demonstrate the essential role of IL-27 signaling in regulating immune responses to extracellular protozoan infections. 
More importantly, we provided direct evidence that infection-associated IL-27 signaling served to extend the survival of the infected host by dampening CD4+ T cell activation and their secretion of IFN-γ. Indeed, the early mortality of infected mice lacking IL-27 signaling (IL-27R-/- mice) was correlated with exaggerated inflammatory responses and liver immunopathology. The disease similarity of infected mice lacking IL-27 and IL-10 signaling raised the possibility that the regulatory function of IL-27 is mediated via the induction of IL-10 secretion, as IL-27 has the capability of promoting CD4+ T cells to secrete IL-10 [45-47]. However, the fact that blocking IL-10R further shortened the survival of infected IL-27R-/- mice, and the fact that infected mice lacking IL-10 signaling and infected mice lacking IL-27 signaling had distinct survival, suggested that IL-27 functions through a mechanism independent of IL-10. In addition, compared to infected wild-type mice, infected IL-27R-/- mice produced similar or even higher amounts of IL-10, depending on the time points examined. Furthermore, the enhanced survival of infected IL-27R-/- mice following depletion of CD4+ T cells was correlated with dramatically reduced secretion of IL-10. These data suggested that a defect of IL-10 signaling is unlikely to contribute to the early mortality of IL-27R-/- mice. Thus, we suggest that IL-27 suppresses liver pathology and prevents the early mortality of mice infected with African trypanosomes through IL-10-independent mechanisms, possibly by direct modulation of T cell function. It has been previously demonstrated that IL-10 inhibits accumulation and activation of M1-type myeloid cells, in particular TIP-DCs (CD11b+ Ly6C+ CD11c+ TNF- and iNOS-producing DCs), in the liver during infection with African trypanosomes [22,26,27]. Accordingly, African trypanosome-infected CCR2-deficient mice and MIF (macrophage migration inhibitory factor)-deficient mice exhibited significantly reduced accumulation of TIP-DCs, which was correlated with markedly diminished liver pathology and significantly prolonged survival [26,44]. Thus, IL-10 signaling suppresses liver pathology mainly through downregulation of M1-type myeloid cells [3,50]. In contrast, IL-27R-/- mice infected with African trypanosomes displayed greater activation of T cells, in particular CD4+ T cells. Moreover, depletion of CD4+ T cells prevented liver pathology and early mortality of infected IL-27R-/- mice. Obviously, IL-27 signaling functions through limiting activation of CD4+ T cells in African trypanosomiasis. Thus, although both IL-10 signaling and IL-27 signaling are crucial for limiting the inflammatory complications associated with African trypanosome infection, in particular in preventing liver pathology, the two signaling pathways involve distinct mechanisms. Dampening of the accumulation of highly activated CD4+ T cells by IL-27 signaling has also been recently observed in infection with other microorganisms, particularly intracellular protozoan and bacterial pathogens [38,40-42,51]. Our data demonstrate that the same mechanism exists during infections with extracellular protozoan parasites such as African trypanosomes. However, the precise mechanism of CD4+ T cell-mediated early mortality in previous models was not fully elucidated [38,42]. One of the most important properties of CD4+ T cells is that they secrete large amounts of IFN-γ upon activation.
IFN-γ is required to eliminate intracellular parasites, but it also has the potential to induce immunopathology [52,53]. Indeed, the early mortality of IL-27R-/- mice infected with Toxoplasma gondii or Plasmodium berghei is associated with significantly enhanced production of IFN-γ [38,42], suggesting that IFN-γ might be a critical molecule for CD4+ T cell-mediated mortality in the absence of IL-27 signaling. Surprisingly, neutralization of IFN-γ did not prolong survival and had no effect at all on the liver pathology of IL-27R-/- mice infected with T. gondii or P. berghei [38,54]. Thus, although CD4+ T cell-mediated mortality coincides with significantly elevated secretion of IFN-γ, it remains inconclusive whether IFN-γ is the direct mediator of CD4+ T cell-dependent mortality in these infections. In contrast, neutralization of IFN-γ significantly enhanced the survival of IL-27R-/- mice infected with African trypanosomes, accompanied by a major amelioration of liver pathology, providing direct evidence that IFN-γ directly mediated the mortality of infected IL-27R-/- mice. In addition, the enhanced survival of infected IL-27R-/- mice depleted of CD4+ T cells was correlated with a dramatically reduced production of IFN-γ. Evidently, either removal of CD4+ T cells or neutralization of IFN-γ eliminated the lethal effect of IFN-γ, leading to the prolonged survival of infected IL-27R-/- mice. Thus, another important finding of this study is that, in the absence of IL-27 signaling, CD4+ T cells mediate mortality directly through their secretion of IFN-γ, at least during infection with the extracellular protozoan parasites African trypanosomes. It is important to point out that our results in no way exclude the protective role of CD4+ T cells and IFN-γ during infection with the parasites. Indeed, early studies have shown that there was a correlation between high IFN-γ levels in serum, low parasitemia, and host resistance during infection with African trypanosomes [18]. Subsequent studies demonstrated that VSG-specific CD4+ T cells mediated protection via secretion of IFN-γ [13,55], and that splenic DCs were the primary cells responsible for activating naïve VSG-specific CD4+ T cell responses [16,17]. The protective role of CD4+ T cells and IFN-γ in African trypanosomiasis has been recently confirmed by independent groups [14,15,19]. In support of previous findings, we showed that either depletion of CD4+ T cells or neutralization of IFN-γ resulted in a significantly elevated peak parasitemia level in IL-27R-/- mice infected with T. congolense, confirming the protective role of CD4+ T cells and IFN-γ during the infection. It is likely that IFN-γ promotes M1-type myeloid cells to produce IL-12, TNF-α and iNOS, which have been shown to be critically involved in the lysis or damage of African trypanosomes [15,21,23,25,56]. On the other hand, excessive production of IL-12, TNF-α and iNOS driven by IFN-γ could also mediate immunopathology in mice infected with African trypanosomes [22,24,26,27,57]. Further, IL-12 and TNF-α could stimulate T cells to produce more IFN-γ [4,21]. Thus, IL-10 is required to downregulate the production of IL-12, TNF-α and iNOS, possibly by direct modulation of M1-type myeloid cells [11,22,26,27]. In the present study, we identified IL-27 signaling as a novel pathway that downregulates the secretion of IFN-γ by direct modulation of CD4+ T cells.
Obviously, in the absence of IL-27 signaling in African trypanosomiasis, excessive secretion of IFN-γ by CD4+ T cells also mediates liver pathology and mortality, even though IL-10 signaling still functions fully and the infected mice produce even more IL-10. Thus, both IL-10 signaling and IL-27 signaling are required for the survival of mice infected with the parasites, by preventing aberrant inflammatory responses, although they function in distinct manners in African trypanosomiasis. In conclusion, we have described an essential role for IL-27 signaling in preventing the early mortality of mice infected with African trypanosomes through dampening IFN-γ secretion by CD4+ T cells, thus identifying, in addition to the previously described IL-10 signaling, a novel pathway for the maintenance of immunological balance during infection with the extracellular protozoan parasites African trypanosomes. These data contribute significantly to our understanding of both the immunopathogenesis of African trypanosomiasis and the mechanisms underlying IL-27 immunoregulation during infection with extracellular protozoan and bacterial pathogens. Ethics statement This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The animal protocols involving mice were approved by the University of Maryland Institutional Animal Care and Use Committee (IACUC) under protocol R-12-60. Mice and parasites Eight- to ten-week-old C57BL/6NCrJ (C57BL/6) mice and five- to six-week-old outbred Swiss white mice (CD1) were purchased from the National Cancer Institute (Frederick, MD). B6N.129P2-Il27ra tm1Mak (IL-27R-/-, or WSX-1-/-) mice were purchased from the Jackson Laboratory and bred in-house. All animal experiments were performed in accordance with the guidelines of the Institutional Animal Care and Use Committee and the Institutional Bio-safety Committee of the University of Maryland, College Park. T. congolense, Trans Mara strain, variant antigenic type (VAT) TC13 was used in this study. The origin of this parasite strain has been previously described [58]. T. brucei AnTat1.1E was obtained from the Institute of Tropical Medicine (Antwerp, Belgium). Frozen stabilates of parasites were used for infecting CD1 mice immunosuppressed with cyclophosphamide, and passages were made every third day as described previously [58]. The parasites were purified from the blood of infected CD1 mice by DEAE-cellulose chromatography [59] and used for infecting mice. Splenocyte or liver leukocyte cultures for measurement of cytokine synthesis Splenocytes were collected from mice. Cells were cultured at a concentration of 5 × 10^6 cells/ml (200 μl/well) in 96-well tissue culture plates in a humidified incubator containing 5% CO2. The culture supernatant fluids were collected after 48 h and centrifuged at 1,500 × g for 10 min, and the supernatant fluids were stored for cytokine assays at -20°C until used. Liver leukocytes were isolated as described previously [60]. Briefly, the liver was perfused with PBS until it became pale. Thereafter, the gallbladder was removed and the liver excised carefully from the abdomen. The liver was minced into small pieces with surgical scissors and forced gently through a 70 μm cell strainer using a sterile syringe plunger. The preparation obtained was suspended in 50 ml RPMI-1640 medium containing 10% FCS. The cell suspension was centrifuged at 30 × g with the off-brake setting for 10 min at 4°C.
The obtained supernatant was centrifuged at 300 × g with the high-brake setting for 10 min at 4°C. The pellet was resuspended in 10 ml 37.5% Percoll in HBSS containing 100 U/ml heparin and then centrifuged at 850 × g with the off-brake setting for 30 min at 23°C. This new pellet was resuspended in 2 ml ACK buffer (erythrocyte lysing buffer), incubated at room temperature for 5 min, then supplemented with 8 ml RPMI-1640 medium containing 10% FCS, followed by centrifugation at 300 × g with the high-brake setting for 10 min at 8°C. Cells were collected and cultured at a concentration of 5 × 10^6 cells/ml (200 μl/well) in 96-well tissue culture plates in a humidified incubator containing 5% CO2. The culture supernatant fluids were collected after 48 h and centrifuged at 1,500 × g for 10 min, and the supernatant fluids were stored for cytokine assays at -20°C until used. Cytokine assays Recombinant murine cytokines and Abs to these cytokines for use in ELISA were purchased from BD Biosciences or R&D Systems. The levels of cytokines in culture supernatant fluids or plasma were determined by routine sandwich ELISA using Immuno-4 plates (Dynax Technologies), according to the manufacturer's protocols. Flow cytometry To assess the activation of T cells, intrahepatic leukocytes were isolated as described above. The cells were incubated (15 min, 4°C) with purified anti-mouse CD16/CD32 ([FcγIII/II Receptor], clone: 2.4G2) to block nonspecific binding of Abs to FcRs, washed with staining buffer (eBioscience), resuspended in staining buffer, and stained with mAbs specific for various cell surface markers, or with the relevant isotype-matched control Abs. For intracellular IFN-γ staining, spleen cells or intrahepatic leukocytes were diluted to 5 × 10^6 cells/ml and cultured (200 μl/well) in a 96-well plate in the presence of 1x Cell Stimulation Cocktail (containing PMA, ionomycin, and protein transport inhibitors; eBioscience) for 12 h. The cells were then harvested and washed twice in staining buffer. The cells were incubated (15 min, 4°C) with purified anti-mouse CD16/CD32, washed with staining buffer, and then stained with mAbs specific for cell surface markers. The cells were fixed and permeabilized using the Intracellular Fixation & Permeabilization Buffer Set (eBioscience). Intracellular staining was then performed using mAbs specific for IFN-γ. Samples were resuspended in staining buffer, acquired on a FACSAria II, and analyzed using FlowJo software. Aminotransferase determination and histopathological examination Liver alanine transaminase (ALT) activities were determined using the EnzyChrom Alanine Transaminase Assay Kit (BioAssay Systems) according to the manufacturer's instructions. For histopathological examination, the liver was taken from mice on day 10 after infection and fixed with 10% formalin in PBS. Sections were stained with hematoxylin and eosin. Statistical analysis Data are represented as the mean ± SEM. Significance of differences was determined by ANOVA or by a log-rank test for survival-curve comparison using GraphPad Prism 5.0 software. Values of p < 0.05 were considered statistically significant.
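The comparisons above were run in GraphPad Prism; for readers who prefer a scripted workflow, a minimal Python sketch of the same two tests (a log-rank comparison of survival curves and a one-way ANOVA across treatment groups) might look like the following. The arrays are hypothetical placeholders, not the study data.

```python
# Hypothetical sketch of the statistical comparisons (the authors used GraphPad Prism 5.0).
import numpy as np
from scipy.stats import f_oneway                 # one-way ANOVA
from lifelines.statistics import logrank_test    # survival-curve comparison

# Placeholder survival data: day of death (or censoring) for two groups of mice.
days_ko  = np.array([9, 10, 11, 12, 12, 14])     # e.g., infected knockout mice (invented values)
days_wt  = np.array([28, 30, 32, 33, 35, 36])    # e.g., infected wild-type mice (invented values)
event_ko = np.ones_like(days_ko)                 # 1 = death observed, 0 = censored
event_wt = np.ones_like(days_wt)

lr = logrank_test(days_ko, days_wt,
                  event_observed_A=event_ko, event_observed_B=event_wt)
print("log-rank p-value:", lr.p_value)

# Placeholder cytokine measurements for three treatment groups.
group1, group2, group3 = [12.1, 9.8, 11.4], [5.2, 6.0, 4.9], [11.0, 10.5, 12.3]
f_stat, p_anova = f_oneway(group1, group2, group3)
print("ANOVA p-value:", p_anova)
```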
v3-fos-license
2021-09-09T13:22:25.202Z
2021-09-09T00:00:00.000
237445037
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2021.747270/pdf", "pdf_hash": "9cf04cf023bca4367cfa4200f4ff116e8fbeeb51", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41179", "s2fieldsofstudy": [ "Biology" ], "sha1": "9cf04cf023bca4367cfa4200f4ff116e8fbeeb51", "year": 2021 }
pes2o/s2orc
Integrating Genetic and Transcriptomic Data to Reveal Pathogenesis and Prognostic Markers of Pancreatic Adenocarcinoma Pancreatic adenocarcinoma (PAAD) is one of the deadliest malignancies, and mortality from PAAD has continued to rise even as mortality from other major cancers has improved substantially. Although many studies of PAAD exist, few have dissected its oncogenic mechanisms on the basis of genomic variation. In this study, we integrated somatic mutation data and gene expression profiles obtained by high-throughput sequencing to characterize the pathogenesis of PAAD. The mutation profile, containing 182 samples with 25,470 somatic mutations, was obtained from The Cancer Genome Atlas (TCGA). The mutation landscape was generated, and somatic mutations in PAAD were found to show a preference in mutation location. The combination of the mutation matrix and gene expression profiles identified 31 driver genes that were closely associated with tumor cell invasion and apoptosis. Co-expression networks were constructed based on 461 genes significantly associated with driver genes, and the hub gene FAM133A in the network was identified as being associated with tumor metastasis. Further, the cascade relationship of somatic mutation-long non-coding RNA (lncRNA)-microRNA (miRNA) was constructed to reveal a new mechanism for the involvement of mutations in post-transcriptional regulation. We have also identified prognostic markers that are significantly associated with overall survival (OS) of PAAD patients and constructed a risk score model to identify patients' survival risk. In summary, our study revealed pathogenic mechanisms and prognostic markers of PAAD, providing theoretical support for the development of precision medicine. INTRODUCTION Pancreatic adenocarcinoma (PAAD) remains one of the deadliest cancer types and has become a leading cause of cancer-related mortality in the United States (Rahib et al., 2014;Ilic and Ilic, 2016). The incidence and mortality rates of PAAD vary widely worldwide and are highest in developed countries (McGuigan et al., 2018). Although studies have shown that smoking, obesity, hereditary diabetes and irregular diet are risk factors for the development of pancreatic cancer, the pathogenesis is still poorly understood. Several treatments can improve the prognosis of PAAD patients, for example, nab-paclitaxel plus gemcitabine (Von Hoff et al., 2013) and FOLFIRINOX versus gemcitabine (Conroy et al., 2011). Although these treatments have improved the survival of some patients, the 5-year survival rate of PAAD remains a dismal 8% (Siegel et al., 2017). Therefore, it is necessary to investigate the carcinogenic mechanisms and possible therapeutic targets of PAAD in depth. Genomic variation refers to differences in the structure and composition of DNA between individuals or between populations. With the development of high-throughput sequencing, multiple sources of disease-related genomic variation have been identified, such as copy number variation and somatic mutations. Large-scale cancer genome sequencing consortia, such as The Cancer Genome Atlas (TCGA) (Tomczak et al., 2015) and ICGC (International Cancer Genome et al., 2010), have provided somatic mutation data from numerous tumor patients. The role of somatic mutations in the development of specific cancer phenotypes is the main purpose of cancer genomics studies (Vogelstein et al., 2013).
Somatic mutations have significant tumor heterogeneity, and each individual has different sets of mutations across many genes. Therefore, exploring the mutation-driven regulation of gene expression can better serve the purpose of precision medicine. Work from the past decade has given us a whole new perspective on non-coding RNAs. For example, Long non-coding RNA (lncRNA) have been demonstrated to play an important role in chromatin reprogramming, transcription, post-transcriptional modifications and signal transduction (Anastasiadou et al., 2018;Wang et al., 2021). LncRNA could act as a miRNA sponge to participate in competitive endogenous RNA (ceRNA) regulation determined by microRNA (miRNA) response elements (MREs) (Salmena et al., 2011), which is an important way for it to regulate gene expression post-transcriptionally. Somatic mutations in the MRE region of the lncRNA may weaken, enhance or prevent binding to the pro-miRNA, which may cause some imbalance in the ceRNA regulatory network and even alter the expression of related target genes in the regulatory pathway (Thomas et al., 2011;Thomson and Dinger, 2016). Here, we have collected mutation data, clinical information and transcript expression profile of PAAD from TCGA to conduct a systematic investigation concerning mutation features, pathogenesis and prognostic markers. Statistical Analysis of Somatic Mutations The R package maftools (version 2.8.0) (Mayakonda et al., 2018) was used for the statistical and visualization of mutation location, mutation form, mutation frequency and other information. The package enables efficient aggregation, analysis, annotation and visualization of MAF files from TCGA sources or any in-house study. We also used the visualization results of maftools to reveal new discoveries of PAAD. Driver Gene Identification We first counted the number of mutations in each gene across samples to generate a mutation matrix. Combined with the gene expression profile of PAAD from TCGA, we retained genes that were mutated in at least two samples. Further, the difference in expression of each gene between mutated and unmutated samples was measured by Student's t-test and fold change. We set the cutoff for p-value and fold change to 0.05 and 1.5, respectively (He et al., 2021). We define genes that are differentially expressed between mutated and unmutated samples as mutation driver genes. Construction of Gene Co-expression Networks For the driver genes affected by mutations, we separately calculated other genes co-expressed with each driver gene, which may interact with each other and play a role in the occurrence and development of PAAD. Pearson's (Bishara and Hittner, 2012) correlation algorithm was used to calculate the correlation between the expression of two genes, which was performed by cor.test function of R. We defined gene pairs with p-value < 0.01 and correlation coefficient | R| > 0.5 as those with significantly related expression. For all co-expressed genes, cytoscape (v3.7.0) 6 (Shannon et al., 2003) was used to plot the co-expression network. Further, NetworkAnalyzer was used to calculate the topological properties of the network and to mark the size of the nodes according to their degree. Identification of Putative Mutation-miRNA-LncRNA Regulation Units Somatic mutations occurring in lncRNA may affect the affinity of the original lncRNA and miRNA binding (Wang P. et al., 2020;Zhang et al., 2021). 
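Before continuing with the lncRNA analysis, the driver-gene screen and co-expression filter described earlier in this section can be sketched in a few lines of code. The published analysis was done in R; this Python version is purely illustrative, and the file names, data frames, and epsilon guard are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the driver-gene screen (t-test + fold change) and the
# Pearson co-expression filter; the published analysis used R (t.test / cor.test).
import pandas as pd
from scipy.stats import ttest_ind, pearsonr

expr = pd.read_csv("paad_expression.csv", index_col=0)       # genes x samples (placeholder file)
mutated = pd.read_csv("mutation_matrix.csv", index_col=0)    # genes x samples, 0/1 (placeholder file)

driver_genes = []
for gene in expr.index.intersection(mutated.index):
    mask = mutated.loc[gene].astype(bool)
    if mask.sum() < 2:                                       # keep genes mutated in >= 2 samples
        continue
    mut_vals, wt_vals = expr.loc[gene, mask], expr.loc[gene, ~mask]
    _, p = ttest_ind(mut_vals, wt_vals)
    # Assumes non-negative expression values; the small epsilon avoids division by zero.
    fold_change = (mut_vals.mean() + 1e-9) / (wt_vals.mean() + 1e-9)
    if p < 0.05 and (fold_change > 1.5 or fold_change < 1 / 1.5):
        driver_genes.append(gene)

# Co-expression partners of each driver gene (|r| > 0.5 and p < 0.01, as in the text).
partners = {}
for driver in driver_genes:
    hits = []
    for other in expr.index:
        if other == driver:
            continue
        r, p = pearsonr(expr.loc[driver], expr.loc[other])
        if abs(r) > 0.5 and p < 0.01:
            hits.append(other)
    partners[driver] = hits
```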
Based on the lncRNA annotation information collected from GENCODE (v29, GRCh38), we relocated the mutations that occurred in lncRNAs. Considering the requirements of miRNA target prediction tools for input sequences, we extracted sequences of approximately 21 nucleotides (nt) upstream and 7 nt downstream of each lncRNA somatic mutation site, which were used to construct mutant and wild-type sequences. TargetScan (v6.0) and miRanda (v2010), two miRNA target prediction tools, were used to predict possible binding between miRNAs and the mutant/wild sequences. We set stringent thresholds of score > 160 and energy < −20 for miRanda (Betel et al., 2008) and context score < −0.4 for TargetScan (Friedman et al., 2009), and miRNA targets that satisfied these thresholds were considered reliable. We define mutations that affect the affinity of miRNA binding to the wild sequence as putative mutations, and the lncRNA in which a putative mutation is located as a ceL. Further, the altered binding affinity between a miRNA and a mutant/wild sequence was divided into four states: gain, up, loss, and down. For these ceRNAs perturbed by somatic mutations, we constructed putative mutation-miRNA-lncRNA (ceL) units. Next, altered binding affinity between the original lncRNA and a miRNA may affect the expression of other downstream mRNAs regulated by this miRNA (Wang et al., 2015;Wang P. et al., 2019;Zhang et al., 2021). We collected miRNA-target mRNA regulatory relationships from the miRTarBase database that were validated by experiments, including the luciferase reporter assay, PCR, and western blotting, to build the somatic mutation-lncRNA-miRNA-mRNA (ceRNA dysregulation) network. Functional Enrichment Analysis For the mutated genes, we ranked the genes using −log10(p-value) as the weight. The ranked genes and the hallmark gene set were used for gene set enrichment analysis (GSEA) (Subramanian et al., 2005). Similarly, for the genes co-expressed with the mutation driver genes, we ranked the co-expressed genes of each driver gene using the correlation coefficient as the weight, which was also used for GSEA. The clusterprofiler (v3.18.0) (Yu et al., 2012) R package was used to perform Gene Ontology (GO) functional enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis on these mRNAs. We set p-value < 0.05 to screen for significantly enriched functions and pathways. Constructing Survival Prediction Model We integrated significantly differentially expressed mutant genes (p-value < 0.05 only) and other protein-coding genes perturbed, through the ceRNA mechanism, by putative mutations in these genes. First, we used univariate Cox regression to screen for genes significantly associated with overall survival (OS) in PAAD patients (p-value cutoff of 0.05). Considering that univariate Cox regression alone was not sufficiently rigorous, lasso regression (Alhamzawi and Ali, 2018) was used to further screen for prognosis-related genes. Next, we randomly selected 70% of all samples as the training set and the remainder as the test set. The training set was used to construct a multivariate Cox regression model (Fisher and Lin, 1999). A hazard ratio hypothesis test was also applied during construction of the regression model, and we retained the genes passing this test to establish a survival risk prediction model and a nomogram for predicting the OS of PAAD patients.
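The authors implemented this screen in R; purely as an illustration of the univariate Cox → lasso → multivariate Cox cascade just described, a Python sketch using lifelines could look like the following. The data frame, file name, and penalizer value are hypothetical.

```python
# Hypothetical sketch of the prognostic-gene screen (the published analysis was done in R).
import pandas as pd
from lifelines import CoxPHFitter

# expr_df: rows = patients; columns = candidate gene expression values
# plus "time" (OS in days) and "event" (1 = death, 0 = censored).
expr_df = pd.read_csv("paad_expression_with_survival.csv")   # placeholder file name
candidate_genes = [c for c in expr_df.columns if c not in ("time", "event")]

# 1) Univariate Cox regression: keep genes with p < 0.05.
univariate_hits = []
for gene in candidate_genes:
    cph = CoxPHFitter()
    cph.fit(expr_df[[gene, "time", "event"]], duration_col="time", event_col="event")
    if cph.summary.loc[gene, "p"] < 0.05:
        univariate_hits.append(gene)

# 2) Lasso-penalized Cox model as a stricter filter (penalizer chosen arbitrarily here;
#    the paper used lasso regression with its own tuning).
lasso_cox = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
lasso_cox.fit(expr_df[univariate_hits + ["time", "event"]],
              duration_col="time", event_col="event")
selected = lasso_cox.summary.index[lasso_cox.summary["coef"].abs() > 1e-6].tolist()

# 3) Final multivariate Cox model on the selected genes (basis of the risk score / nomogram).
final_model = CoxPHFitter()
final_model.fit(expr_df[selected + ["time", "event"]],
                duration_col="time", event_col="event")
print(final_model.summary[["coef", "exp(coef)", "p"]])
```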
The reliability of this risk prediction model was assessed with the receiver operating characteristic (ROC) curve, and the area under the curve (AUC) was also calculated. The training and test sets were each divided into high-risk and low-risk groups based on the median risk score calculated by the risk score model, and Kaplan-Meier (KM) survival analysis with a two-sided log-rank test was used to measure the difference in OS between these two groups. Statistical Analysis All statistical analyses and graph generation were performed in R (version 4.0.2). The R package resources were obtained from http://www.bioconductor.org/ and https://cran.rstudio.com/bin/windows/Rtools/. The Landscape of Pancreatic Adenocarcinoma Somatic Mutations In this study, it was first necessary to perform an overall statistical analysis of the somatic mutations in PAAD. We evaluated samples in the TCGA database collection for which somatic mutation data were available; the result contained 182 samples with 25,470 somatic mutations. We counted the distribution of somatic mutations across the genome, including chromosomal location and transcript type. We found that somatic mutations were significantly enriched on chromosomes 17 and 19 (Figure 1A), suggesting a positional preference of PAAD somatic mutations. Compared with transcripts (mRNAs) of protein-coding genes, fewer somatic mutations occur in lncRNAs (Figure 1A). Although relatively few mutations occur in the non-coding region, studies have confirmed that mutations within the non-coding genome are a major determinant of human disease (Maurano et al., 2012). Missense and nonsense mutations account for the largest proportion of all somatic mutations, with missense mutations predominating (Figures 1A,B). We also found mutations occurring at the transcription start site in only four samples (Figure 1B). All these observations suggest that PAAD patients are more likely to carry mutations that alter protein function and thereby disrupt normal physiological mechanisms. Further, we counted the frequency of mutations in each gene and the number of samples with mutations in that gene, and the most frequently mutated genes are illustrated (Figure 1C and Supplementary Figure 1A). We found that different genes have different preferences in the type of mutation. For example, TTN, the gene considered to be most frequently mutated in the pan-cancer cohort (Oh et al., 2020), tended to have missense mutations in PAAD, whereas the TP53 gene had a high proportion of indel mutations. Studies have shown that the impact of mutations on the prognosis of patients is related to the type and background of the tumor (Hainaut and Pfeifer, 2016). As a mutated gene commonly occurring in PAAD patients, TTN has multiple nonsense mutation hot spots (Figure 1D), which will have a significant impact on the function and structure of its encoded protein. We found no significant exclusivity between high-frequency mutated genes in the PAAD samples, and a general correlation between the TTN gene and other high-frequency mutated genes (Figure 1E), revealing a mutational feature of pancreatic cancer in which coordinated mutation of multiple genes affects normal physiological mechanisms. We found that nearly half of the point mutations (base substitutions) in PAAD patients are C > T substitutions (Figure 1F and Supplementary Figure 1B).
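A tally of substitution classes like the one summarized above can be computed directly from a MAF-style mutation table. The following Python sketch is illustrative only; the column names and rows are assumptions rather than the exact TCGA MAF fields.

```python
# Hypothetical sketch: classify single-base substitutions as transitions or transversions.
import pandas as pd

maf = pd.DataFrame({                      # placeholder rows; a real MAF has many more columns
    "ref": ["C", "C", "A", "G", "T"],
    "alt": ["T", "A", "G", "T", "C"],
})

purines, pyrimidines = {"A", "G"}, {"C", "T"}

def substitution_class(ref, alt):
    if {ref, alt} <= purines or {ref, alt} <= pyrimidines:
        return "transition"               # A<->G or C<->T
    return "transversion"                 # purine <-> pyrimidine

maf["class"] = [substitution_class(r, a) for r, a in zip(maf["ref"], maf["alt"])]
print(maf["class"].value_counts(normalize=True))        # fraction of transitions vs. transversions
print((maf["ref"] + ">" + maf["alt"]).value_counts())   # e.g., how many C>T substitutions
```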
Transitions, one of the two types of DNA base substitution, account for a high proportion of all PAAD point mutations and are more readily retained by evolution. However, transversions, the other type of base substitution, account for nearly 30% of the point mutations, and these mutations may be key factors in the deterioration of pancreatic tissue. Taken together, all these findings reveal the mutational features of PAAD. Driver Genes Boost Tumor Invasion Somatic mutations can indirectly affect biological traits by regulating gene expression. It is thus worthwhile to explore genes whose expression is affected by mutations. We integrated the mutation and gene expression profiles of PAAD, with 173 samples having both mutation and gene expression data. A total of 4,517 genes that were mutated in at least two samples were collected to construct the mutation matrix. By comparing the differential expression of each gene between mutant and non-mutant samples, we identified a total of 31 driver genes that were significantly up- or downregulated [p-value < 0.05, |log2(fold change)| > log2(1.5)] (Figure 2A). We next sorted the genes by fold change. The top 10 driver genes were RP11-97C18.1 (ENSG00000225191), AC024937.4 (ENSG00000231464), DRD1, CD5L, PCDH8, GK2, MAGEB6, SORCS3, TRIM51, and PRDM9 (Figure 2B). The top driver gene RP11-97C18.1 is a pseudogene of Adaptor-Related Protein Complex 2, Beta 1 Subunit (AP2B1), which is an essential adaptor of the clathrin-mediated endocytosis pathway (Diling et al., 2019;Wang G. et al., 2020). The driver gene AC024937.4 is also a pseudogene, of ADP-ribosylation factor-like 8B (ARL8B), which is involved in cellular endocytosis, autophagy and the movement of phagocytic vesicles on microtubule tracks to fuse with lysosomes (Marwaha et al., 2017). All these observations suggest that non-coding genes are essential in the development and progression of PAAD. Further, consensus clustering tools were used to cluster PAAD samples based on driver gene expression profiles. These samples were divided into six clusters (Supplementary Figure 2). We found that PCDH8, which acts as a tumor-suppressor gene in multiple types of cancer and inhibits tumor cell proliferation, invasion and migration (Yu et al., 2020), was downregulated in clusters 3 and 4 (Figures 2C,D), suggesting that tumor cells may be more aggressive in the two clusters with lower PCDH8 expression. Patients in stage I were mainly concentrated in clusters 5 and 6 (Figure 2C). It is intriguing that there is no significant difference in the number of mutations per sample among the clusters (Figure 2E), revealing that the differences in gene expression among clusters are not simply determined by the number of mutations. Taken together, all these results suggest that driver genes affected by mutations play an essential role in the proliferation and invasion of PAAD. Interaction of Essential Factors With Driver Genes Regulates Oncogenic Pathways The mutated genes may play an essential role in the proliferation and invasion of tumors. In order to explore the role of these genes in carcinogenic pathways, we performed GSEA to identify hallmark pathways enriched among the mutated genes in the genomes of PAAD patients (see section "Materials and Methods"). We found that the IL2-STAT5 signaling, glycolysis, apoptosis and allograft rejection pathways are significantly enriched in genes whose expression is affected by somatic mutations (Figure 3A).
Studies have shown that interleukin-2 (IL-2) and the downstream transcription factor STAT5 are essential for maintaining regulatory T (Treg) cell homeostasis and function (Cheng et al., 2018), suggesting that the immune microenvironment in tumor tissue of PAAD patients affected by somatic mutations may be disrupted. The altered glycolytic machinery in PAAD reflects adaptation to the tumor microenvironment, which is consistent with previous studies showing that cancer cells are preferentially dependent on glycolysis (Ganapathy-Kanniappan and Geschwind, 2013). The allograft rejection pathway affected by mutations may become a key target for PAAD immunotherapy (Land et al., 2016). Global reprogramming of the transcriptome occurs in order to support tumorigenesis and progression. In addition to the direct effect of mutations on gene expression, there are other regulatory mechanisms, such as transcriptional regulation, ceRNA mechanisms, and epigenetic modification. Genes co-expressed with driver genes may have a potential role in tumor development. We applied Pearson correlation to identify genes co-expressed with the driver genes that may be influenced by these other regulatory mechanisms. We identified 495 genes (491 positive and 4 negative) significantly associated with 19 driver genes (p-value < 0.01, |R| > 0.5). These significantly related genes were used to construct gene co-expression networks using Cytoscape (Figure 3B). We also computed the topological properties of the network using the NetworkAnalyzer tool and found that the gene FAM133A had the highest degree (Supplementary Table 1). FAM133A has been confirmed in previous studies to be related to the invasion and metastasis of glioma (Huang et al., 2018). Next, we performed a functional enrichment analysis of all genes in the co-expression network using the R package clusterprofiler. We found that these genes were significantly enriched in immune-related functions and apoptotic pathways, such as complement activation, immunoglobulin-mediated immune response, B cell-mediated immunity, and apoptosis-multiple species (Figure 3C and Supplementary Figure 3). For the 19 driver genes identified as having co-expressed genes, we used GSEA to analyze their functional features; hallmark gene sets and genes ranked by correlation coefficient were used for GSEA. We found that oncogenic pathways were significantly enriched only in genes co-expressed with the driver genes FAM133A and SORCS3, suggesting that most driver genes must act synergistically to regulate oncogenic mechanisms. In contrast to the driver gene FAM133A, the driver gene SORCS3, in combination with its co-expressed genes, plays an important role in tumor metastasis, hypoxia and apoptosis (Figure 3D). Taken together, all these results indicate that the synergistic interaction network of multiple driver genes may contribute to the complex pathogenesis of PAAD. LncRNA Mutations-ceRNA Indicates Novel Mechanisms of Mutation Regulation LncRNAs have been confirmed to be essential in pre- and post-transcriptional regulation. An lncRNA with a miRNA response element (MRE) can act as a miRNA sponge and participate in the ceRNA regulatory mechanism. To explore the impact of somatic mutations in lncRNA MREs on ceRNA regulation, we constructed mutant/wild sequences to identify mutations that alter the affinity of lncRNA-miRNA binding.
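As a minimal illustration of how such mutant/wild sequence pairs can be built before handing them to TargetScan/miRanda, consider the sketch below. The sequence, coordinates, and alleles shown are hypothetical, not taken from the study data.

```python
# Hypothetical sketch: build wild-type and mutant windows around a lncRNA mutation site
# (21 nt upstream + the site + 7 nt downstream, matching the window described in Methods).
lncrna_seq = "AUGGCUACGUAGCUAGCUAGGCAUCGAUCGAUUACGGCAUGCAUGCAUGC"  # placeholder sequence
mut_pos    = 30          # 0-based position of the somatic mutation within the lncRNA (assumed)
ref_base, alt_base = "A", "G"                                     # assumed alleles

assert lncrna_seq[mut_pos] == ref_base, "reference allele should match the annotated sequence"

start, end = max(0, mut_pos - 21), min(len(lncrna_seq), mut_pos + 1 + 7)
wild_window   = lncrna_seq[start:end]
mutant_window = lncrna_seq[start:mut_pos] + alt_base + lncrna_seq[mut_pos + 1:end]

# Both windows would then be scored against miRNA seed sequences with TargetScan/miRanda;
# a binding site predicted for only one of the two windows marks a gain or loss of affinity.
print(wild_window)
print(mutant_window)
```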
Based on lncRNA annotation data collected from GENCODE, we identified 497 somatic mutations occurring in lncRNAs, compared with 24,604 somatic mutations occurring in the genome. Affected by mutations, lncRNAs may show enhanced or reduced binding affinity for existing miRNAs, lose that affinity altogether, or even gain binding affinity for new miRNAs (Figure 4A). Next, we examined the influence of lncRNA mutations on miRNA binding sites according to TargetScan and miRanda. In total, we identified 277 somatic mutations for PAAD in 235 putative miRNA target genes (putative lncRNAs). These mutation sites showed different binding affinities for 447 miRNAs between the mutant and wild-type sequences (Figure 4B). All these constituted 552 mutation-miRNA-lncRNA regulation units. We further constructed ceRNA dysregulation networks based on the identified mutation-miRNA-lncRNA regulation units (Figure 4C). We found that TTN-AS1 has the highest degree in the ceRNA dysregulation network and that five somatic mutations occurring on it affect the affinity of binding to 11 miRNAs (8 Up/gain and 3 Down/loss, Figure 4D). Combining these results with the 31 driver genes, we found that only the driver lncRNA AC090099.1 (ENSG00000255470) carries mutations involved in ceRNA dysregulation, suggesting that the mechanisms underlying changes in driver gene expression are complex. We found two mutations in AC090099.1 that affected binding affinity to four miRNAs (3 Up/gain and 1 Down/loss, Figure 4E). To verify our predictions at the transcriptome level, we performed a one-sample t-test comparing the expression distribution in non-mutated samples against the expression level observed in the mutant sample. We found significant differences in the expression of AC090099.1 and of the target genes CEBPB and LHFPL3, which are regulated by the miRNA hsa-miR-663a, between mutated and unmutated samples (Figures 4F-H; expression distributions are shown as density curves, with the mutant sample's value marked by a red line). Taken together, all these results suggest that ceRNA dysregulation due to lncRNA mutations is an essential factor in variations of target gene expression. Identifying Prognostic Markers for PAAD Genes affected by mutations play an important role in the mechanism of carcinogenesis. It is therefore meaningful to identify markers associated with the prognosis of PAAD patients among the genes that are significantly differentially expressed between mutated and unmutated samples (p-value < 0.05). In total, we obtained 171 mutation-driven, significantly differentially expressed genes. We performed univariate Cox regression to identify genes associated with overall survival (OS) in PAAD patients, and 53 genes were selected at p-value < 0.05. We further screened these 53 genes rigorously using lasso regression, and eight genes (SLC30A1, RBM10, PNPLA6, DSG2, CHML, DLGAP5, TTLL6, and PDE4DIPP5) were identified as significantly associated with patient OS (Supplementary Figure 4A). Multivariate Cox regression was then performed on the training set to construct the survival risk prediction model from these eight feature genes; three of them, RBM10, SLC30A1, and DLGAP5, were the major genes associated with patients' risk of death (Figure 5A). Nomograms were used to illustrate the predicted survival probability at 6, 12, and 18 months (Figure 5B).
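The risk score introduced just below is simply the linear predictor of this multivariate Cox model. As a hedged illustration, the stratification step could be scripted as follows; only the coefficients are taken from the model reported in the text, while the expression matrix is invented.

```python
# Hypothetical sketch: compute the Cox risk score and stratify patients by the median.
import numpy as np
import pandas as pd

# Coefficients as reported for the eight-gene model in the text; expression values are placeholders.
coefs = {"SLC30A1": 0.65, "RBM10": -0.84, "PNPLA6": -0.27, "DSG2": 0.36,
         "CHML": -0.21, "DLGAP5": 0.54, "TTLL6": -0.02, "PDE4DIPP5": -0.08}

expr = pd.DataFrame(np.random.default_rng(0).normal(size=(6, 8)),
                    columns=list(coefs))            # toy expression matrix (6 patients)

risk_score = expr.mul(pd.Series(coefs), axis=1).sum(axis=1)
group = np.where(risk_score > risk_score.median(), "high-risk", "low-risk")
print(pd.DataFrame({"risk_score": risk_score, "group": group}))
# Kaplan-Meier curves and a log-rank test would then compare OS between the two groups.
```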
The calibration curve was also used to validate the stability of the risk prediction model (Supplementary Figure 4B). To identify the best predictive time point for the risk prediction model, we divided the 6-18 month period into six time windows and evaluated the predictions using ROC curves. We found that the prediction reached its maximum area under the curve (AUC) of 0.84 at 474.5 days (Figure 5C). Further, we used the multivariate Cox regression coefficients of the eight genes identified by lasso regression to construct the risk score model as follows: risk score = 0.65 × SLC30A1 − 0.84 × RBM10 − 0.27 × PNPLA6 + 0.36 × DSG2 − 0.21 × CHML + 0.54 × DLGAP5 − 0.02 × TTLL6 − 0.08 × PDE4DIPP5, and calculated the risk score for each PAAD sample. The training-set and test-set samples were each divided into two categories (high-risk and low-risk) based on the median risk score, and high-risk samples in both the training and test sets were associated with poorer PAAD OS (Figures 5D,E). By combining clinical information from the PAAD samples with the risk score, we found that patients in stages II, III, and IV had significantly higher risk scores than those in stage I, that the origin of the tumor was significantly related to the patient's survival risk, and that patients treated with radiation had a significantly lower survival risk than those not treated with radiation (Figure 5F). All these findings may provide support for the treatment of PAAD. DISCUSSION In this study, we have used mutational and transcriptomic data to reveal mutational features, driver genes and prognostic markers in PAAD. Statistical analysis of the mutational profile of PAAD revealed that a relatively low number of mutations occurred in non-coding regions of the genome, with most mutations occurring in coding regions and affecting the structure and function of proteins. We identified 31 driver genes, based on statistical testing, that are strongly associated with apoptosis, energy metabolism and invasion of tumor cells. Next, we constructed a co-expression network determined by the driver genes, revealing the oncogene interaction mechanisms and oncogenic pathways of PAAD. We further constructed a ceRNA dysregulation network using the TargetScan and miRanda tools to reveal that somatic mutations in lncRNAs regulate the expression of target genes at the post-transcriptional level. Using a dual screen of univariate Cox regression and lasso regression, we identified eight genes that were strongly associated with the prognosis of PAAD patients, despite the existence of public databases for studying pan-cancer prognosis. We also constructed a risk score model to specify the survival risk for each patient, showing that patients with higher risk scores have a poorer probability of survival. Pancreatic cancer is one of the deadliest malignancies (Vincent et al., 2011). Many studies have tried to reveal the pathogenesis of pancreatic cancer and discover effective treatments, for example by exploring the role of the microbiome in the occurrence, development and treatment of PAAD, and by probing the carcinogenic mechanisms and possible treatments of PAAD from the perspective of genetics (Bhosale et al., 2018). The development of PAAD is influenced by multiple factors, the most critical of which is the occurrence of malignant mutations in the chromosomes.
Malignant mutations in chromosomes, which hold the genetic material of an organism, will affect the physiological mechanisms of normal cells. Although numerous research results support efforts to conquer PAAD, few studies have focused on somatic mutations in the genome (Chang et al., 2014). We integrated mutational and transcriptomic data to discover the oncogenic mechanisms and potential prognostic markers of PAAD, a rational application of multi-omics data in the era of big data. In revealing carcinogenic mechanisms, multi-omics research has more advantages than previous single-omics research. CeRNAs are transcripts that regulate each other by competing for shared miRNAs. The proposal of the ceRNA competition mechanism provides a new direction for studying the post-transcriptional regulation of genes. Considering the important role of non-coding RNA in PAAD, we explored the impact of lncRNA mutations on the ceRNA competition network. We identified 552 mutation-miRNA-lncRNA regulation units and constructed a ceRNA dysregulation network. Although the available expression data (in particular, the largely missing miRNA expression data) are not sufficient to fully support our predictions, this analysis contributes to the exploration of the post-transcriptional regulatory mechanisms of PAAD. In conclusion, this study provided the mutational landscape of PAAD and discovered driver genes. The IL2-STAT5 signaling pathway and the allograft rejection pathway affected by mutations provide a new direction for the treatment of PAAD. Marker genes associated with patient prognosis were identified through univariate Cox regression and lasso regression. We also provide a survival risk prognostic model for PAAD patients. All these findings may provide theoretical guidance for the diagnosis and treatment of PAAD. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
v3-fos-license
2019-03-17T13:11:18.813Z
2018-03-31T00:00:00.000
80449348
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://clinmedjournals.org/articles/jgmg/journal-of-geriatric-medicine-and-gerontology-jgmg-4-038.pdf?jid=jgmg", "pdf_hash": "ea456a989477c3afbf84bb44b34e364b744b7b2a", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41180", "s2fieldsofstudy": [ "Medicine" ], "sha1": "1482c405272ab65787478ecf7d3bef3abcf0e2e6", "year": 2018 }
pes2o/s2orc
Postoperative Cognitive Dysfunction: What Anesthesiologists Know That Would Benefit Geriatric Specialists Citation: Detweiler MB (2018) Postoperative Cognitive Dysfunction: What Anesthesiologists Know That Would Benefit Geriatric Specialists. J Geriatr Med Gerontol 4:038. doi.org/10.23937/2469-5858/1510038 Received: July 11, 2017; Accepted: February 22, 2018; Published: February 24, 2018. Copyright: © 2018 Detweiler MB. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. ISSN: 2469-5858. Some of the more common surgeries associated with POCD are cardiovascular and orthopedic interventions such as hip and spinal procedures. In some cases of consecutive surgeries in the elderly, there is an incremental cognitive decline with each successive surgery, replicating the step-wise decrement seen in vascular dementia [15] and in persons with multiple traumatic brain injuries [16]. The case-dependent risk factors for POCD in the elderly include advanced age, genetic disposition, pre-existing cognitive impairment, pre-existing inflammatory conditions, the pattern of diurnal variation in cortisol level [12], the complexity and duration of surgery and anesthesia, postoperative delirium, and infection. Several modifiable risk factors include pre- and post-surgery pain, use of potentially neurotoxic drugs, and low intraoperative cerebral oxygenation. As clinicians, we see many of our geriatric patients emerge from major surgery with both transitory and permanent cognitive changes, which may cause fear and threaten their independence. The pervasive symptoms of POCD are usually reported to family practitioners, internists and geriatricians as static or progressing mental status changes along the continuum of cognitive decline following surgery. Such situations provoke anxiety if the patient has not had presurgical education from their internist or geriatric clinician about the risks of POCD. This is often due to a lack of medical team knowledge about POCD sequelae.
Post-operative cognitive decline (POCD) in the elderly is well known to anesthesiologists, but others are not as knowledgeable about this complex phenomenon and its causes. POCD is characterized by a slowing of brain processing speed and deficits in memory and executive function, in addition to other neuropsychological domains [1]. POCD is also associated with permanent brain damage, especially in populations with more vulnerable central nervous systems due to age: children under two years of age and, increasingly, the elderly [1-10]. Although the problem of POCD has been reported in the literature for over a century and remains an ongoing interest in anesthesia research today [9], it is largely unknown among many clinicians, such as family practitioners, internal medicine specialists and geriatricians, who have daily contact with the elderly. It has been estimated that approximately 41 percent of elderly patients demonstrate some cognitive impairment following surgery with anesthesia [6,7]. With the increasing number of elderly patients undergoing surgery with general anesthesia worldwide, POCD following surgery is an important topic in clinical medicine [11,12]. Given the scientific evidence in the anesthesia literature and the growing anecdotal evidence of POCD, primarily among clinicians treating the elderly, there appears to be a need for a more interdisciplinary discussion regarding the risks and long-term effects of POCD, which are costly both for health care systems and for the quality of life of the affected individuals [13,14]. Frequently the elderly who experience POCD do not discuss their memory problems with their medical team, as they fear being diagnosed as having "psychiatric problems" [17]. When patients present in clinic with reports of POCD, if the treatment team does not have an explanation or neglects to offer a plan to treat the symptoms, the afflicted individuals remain anxious and fearful and often attempt to ignore their memory loss, as they may be under the impression that there is no medical explanation or treatment. Unfortunately, the stress associated with the fear of losing one's memory and/or having psychiatric problems often accelerates cognitive degradation, with reduced volumes of the hippocampus, amygdala, thalamus, hypothalamus, bed nucleus of the stria terminalis, nucleus accumbens, and the descending projections which synapse at the thoracic spinal cord. In addition, shorter telomeres in white blood cells may be an unwelcome consequence [18-23]. Clinicians also see worried patients and family members who come to clinic with questions about post-operative cognitive changes, with frequent complaints such as "I'm worried, I can't remember things that I could before the operation" or "my memory is not getting better (following surgery)". What do we know about POCD, and why is it important, especially for clinicians treating the elderly? This commentary is not a tutorial but rather a brief introduction to POCD for readers unfamiliar with the diagnosis, with suggestions for treatment of the memory deficits postsurgically. The reader is referred to POCD reviews for additional in-depth information [24-27].
Anesthesia The risk of developing POCD is related to many variables including, but not limited to, the immune response to surgery, advanced age, pre-existing cerebral, cardiac, and vascular disease, alcohol abuse, low educational level, and intra- and postoperative complications [7,13,14,28]. Many randomized controlled studies suggest that the method of anesthesia is also a major variable associated with prolonged cognitive impairment. Therefore, one of the first POCD factors investigated was the use of anesthetic agents during surgical procedures, including the volatile gases isoflurane, sevoflurane, desflurane and nitrous oxide, as well as pentobarbital, midazolam and ketamine [29-32]. In vitro and animal studies have demonstrated that inhalational and intravenous anesthetics are principal components of POCD neuropathology. These anesthetic agents may cause neuroapoptosis, caspase activation, neurodegeneration, β-amyloid protein (Aβ) accumulation and oligomerization, and neurocognitive impairment [9]. Studies demonstrate that certain volatile anesthetics, such as desflurane, may have a less harmful neurotoxic profile than others in surgical and clinical settings [9,12,33,34]. Propofol and other more modern anesthetics are among the recommended choices for general anesthesia in the inpatient and outpatient settings. The choice of anesthesia may reduce cognitive complications such as delirium and POCD [12]. Some hospitals routinely utilize 2,6-diisopropylphenol (propofol) with a benzodiazepine, ketamine or fentanyl for conscious sedation during both ambulatory and inpatient surgery for appropriate elderly patients [35-39]. Propofol, when used in conjunction with fentanyl, appears to be a safe, quick, and effective method of providing conscious sedation, which is advantageous for the elderly, especially during spinal and neurological blocks in the effort to avoid general anesthesia [35]. Propofol has an attractive pharmacokinetic profile of rapid onset and offset, but must be employed with caution in patients with cardiac and respiratory complications and when egg and soy allergies are present [40]. Propofol in combination with benzodiazepines such as flurazepam facilitates GABA receptor activity and increases the apparent GABAA receptor complex affinity for propofol, resulting in a synergistic potentiation by the combination [41]. A case-control study demonstrated that both propofol-ketamine (Group I) and propofol-fentanyl (Group II) combinations produced rapid, pleasant and safe anesthesia. Group I had stable hemodynamics during the maintenance phase, while Group II recorded a slight increase in both pulse rate and blood pressure. During recovery, the ventilation score was better in Group I, while movement and wakefulness scores were better in Group II. The authors concluded that both anesthesia combinations produce rapid and safe anesthesia with few minor side effects [36].
Blood Brain Barrier Aging is often accompanied by changes in blood-brain barrier permeability due to chronic inflammatory processes, a component of POCD pathology. Increasing blood-brain barrier permeability augments the burden of inflammation, infection and toxins passing into the brain, which in turn accelerates degenerative processes [42,43], reduces brain reserve [44] and renders the brain more susceptible to POCD [45]. Moreover, reduced drug elimination rates contribute to increased episodes of toxic medication effects peripherally [46]. When the toxic medications cross the blood-brain barrier, they escalate the risk of neurodegenerative disorders [34]. Perioperative considerations Literature regarding the treatment of POCD is presently limited, in part related to its suspected multifactorial pathophysiology. Jildenstål, et al. in 2014 noted that anesthesiologists in general have not systematically addressed the reversible and irreversible symptoms of POCD in the elderly, as they primarily focus on minimizing cardiovascular and pulmonary risks and on diminishing nausea, vomiting and pain postoperatively [10]. A Swedish study sent questionnaires to more than 2,500 anesthesiologists and nurse anesthetists. The survey revealed that postoperative neurocognitive deficits were not primary outcome indices of the anesthesia protocols of the anesthesiologists contacted [10]. However, anesthesia research regarding perioperative anesthesia sequelae and pain management problems is ongoing and contributing to an understanding of POCD pathology [26,36,39,47-51]. Addressing perioperative pain management is an important treatment for reducing the risk of delirium and POCD [49,51]. Both pain and the resulting administration of opioids are notable contributors to delirium and POCD [49,52-55]. Moreover, the elderly have many comorbid medical conditions, including chronic pain conditions such as low back pain, chronic tension-type headaches and fibromyalgia, which complicate post-surgery recovery and return to presurgical cognitive and functional levels [53,56]. Chronic pain has been associated with changes in global and regional brain morphology and brain volume loss, including structural brain changes in the middle corpus callosum, middle cingulate white matter and the grey matter of the posterior parietal cortex, as well as impaired attention and mental flexibility as measured by neuropsychological tests [53,54]. Brain atrophy and white matter lesions have been shown to be associated with increased risk of delirium, which in some cases is the prodrome to POCD [26,48,54,57]. Studies also suggest that presurgery dementia and post-surgery intensive care unit admission are more important predictors of postoperative delirium than are opioid medications [55]. Anesthesia research is making advances in postsurgical pain management [49,51,52,54]. Minimal incision surgery for total hip and total knee arthroplasties with closely supervised pain management and physical therapy protocols markedly improved outcome variables compared to the same interventions with standard incisions [52]. Midwest Orthopedics at Rush surgical teams have been advancing protocols to reduce postsurgery pain with reduced inpatient hospital narcotic consumption, resulting in reduced inpatient nausea, vomiting and hospital length of stay [57]. Treatment of POCD When the patient comes to the outpatient clinic with POCD symptoms, what can be done? The literature gives clinicians few hints, as no protocol or consensus guidelines could be found in a literature search. Many clinics go through a differential diagnosis including ongoing postsurgical delirium from comorbid infection, inflammation, metabolic (e.g., vitamin B12, folate, thyroid function), medical, psychiatric, pharmacy and substance abuse problems. This replicates the clinical approach often employed for persons presenting with memory disorders, from subjective cognitive impairment (SCI) and mild cognitive impairment (MCI) to the dementias (Alzheimer's disease, vascular dementia, Lewy body dementia, Parkinson's disease dementia, frontotemporal dementia, etc.). Some clinicians are addressing the complexity of POCD treatment by utilizing the 36-point ReCODE (reversing cognitive decline) protocol, which has been proven to reverse Alzheimer's disease even for persons with two copies of the ApoE4 allele. This treatment protocol has been supported by over 200 peer-reviewed publications [43]. The ReCODE protocol of Dr. Dale Bredesen and colleagues at the Buck Institute for Research on Aging at UCSF addresses most of the complex issues involved in precipitating the memory deficits of POCD: insulin resistance; inflammation and infections; hormone, nutrient and trophic factor optimization; toxins (biological, chemical, physical); and restoration and protection of damaged synapses [43]. The protocol includes changes in lifestyle, diet, sleep patterns, and exercise to reverse cognitive decline. Outcomes are measured by cognitive scales, homocysteine levels, hippocampal volume changes and other biomedical markers. It is speculated that the ReCODE protocol will provide treatment advances for POCD in the future. Conclusions POCD is a debilitating surgical sequela. Understanding its complex physiology and treatment are ongoing endeavors. Clinicians treating the elderly and infant populations need to have a working understanding of the syndrome in order to treat patients, to educate both patients and families, and to proactively address the symptoms of POCD. In addition to continuing interdisciplinary research on POCD, more education about this clinical entity should be included in the teaching of medical students, residents and fellows in most specialties. Moreover, there needs to be more information about POCD in the journals read by pediatricians, family practitioners, internists and geriatricians, to better prepare them when they encounter POCD clinically.
Antioxidants into Nopal (Opuntia ficus-indica), Important Inhibitors of Free Radicals’ Formation Nopal (Opuntia ficus indica) belonging to the Cactacea family has many nutritional benefits attributed to a wide variety of phenolic and flavonoid compounds. Coumaric acid (COA), ferulic acid (FLA), protocatechuic acid (PRA), and gallic acid (GAA) are the phenolic acids (PhAs) present in nopal. In this study, the role of these PhAs in copper-induced oxidative stress was investigated using the density functional theory (DFT). The PhAs form 5 thermodynamically favorable complexes with Cu(II), their conditional Gibbs free energies of reaction (ΔG’, at pH = 7.4, in kcal/mol) are from −23 kcal/mol to −18 kcal/mol. All of them are bi-dentate complexes. The complexes of PRA and GAA are capable of inhibiting the Cu(II) reduction by both O2•− and Asc−, their reactions with the chelated metal are endergonic having rate constants about ~10−5–102 M−1 s−1, PhAs can prevent the formation of hydroxyl free radicals by chelating the copper ions. Once the hydroxyl radicals are formed by Fenton reactions, the complexes of PhAs with Cu(II) can immediately react with them, thus inhibiting the damage that they can cause to molecules of biological interest. The reactions between PhAs-Cu(II) complexes and hydroxyl free radical were estimated to be diffusion-limited (~108 M−1s−1). Thus, these chelates can reduce the harmful effects caused by the most reactive free radical existent immediately after it is formed by Fenton reactions. Introduction Several diseases are caused by oxidative stress (OS) such as cancer, rheumatoid arthritis, pulmonary and renal failures, cardiovascular diseases and neurodegenerative disorders such as Alzheimer's and Parkinson's diseases, multiple sclerosis, and memory loss [1][2][3][4][5][6][7][8][9][10]. OS is caused by an excess in the production of reactive oxygen species (ROS) and the inability of the body to remove such excess. The imbalance between production and consumption of ROS can severely damage cells or molecules of biological interest such as proteins or DNA [11,12]. Therefore, to preserve human health, it is important to find efficient strategies to attenuate the OS-induced molecular damage. In addition to enzymatic defense systems, chemical antioxidants arising from plant extracts containing phenolic and flavonoid compounds are a viable tool to achieve that purpose. Nopal (Opuntia ficus indica) is part of the Cactacea family. This plant grows wildly in arid and semi-arid regions in America and Europe. The tender young part of the cactus stem, or cladode, is frequently consumed as a vegetable in salads, while the nopal fruit is consumed as a fresh fruit. The fruit and cladodes of nopal have nutritional benefits attributed to their antioxidant properties and the pharmacological activity of their Chemical composition depends on the cactus variety, maturation stage and environmental conditions of the place of origin [40,41]. Different chemical compounds have been found in the nopal, which are distributed in different proportions in its tissues. Cactus fruit contains important amounts of ascorbic acid, vitamin E, carotenoids, phenols, flavonoids, betaxanthin, betacyanin, and amino acids [42,43], while, in its flowers, many flavonoids are present, such as kaempferol and quercetin [44]. Cactus peel and seeds are rich in palmitic acid, oleic acid, and linoleic acid [45]. 
The cactus cladodes contain vitamins, various flavonoids, particularly quercetin 3-methyl ether, narcissin, gallic acid, and coumaric acid [46][47][48]. In addition to its free radical scavenging activity, some phenolic compounds can prevent the formation of OS in the presence of redox metals such as iron or copper. These metals participate in Fenton-type reactions generating hydroxyl radicals ( • OH) that are a highly oxidizing species. However, for COA, FLA, PRA and GAA there are not enough data to determine if they could be implicated in preventing the production of • OH radicals. So, the aim of the present investigation was to gain a deeper knowledge on this activity and to provide a qualitative and quantitative analysis of the working mechanisms of COA, FLA, PRA and GAA. Methodology In the present investigation, the Gaussian 09 package of programs [59] was used. The computational details to study the mechanisms for reactions between the complexes and the hydroxyl free radical are in line with the quantum mechanics-based test for the overall free radical scavenging activity (QM-ORSA) protocol [60]. In the present investigation, all the calculations were carried out inside the framework of the density functional theory (DFT) and, in particular, a Thrular functional M05 [61] was used in conjunction with the basis set of Pople 6-311+G(d,p). To simulate the aqueous environment, the continuum solvent method based on density SMD [62] was considered. SMD can be safely used for estimating solvation free energies for any charged or uncharged solutes, with relatively low errors [59]. The local minima and transition states were identified by 0 and 1 imaginary frequencies, respectively. To verify that the imaginary frequency of each transition state corresponds to the expected motion along the reaction path, the intrinsic reaction coordinate (IRC) calculations were carried out. Unrestricted calculations were used for the open-shell systems. The energetic values include thermodynamic corrections at 298.15 K. The conventional transition state theory (TST) [63][64][65] and the 1 M standard state were used to calculate the rate constants, using harmonic vibrational frequencies and Eckart tunneling [66]. For the electron transfer reactions, the Gibbs free energy of activation were calculated using the Marcus theory [67]. In addition, since several of the calculated rate constants (k) are close to the diffusion limit, the apparent rate constant (k app ) cannot be directly obtained from TST calculations. The Collins-Kimball theory is used to that purpose [68], in conjunction with the steady-state Smoluchowski [69] rate constant for an irreversible bimolecular diffusion-controlled reaction, and the Stokes-Einstein [70] approaches for the diffusion coefficient of the reactants. In the complexes, the Cu (II) ions were modeled coordinated to water molecules, because they are expected to be hydrated in the aqueous phase. This model results to be more adequate to represent "free" copper in biological systems than the corresponding naked ions. Four water molecules were chosen for this purpose, since it was previously reported that the most likely configuration of Cu (II) water complexes, in the aqueous phase, corresponds to an almost square planar four-coordinate geometry [71]. Results and Discussion To determine the molar fractions M f of each acid-base species, at physiological pH (pH = 7.4), it is important to take into account the pKa values; in Table 1 are reported those for the investigated PhAs. 
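For readers who wish to follow the kinetic post-processing outlined above, the Python sketch below illustrates, with purely illustrative radii, barrier height and solvent viscosity, how a conventional TST rate constant can be combined with the Collins-Kimball expression, using the steady-state Smoluchowski rate constant and Stokes-Einstein diffusion coefficients, to obtain an apparent rate constant near the diffusion limit. It is a minimal sketch of the general approach, not the QM-ORSA implementation used in the study, and the Eckart tunneling correction is omitted.

```python
import math

# Physical constants (SI units)
KB = 1.380649e-23      # Boltzmann constant, J/K
H  = 6.62607015e-34    # Planck constant, J*s
NA = 6.02214076e23     # Avogadro constant, 1/mol
R  = 8.314462618       # gas constant, J/(mol*K)

def stokes_einstein(radius_m, T=298.15, eta=8.91e-4):
    """Diffusion coefficient (m^2/s) of a spherical solute of the given
    radius in a solvent of viscosity eta (Pa*s); water at 298.15 K assumed."""
    return KB * T / (6.0 * math.pi * eta * radius_m)

def smoluchowski_kd(r_a, r_b, T=298.15, eta=8.91e-4):
    """Steady-state Smoluchowski rate constant k_D (M^-1 s^-1) for an
    irreversible bimolecular encounter of two spherical reactants."""
    d_ab = stokes_einstein(r_a, T, eta) + stokes_einstein(r_b, T, eta)
    r_ab = r_a + r_b                                  # contact distance, m
    return 4.0 * math.pi * r_ab * d_ab * NA * 1.0e3   # m^3/(mol s) -> M^-1 s^-1

def tst_rate(dg_act_kcal, T=298.15):
    """Conventional TST rate constant (1 M standard state) from a Gibbs free
    energy of activation in kcal/mol; tunneling corrections are omitted."""
    return (KB * T / H) * math.exp(-dg_act_kcal * 4184.0 / (R * T))

def collins_kimball(k_act, k_d):
    """Apparent rate constant combining activation and diffusion control."""
    return k_act * k_d / (k_act + k_d)

# Illustrative numbers only: a 5 kcal/mol barrier and ~3.5 / 2.0 Angstrom radii
k_act = tst_rate(5.0)
k_d = smoluchowski_kd(3.5e-10, 2.0e-10)
print(f"k_act = {k_act:.2e}, k_D = {k_d:.2e}, "
      f"k_app = {collins_kimball(k_act, k_d):.2e} M^-1 s^-1")
```

With the illustrative 5 kcal/mol barrier the thermal rate constant already exceeds 10^9 M^-1 s^-1, so the diffusion correction becomes appreciable; this is exactly the regime in which the Collins-Kimball treatment mentioned above is required.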
For each PhA, in Figure S2, are the deprotonation routes. Table 1. Experimental pK a values (with their references) and the molar fractions ( M f ) for the different acid-base species of the investigated PhAs, at pH = 7.4. PhAs pK a1 pK a2 pK a3 pK a4 M f (H n A) Refs. [58,74] At pH = 7.4 the species with more molar fraction M f of the investigated PhAs is the mono-anionic species (H n−1 A − ), but also the neutral and di-anionic species were taken into account in the present investigation. On the other hand, the molar fractions of tri-anions (H n−3 A −3 ) and fourth anions (H n−4 A −4 ) for PRA and GAA are almost negligible. Pro-Oxidant Effects by Cu(II) Reduction A crucial feature for the antioxidant protection, when it takes place in the presence of metal ions, is the reductant capability of the antioxidant, in particular of their deprotonated species. Since the mono-anions of the antioxidants can behave as nucleophilic agents, they can cause the reduction of copper (II) to copper (I) and therefore accelerate the Fenton reactions, thus originating hydroxyl radicals. Such pro-oxidant effects were also explored in this work, considering both PhAs and their corresponding anions (1)- (3). To put the calculated data into perspective, the Cu(II) reductant activity of the investigated PhAs was compared to those of the superoxide radical anion (O 2 •− ), reaction (4), and the ascorbate anion (Asc − ), reaction (5). The superoxide anion radical (O 2 •− ) and ascorbate (Asc − ) are reducing species at a physiological level, even at an experimental level a mixture of copper-ascorbate is used to provoke redox conditions, and the O 2 •− is the main reducing species in Fenton reactions; therefore, they are used in the present investigation to analyze the reductant effect against Cu(II). In Table 2 are collected the data referring to the reduction of free Cu(II). To obtain the rate constants, the molar fractions of PhAs, O 2 •− and Asc − were included at the pH of interest. Taking into account that the reduction of Cu(II) to Cu(I) by O 2 •− experimentally occurs at 8.1 × 10 9 M −1 s −1 [75], it can be evinced that the rate constant calculated for that reaction is found to be only 1.74 times lower than the experimentally measured one. This finding supports us on the kinetic data reported and discussed in this work. At physiological pH, the di-anionic PhAs are predicted to reduce Cu(II) to Cu(I) at significant rates. However, neutral, and mono-anionic acids do not have this pro-oxidant effect since their rate constants are around 10 • OH-Inactivating Ligand Behavior The possibility that PhAs behave as • OH-inactivating ligand (OIL) [76][77][78] in the presence of redox metal ions also was explored. Such a behavior can be exhibited in two different ways [72]: OIL−1: by sequestering metal ions from reductants. OIL−2: by deactivating • OH as they are formed through Fenton-like reactions. In both cases, OIL molecules should act as metal chelating agents. When they behave as OIL−1 agents, the metal (Cu (II)) is protected by the antioxidant in the complex formed. Thus, initially the Fenton reactions that originate hydroxyl radicals are inhibited. Furthermore, the antioxidant behaves like OIL−2, when once the hydroxyl radical is formed by Fenton reactions, the complex of PhAs with Cu(II) immediately reacts with the radical, thus behaving as an immediate target and thus protecting other molecules of great interest such as proteins or even DNA. 
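As an illustration of how the acid-base speciation enters the kinetics, the short Python sketch below computes the molar fractions of the successive deprotonation states of a polyprotic acid at pH 7.4 from its pKa values and then weights species-specific rate constants by those fractions, as described for the PhAs, O2•− and Asc−. The pKa and rate-constant values in the example are placeholders, not the entries of Table 1 or Table 2.

```python
import numpy as np

def speciation(pkas, ph):
    """Molar fractions of the successive deprotonation states
    (HnA, Hn-1A-, ..., fully deprotonated) of a polyprotic acid at a given pH."""
    h = 10.0 ** (-ph)
    kas = 10.0 ** (-np.asarray(pkas, dtype=float))
    terms = [1.0]                       # fully protonated species
    for ka in kas:                      # each deprotonation multiplies by Ka/[H+]
        terms.append(terms[-1] * ka / h)
    terms = np.array(terms)
    return terms / terms.sum()

# Placeholder pKa values for a diprotic phenolic acid (not the Table 1 data)
labels = ["H2A", "HA-", "A2-"]
fractions = speciation([4.4, 9.0], ph=7.4)
for lab, f in zip(labels, fractions):
    print(f"{lab}: {f:.3f}")            # mono-anion dominates at pH 7.4

# pH-dependent apparent rate constant: weight species-specific rate constants
# by their molar fractions (illustrative k values, M^-1 s^-1)
k_species = {"H2A": 1.0e2, "HA-": 5.0e4, "A2-": 2.0e6}
k_app = sum(f * k_species[lab] for lab, f in zip(labels, fractions))
print(f"k_app(pH 7.4) = {k_app:.2e} M^-1 s^-1")
```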
Chelation was the first aspect explored here since it represents a necessary and crucial step in both cases. Cu(II) ions were modeled coordinated to four water molecules, in a near square-planar configuration and Cu(I) with four explicit water molecules too, in a linear two-coordinate configuration. In this case, in fact, Cu(I) is coordinated only to two water molecules, while the other two are retained in the second coordination shell of the metal ion. All the possible chelation sites involving O atoms of the examined PhAs were explored, as described in detail in Table S1. In addition, their roles as mono-dentate and bi-dentate ligands have both been considered. Six different chemical routes leading to Cu(II) chelation were investigated. Table S1 collects all the complexes formed between PhAs and Cu(II) indicating their chelation routes (ChR), chelation sites (ChS), conditional Gibbs free energies of reaction (∆G', at pH = 7.4, in kcal/mol), and Maxwell-Boltzmann distribution (%MB) for the different chelation pathways. The corresponding equilibrium constants and Gibbs energies of reaction would explicitly depend on the pH, because the coupled deprotonation-chelation mechanisms (CDCM), i.e., routes II, IV and VI, simultaneously involve Cu(II) chelation and deprotonation of the reactive site in the ligand. Thus, to take this effect into account, the reported data correspond to pH = 7.4. If the complex corresponds to a neutral species and mono-dentate, the label is, for example, H n COA(1)-Ci, where "i" is the number of the chelate. In addition, if it is from a mono-anionic species and bi-dentate, the label is, for example, H n−1 FLA -(2)-Ci, and so on. Some of these complexes could be formed through two routes; for example, the complex H n COA(1)-C2 formed by the route II, and the complex H n−1 COA -(1)-C1 formed by the route III, are both the same complex (see Table S1). Table S1 reports the values of Cu(II) chelation, by all the PhAs present in the nopal, characterized by at least one significantly exergonic pathway. In addition, for COA, PRA and GAA only one main complex is expected, with contributions larger than 98%, i.e., H n−2 COA 2-(2)-C2, H n−2 PRA 2-(2)-C7 and H n−2 GAA 2-(2)-C7. For FLA, two complexes are the most abundant, H n−2 FLA 2-(2)-C5 and H n−2 FLA 2-(2)-C2, with 81.61% and 18.21%, respectively (Table 3). Therefore, they were the ones that have been explored as antioxidants with the behavior of OIL−1. To this purpose, two different reductants were considered: the superoxide radical anion (O 2 •− ) and the ascorbate anion (Asc − ) (6). The complexes of protocatechuic acid and gallic acid are capable of inhibiting the Cu(II) reduction by both O 2 •− and Asc − , then they act as highly performing OIL−1. (Table 2). On the contrary, coumaric acid and ferulic acid are able to prevent the Cu(II) reduction from Asc − but do not inhibit the reduction by O 2 •− , albeit the reactions with these complexes are slower than with Cu(II)(H 2 O) 4 . These results suggest that, although COA and FLA are able to coordinate copper, it is possible that the chelated metal can be reduced by strong reductants, such as O 2 •− , producing • OH radicals, through Fenton-type reactions. This is not expected to happen if PRA and GAA coordinate copper. The OIL−2 behavior was also analyzed, which involves the reactions between the PhAs ligands in the copper complex and the hydroxyl radical ( • OH). 
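The Maxwell-Boltzmann contributions quoted above (for example the roughly 82%/18% split between the two FLA complexes) follow directly from the relative conditional Gibbs free energies of the competing chelates. A minimal sketch is given below; the free-energy gap is illustrative and chosen only to reproduce a split of that magnitude.

```python
import numpy as np

R_KCAL = 1.987204e-3    # gas constant, kcal/(mol*K)
T = 298.15

def boltzmann_populations(dg_kcal):
    """Maxwell-Boltzmann populations (%) of competing chelation products from
    their conditional Gibbs free energies of reaction (kcal/mol)."""
    dg = np.asarray(dg_kcal, dtype=float)
    w = np.exp(-(dg - dg.min()) / (R_KCAL * T))   # shift by the minimum for stability
    return 100.0 * w / w.sum()

# Two competing complexes separated by ~0.9 kcal/mol (illustrative values)
print(boltzmann_populations([-20.0, -19.1]))      # -> approximately [82. 18.]
```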
Different reaction mechanisms may contribute to such reactions, namely, single-electron transfer (SET), formal hydrogen atom transfer (f -HAT) and radical adduct formation (RAF) (Scheme 1). The complexes with PRA and GAA were included in this part to test if they can scavenge • OH radicals formed, even though they do not contribute to their production. Thermodynamic and kinetic results for the SET mechanism (Path (A) of Scheme 1) are collected in Table 5. The rate constants values for this mechanism are in the range of 10 -7 -10 -9 M -1 s -1 . The complex that reacts faster with • OH is H n−2 FLA -2 -(2)-C5, while the slowest reactions correspond to that of H n−2 FLA -2 -(2)-C2. When copper is coordinated only with water molecules, Cu(II)(4H 2 O), the value of the Gibbs free energy of reaction with • OH through the SET mechanism becomes 73.85 kcal/mol [79]. It implies that the high values in the rate constants (Table 5) are due to the presence of PhAs in the coordination sphere of Cu(II). Table 5. Gibbs free energy of reaction (∆G, kcal/mol), Reorganization energies (λ, kcal/mol), Gibbs free energy of activation (∆G = , kcal/mol), and rate constants (k, M -1 s -1 ) for the SET reactions between PhAs-Cu(II) and • OH. PhAs-Cu(II) ∆G Thermodynamic results for f -HAT mechanisms (Paths (B) and (C) of Scheme 1) are reported in Table 6. All the reactions resulted to be exergonic. COA forms complexes with copper through its fully deprotonated specie. For this reason, the only hydrogens available to be transferred to • OH are those of the water molecules coordinated to copper (Path (C) of Scheme 1). However, the copper complexes with PRA, FLA and GAA as ligands have protonated functional groups that also participate in f -HAT reactions with the • OH radical (Path (B) of Scheme 1). These groups are OCH 3 in the FLA complexes and OH in the PRA and GAA complex. Table 6. Gibbs free energy of reaction (∆G, kcal/mol) and rate constants (k, M -1 s -1 ) for the f -HAT reactions between PhAs-Cu(II) and • OH. PhAs-Cu(II) ∆G k The f -HAT reactions are highly exergonic, with values of Gibbs free energy of reaction ranging from −41.90 to −51.76 kcal/mol, except those originated from the OCH 3 group in the FLA complexes, the values of which are −23.63 and −21.83 kcal/mol. The f -HAT reactions from water molecules binding to copper (Path (C) of Scheme 1) were identified as barrierless ( Figure S3), with rate constants (k) controlled by diffusion ( Table 6). The same f -HAT mechanism was tested for the reaction between the Cu(II)(4H 2 O) complex and the • OH radical. It was found that this reaction is endergonic by 1.6 kcal/mol. Those results evidence that Cu(II) coordinated only with water molecules is not able to act as radical scavenger. For the RAF mechanism (Path (D) of Scheme 1), all the possible sites on the aromatic ring of PhAs numbered in Scheme 1 were considered. The Gibbs free energies of reaction (∆G), and Gibbs free energies of activation (∆G = ) through direct addition, are reported in Table S2, while the transition structures are presented in Schemes S4-S8. The ∆G energy values indicate exergonic reactions. However, the ∆G = values are negative, suggesting that, in the RAF mechanism, a more complex mechanism is involved, probably implicating the formation of a pre-reactive adduct as a crucial step before the addition reaction (Scheme 2). Therefore, in this case, the RAF reactions were evaluated considering the pre-reactive adduct and not the separated species as the RAF reactants (Table 7). 
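For the SET channel, the Gibbs free energy of activation is obtained from the Marcus expression, ΔG≠ = (λ/4)(1 + ΔG/λ)², and then inserted into the TST rate equation. The sketch below shows this step with illustrative ΔG and λ values (not the entries of Table 5) and omits the diffusion correction discussed earlier.

```python
import math

KB, H, R, T = 1.380649e-23, 6.62607015e-34, 8.314462618, 298.15

def marcus_barrier(dg_kcal, lam_kcal):
    """Marcus-theory Gibbs free energy of activation (kcal/mol) for a single
    electron transfer with reaction free energy dG and reorganization energy lambda."""
    return (lam_kcal / 4.0) * (1.0 + dg_kcal / lam_kcal) ** 2

def set_rate(dg_kcal, lam_kcal):
    """SET rate constant from conventional TST with the Marcus barrier
    (the diffusion correction is omitted in this sketch)."""
    dg_act = marcus_barrier(dg_kcal, lam_kcal) * 4184.0   # J/mol
    return (KB * T / H) * math.exp(-dg_act / (R * T))

# Illustrative values only: a mildly exergonic SET with a moderate lambda
print(f"dG_act = {marcus_barrier(-2.0, 30.0):.2f} kcal/mol, "
      f"k_SET ~ {set_rate(-2.0, 30.0):.2e} M^-1 s^-1")
```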
From the results, it emerges that the RAF mechanism does not contribute to the OIL−2 behavior of PhAs, because all the RAF pathways are endergonic processes with high activation barriers.
Scheme 2. General RAF mechanism for reactions between PhAs-Cu(II) and • OH.
Table 7. Gibbs free energy of reaction (∆G, kcal/mol), Gibbs free energy of activation (∆G≠, kcal/mol) and rate constants (k, M−1 s−1) for the RAF mechanism between PhAs-Cu(II) and • OH, starting from the pre-reactive adduct. Solvent = water.
Conclusions
Nopal is a plant containing four phenolic acids: coumaric acid (COA), ferulic acid (FLA), protocatechuic acid (PRA), and gallic acid (GAA). These compounds were shown to be efficient Cu(II) chelators, yielding five complexes with high thermodynamic stability. This implies that they could be useful in treatments to detoxify the body from copper. The dianionic species of the PhAs are capable of reducing Cu(II) to Cu(I) and can therefore promote the formation of • OH radicals through Fenton-type reactions; for this reason, they can be regarded as pro-oxidant species. However, once the chelates are formed, they can inhibit the reduction of the metal by the reductants O 2 •− and Asc − , i.e., they can act as OIL−1 agents, especially the chelates of PRA and GAA, whereas the chelates of COA and FLA inhibit the reduction of Cu(II) by ascorbate and only slightly slow its reduction by the superoxide radical anion (O 2 •− ). All the complexes between Cu(II) and PhAs can deactivate the • OH radical through SET and f -HAT reactions that take place at diffusion-controlled rates. Therefore, they can act as OIL−2 agents, inhibiting the damage caused by the hydroxyl free radical immediately after it is formed by Fenton reactions. The PhAs studied in this investigation are proposed as OIL−1 and OIL−2 agents, and therefore as protectors of biomolecules, since they inhibit the reduction of Cu(II) to Cu(I), and hence the formation of the hydroxyl free radical, and additionally react with the hydroxyl free radical once it is formed.
Conflicts of Interest: The authors declare no conflict of interest.
Journal of NeuroEngineering and Background: Robot-assisted therapy offers a promising approach to neurorehabilitation, particularly for severely to moderately impaired stroke patients. The objective of this study was to investigate the effects of intensive arm training on motor performance in four chronic stroke patients using the robot ARMin II. Background Stroke remains the leading cause of permanent disability. Recent studies estimate that it affects more than 1 million people in the EU [1,2] and more than 0.7 million in the U.S. each year [3]. The major symptom of stroke is severe sensory and motor hemiparesis of the contralesional side of the body [4]. The degree of recovery highly depends on the severity and the location of the lesion [5]. However, only 18% of stroke survivors regain full motor function after six months [6]. Restoration of arm and hand functions is essential [6] to cope with tasks of daily living and regain independence in life. There is evidence that the rehabilitation plateau can be prolonged beyond six months post-stroke and that improvements in motor functions can be achieved even in a chronic stage with appropriate therapy [7,8]. For this to occur, effective therapy must comprise key factors containing repetitive, functional, and task-specific exercises performed with high intensity and duration [9][10][11][12]. Enhancing patients' motivation, cooperation, and satisfaction can reinforce successful therapy [13]. Robotassisted training can provide such key elements for inducing long-term brain plasticity and effective recovery [14][15][16][17][18][19]. Robotic devices can objectively and quantitatively monitor patients' progress -an additional benefit since clinical assessments are often subjective and suffer from reliability issues [20]. Patient-cooperative control algorithms [21,22] can support patients' efforts only as much as needed, thus allowing for intensive robotic intervention. Several clinical studies have been successfully conducted with endeffector based robots [14,16,17,23]. In these robots, the human arm is connected to the robot at a single (distal) limb only. Consequently, endeffector based robots are easy to use but do not allow single joint torque control over large ranges of motion. In general, they provide less guidance and support than exoskeleton robots [24]. In this study we propose using an exoskeleton-type robot for the intervention. Such a type of robot provides superior guidance and permits individual joint torque control [24]. The device used here is called ARMin and has been developed over the last six years [21,25]. A first pilot study with three chronic stroke patients showed significant improvements in motor functions with intensive training using the first prototype ARMin I. Since ARMin I provided therapy only to the shoulder and elbow, there were no improvements in distal arm functions [25]. Consequently, the goal was to develop a robot, which enables a larger variability of different (also more complex and functional) training modalities involving proximal and distal joint axes [26,27]. For this study we used an enhanced prototype, ARMin II, with six independently actuated degrees of freedom (DOF) and one coupled DOF ( Figure 1). The robot trains both proximal joints (horizontal and vertical shoulder rotation, arm inner -outer rotation, and elbow flexionextension) and distal joints (pro -supination of lower arm and wrist flexion -extension). 
Together with an audiovisual display, ARMin II provides a wide variety of training modes with complex exercises and the possibility of performing motivating games. The goal of this study was to investigate the effects of ARMin II training on motor function, strength and use in everyday life. Participants Four patients (three male, one female) met the inclusion criteria and volunteered in the study. The inclusion criteria were i) diagnosis of a single ischemic stroke on the right brain hemisphere with impairment of the left upper extremity and ii) that stroke occurred at least twelve months before study entrance. Study exclusion criteria were 1) pain in the upper limb, so that the study protocol could not be followed, 2) mental illness or insufficient cognitive or language abilities to understand and follow instructions, 3) cardiac pacemaker, and 4) body weight greater than 120 kg. Mechanical structure of the exoskeleton robot ARMin II Figure 1 Mechanical structure of the exoskeleton robot ARMin II. Axis 1: Vertical shoulder rotation, Axis 2: Horizontal shoulder rotation, Axis 3: Internal/external shoulder rotation, Axis 4: Elbow flexion/extension, Axis 5: Pro/supination of the lower arm, Axis 6: Wrist flexion/extension. All four patients received written and verbal information about the study and gave written informed consent. The protocol of the study was approved by the local ethics committee. Procedure To investigate the effects of training with the rehabilitation device ARMin II, four single-case studies with A-B design were applied. Clinical evaluations of the Fugl-Meyer Score of the upper extremity Assessment (FMA), the Wolf Motor Function Test (WMFT), the Catherine Bergego Scale (CBS), and the Maximal Voluntary Torques (MVTs) were administered twice during a baseline period of three weeks (A). A training phase of eight weeks (B) followed. The same evaluation tools were applied every two weeks. Patients 1 and 4 executed three training hours per week (totally 24 hours over entire training period), patients 2 and 3 completed four training hours per week (totally 32 hours). A single training session comprised approximately 15 minutes passive mobilization and approximately 45 minutes active training. Training sessions were always led by the same therapist. Robotic therapy ARMin II [21] allows for complex proximal and distal motions in the functional 3-D workspace of the human arm ( Figure 1). The patient sits in a wheelchair (wheels locked) and the arm is placed into an orthotic shell, which is fixed and connected by three cuffs to the exoskeletal structure of the robot. Position and force sensors support active and passive control modes. Two types of therapy modes were applied: a passive 'teach and repeat' mobilizing mode and a game mode with active training modalities. For the passive therapy, the therapist can carry out a patient-specific mobilization sequence adapted to individual needs and deficits, using the robot's 'teach and repeat' mode. The therapist guides the mobilization ('teach') by moving the patient's arm in the orthotic shell. The trajectory of this guided mobilization is recorded by the robot, so that the same mobilization can be repeated several times ('repeat'). The patient receives visual feedback from an avatar on the screen, that performs the same movements in real-time. 
During the teaching sessions, the robot is controlled by a zero-impedance mode, in which the robot does not add any resistance to the movement, so that the therapist consequently only feels the resistance of the human arm. During the 'repeat' mode, the robot is position-controlled and repeats the motion that has been recorded before. For the active part of the therapy, a ball game and a labyrinth scenario were selected (see Figure 2). In the ball game, the patient moves a virtual handle on the screen. The aim is to catch a ball that is rolling down a virtual ramp by shifting the handle. When a patient is unable to succeed, the robot provides support by directing the handle to the ball (ARMin II in impedance-control mode). To give the patient visual feedback, the color of the handle turns from green to red when robot-support is delivered. Acoustic feedback is provided when a ball is precisely caught. The difficulty level of the ball game can be modified and adjusted to the patient's need by the therapist, i.e. the number of joint axes involved, the starting arm position, the range of motion, the robotic assistance, resistance or opposing force, and speed. In the labyrinth game, a red ball (cursor) moves according to the patient's arm motions. The objective is to direct the ball from the bottom to the top of the labyrinth. The cursor must be moved accurately. If the ball touches the wall too hard, it drops to the bottom and the game restarts. Like the ball game, the labyrinth provides various training modalities by changing the settings, such as the amount of arm weight compensation, vertical support, number of joint axes involved, working space and sensitivity of the wall [28]. Outcome measurements To ensure reproducibility and consistency of the testing procedure, all measurements were executed by the same person and with the same settings for each patient. Evaluations were always completed before training sessions. Subject in the robot ARMin II with labyrinth and ball game scenario Clinical assessments were filmed and later evaluated by an independent "blinded" therapist from "Charité, Median Clinic Berlin, Department Neurological Rehabilitation". The main clinical outcome was the Fugl-Meyer Assessment (FMA) of the upper-limb. This impairment-based test consists of 33 items with a total maximum score of 66. The test records the degree of motor deficits and reflexes, the ability to perform isolated movements at each joint and the influence of abnormal synergies on motion [29]. It shows good quality factors (reliability and validity) [30,31] and it is widely used for clinical and research assessments [32]. The Wolf Motor Function Test (WMFT) is a 15-item instrument to quantify disability and to assess performance of simple and complex movements as well as functional tasks [33]. This test has high interrater reliability, internal consistency, and test-retest reliability [34]. The WMFT is responsive to patients with mild to moderate stroke impairments. However, for severely affected patients it has low sensitivity due to a floor effect (when single test items are too difficult). Severity of neglect was evaluated with the Catherine Bergego Scale (CBS), a test that shows good reliability, validity [35], and sensitivity [36]. To assess sensory functions of the upper limb, the American Spinal Injury Association (ASIA) scoring system was used [37]. The degree of sensation to pinprick (absent = 0, impaired = 1, normal = 2) was determined at the key sensory points of the C4 to T1 dermatomes. 
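Although the ARMin II control laws are not reproduced here, the impedance-control support used in the ball game described above can be pictured as a saturated virtual spring-damper that pulls the handle toward the target only as strongly as a preset force limit allows, so the patient remains actively engaged. The Python sketch below is a conceptual illustration with assumed gains and force limit; it is not the controller implemented on the robot.

```python
import numpy as np

def impedance_support(x, v, x_target, stiffness=120.0, damping=15.0, f_max=30.0):
    """Saturated virtual spring-damper pulling the handle toward the target.
    Gains (N/m, N*s/m) and force limit (N) are assumed for illustration and
    are not the parameters of the ARMin II controller."""
    f = stiffness * (np.asarray(x_target) - np.asarray(x)) - damping * np.asarray(v)
    norm = np.linalg.norm(f)
    if norm > f_max:
        f = f * (f_max / norm)   # cap the assistance so the patient stays active
    return f

# One control step: handle at rest at the origin, ball 10 cm to the right
print(impedance_support(x=[0.0, 0.0], v=[0.0, 0.0], x_target=[0.10, 0.0]))  # ~[12., 0.] N
```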
The single scores were summed. In addition, a questionnaire was designed, referring to ADL-tasks, progress, changes, motivation etc. The patients then had to rate the different questions on a scale from 1 to 10, and furthermore, add a comment, expressing their subjective experiences and impressions. Measurements with ARMin II With the ARMin II robot, maximal voluntary torques (MVTs) were determined for six isometric joint actions including vertical shoulder flexion and extension, horizontal shoulder abduction and adduction, as well as elbow flexion and extension. Patients were seated in a locked wheelchair with the upper body fixed by three belts (two crosswise diagonal torso belts and one belt over the waist) to prevent the torso from assisting the movements. The starting position was always the same. The shoulder was flexed 70° and transversally abducted 20°, the rotation of the upper and lower arm was neutral (0°), and the elbow was flexed 90°. Patients were instructed to generate maximal isometric muscle contractions against the resist-ance of ARMin II for at least two seconds before relaxing. During the effort, verbal encouragement was given in each case. Data analysis From the main baseline measurements -FMA, WMFT, CBS, and MVT -the mean values and standard deviations were calculated. Data recorded during the intervention phases were evaluated by using the least square linear regression model with applied bootstrap resampling technique [38]. For the statistical analysis, the programs SYS-TAT 12 and Matlab 6.1 were used. The significance level p ≤ 0.05 of the slope of the regression line was considered to indicate a statistically significant improvement. Results The results of the FMA are presented in Table 1. From baseline to discharge, patients 1, 2, and 3 increased their scores significantly (p < 0.05). They continued to improve in the FMA at the six-month follow-up (see Figure 3). Patient 1 gained +17.6 points in the FMA (from 21 to 38.6 points), while at the follow-up, six months later, he demonstrated even further impressive progress, without having received additional therapy in the mean time. Overall, patient 1 showed an absolute improvement of +29 points (from 21 to 50 points), particularly due to high recovery in distal arm functions (+21 points). The FMA gains of patients 2 and 3 were +5 points (from 24 to 29 points) and +8 points (from 11 to 19 points). These findings were in line with other investigations about the effects of robot-assisted therapy in chronic stroke patients that demonstrated changes between 3.2 and 6.8 points [14,23,[39][40][41][42][43]. However, one must note that such comparisons have to be done with care since studies often differ in methods and criteria (e.g. intervention time, number of training sessions per week, duration of training sessions, type of stroke, affected brain side, time post-stroke, and severity of lesion). Patient 4 showed an increase of +3 points (from 10 to 13 points) in the FMA; however, this increase was statistically not significant. Typical arm functions that are relevant for activities of daily life can be expressed by the WMFT (Table 2). During the therapy, the WMFT scores of patients 1, 2 and 3 increased by +1.00, +0.5, and +0.86 points, respectively. Patients 2 and 3 slightly diminished at follow-up. Nevertheless, these three patients achieved significant progress (p < 0.05), in contrast to patient 4, who showed no significant improvement. 
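The slope-significance test described in the data-analysis paragraph above can be reproduced generically as follows: fit a least-squares line to the scores over time, resample the (time, score) pairs with replacement, and check whether zero lies outside the bootstrap distribution of slopes. The Python sketch below uses made-up FMA-like scores and is not the SYSTAT/Matlab routine used in the study.

```python
import numpy as np

def bootstrap_slope_pvalue(weeks, scores, n_boot=10000, seed=1):
    """Least-squares slope of score vs. time, with a two-sided bootstrap test of
    whether the slope differs from zero (pairs resampled with replacement)."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(weeks, float), np.asarray(scores, float)
    slope = np.polyfit(x, y, 1)[0]
    n, boot = len(x), np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        while np.ptp(x[idx]) == 0:       # avoid degenerate resamples (all same week)
            idx = rng.integers(0, n, n)
        boot[b] = np.polyfit(x[idx], y[idx], 1)[0]
    p = 2.0 * min(np.mean(boot <= 0.0), np.mean(boot >= 0.0))
    return slope, min(p, 1.0)

# Made-up FMA-like scores over an 8-week training phase (not study data)
slope, p = bootstrap_slope_pvalue([0, 2, 4, 6, 8], [21, 25, 29, 33, 38])
print(f"slope = {slope:.2f} points/week, p = {p:.4f}")   # p <= 0.05 -> significant
```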
However, at the follow-up examination, patient 4 was the only one who further improved in the WMFT (see Figure 4). A questionnaire was used to obtain further information about patient status. The patients reported progress of the affected upper extremity in everyday life activities (e.g. the arm can be lifted higher and better, is more integrated, feels lighter and is less stiff, able to lift glass, fold laundry, use index finger, and control motions better). The grades of patients 1 to 4 regarding the use of their impaired arm during ADLs after the intervention (scale range 1 to 10, no better use = 1, much better use = 10) were 5, 7, 4 and 3, respectively. Furthermore, they described to be more motivated and willing to try to engage their arm in diverse daily activities. An overview of the MVTs, consisting of six different torque measurements, is presented in Table 3. At the follow-up, improvement in muscle strength increased in patient 1, while it slightly diminished in patients 2 and 3. In patient 4, muscle strength returned to the base level at the followup. The demographic data and clinical characteristics of the four patients are summarized in Table 4. None of the patients reported any adverse effects from robot-mediated therapy. In contrast, patients 3 and 4 described reduced hardening and pain of their neck and shoulder muscles. Patients 1, 2 and 3 completed measurements and therapy sessions, except for patient 4, who missed one measurement date and two therapy sessions for reasons that are not related to the study. Discussion In this study, intensive therapy using the robot ARMin II was administered to four chronic stroke patients during eight weeks of training. Patients 1 and 4 received 32 and Clinical FMA scores across evaluation sessions The results of these single-case series underline prior findings with robotic therapy, namely that intensive, repetitive, task-specific, and goal-directed training can significantly improve motor functions in chronic stroke patients -even years post-stroke [14,23,44]. All four patients demonstrated improvements in motor and functional activities, but to various degrees. Overall, they sustained their functional gains at the six-month follow-up or even continued to improve after the end of treatment, indicating potential long-term benefits of robot-assisted therapy. Patient 4 had the lowest motor functions at study entrance and hardly any sensation in the clinical pinprick test. Such neurological deficits can make functional therapy very difficult as feedback functions are not, or hardly, available. This might explain why patient 4 could only profit little from the training with ARMin II. For stroke individuals with little sensory functions, as e.g. patient 4, a sensory intervention is suggested to be a more effective approach [45]. In general, it can be said that stroke patients with severe sensory loss benefited less from treatment than moderately impaired patients [10,46]. The gains in the WMFT likely reflect increased motor performance levels that are suggested to facilitate use of the impaired upper limb in daily activities. These changes seemed to be clinically significant from the patients' perspective. However, the analysis suggested that the impaired upper limb was mainly involved as an assist in bimanual ADLs after intervention. In addition, one must note that not only gains in motor abilities were achieved, but also positive impacts on concentration, neglect, physical capacity, well-being, body balance and posture were noticed. 
Patient 4, for example, diminished twelve points in the CBS, indicating a reduced neglect ( Table 5). The different responses of this pilot research could be explained by patients' heterogeneity, as patients differed in terms of age, time post-stroke, affected brain areas, sensation, muscle tone, etc. (Table 4) -all factors that influence motor relearning. The highest motor recoveries were experienced by patient 1, the youngest and least chronic patient. But note that patient 1 like patient 4 received more intensive intervention than the other two patients. It seems that treatment, with additional therapy hours, is primarily fruitful and beneficial for patients with a certain level of remaining sensory functions and motor abilities. In a comparable study that has been conducted with the robot ARMin I, patients I and II received 24 hours of therapy, while patient III received 40 hours. The improvement of patients I, II and III were +3.1, +3.0, and +4.2 points (initial FMA scores were 14, 26 and 15). The reason for the less distinctive improvements with the former version ARMin I and other previous robots might be due to the limited movement capabilities (only proximal [14,47,48] or distal [49] arm involvement) or the Clinical WMFT scores across evaluation sessions Figure 4 Clinical WMFT scores across evaluation sessions. missing control of proximal joints (endeffector-based robots). In contrast, ARMin II allows for authentic motion sequences, including coordinated interactions between wrist, elbow and shoulder joints. This seems to be an important feature since most everyday activities are composed of inter-joint coordination. ARMin II is an exoskeleton-based robot and, in general, better suited to train ADL-tasks than an endeffector-based robot. This is because, in an exoskeleton robot, the human arm is very well supported and guided by the robot, movements with large ROM can be trained, and the interaction torques that the robot apply to each joint of the human arm can be controlled individually. Complex movements also enable patients to break abnormal synergy patterns that are limiting arm motor func-tions [50][51][52]. As ARMin II provides support against gravity, abnormal synergy patterns in hemiparetic limbs can progressively be learned to be overcome, a matter that was observed in patients 1, 2 and 3. For this therapy issue, the labyrinth scenario seemed to be particularly suitable, as the parameters can be highly varied and adapted individually to the patient's needs. Similar to these findings, Sukal et al. [53] have shown in their research that a reduced range of motion in stroke patients was a result of pathologic synergies during arm lifting. By de-weighting and supporting the impaired arm in training sessions, the effect of gravity could be overcome. Another approach by Ellis et al. [52] demonstrated that abnormal joint torque coupling could be modified by 'multi-DOF progressive resistance training' in severely impaired chronic stroke individuals. Patients gained strength and simultaneously improved in multi-DOF joint torque combinations. The same relationship could also be observed in the patients that participated in this pilot study. An increase in muscle strength was associated with a larger active range of motion as well as improved muscle coordination. Patients dissociated from synergistic co-activation in the FMA and WMFT. A distinct finding of the ARMin II study was that posttreatment further progress was achieved in the FMA. 
These continuing improvements might be due to therapy of distal arm functions since neither the ARMin I study showed such additional effects at follow-up nor any other proximal robotic study in literature that the authors are aware of. Krebs et al. [41] found that training of the more distal limb segments led to twice as much carryover effect to the proximal segments than vice-versa. Moreover, they observed that improvement in more distal segments continued significantly even without further training for that particular limb segment. This finding supports our assumption that the patients were better able to use their arm in daily activities after robotic treatment, allowing them to further improve at the six-month follow up. With eight weeks of robot training, patient 1 enhanced his performance to such an extent (from initially 21 points to 50 points in the follow-up in the FMA) that he reached a higher functional state -opening up new therapy approaches like constrained induced movement therapy CIMT. In the present study the two game scenarios (labyrinth and ball game), were particularly suitable to create an enjoyable, efficient and motivating intervention. Patients' interests could be incorporated into therapy by choosing different game settings and levels with miscellaneous arm positions and various joint axes. Nevertheless, complementary therapy modes focusing on specific ADL-tasks and/or virtual reality scenarios might additionally help to facilitate a transfer to ADLs [26,27]. An additional hand module for opening and closing hand function and/or single finger functions would enable more specific and individualized therapy of hand and fingers, allowing for the implementation of more authentic ADL-tasks. So far, treatment with robotic devices [17,19,54] shows no consistent improvement in functional abilities of daily activities. Although high functional improvements and a transfer to ADLs were achieved in this investigation, these findings are limited to single cases. The pilot study included only four, rather heterogeneous chronic stroke patients. Despite the fact that functional stability could be verified in all patients at baseline, no separate control group was used. All patients continued with their conventional outpatient therapies (maximum 1 hour of physical and occupational therapy per week, focusing on the gait and posture only). However, the patients were encouraged to continue their standard therapies on a constant level, so that possible improvements due to this small amount of conventional therapy can be excluded from the study. Overall, these encouraging results definitely justify starting a large randomized clinical trial. Conclusion This paper presents preliminary results of a pilot study with four heterogeneous, chronic stroke patients using the robotic device ARMin II. The findings support the assumption that robot therapy can significantly influence therapy outcomes. A robot that includes proximal and distal joints, such as ARMin II, allows for a wide field of specific training modalities with natural and complex motions. In this study it was noted that a subgroup of patients could achieve a transfer to ADLs after performing the training phase. This finding is of great importance since treatment with robotic devices to date did not result in consistent improvement in activities of daily living. A transfer to everyday life should be indeed a central intention of rehabilitation as it brings independence and improve quality of life. 
Competing interests Tobias Nef and Robert Riener are inventors of patents describing the ARMin invention (WO2006058442, EP1827349, EP07020795). The owners of the patents are the ETH Zurich and the University of Zurich. Authors' contributions PS developed the study design, performed subject recruitment, data acquisition and statistical analysis. She was the primary composer of this manuscript. TN provided feedback and expert guidance throughout this study and was also involved in data analysis. TN and RR designed and built the robotic device ARMin II used in this work. All four authors contributed significantly to the intellectual content of the manuscript and have approved the final version to be published.
Bacteriological and physicochemical qualities of traditionally dry-salted Pebbly fish ( Alestes baremoze ) sold in different markets of West Nile Region , Uganda The present study aimed at estimating the microbiological and chemical characteristics of traditionally dry-salted fish product, Alestes baremoze. A total of 40 random dry fish samples were collected from Arua, Nebbi, Packwach and Panyimur markets. Moisture content, pH, crude protein, crude fat and sodium chloride were analysed to determine chemical quality while Escherichia coli, fecal streptococci, Staphylococcus aureus, Salmonella, Vibrio parahaemolyticus, Bacillus cereus and Pseudomonas spp. were determined to estimate the microbial quality. The moisture content of dry-salted fish collected from different markets was in the range of 37 to 41%. Mean values of sodium chloride obtained in the fish muscle were in the range of 13 to 14% and significantly differed across fish markets. Results from microbial analysis expressed as colony-forming units per gram of sample indicated that S. aureus was the most dominant bacteria identified in dry-salted fish sold in all markets with Nebbi market having the highest counts (9.4×10 6 ), Panyimur (2.2×10 6 ), Packwach (2.3×10 5 ) and Arua (9.6×10 4 ). Salmonella was absent in fish samples collected from three markets of Arua, Packwach and Panyimur apart from Nebbi market. E. coli counts were found to be < 10 1 and fecal streptococci counts were relatively high in fish from Panyimur market (1.1×10 3 ). There was presence of B. cereus in all the samples ranging from 8×10 1 INTRODUCTION Dry salting has been traditionally used as a method of fish preservation, since it lowers the water activity of fish flesh (Horner, 1997).The salt mainly contains chloride ions that are toxic to some microorganisms (Leroi et al., 2000;Goulas and Kontominas, 2005).This technique is hence used to preserve fish from spoilage owing to tissue autolysis and microbial action (Chaijan, 2011).Bacterial spoilage is for example characterized by softening of the muscle tissue, which can however be prevented by salt, because it forms a more membranous surface that inhibits the growth of microorganisms (Horner, 1997;Rorvik, 2000).Although salting reduces the rate of autolysis, it does not completely stop enzymatic action that increases with increasing temperature. The presence of foodborne pathogens in a fish product is a function of the harvest environment, sanitary conditions, and practices associated with equipment and personnel in the processing environment (FDA, 2001).The handling of fish products during processing involves a risk of contamination by pathogenic bacteria such as Vibrio parahaemolyticus and Staphylococcus aureus causing foodborne human intoxication (Huss et al., 1998;Shena and Sanjeev, 2007).There is substantial evidence that fish and seafood are high on the list of foods associated with outbreaks of food borne diseases around the world (Kaysner and DePaola, 2000;Huss et al., 2003).The safety of foodstuffs should be ensured through preventive approaches, such as implementation of good hygiene practices and application of procedures based on hazard analysis and critical control point (HACCP) principles. Alestes baremoze commonly known as Angara in Uganda is highly marketable and valued fish in Northern Uganda, South Sudan, Sudan and in the Democratic Republic of Congo (Kasozi et al., 2014).In Sudan, A. 
baremoze is normally prepared by wet salting.After several methods of salting, fermentation and storage, the final product is called fassiekh (Yousif, 1989;Adam and Mohammed 2012).Angara is prepared by dry salting which involves stacking the fish in salt and the formed brine is allowed to drain away while allowing it to dry under natural sunlight for two to three days.Many consumers, especially in the West Nile region appreciate the taste, special flavour and texture characteristics of this fish.Salting is not only a method to prolong shelf life, but a method to produce fish products that meet demand of consumers.Almost 90% of the total catches of Angara around Lake Albert are dry-salted.However, the available traditional fish processing practices expose the fish to different kinds of microbial and chemical degradation.The current wide spread practice of drying the fish directly on the ground and use of old fishing nets results in microbial contaminated fish products.There are currently no published work on the microbiological changes during production and storage of salted Angara yet the quality of salted and sun dried fishes are adversely affected by the occurrence of microorganisms.The need for determination of microbiological quality of dry-salted fish products is important to prevent risk of bacterial infection to the consumers.This study therefore evaluates the bacteriological and physicochemical qualities of dry-salted Angara sold in different markets to serve as a guide to consumers and regulatory bodies. Sample collection The study was conducted in four selected markets in West Nile region of Uganda.The process of dry salting (Figure 1) is normally carried out at Panyimur landing site and it's from this site that the dry-salted fish products are obtained and transported to other markets within the region.A total of 40 dry-salted fish samples were purchased from the markets of Arua, Nebbi, Packwach where they had been on stall ready for sale for five days and from Panyimur market where they had been dried for one day (Figure 2).At least 10 samples were collected from each market.These were labeled, sealed in airtight polythene bags and later transported to the laboratory for analysis. Physicochemical analysis Fish samples were analysed to determine the moisture content, fat, protein, sodium chloride and pH.Moisture content was determined by oven drying of 5 g of fish fillet at 105ºC until a constant weight was obtained (AOAC, 1995).Measurement of salt content was carried out using the Volhard method according to AOAC (1985).Crude protein was determined by the Kjeldahl method using sulphuric acid for sample digestion.Crude fat was obtained by exhaustively extracting 2.0 g of each sample in a Soxhlet apparatus using petroleum ether (b.p. 40 -60°C) as the extractant (AOAC, 2000).pH was determined after homogenizing 10 g of fish sample into 100 ml of distilled water.The pH of filtrate was then measured using pH meter (HI 8014, USA). Enumeration and isolation of bacteria Serial dilutions from each sample were prepared before subsequent culturing according to the microbiological techniques of AOAC (1995).The total viable count of Angara samples were carried out using plate count agar according to the standard methods of AOAC (1995).The microbiological parameters were conducted in duplicate, the means and standard deviations were also calculated.Plate count number was presented as colony-forming units per gram of sample (cfu/g). 
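The conversion from raw plate counts to cfu/g implied above follows the usual dilution arithmetic: divide the mean duplicate colony count by the volume plated and by the overall dilution of the sample at plating (including the initial 25 g in 225 ml homogenate). A minimal sketch with illustrative counts is shown below.

```python
import statistics

def cfu_per_gram(colony_counts, total_dilution, volume_plated_ml):
    """cfu/g of fish from duplicate plate counts.
    total_dilution is the overall dilution of the sample on the plate,
    including the initial 1:10 homogenate (25 g in 225 ml), e.g. 1e-3.
    volume_plated_ml is the volume poured or spread per plate (0.1, 0.5 or 1 ml)."""
    return statistics.mean(colony_counts) / (volume_plated_ml * total_dilution)

# Illustrative duplicate plates: 47 and 53 colonies from 0.5 ml of the 10^-3 dilution
print(f"{cfu_per_gram([47, 53], 1e-3, 0.5):.1e} cfu/g")   # -> 1.0e+05 cfu/g
```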
Pseudomonas Pseudomonas was determined by spread plate method where 0.5 ml of decimal dilution was spread on the surface of Pseudomonas CN Selective Agar and incubated at 37°C for 48 h.V. parahaemolyticus V. parahaemolyticus was detected according to the General guidance for the detection of V. parahaemolyticus (ISO 8914:1990).Twenty five grams of each sample were weighed into sterile stomacher bag containing 225 ml alkaline peptone water and then blended for 60 s.Serial dilutions were prepared to get 10 2 and10 3 diluents, and 1 ml aliquot of samples were transferred into 3% NaCl dilution tubes, and incubated at 35˚C for 24 h.The turbid tubes were streaked on Thiosulfate Citrate Bile Salt Sucrose Agar (TCBS) plates and incubated at 37˚C for 24 h.Distinct colonies with blue green color were presumed as V. parahaemolyticus and yellow colonies were presumed as Vibrio cholera.To facilitate identification of suspect Vibrio isolates, the isolated colonies were further identified by API 20E system. S. aureus S. aureus was determined according to the method for the enumeration of coagulase-positive staphylococci (S. aureus and other species) using Baird-Parker agar medium (ISO 6888-1: 1999).Twenty five grams of each sample were weighed into stomacher bag containing 225 ml peptone water and then blended for 60 s.The resultant stock solution was then serially diluted and 0.5 ml diluents were spread on Baird-Parker agar plate.All inoculated plates were dried and incubated at 37˚C for 48 h.Then clear zone with typical gray-black colonies was taken as presumptive evidence of S. aureus.Confirmation of Staphylococcus spp. was done using Staphylococcus latex test. Salmonella spp. Salmonella spp. was determined according to the horizontal method for the enumeration of Salmonella spp.(ISO 6579: 2002).Pre-enrichment was conducted with 25 g of sample diluted in 225 ml sterile buffered peptone water incubated at 37°C for 24 h.Secondary selective enrichment was performed in Rappaport-Vassiliadis peptone broth (41°C for 24 h) and Muller-Kaufmann tetrathionate broth with Novobiocin (37°C for 24 h), and streaking on Xylose Lysine Desoxycholate (XLD) agar and incubated at 37°C for 24 h.Typical Salmonella spp.exhibited pink colonies with black centers. Escherichia coli E. coli was determined by pour plate method using Rapid' E.coli 2 Agar (AFNOR BRD 07/1 -07/93).Using a sterile pipette, 1 ml of each decimal dilution was inoculated to a sterile Petri dish and then 15 ml of Rapid Ecoli Agar was dispensed, mixed thoroughly and after setting, a thin overlay of 5 ml of Rapid Ecoli agar and later incubated at 44ºC for 24 h.Plates with purple colonies were counted and confirmed with Kovac's reagent and all positive colonies showed a purple layer. Bacillus cereus B. cereus was determined according horizontal method for the enumeration of presumptive B. cereus (ISO 7932:2004).Twenty five grams of each sample were homogenized in 225 ml sterile peptone water for 60 s.Serial dilution was carried out and 0.1 ml diluents were spread on B. cereus Selective Agar.The inoculated plates were then incubated at 30˚C for 24 h; large pink colonies with egg yolk precipitate were presumed as B. cereus. Confirmation was done with haemolysis test. 
Fecal streptococci Fecal streptococci was determined by spread plate method where 25 g of fish sample was taken aseptically and homogenized with 225 ml sterile peptone water for 60 s.0.5 ml of each of decimal dilutions of the samples was spread on Typhon Soya Broth Agar and overlay with Kanamycin Esculin Azide Agar added and later incubated at 42°C for 24 h.The characteristic black colonies were counted after incubation confirmatory tests. Statistical analysis Data was analysed using Graph pad version 6 statistical software. Comparisons between means for physicochemical parameters were carried out using a One Way Analysis of Variance (ANOVA) and results with p values < 0.05 were considered statistically significant. Comparisons between mean values of physicochemical parameters across fish markets were done using Tukey`s multiple comparison test.Data are represented as means ± standard deviation.Results of physicochemical analysis and mean microbial counts of the drysalted fish samples were compared with the set standards established by UNBS. Chemical analysis Results from the chemical analysis (Table 1) revealed that moisture content significantly varied across fish markets (F 3, 12 = 4.0, p = 0.0014).The results showed that moisture content was significantly (p> 0.05) higher in fish collected from Panyimur market (41.6±0.47%) as compared to Nebbi (36.0±0.83%) and Arua (37.0±2.97%)fish markets.The relative higher moisture content in fish samples from Panyimur might be due to a shorter storage period since it is from this site that fish is distributed to other markets.Findings of this study show that values of37 to 41% of dry-salted fish collected from different markets are in accordance to 35 to 40% standard range for moisture content of dry-salted fish and fish products(UNBS, 2012).Accordingly, moisture content of A. baremoze flesh without any processing ranged between 72 and 75% (Kasozi et al., 2014).Therefore dry salting method employed by fisher folk results in considerable loss of water due to heavy uptake of salt.The moisture content is an indicator of the susceptibility of a product to undergo microbial spoilage.It has a potential effect on the chemical reaction rate and microbial growth rate of the food product.Since moisture content is an indicator of the susceptibility of food products to undergo microbial and chemical spoilage (Horner, 1997;Chaijan, 2011;Goulas and Kontominas, 2005), traditional dry-salting of fish can result in storage stability. The changes in the pH of dry-salted A. baremoze significantly varied across fish markets (F 3, 12 = 1.5, p < 0.0001).Fish from Arua were associated with significantly lower pH as compared to other fish markets (6.3±0.01).This could be attributed to relatively higher sodium chloride (14.9±0.01%)found in samples collected from this market.Goulas and Kontominas (2005) reported that salt had a highly significant linear decreasing effect on the pH of chub mackerel after day one of storage.Similarly, Chaijan (2011) reported a rapid decrease in the pH of dry salted Oreochromis niloticus muscle in the first 10 min of salting.The pH decrease in fish flesh by the addition of salt is explained by the increase of the ionic strength of the solution inside of the cells (Goulas and Kontominas, 2005). 
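The one-way ANOVA with Tukey's multiple comparison test described in the statistical-analysis paragraph above can be sketched as follows. The replicate values are placeholders chosen only to mimic the pattern of Table 1, not the measured data, and the sketch uses SciPy and statsmodels rather than the GraphPad software used in the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder moisture-content replicates (%) per market (illustrative only)
data = {
    "Arua":     [37.2, 36.5, 38.1, 36.2],
    "Nebbi":    [35.8, 36.4, 35.5, 36.3],
    "Packwach": [38.9, 39.4, 38.6, 39.1],
    "Panyimur": [41.2, 41.9, 41.5, 41.8],
}

# One-way ANOVA across the four markets (significance level 0.05)
f_stat, p_val = stats.f_oneway(*data.values())
print(f"F = {f_stat:.1f}, p = {p_val:.4g}")

# Tukey's multiple comparison test to identify which market pairs differ
values = np.concatenate([np.asarray(v, float) for v in data.values()])
groups = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```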
The fat content significantly varied across fish markets (F3,12 = 0.9, p < 0.0001). The lowest fat content, reported in fish samples from Arua market (12.9±0.66%), might be due to the relatively higher sodium chloride (14.9±0.01%), since increased salt content induces lipid oxidation in muscle tissues, which has been reported to accelerate progressively during dry salting of Oreochromis niloticus (Chaijan, 2011). The protein content of processed fish significantly (F3,12 = 0.1, p = 0.0002) differed across fish markets, ranging from 31 to 35% (Table 1). Comparison across fish markets revealed that protein content was significantly higher (35.1±0.88%) in fish from Panyimur as compared to other fish markets. Salting of fish is usually accompanied by protein losses, as water is drawn out and brine is formed, with some protein dissolved into the brine (Chaijan, 2011). Since fish was only stored for one day at Panyimur, this might explain the relatively higher protein levels compared to other markets. Mean values of sodium chloride obtained in the fish muscle were in the range of 13 to 14% and significantly (F3,12 = 0.8, p = 0.0003) differed across fish markets. Comparisons across fish markets revealed significantly higher (p < 0.05) sodium chloride levels in fish from Arua market (14.9±0.01%). Although salting effectively prevents the growth of both spoilage and pathogenic bacteria (Leroi et al., 2000), it has been reported that salt content in fish muscle enhances oxidation of the highly unsaturated lipids. Many of the fresh-fish-spoiling bacteria are quite active in salt concentrations up to 6% (Horner, 1997). Above 6 to 8%, they either die or stop growing. As the salt concentration is increased beyond 6%, bacteria of another group, consisting of a much smaller number of species, are still able to grow and spoil the fish. However, the "salt-loving" halophiles can still grow best in salt concentrations that range from 12 or 13% to saturated brine. Therefore, certain halophilic micro-organisms can multiply under the conditions of dry-salting and can also spoil the product. Bacteriological quality The quality of salted and sun-dried fishes is adversely affected by microbial contamination. Determination of the microbiological quality of processed dried fish products is very important for protecting consumers' health (Lilabati et al., 1999). The presence of potentially pathogenic bacteria in dried fishes is critical with regard to the safety and quality of seafood. The acceptable microbiological limits set by UNBS for dried and salt-dried fish are indicated in Table 2, and these were compared with the results from the total plate counts of Angara from different markets. Our study showed that S. aureus was the most dominant microorganism identified in dry-salted fish sold in all markets of the West Nile region (Table 2). S. aureus does not appear as a part of the natural microflora of newly caught marine and cultivated fish (Herrero et al., 2003). Therefore, the presence of S. aureus is an indicator of poor hygiene and sanitary practices during processing and storage. In this study, counts of S. aureus were above the limit of 2×10³ cfu/10 g recommended by the Uganda National Bureau of Standards (2012). A lower bacterial load in fishery products might not pose a serious risk; however, food poisoning may occur if the product is handled carelessly, resulting in high multiplication (>1×10⁵ cfu/g) (Varnam and Evans, 1991; Vishwanath et al., 1998). Although E.
coli and fecal coliform bacteria can be found in unpolluted warm tropical waters (Huss, 1993;Hazen 1988;Fujioka et al., 1988), they are particularly useful as indicators of fecal contamination and poor handling of seafood.According to UNBS (2012) absence of E. coli has been recommended as an upper limit for a very good quality dry salted fish.In this study, E. coli counts were found to be <10 1 cfu/g and fecal streptococci counts were relatively high (Table 2).Similar results have also been reported by Colakoglu et al. (2006) for fecal streptococci counts between <10 1 and 10 3 cfu/g in the fish from wholesale and between <10 1 and 10 5 cfu/g in retail markets.It is reported that unclean boat deck, utensils in the boat, polluted water can certainly add the fecal bacteria load (Sugumar, et al., 1995) and this might explain the high fecal streptococci counts of 1.1×10 3 cfu/g (Table 2) of dry-salted fish at Panyimur market situated close to a landing site where fisher folk uses the lake water during the salting process.Salmonella is highly pathogenic and this is the major reason for isolation of such bacteria from sample fishes.Salmonella was absent in three markets of Arua, Packwach and Panyimur apart from Nebbi market (Table 2).Incidence of Salmonella in the sample of fish from this market may be attributed to external contamination and poor handling at ambient temperature. V. parahaemolyticus is an indigenous bacterium in the marine environment and can also grow at 1 to 8% salt concentrations (Huss, 1993).It occurs in a variety of fish and shellfish, including clams, shrimp, lobster, crayfish, scallops and crabs, as well as in oysters (Kaysner and DePaola, 2000) It is very heat sensitive and easily destroyed by cooking (Huss et al., 2003).B. cereus strains are widely distributed in the environment and their spores are resistant to drying and can easily be spread with dust (Huss et al., 2003).There was presence of low density of B. cereus in all the samples ranging from 8×10 1 B. cereus (cfu/g) in Arua market to <20 cfu/g in Nebbi and Panyimur (Table 2).A small number of Bacillus spp. in foods is not considered significant (Beumer, 2001). Many species of Pseudomonas spp.have a psychrophilic nature and are regarded as part of the natural flora of fish.The species can form aldehydes, ketones, esters and sulphides following food spoilage, causing odours described as fruity and rotten (Tryfinopoulo et al., 2002).The isolation of Pseudomonas spp from the collected fish samples is of high importance because this bacterium plays considerable role as a potential pathogenic bacteria for human and as an indicator of food spoilage.According to UNBS (2102), Pseudomonas spp.should be absent in dried and salted dried fish however this study reveals that Pseudomonas was detected in all samples, < 20 cfu/g in three markets of Arua, Nebbi and Packwach and 12×10 1 cfu/g for Panyimur (Table 2). Conclusion Bacteriological quality of most Angara samples analyzed in this study did not meet the standards established by the Uganda National Bureau of Standards (UNBS) for dried and dry-salted fish.The study pointed out that Angara obtained from the markets was contaminated with substantial number of S. 
aureus. Salmonella and fecal streptococci were also detected in fish from Nebbi and Panyimur markets, respectively. The substantial number of these microorganisms in Angara suggests poor personal hygiene, particularly among fish handlers, and improper storage. Hence, control measures such as the use of good-quality raw material, hygienic handling practices, potable water, good-quality packaging material and a hygienic processing place may be considered to improve the microbial quality of the dried fish product. Proper cooking procedures should be emphasized to eliminate or reduce the microorganisms to an acceptable level. Figure 1. Traditional method for the production of salted Alestes baremoze (Angara) at the landing site of Panyimur. Figure 2. Angara fish samples: A: Gutted Angara samples on the ground at the landing site; B: Sun drying of split fish on raised platforms and C: Angara fish on stalls in different markets. Table 1. Physicochemical analysis of dry-salted A. baremoze samples collected from different markets after a storage period of five days. *, Significant differences across markets; fish samples at Panyimur market had been stored for one day. Table 2. Total viable bacterial count of dry-salted Pebbly fish (Alestes baremoze) samples collected from different markets after a storage period of five days.
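The comparison of counts against the UNBS limits referred to throughout the discussion can be expressed as a simple screening routine. In the sketch below, only the limits explicitly stated in the text are encoded (the S. aureus limit of 2×10³ cfu/10 g and the required absence of E. coli and Pseudomonas spp.); the example sample counts are hypothetical, not values from Table 2.

# Limits taken from the discussion above where explicitly stated; any organism
# not listed here would need a placeholder limit and is therefore omitted.
UNBS_LIMITS_CFU = {
    "S. aureus (per 10 g)": 2e3,    # upper limit
    "E. coli (per g)": 0,           # required absent
    "Pseudomonas spp. (per g)": 0,  # required absent
}

def assess(counts: dict) -> dict:
    """Flag each organism as within or above the corresponding UNBS limit."""
    return {
        organism: ("within limit" if counts.get(organism, 0) <= limit else "above limit")
        for organism, limit in UNBS_LIMITS_CFU.items()
    }

if __name__ == "__main__":
    # Hypothetical counts for one market sample.
    sample = {"S. aureus (per 10 g)": 8.0e3, "E. coli (per g)": 0, "Pseudomonas spp. (per g)": 15}
    for organism, verdict in assess(sample).items():
        print(f"{organism:28s} {verdict}")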
v3-fos-license
2018-12-07T12:44:42.872Z
2003-01-01T00:00:00.000
54979289
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=1450-81090302179Z", "pdf_hash": "1c07cbb1c23d937ff30ba538cf32f6d603dd655e", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41189", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "sha1": "1c07cbb1c23d937ff30ba538cf32f6d603dd655e", "year": 2003 }
pes2o/s2orc
SENSORY PROPERTIES OF SMOKED PORK LOIN AS DETERMINED BY APPLIED ADDITIVES Authors examined the effects of SUPRO 595 and GRINSTEDTM Carrageenan CC 250 addition on selected sensory properties of smoked pork loin. Sensory evaluation of products included an estimate of cut appearance, texture, odor, taste and color. Using photoelectric tristimulus colorimeter (the MOM Color D) determination of color characteristics in pork loin samples was carried out. The values for psychometric lightness (L*), psychometric hue redness (a*) and psychometric chrome yellowness (b*) were expressed based on the CIELAB system. Tenderness and firmness of samples were instrumentally measured on an "INSTRON" – 4301, under the given working conditions. The variants of smoked pork loins with soy isolate were better evaluated compared to the variants with carrageenan. The results of instrumental determination of color characteristics of the products showed that between the variants with soy isolate and the variants with carrageen there existed insignificant deviation in the values of psychometric lightness (L*), in the presence of redness (a*) and in the presence of yellowness (a*). Instrumentally measured tenderness and firmness showed that the samples with carrageen were characterized by somewhat greater tenderness and firmness compared to the samples with soy isolate. I n t r o d u c t i o n Intensive development of technology, especially in the production of semidurable meat products, is mostly based on curing processes and additives utilization.Because of the tendency to reduce the time of processing as well as losses during thermal treatment, mechanical treatment during curing becomes necessary (Petrović and Šibalić, 1980). That is why the composition and quantity of some components of brine, must be adapted to the regime and duration of mechanical treatment.Brine must contain ingredients, which influence speed and flow of the process as well as sensory properties of end product (Modić et al, 1980). Injecting a relatively large amount of brine (25-40%), shorter curing process, and damages of muscle stroma during mechanical treatment can cause less desirable sensory properties of the products (Stamenković, 1985).Usual problems occuring in the production of semi durable meat products, especially smoked pork loin, are wet cross section and less desirable appearance and texture (flaky, tender or plastic).Defects are, as a rule, more frequent, when the water content in the product is higher as a result of injecting a large amount of brine. To reduce the losses during thermal treatment and to preserve satisfactory sensory quality of the products, it is usual to add soy isolate as an ingredient of brine.The uses of soy isolate contribute to better water binding and texture properties (Vomberger Blanka et al, 1988). In products produced from musculature with very delicate connective tissue stroma, such as pork loin it is, sometimes difficult to reach a desired connection of muscle structure and to obtain the product with desirable structure and cut appearance only with usual additives, such as salt, polyphosphates and protein preparations, but also by using hydrocolloids (Stamenković, 1994;Lisicin et al, 1997). Hydrocolloids are high-molecular weight biopolymers that belong to two different classes: polysaccharides and proteins.They have the ability to thicken or gel aqueous systems.These properties are the basis for their use in food and for other applications ( Pedersen, 1980). 
Polysaccharides such as carrageenans, alginates, galactomanans and xantans are polymers of plant origin made up of at least two different monosaccharides.Physical properties of polysaccharides (solubility, thickening, stabilizing or gelling properties) depend on the size and structure of the macromolecules, their shape, flexibility, capacity to self-associate, presence of sulfates, methyl ethers, acetyl esters or pyruvate groups ( Wallingford and Labuzza, 1983). Carrageenans are obtained from red and brown seaweeds.They are made up of sulfated galactose units in D form.They contain little or no pyruvic or methyl groups and their sulfate contents are between 15% and 40%.The carrageenan family is extremely diverse, and it can be broadly classified into 4 main types, split into two groups: gelling carrageenans and thickening carrageenans (Ray and Labuzza, 1981;Lewicki, 1978). In gelling carrageenans, alternation of 4C1 and 1C4 allows carrageenans chains to form a double helix.This structure is stabilized by hydrogen bonds, which are easily disrupted by heating, so carrageenan gels are thermally reversible. Due to their excellent water binding properties and ability to form very firm gel at low concentrations, carrageenans have been recently successfully used in the production of canned ham and similar products (S t a m e n k o v i ć, 1994; L i s i c i n et al, 1997). Considering high hydration demands in the production of semi-dry meat products, and knowing that muscle structure could be injured during mechanical treatment, which could lead to texture and appearance defects of products, we have decided to investigate the possibilities of using carrageenan in the production of smoked pork loin instead of soy isolate. Brine made of nitrite -14%, polyphosphate preparation -1.2%, dextrose -2% and Na-ascorbate -0.2%, was added in the amount of 25% regarding muscle mass.In 60% of prepared brine, carrageenan GRINSTED TM CC 250 (0.25%, 0.40% and 0.55%), or soy isolate SUPRO 595 (0.75%, 1.0% and 1.25%), related to the raw material mass, was dissolved, and that amount was injected with twoneedle hand injector.The rest of the brine was added in tumbler.Mechanical treatment lasted 6 hours, and thermal treatment was accomplished by usual method and lasted 5 hours.After thermal treatment and smoking, pork loins were held for 12 hours at 5-10 o C. Sensory evaluation of products was performed by 6 assessors applying the five -point scoring system (Joksimović, 1977). Using photoelectric tristimulus colorimeter (the MOM Color -D) determination of color characteristics in pork loin samples was carried out .The values for psychometric lightness (L*), psychometric hue -redness (a*) and psychometric chrome -yellowness (b*) were expressed based on the CIELAB, 1976 system ( Robertson, 1977). Tenderness and firmness of samples were instrumentally measured on an "INSTRON" -4301 under the given working conditions. Results and Discussion Regarding the results obtained by sensory evaluation of smoked pork loin (Table 1), it can be seen that the cut appearance of all tested concentrations was almost identical.The cross-section of products showed consistent texture, with light gray colored fields of gelled carrageen , which resulted in somewhat lower scores compared to samples with soy isolate.Samples with soy isolate had smaller damage of muscle structure.In all examined samples, except one with soy isolate (0.75%), water was adequately fit, and product cross-section wasn't wet. 
The samples with carrageenan compared to the samples with soy isolate with respect to texture were somewhat firmer and harder and less juicy, while the sample made with 0.55% of carrageenan was extremely firm and rubber-like while chewing.The texture of products prepared with 1% and 1.25% of soy isolate were very good and scored 4.7 and 4.0 points.The odor of all the samples was typical for the specified product and very agreeable but more expressive in products with soy isolate.The taste of products was also very agreeable, and it was observed that increase in additive concentration improved the taste.The products made with 0.55% carrageenan and 1.25% soy isolate appeared to be the best. The color of all products was very good and it scored above 4 points, except for the sample made with the lowest concentration of both additives.Therefore, the increase in additive concentration improved the color of the product. The results of instrumental determination of color characteristics of the products are shown in Table 2.The values for psychometric lightness (L*) were very alike in samples produced with soy isolate and carrageenan, respectively, so that it can be concluded that both additives had very similar effect on this color characteristic.A slight increase of the product lightness caused by increase in the additive concentrations was noticed. The presence of redness (a*), was also slightly higher in the products with lower concentration of both additives; nevertheless, it is most likely to be related with a non-uniform distribution of the pigment material (raw material color).The presence of yellowness (b*) was almost identical, in all the investigated products, except in the samples with 1% of soy isolate, where it was somewhat lower. The data of testing color characteristics of smoked pork loin were mostly correlated with the results of sensory evaluation, and led to the conclusion that the additive used in applied concentrations did not essentially affect the product color.In table 3 are presented the results of tenderness and firmness evaluation.Samples produced with carrageenan had higher firmness and tenderness compared to the samples with soy isolate.Penetration force was significantly higher (30-39 N) for samples with carrageenan, while for samples with soy isolate it was 20-27N.Shear force was higher for samples with carrageenan too (50.2-57N)compared to samples with soy isolate (40.2 -48.9 N).It can be noticed that the results are directly proportional to the additive concentrations, which means that with the increase of additive content, yields increase in the firmness and decrease in the tenderness of the products. C o n c l u s i o n On the basis of presented data obtained by instrumental determination of color characteristics, as well as firmness and tenderness of product, we can conclude the following: -In the production of smoked pork loin it is possible to use both of the examined additives; addition of carrageenan had a positive effect on binding of the product and its cut appearance, which is an advantage compared to usually utilized soy isolate; samples produced with the addition of soy isolate have somewhat better odor; higher additive concentration effects favorably the taste of the product; -None of the applied additives in the examined concentrations has essential effects on the color of products, which was confirmed by instrumental determination of color characteristics. 
-Samples produced with the addition of carrageenan are characterized by increased firmness and decreased tenderness compared to samples with soy isolate, which was confirmed by instrumental analysis. Tab. 1. Results of sensory evaluation of smoked pork loin
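Since color was recorded as CIELAB coordinates (L*, a*, b*), differences between additive treatments can be summarized with the CIE76 color-difference metric, ΔE*ab = √[(ΔL*)² + (Δa*)² + (Δb*)²]. The sketch below assumes this standard formula; the coordinate pairs in the example are placeholders rather than the values reported in Table 2.

from math import sqrt

def delta_e_cie76(lab1: tuple, lab2: tuple) -> float:
    """CIE76 color difference between two CIELAB triplets (L*, a*, b*)."""
    return sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

if __name__ == "__main__":
    # Hypothetical mean coordinates for a soy-isolate and a carrageenan sample.
    soy = (58.4, 14.2, 7.1)
    carrageenan = (59.0, 13.8, 7.4)
    print(f"dE*ab = {delta_e_cie76(soy, carrageenan):.2f}")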
v3-fos-license
2022-07-06T15:01:50.886Z
2022-07-04T00:00:00.000
250289445
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.iium.edu.my/ejournal/index.php/iiumej/article/download/2143/867", "pdf_hash": "36737e24eecdcf4df549284fe36cfd23be34f4ee", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41190", "s2fieldsofstudy": [], "sha1": "a5071c56c206c33006fc353edd3809a1b06549a9", "year": 2022 }
pes2o/s2orc
CHARACTERIZATION AND SINTERING PROPERTIES OF HYDROXYAPATITE BIOCERAMICS SYNTHESIZED FROM CLAMSHELL BIOWASTE: Hydroxyapatite (HA) is a type of calcium phosphate-based bioactive ceramic that resembles the mineral phase of bone and teeth, with great potential for bone substitution and biomedical implants. Biogenic-derived HA emerges as a cheap and eco-sustainable alternative to improve waste utilization. However, hydroxyapatite has limited applications due to its apparent brittleness, thus prompting investigation of enhanced sintering properties. In the present study, a combination of calcination and chemical precipitation techniques was used to extract hydroxyapatite (HA) from ark clamshells (Anadara granosa). The method successfully produced HA powder with a Ca/P ratio of 1.6 and characteristic bands corresponding to pure HA via Fourier Transform Infrared Spectroscopy (FTIR). The synthesized HA powder was then sintered at temperatures ranging from 1200 °C to 1300 °C, followed by mechanical evaluation of the density, Vickers hardness, fracture toughness and grain size. It was revealed that the samples sintered at 1250 °C achieved a relative density of about 88%, a Vickers hardness of 5.01 ± 0.39 GPa, a fracture toughness of 0.88 ± 0.07 MPa.m^1/2 and an average grain size of about 3.7 µm. These results indicate the potential of clamshell-derived HA as a functional bioceramic for biomedical applications. INTRODUCTION Hydroxyapatite (HA), with a chemical formula of Ca10(PO4)6(OH)2, is a type of calcium phosphate-based ceramic that comprises the main mineral constituent of human bones and teeth, and is widely used in dental and orthopedic applications. The conventional methods to synthesize HA are solid-state, mechanochemical, chemical precipitation, hydrolysis, sol-gel, hydrothermal, emulsion, sonochemical and high-temperature processes, or a combination of a few techniques [1]. Among these methods, wet-chemical precipitation is the most promising and low-cost technique [2,3]. In recent years, biowaste-derived HA has attracted attention as numerous food wastes such as bones, eggshells and seafood shells have piled up in landfills globally. Specifically, shell wastes such as oyster, mussel, scallop, clam and cockle are discarded in abundant amounts. Millions of tons of shell wastes have been discarded and piled up in landfills in China, Taiwan, Spain, South Korea, Peru, Indonesia, Nigeria, and Malaysia [4][5][6][7]. Instead, these shell wastes could be utilized to synthesize HA owing to their rich calcium carbonate (CaCO3) content [8][9][10]. Typically, HA has been successfully synthesized via the hydrothermal synthesis method by utilizing various species of clamshell such as Strombus gigas, Tridacna gigas [11], Venerupis [12], Corbicula [13,14], Mercenaria [15], and Anadara granosa [16]. However, the majority of studies did not report the mechanical properties of sintered HA. In the current study, Anadara granosa clamshell will be used as the calcium precursor to synthesize natural HA powder, followed by characterization of its properties at various sintering temperatures. Synthesis of Powder Biogenic sources such as seashells or eggshells are good natural sources of calcium precursor for the synthesis of HA bioceramics. In this study, Anadara granosa clamshells collected from peninsular Malaysia were used as the starting materials to synthesize HA. The as-received clamshells were washed thoroughly, rinsed with distilled water, and dried in an oven at 80 °C for one hour. The dried clamshells were then crushed, ground and sieved through a 300 μm test sieve.
This was followed by calcination at 1000 °C for four hours in an electrical furnace (Carbolite Gero, UK), to transform the calcium carbonate (CaCO3) into calcium oxide (CaO). Figure 1 depicts the flow chart of the HA synthesis process via wet chemical precipitation technique. First of all, 0.25 M of calcium precursor to 0.15 M of phosphorus precursor was employed to achieve stoichiometric HA with the calcium/phosphorus (Ca/P) concentration ratio of 1.67. 2.8 g of CaO powder was then added in 200 ml of distilled water to formulate the Ca(OH)2 solution as the calcium precursor. The solution was subsequently magnetic stirred at 400 rpm for an hour and maintained at pH 12. On the other hand, the 2.05 ml concentrated H3PO4 (phosphorus precursor) was diluted into 200 ml of distilled water, stirred and kept at about pH 2. Subsequently, the prepared H3PO4 solution was then added dropwise into Ca(OH)2 solution to begin the titration process, continued with a vigorous stirring at 700 rpm for 30 minutes. The NH4OH solution was then added to adjust the pH to 10. This was followed by magnetic stirring at 500 rpm for an hour after the titration process and aging for 21 hours (to form white precipitate). Vacuum filtration was subsequently performed on the precipitates using an electrical aspirator pump (Jerio Tech, Korea). Finally, the precipitate was dried in an oven at 100 °C for 16 hours and then crushed and sieved through a 300 μm test sieve to obtain ark clamshell synthesized HA (ACS) powder. Sample Preparation The ACS synthesized HA powders were compacted into 20 mm disc samples (Fig. 2) by a hydraulic press machine (Enerpac, USA) at 1000 psi, which was set at a pressure lower than 3000 psi, as recommended by Mel et al. [17]. The green samples were then conventionally sintered (Carbolite Gero, UK) at 1200 °C, 1250 °C and 1300 °C for two hours with a ramp rate of 10 °C/min. The dimension of the green and as-sintered samples was recorded with a digital Vernier caliper (Mitutoyo, Japan) for the shrinkage measurements. Prior to characterization, the sintered disc samples were ground with silicon carbide (SiC) sandpapers and polished to achieve a 1 μm optical reflective surface. Characterization and Mechanical Property Evaluation A Fourier transform infrared (FTIR) Spectrum 65 Spectrometer (Perkin Elmer Inc., USA) was used to identify the functional groups and composition present in the synthesized powder, at the scan range of 650 to 4000 cm -1 . Differential scanning calorimetric (DSC)/ Thermogravimetric analysis (TGA) (TA Instruments, USA) was employed to determine the weight loss and phase change of synthesized HA powder, from room temperature to 1400 °C, with a heating rate of 10 °C/min under nitrogen gas environment. Bulk density measurement was also performed on the sintered samples using the Archimedes' principle, by taking the theoretical density of HA as 3.156 g/cm 3 . The microstructure of synthesized and sintered samples was examined via scanning electron microscopy (SEM) (SEC Co. Ltd., Korea). Energy dispersive X-ray (EDX) spectroscopy was used to determine the Ca/P ratio of the synthesized and sintered samples. The grain size of the sintered samples was measured using the linear intercept method according to ASTM E112-96. Vickers hardness of the sintered samples was evaluated via micro-hardness tester (Bowers ESEWAY, UK), with an applied load of 200 gf at a loading time of 10 seconds based on ASTM E384-99. 
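The density and hardness evaluations just described reduce to short calculations. The sketch below assumes the standard Archimedes relation for bulk density (normalized to the theoretical HA density of 3.156 g/cm³ quoted in the text) and the usual Vickers relation HV = 1.8544·F/d²; the measurement values in the example are hypothetical. The Niihara relationship used for fracture toughness exists in several published forms and is therefore not reproduced in this sketch.

WATER_DENSITY = 0.9975       # g/cm^3 near room temperature (assumed)
HA_THEORETICAL = 3.156       # g/cm^3, theoretical density quoted in the text
KGF_MM2_TO_GPA = 0.00980665  # unit conversion factor

def archimedes_relative_density(dry_g, suspended_g, saturated_g):
    """Bulk density by Archimedes' principle, as % of theoretical HA density."""
    bulk = dry_g / (saturated_g - suspended_g) * WATER_DENSITY
    return bulk / HA_THEORETICAL * 100.0

def vickers_hardness_gpa(load_kgf, mean_diagonal_um):
    """Vickers hardness HV = 1.8544 * F / d^2 (F in kgf, d in mm), in GPa."""
    d_mm = mean_diagonal_um / 1000.0
    return 1.8544 * load_kgf / d_mm ** 2 * KGF_MM2_TO_GPA

if __name__ == "__main__":
    # Hypothetical readings: 1.52 g dry, 0.98 g suspended in water, 1.53 g saturated.
    print(f"relative density: {archimedes_relative_density(1.52, 0.98, 1.53):.1f} %")
    # A 200 gf load (0.2 kgf) and a 27 um mean indent diagonal give roughly 5 GPa.
    print(f"Vickers hardness: {vickers_hardness_gpa(0.2, 27.0):.2f} GPa")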
For each sample, at least five indentations were used to obtain the average hardness and to calculate the standard deviation value. Fracture toughness was also obtained via the relationship derived by Niihara et al. [18]. FTIR Analysis of ACS Synthesized HA Powder The functional groups in the ACS synthesized HA powder were identified using FTIR and the spectrum is shown in Fig. 3. The result confirms that the synthesized powder exhibited the typical spectrum of pure HA powder, with the chemical groups of phosphate (PO4 3-), hydroxyl (OH-) and carbonate (CO3 2-). The distinctive peaks at 1025 cm-1 and 1087 cm-1 correspond to PO4 3- (v3), while the peak at 962 cm-1 corresponds to PO4 3- (v1). On the other hand, weak characteristic peaks observed at 3350 cm-1 and 3570 cm-1 could be related to the OH- group. The FTIR peaks exhibited the key characteristics of the HA phase. The peaks at 1456, 1420 cm-1 and 874 cm-1 show the presence of CO3 2- in the samples. Similar carbonate bands were also reported in previous studies [19][20]. Figure 4 shows that the EDX spectra of ACS synthesized and sintered HA samples consist of three main elemental constituents of HA bioceramics, which are calcium (Ca), phosphorus (P) and oxygen (O). From the atomic percentage (At %), the calculated Ca/P ratio of the synthesized biogenic HA powder is 1.60, while that of the sintered HA bioceramics increased to 1.88 when sintered at 1200 °C and 1250 °C, and reached 1.97 when sintered at 1300 °C. The obtained Ca/P ratio from this work deviated from the theoretical value for pure stoichiometric HA of 1.67. A similar observation was reported by Ramesh et al. [21]. Microstructural and Grain Size Analysis Figure 5 (a) shows the SEM micrograph of the ACS synthesized HA powder, which consists of large agglomerates. The microstructural evolution of the ACS sintered HA ceramics is presented in Fig. 5 (b)-(d). The SEM investigation revealed that the average grain size of the sintered HA samples increases with increasing sintering temperature. The results show a gradual increase in grain size from 2.14 µm at 1200 °C to 3.70 µm at 1250 °C. As the sintering temperature increased to 1300 °C, the grain size dramatically increased to 6.44 µm. Accelerated grain growth in HA samples sintered beyond 1250 °C implied a change in the phase stability of HA. Figure 6 shows the differential scanning calorimetric (DSC)/thermogravimetric analysis (TGA) measurement of the ACS synthesized HA powder. At 300 °C, a pronounced weight loss of 5.3% was observed out of a total weight loss of 7.6% for the HA sample. The weight loss could be ascribed to the evaporation of physically adsorbed water molecules. With additional heating up to 500 °C, the insignificant weight loss (0.75%) is attributed to the release of interstitial water molecules in the crystal lattice of HA. An endothermic peak at 1000 °C can be postulated as dehydroxylation and decarboxylation of the HA powder. Weight loss at higher temperatures may be due to the dissociation of HA to tricalcium phosphate (TCP) and tetracalcium phosphate (TTCP). The decomposition of the HA phase is believed to cause the increase in the average grain size and the reduction in the mechanical properties of the HA. Mechanical Properties The effect of sintering temperature on the relative density, Vickers hardness, and fracture toughness of HA ceramics is presented in Table 1. The density of ACS sintered HA increased from 83.8% at 1200 °C to 88% at 1250 °C.
A slight decrease in density (86.7%) was observed when the sintering temperature increased to 1300 °C. The result is in agreement with the grain growth of the sintered HA as shown in the SEM images. On the other hand, the Vickers hardness of the sintered HA increased from 4.35 ± 0.43 GPa at 1200 °C to a maximum of 5.01 ± 0.39 GPa at 1250 °C. When the sintering temperature reached 1300 °C, the Vickers hardness was reduced to 4.03 ± 0.35 GPa, which corresponds with the reduction in relative density and the grain growth of the sintered HA. This is in agreement with Aminzare and co-authors, where the Vickers hardness of the sintered biomimetic-synthesized HA decreased from 2.52 GPa to 2.23 GPa when sintered at 1250 °C and 1300 °C, respectively [22]. In the current study, the fracture toughness exhibited a similar trend to the hardness, i.e. the fracture toughness increased from 0.67 ± 0.20 MPa.m^1/2 to a maximum of 0.88 ± 0.07 MPa.m^1/2 when sintered at 1200 °C and 1250 °C, respectively. This was followed by a decrease to 0.71 ± 0.10 MPa.m^1/2 when the sintering temperature reached 1300 °C. Similarly, the decreased fracture toughness at 1300 °C is related to the decrease in relative density. It is postulated that the reduced density and mechanical properties at 1300 °C were due to the decomposition of the HA phase in the high sintering temperature regime (above 1250 °C) [22][23]. The results also show that calcium-rich HA with a Ca/P ratio of 1.88 possessed an overall higher density and mechanical properties as compared to HA with a Ca/P ratio of 1.97. CONCLUSION The present study revealed that HA samples were successfully synthesized via calcination and wet chemical precipitation using ark clamshells (Anadara granosa) as the calcium precursor. The results also show that ACS sintered HA with a Ca/P ratio of 1.88 possessed a higher density (88%) and mechanical properties (Vickers hardness of 5 GPa and fracture toughness of 0.88 MPa.m^1/2) when sintered at 1250 ºC, which shows that the phase stability of HA was retained up to 1250 ºC and that the decomposition of HA to TCP and TTCP occurred above 1250 ºC. The findings of this study would promote the recycling and reuse of animal shells or bones to convert biowaste into value-added biomedical products, which would help in attaining Sustainable Development Goal targets. Future research would encompass further work on two-step or hybrid sintering routes to produce finer microstructures with enhanced mechanical properties.
v3-fos-license
2021-05-11T00:03:40.271Z
2021-01-18T00:00:00.000
234157450
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2020EA001299", "pdf_hash": "aad4e31134cb019a0281ac8b4ecb8a5e4fb29004", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41191", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "sha1": "63504bbe40c0b78b30d08ef61f2df5ee63274c59", "year": 2021 }
pes2o/s2orc
Present Heat Flow and Paleo‐Geothermal Anomalies in the Southern Golan Heights, Israel While present heat flow (HF) throughout Israel and the Dead Sea Transform (DST) area is considered low, a thermal anomaly exists near the Yarmuk valley at the southern tip of the Golan Heights. New temperature measurements in the southern Golan Heights show that the distribution of the thermal anomaly is significantly wider than previously thought. The new data conforms to either a local and shallow magmatic chamber, which is in thermal steady state with the surrounding rocks, or to a transient heat front associated with localized Pleistocene magmatic intrusions in the Yarmuk. An alternative mechanism suggests that the present HF intensity is higher than the HF value calculated based on the apparent borehole temperatures due to heat removal by deep aquifer flow. A paleothermal analysis was performed using high‐resolution organic matter maturation profiles on the Late Cretaceous source rocks, across the southern Golan basin. The maturation data show remarkably similar values throughout the basin, indicating that a single thermal event led to the source rock maturation. Constraints given by geological considerations have associated the paleothermal event with Pliocene volcanic eruptions, which has allowed the calculation of the basin paleo‐heat flows, and indicated on a basin wide heat source, such as a crustal heat source. The paleothermal event is sharply bounded by a DST branching fault. This observation is suggested to be related to either strike‐slip movement associated with the DST or to heat removal and modulation by a deep hydrological system. Introduction The regional present-day heat flow (HF) throughout Israel ( Figure 1) and the Arabian shield is low with average values between 40 and 45 mW m −2 (Eckstein & Simmons, 1977;Gettings & Showail, 1982;Shalev et al., 2013). While HFs are usually elevated within rift basins due to lithospheric thinning and rising mantle plumes (Ruppel, 1995), the HFs along the Dead Sea Transform (DST) are generally low (Ben-Avraham et al., 1978). This excludes the Red Sea and the Sea of Galilee (Yarmuk) regions that show elevated HFs. The main observations for elevated HF near the Yarmuk river are the geothermal springs of Hammat Gader and Mokhaba (discharging 45 × 10 6 m 3 yr −1 with temperatures reaching up to 50°C) (Baijjali et al., 1997;Levitte & Eckstein, 1978;Mazor et al., 1973;Starinsky et al., 1979), elevated thermal gradients in the Ein Said well (subhydrostatic) and the high water temperature in some artesian wells in the area (Arad & Bein, 1986;Michelson, 1981). While there are indications that the elevated HF at the Red Sea is associated with extension tectonics (spreading rate of 10-16 mm yr −1 [Chu & Gordon, 1998]), and opening of the crust, the source for elevated HF near the Sea of Galilee and the Yarmuk valley is not straightforward as the transform in this area shifts to a strike slip fault with a predominant shear component (slip rate of 1-10 mm yr −1 [Ben- Avraham et al., 2005]). Since the tectonic setting associated with the DST cannot easily explain the source of the elevated HF, the mechanism responsible for the thermal anomaly has been the focus of many studies. 
The heat source in the area was previously associated with a shallow seismogenic zone (Aldersons et al., 2003; Ben-Avraham et al., 1978; Davies & Davies, 2010; Shalev et al., 2013; Shamir, 2006), magmatic activity of the southern Golan Heights (Ben-Avraham, 2014; Stein et al., 1993; Weinstein & Garfunkel, 2014), a thermally driven rising plume (Goretzki et al., 2016), groundwater convection systems (Gvirtzman et al., 1997), as well as ascending hot water from deep hydrological systems, which discharges into the springs of the Yarmuk valley (Arad & Bein, 1986; Roded et al., 2013). It is important to note that some of these suggested mechanisms and explanations can be indirectly associated with the nearby plate boundary, and therefore the activity of the transform throughout geological history is likely to have played a key role in the generation of the thermal anomaly. Further evidence for elevated HF in the Yarmuk area through geologic time is the presence of hydrocarbons, which were detected in relatively shallow source rock deposits in the Ein Said well (Michelson, 1981). These hydrocarbons, mostly bituminous in nature, are considered as indicative of early thermal maturation triggered by a thermal event (Tannenbaum & Aizenshtat, 1984). Since the 1980s, no additional wells were drilled in the area and therefore the extent, spatial distribution and mechanism of the thermal event have remained unclear. In this paper, temperature data from six new wells are presented, which allows the magnitude and spatial extension of the present HF anomaly to be delineated. Furthermore, for the same wells, high-resolution maturation data of the thick Late Cretaceous source rock are provided as a function of depth, allowing the derivation of paleo HF by applying a forward basin model approach as previously performed by Feinstein (1987), Horkowitz et al. (1989), Majorowicz et al. (1985), McKenzie (1981), Shaaban et al. (2006), Tagiyev et al. (1997), and Tissot et al.
(1987). The current HFs are then compared with the modeled paleo-HFs derived from source rock maturation data, in an attempt to better understand the various possible mechanisms and scenarios for the heat source and thermal history of the region. Geological Background The Golan Heights is an elevated plateau covered by extensive basalt flows and located at the edge of the massive Harrat Ash Shaam volcanic field (∼50,000 km²), which extends into Syria, Jordan, and Saudi Arabia (Figure 1). The Golan basin is a large synclinal structure, considered as part of the Syrian Arc fold system (Figure 2). This basin hosts a thick Late Cretaceous organic-rich sequence of chalk, cherts, and phosphate deposits of the Har-Hatzofim Group (Menuha, Mishash, and Ghareb formations). The thick organic source rock (>400 m) is composed of Type IIS kerogen that was deposited in many basins of the Syrian Arc system due to upwelling circulation in the Late Cretaceous Tethys Sea (Almogi-Labin et al., 1993). Overlying the source rocks are marine deposits of ages ranging from the Paleocene (Taqiye formation) up to the middle Eocene (Adulam and Maresha formations) (Michelson, 1972). These marine sediments also point to syn-depositional tectonic activity, with thick deposits in the basin center and erosion and onlaps toward the basin margins. During the Neogene, mostly terrestrial deposits (Hordos and Gesher formations), punctuated by short-lived marine ingressions (Bira formation), filled and flattened the basin (Michelson, 1972; Rozenbaum et al., 2016). The shift from a marine environment to a terrestrial one occurred in response to the major tectonic event of the DST, which split the African and Arabian Plates (Garfunkel et al., 1981) and bounded the Golan basin from the west. In the northern part of the syncline, a branching fault of the DST (Sheik Ali fault, SAF) that trends to the northeast displaced the basin by more than 900 m, while hosting a thick Neogene section of a different sedimentary fill from the southern Golan basin (Figure 2). Since Miocene times, several volcanic events have occurred in the northeastern part of Israel. These events are believed to be associated with lower lithospheric melts that produced basaltic flows differentiated by location and composition (Stein et al., 1993; Weinstein et al., 2006). These volcanic events were attributed to the tectonic activity of the DST (Heimann et al., 1996; Joffe & Garfunkel, 1987; Weinstein, 2012). Eruptions and lava flows in the region were discontinuous in time, with volcanic quiescence in between eruption episodes. The main volcanic events in northern Israel are (Figure 1): (a) the Lower Basalt of late Oligocene to early Miocene times, mostly found west of the DST; (b) the Cover Basalt of the early Pliocene, which overlies most of the southern Golan basin, with additional eruptions in the late Pliocene north of the SAF; (c) the Young Basalts, which occur in the central and northern Golan in late Pliocene and Pleistocene times, and locally in the Yarmuk and Raqqad valleys, of Pleistocene age (Mor, 1993). Downhole Temperature Measurements Fluid temperature measurements were performed as a function of depth along the fluid column in all six boreholes examined in the study, using a conventional EC/T probe capable of measuring temperatures up to 80°C ± 0.1°C, at logging speeds of 4 m/min (Robertson Geologging).
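The conversion applied later in the paper, from an equilibrated temperature log to a heat-flow estimate, amounts to a least-squares geothermal gradient multiplied by an effective thermal conductivity (Fourier's law, q = k·dT/dz). The sketch below illustrates the arithmetic only; the synthetic log values and the conductivity are placeholders, not data from the Ness or Ein Said wells.

import numpy as np

def heat_flow_from_log(depth_m, temperature_c, conductivity_w_mk):
    # Least-squares geothermal gradient (degC/m) from a temperature log,
    # multiplied by thermal conductivity to give heat flow (Fourier's law).
    gradient_c_per_m, _ = np.polyfit(depth_m, temperature_c, 1)
    heat_flow_mw_m2 = gradient_c_per_m * conductivity_w_mk * 1000.0
    return gradient_c_per_m * 100.0, heat_flow_mw_m2   # degC/100 m, mW/m^2

if __name__ == "__main__":
    # Synthetic equilibrated log: 25 degC at 200 m, rising ~4 degC/100 m, with a
    # little noise; the conductivity of 1.9 W/(m K) is a placeholder value.
    rng = np.random.default_rng(0)
    depth = np.arange(200.0, 1400.0, 50.0)
    temperature = 25.0 + 0.04 * (depth - 200.0) + rng.normal(0.0, 0.2, depth.size)
    gradient, heat_flow = heat_flow_from_log(depth, temperature, conductivity_w_mk=1.9)
    print(f"gradient = {gradient:.2f} degC/100 m, heat flow = {heat_flow:.0f} mW/m^2")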
The measurements were performed on the first downhole logging trip following a minimal period of months (Ness 02, 03, 05, 12) up to years (Ness 10, ES) after the finalization of drilling and completion operations, in order to allow thermal equilibration between the wellbore, the near-wellbore environment and reservoir temperatures. In addition, the derived temperature-log-based thermal gradients were extrapolated and compared to continuous downhole temperature gauge readings (Spartek Systems) during shut-in periods of drill stem tests, which targeted eight separate 15 m intervals, yielding an average difference of 1°C (Figure 3). The log-based wireline temperatures were also verified using maximum-temperature-registering thermometers (Kessler thermometer), which recorded the bottom-hole temperatures (BHT), yielding differences of less than 1°C. Source Rock Maturation Throughout the drilling phase of the source rocks, cuttings were collected at a resolution of 3-5 m. A few grams of each sample from the target layer were ground using a mortar and pestle. A subsample of ∼100 mg was taken from the ground sample to perform a source rock thermal maturation analysis using the Rock-Eval 6 (Vinci Technologies). During the analysis, the sample is heated rapidly and steadily (between 200°C and 650°C) in the absence of oxygen, leading to thermal cracking of the kerogen (pyrolysis) into liquid and gaseous hydrocarbons. The temperature at which the maximum release of hydrocarbons from cracking of kerogen occurs during pyrolysis (often designated as Tmax) is used as a maturation index and a proxy for the thermal maturation of the sample. For the purpose of basin modeling, the Tmax values measured on the cutting material from the wellbores were converted to transformation ratio (TR). TR is defined as the ratio of the petroleum generated from the kerogen to the total amount of HC that the kerogen is capable of generating (Schwartz et al., 1980). In terms of Rock-Eval properties, TR is defined according to the Rock-Eval S2 peak as TR = [S2(immature) − S2(i)] / S2(immature), where S2(i) and S2(immature) are the S2 measurements of a given sample and of an immature sample, respectively (Tissot & Welte, 1984). The conversion of Tmax to TR was performed using a
By applying HF levels through time, the model calculates temperatures as a function of depth and time, which allows the calculation of the kerogen TR. The independent thermal history of each well is iterated until the modeled TR the measured values. It should be noted, that fitting the model to the TR data may be attained by multiple solutions, as the increase in maturation is proportional to the product of time and temperature (Lopatin, 1971;Waples, 1980). As will be discussed below, the geological history of the basin provides constraints to the timing of and duration of the elevated HF which allows to narrow the range of possibilities. Results Maturation and thermal data from six new boreholes between the Yarmuk and Yehudiya valleys (∼30 km apart), is presented. Depending of the topography, basin structure, thickness of the Har-Hatzofim Group (Menuha, Mishash, and Ghareb formations) and displacement along faults, the source rock depth ranges from ∼500 to 2,250 m below ground level (mBGL) enabling to examine a wide range of thermal conditions. The data from the wells consists of information on stratigraphy, lithology, and temperature logs as well as Rock-Eval measurements on the source rock drill-cuttings. The depositional history was integrated with the organic maturation data in order to reconstruct the thermal history of the region. Present Thermal State Thermal gradients calculated based on the temperature logs and drill stem test data, vary by a factor of 2 and range from 5.15°C/100 m at Ein Said well to 2.5°C/100 m in Ness 10 ( Figure 3). A clear exponential decrease is noticed from south to north along cross section A-A' (Figure 1) showing that the center of the thermal anomaly is located near the Yarmuk valley and extends by more than 15 km to the north (Figure 3, inset). The geothermal gradients were converted to HF (mW m −2 ) using the thermal conductivities calculated based on a high resolution stratigraphic and lithologic record. The calculated HF ranges from 45 to 85 mW m −2 increasing from north to south (Figure 4a). The newly estimated spatial coverage of the elevated HF extends northwards and covers a significantly larger area (∼150 km 2 ) than previously reported (Figure 4a). The temperatures at the base of the Late Cretaceous source rocks, which depend on the combination between the local HF and burial depths, range from 63°C to 67°C in most parts of the basin. Despite the fact that the lowest present thermal gradients in the basin were found in Ness 10 (2.5°C/100 m or 45 mW m −2 ), the temperatures at the base of the source rocks are the highest (77°C) due to the significantly deeper burial in the downthrown block of the SAF. Source Rock Maturation Due to the uniquely thick Late Cretaceous source rocks (>350 m), it was possible to track the maturation increase as a function of depth. of the SAF (Ness 10), which shows distinctly low TRs of <0.15 (immature), despite the fact that the source rock found in this well was buried ∼900 m deeper than the rest of the basin. Forward Basin Model The purpose of modeling the TR values in each well was to identify the limits on the HF and the duration of a paleothermal event in order to reach the measured maturation levels. The burial history of 10 key horizons (Bina-Cover basalt) were constructed according to the detailed lithological logs of each well (see Ness 03 example, Figure 6). Once the burial history constraints were taken into account, various HF scenarios were modeled. 
Figure 7 provides an example for the Ness 02 well, where a Pliocene paleothermal event was defined between 5.3 and 3.5 My. The HF was adjusted in order for the modeled TR (solid lines in Figure 5) to best fit the measured TR data (solid circles in Figure 5). Figure 7. HF curve at the Ness 02 well. The heating event was constrained to 5.3-3.5 m.y ago. The amplitude of the thermal event was adjusted by numerical iterations in order to attain a fit between the measured TR data (solid circles, Figure 5) and the basin model (solid lines, Figure 5). Discussion The data from the boreholes of the Golan Heights show that the present Yarmuk valley thermal anomaly extends further north than previously reported, gradually decreasing northward (Figure 4a). Despite the fact that the present thermal gradients or HF are anomalously high, the temperatures (65°C-70°C) at the base of the Late Cretaceous source rocks are insufficient to transform the kerogen to the measured TR levels. Furthermore, the TR of the source rocks show remarkably uniform values with almost no spatial variance across the basin, extending far past the present thermal anomaly coverage area (Figure 4b). This leads to the conclusion that the southern Golan basin has been subjected to higher HF and temperatures in the past. These observations raise several questions, which will be addressed below: What is the basin's heat source, and what are its timing and duration? What is the relationship between the current thermal anomaly and the paleothermal event? Is the current thermal anomaly a continuation of the paleothermal event (i.e., a similar source with an extended duration) or are these heating events separate? Present Thermal State Two optional first-order estimates for heat sources are evaluated below using the new thermal data in the southern Golan basin. Localized Magmatic Heat Source The first source assumes the presence of a small and localized subsurface magma chamber near the Yarmuk valley, where the geothermal gradients are the highest. Such a magma chamber can be described by assuming a hot spherical body at a constant temperature, radius and depth, which diffuses heat according to the steady-state solution of Carslaw and Jaeger (1959) (Equation 1), where T2 is the background temperature (estimated as the temperature at given depths of 0.5-2 km at a distance of 25 km), Tchamber and R0 are the temperature and radius of the intrusion, respectively, both of which are mathematically indistinguishable from one another, and x and z are the horizontal and vertical distances from the heat source to the measured temperature location. Fitting Equation 1 to the measured temperatures (at reference depths of 0.5, 1, 1.5, and 2 km), while constraining T2 and x, yields the solution presented in Figure 8. Two parameters were extracted from the fitting exercise: (a) the product of (Tchamber − T2) and R0 (70 m °C), which enables the derivation of the intrusion temperature as a function of the radius of the magma chamber by examining the interplay between the parameters; (b) the depth of the suggested magma chamber (z) of 3.5 km. While it is hard to ascertain at this point whether a relatively shallow magmatic chamber resides beneath the nearby Yarmuk valley, as there is a lack of seismic, gravimetric or magnetic data, it is not unlikely that
the current temperature field is influenced by the younger and localized volcanic events associated with the Yarmuk basalts (0.8-0.6 m.y) and/or the Raqqad basalts (0.2-0.1 m.y). In order to examine this possibility, the transient point-source Green's function solution of Carslaw and Jaeger (1959) (Equation 2) was used to calculate the timescales required for a heat pulse to radially diffuse and propagate via thermal conduction in an infinite medium, where α is the thermal diffusivity, defined as α = k/(ρCp) (41 and 83 km²/m.y for the Ghareb and Mishash formations, respectively), k is the thermal conductivity (2.2 and 4.9 J/(m·K·s) for the Ghareb and Mishash, respectively [Schütz et al., 2012]), ρ is the bulk density of the rocks (1,800-2,100 kg/m³ for the Ghareb and Mishash, respectively [Shitrit et al., 2016]), Cp is the specific heat capacity (900 J/(kg·K) for both the Ghareb and Mishash formations) and t is time in m.y. The results, presented in Figure 9, indicate that the timescales required for a transient heat front to reach the measured temperatures in the boreholes are in the order of several hundreds of thousands of years, depending on the exact location of the point source. These timescales conform with the Pleistocene volcanic events, suggesting that the current anomaly could be related to localized Yarmuk or Raqqad eruptions within the Yarmuk valley (Figure 1). This implies that the present temperature field could be explained by a local and short Pleistocene heat pulse associated with magmatic intrusions, which have lost most of their heat to the surrounding rocks via thermal conduction. The solution of Equation 2 provides an alternative explanation for the heat source. It indicates that the postulated hot magma chamber at a depth of 3.5 km, as implied by the steady-state model in Equation 1, is not a unique solution. While evidence for massive and shallow Pliocene and middle-late Miocene magmatic gabbro deposits was found in the nearby Zemach well (Marcus & Slager, 1985; Segev, 2017) (Figure 1), no indication of magmatic intrusions associated with the Pleistocene eruptions has yet been provided, as this region lacks deep boreholes and geophysical data. Regional Heat Source An alternative scenario follows Roded et al. (2013), who numerically modeled groundwater flow and heat transfer in the Golan Heights by applying an energy balance approach. The model suggests a relatively high basal HF of 100 mW m⁻², a value higher by a factor of 2.5 than the estimated mean basal HF in the region. Such elevated HF values indicate, according to Roded et al. (2013), deep crustal conditions (an exceptional source of geothermal heat) or a magmatic origin. Furthermore, this elevated basal HF is reduced by deep groundwater flow, which is recharged in the northernmost part of the basin at Mt. Hermon and flows southwards to Hammat Gader and Mokhaba, where it discharges as geothermal springs (the Yarmuk springs). Therefore, the groundwater temperatures gradually increase along the southward flow path, which leads to less heat removal and higher geothermal gradients in the south, consistent with the new temperature measurements reported in this paper. The fact that the current basal HF is estimated to be 100 mW m⁻² is especially interesting, since similar HF values were derived for paleo conditions based on the source rock maturation data and basin modeling.
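Equations 1 and 2 invoked in the discussion above are not reproduced in the extracted text. The sketch below implements the textbook Carslaw and Jaeger forms that appear consistent with the description, namely the steady-state temperature field around a sphere of radius R0 held at Tchamber, the instantaneous point-source solution, and the order-of-magnitude diffusion timescale t ≈ r²/α. Both expressions should be read as assumed reconstructions rather than the authors' exact formulations, and the numbers in the example are placeholders.

import numpy as np

def steady_sphere_temperature(r_km, t_background_c, t_chamber_c, radius_km):
    # Assumed form of Equation 1: T(r) = T2 + (Tchamber - T2) * R0 / r for r >= R0,
    # the steady-state field around a sphere held at a fixed temperature.
    r = np.maximum(np.asarray(r_km, dtype=float), radius_km)
    return t_background_c + (t_chamber_c - t_background_c) * radius_km / r

def point_source_pulse_c(r_km, t_my, energy_j, rho_kg_m3=2000.0, cp_j_kgk=900.0,
                         alpha_km2_my=60.0):
    # Assumed form of Equation 2: temperature rise from an instantaneous point
    # release of heat Q in an infinite medium,
    # dT = Q / (rho*cp*(4*pi*alpha*t)^(3/2)) * exp(-r^2 / (4*alpha*t)).
    volumetric_heat_j_km3_k = rho_kg_m3 * cp_j_kgk * 1e9   # J per km^3 per degC
    spread_km2 = 4.0 * np.pi * alpha_km2_my * t_my
    return (energy_j / (volumetric_heat_j_km3_k * spread_km2 ** 1.5)
            * np.exp(-np.asarray(r_km, dtype=float) ** 2 / (4.0 * alpha_km2_my * t_my)))

def diffusion_timescale_my(distance_km, alpha_km2_my=60.0):
    # Order-of-magnitude conduction timescale, t ~ r^2 / alpha.
    return np.asarray(distance_km, dtype=float) ** 2 / alpha_km2_my

if __name__ == "__main__":
    # Placeholder numbers only; alpha is taken midway between the Ghareb and
    # Mishash values quoted in the text (~41-83 km^2/m.y.).
    print(f"diffusion timescale to 5 km: {diffusion_timescale_my(5.0):.2f} m.y.")
    print(f"steady-state T, 5 km from a 700 degC sphere of 1 km radius in a "
          f"40 degC background: {steady_sphere_temperature(5.0, 40.0, 700.0, 1.0):.0f} degC")
    print(f"pulse of 3e19 J felt 3 km away after 0.1 m.y.: "
          f"{point_source_pulse_c(3.0, 0.1, 3e19):.1f} degC rise")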
This explanation implies that the basal HF has barely changed since the Pliocene, while the hydrologic system has gradually developed through time, leading to currently lower apparent HF values. This model will be further discussed below in relation to the paleo thermal conditions. Paleo Thermal Event Three observations point to the fact that a paleo thermal event occurred in the geological past: (a) the present temperature regime at the base of the Late Cretaceous source rocks is insufficient to mature the source rock to its measured level. (b) The area that has undergone thermal maturation is significantly larger than the spatial distribution of the present thermal anomaly. (c) When observing the TR trends as a function of depth from the surface (mBGL), a large variance between wells is exhibited (Figure 5a). However, when using a structural datum for calculating the depth of the source rocks, such as depth below sea level (mBSL), the TR data with depth converge and show remarkably similar values throughout the southern Golan basin, except for Ness 10 (Figure 5b). The usage of a constant datum removes the post-Pliocene topographic features (young basalt flows and incising rivers) and serves as a proxy for the basin structure prior to the heat pulse that matured the source rocks (Figure 2). This indicates that the southern basin has undergone a similar thermal history throughout and that the thermal event occurred prior to the end of the Pliocene. Since the regional HF in the Arabian plate has been low through the Cenozoic (Eckstein, 1979; Stein et al., 1993), the heating event which led to the maturation of the Senonian source rocks in the basin had to be local and is therefore likely to correspond in time with the volcanic history of the Golan Heights. Out of the three main volcanic events that have been identified in the area (Early and Late Pliocene and Pleistocene, Figure 1), the only sufficiently large event that overlies the entire thermally matured rocks in the southern Golan basin is the late Pliocene Cover Basalt volcanic event, dated to 5.3-3.5 Ma. Over this time period, we assumed a constant HF for each well (Wangen et al., 2007). The HF intensity (Figure 7) was adjusted in order to individually fit the basin model TR values to the measured data (Figures 8 and 5a). The HF calculated for the thermal event shows little variation across the basin (100-112 mW m⁻²), suggesting a uniform heat source that extended over an area of at least 450 km² (Figure 4b). It is important to note that a sensitivity test which assumed a baseline HF of up to 80 mW m⁻² (instead of 40 mW m⁻²) prior to the Pliocene did not prompt additional maturation of the source rocks, due to their shallow burial depth at the time. Such a long-term and uniform HF distribution on a basin-wide scale suggests a different heat source than the one suggested for the present thermal state, most probably a deep magma chamber or a crustal heating source. The spatial coverage of the paleo heating event overlaps the present thermal anomaly and extends northward. The intensity of the widespread paleo thermal event was significantly higher than that of the spatially limited present thermal event, with its HF of 45-85 mW m⁻². At this point, it is impossible to determine whether the paleo and present HF anomalies are genetically tied or not. The following options are brought forward: (a) the paleo and present thermal events are separate and discrete.
The paleo heat source has completely decayed since the Pliocene. The new heat source located near the Yarmuk points to either an active hot shallow magmatic chamber (3.5 km depth) or a local Pleistocene intrusion associated with the younger basalt eruptions. (b) The present thermal anomaly is a relic of the paleothermal event, which decayed due to volcanic eruptions, effects related to DST plate tectonics, or heat removal by groundwater flow. (c) The Pliocene thermal anomaly, of the order of 100 mW m⁻², has remained relatively constant through time. According to the energy balance approach of Roded et al. (2013), the lower (apparent) HF measured north of the Yarmuk (45-85 mW m⁻²) is explained by deep groundwater flow removing some of the basal heat. Notably, this value is practically similar (or perhaps slightly lower, indicating some cooling of the heat source through time) to the paleothermal HF modeled based on the source rock transformation ratios (100-112 mW m⁻²). This implies that the deep hydrogeological system has been gradually developing with time, allowing groundwater flow from the northern to the southern Golan while gradually cooling the overlying strata and inhibiting further source rock maturation. The SAF Anomaly In the downthrown block, north of the SAF, the source rock is buried 900 m deeper than in the rest of the basin (Figure 2). According to the basin model, if this block had been exposed to a heat pulse similar to that in the southern block, the source rocks would have become over-mature. However, despite the deeper burial, the kerogen underwent almost no transformation (TR < 0.15, Figure 5). Two main options can explain this observation. Tectonics associated with the DST plate boundary (Bartov et al., 1980; Girdler & Styles, 1978) and the strike-slip component on the SAF (Rotstein et al., 1992; Sneh et al., 1998) could have shifted the block over a distance of a few km to its current position, from an area that was not influenced by the elevated HF of the magmatic chamber (Figure 5b). Alternatively, the SAF served as a hydraulic barrier which separated the northern and southern blocks, leading to different heat regimes: north of the fault, the energy conducted from depth was removed by a deep hydrological system that flowed in an underlying aquifer. These hydrological systems did not pass through the fault zone, thereby allowing coeval maturation south of the SAF. Over time, with the continued throw of the SAF and the juxtaposition of permeable layers on both sides of the fault, the hydrological system could have developed across the fault southward, allowing the southern basin to cool, as described above. Conclusions The present temperature field, as measured in the new boreholes, indicates that the thermal anomaly around the Yarmuk is wider than previously thought. The present temperature field points to local heat sources (a magmatic chamber or magmatic intrusions) or a regional (crustal) basal heat source, which is modulated by deep groundwater flow. During the Pliocene, a regional (crustal-scale) thermal source led to strikingly similar degrees of maturation of the Late Cretaceous source rocks across the basin. Further thermal modeling of both the present and paleo thermal anomalies is required in order to determine the heat source origin, heating mechanism, timing and duration.
Additionally, further study is needed to determine whether the present and paleo thermal anomalies can be considered separate and discrete events with potentially different heat sources, or whether these events are genetically tied. Data Availability Statement Global Heat Flow Database of the International Heat Flow Commission; re3data.org - Registry of Research Data Repositories. http://doi.org/10.17616/R3G305.
v3-fos-license
2020-04-19T20:12:43.637Z
2021-04-28T00:00:00.000
215820659
{ "extfieldsofstudy": [ "Computer Science", "Mathematics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.jstatsoft.org/index.php/jss/article/download/v100i11/4222", "pdf_hash": "6a70a633faa77b80c9eda7bdfc5824dda89b8f40", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41192", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "da2941b7828a8e44583b4943fbdb8c11160fa25b", "year": 2021 }
pes2o/s2orc
BayesSUR: An R package for high-dimensional multivariate Bayesian variable and covariance selection in linear regression In molecular biology, advances in high-throughput technologies have made it possible to study complex multivariate phenotypes and their simultaneous associations with high-dimensional genomic and other omics data, a problem that can be studied with high-dimensional multi-response regression, where the response variables are potentially highly correlated. To this purpose, we recently introduced several multivariate Bayesian variable and covariance selection models, e.g., Bayesian estimation methods for sparse seemingly unrelated regression for variable and covariance selection. Several variable selection priors have been implemented in this context, in particular the hotspot detection prior for latent variable inclusion indicators, which results in sparse variable selection for associations between predictors and multiple phenotypes. We also propose an alternative, which uses a Markov random field (MRF) prior for incorporating prior knowledge about the dependence structure of the inclusion indicators. Inference of Bayesian seemingly unrelated regression (SUR) by Markov chain Monte Carlo methods is made computationally feasible by factorisation of the covariance matrix amongst the response variables. In this paper we present BayesSUR, an R package, which allows the user to easily specify and run a range of different Bayesian SUR models, which have been implemented in C++ for computational efficiency. The R package allows the specification of the models in a modular way, where the user chooses the priors for variable selection and for covariance selection separately. We demonstrate the performance of sparse SUR models with the hotspot prior and spike-and-slab MRF prior on synthetic and real data sets representing eQTL or mQTL studies and in vitro anti-cancer drug screening studies as examples for typical applications. Introduction With the development of high-throughput technologies in molecular biology, the large-scale molecular characterization of biological samples has become common-place, for example by genome-wide measurement of gene expression, single nucleotide polymorphisms (SNP) or CpG methylation status. Other complex phenotypes, for example, pharmacological profiling from large-scale cancer drug screens, are also measured in order to guide personalized cancer therapies (Garnett et al. 2012; Barretina et al. 2012;Gray and Mills 2015). The analysis of joint associations between multiple correlated phenotypes and high-dimensional molecular features is challenging. When multiple phenotypes and high-dimensional genomic information are jointly analyzed, the Bayesian framework allows to specify in a flexible manner the complex relationships between the highly structured data sets. Much work has been done in this area in recent years. Our software package BayesSUR gathers together several models that we have proposed for high-dimensional regression of multiple responses and also introduces a novel model, allowing for different priors for variable selection in the regression models and for different assumptions about the dependence structure between responses. Bayesian variable selection uses latent indicator variables to explicitly add or remove predictors in each regression during the model search. Here, as we consider simultaneously many predictors and several responses, we have a matrix of variable selection indicators. 
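As a minimal illustration of this setup (our own sketch, not code from the package), the following base-R snippet simulates a small multi-response data set in which a binary indicator matrix Gamma masks a coefficient matrix B, mimicking the sparse selection structure discussed below:

R> set.seed(1)
R> n <- 100; p <- 30; s <- 5
R> X <- matrix(rnorm(n * p), n, p)
R> Gamma <- matrix(rbinom(p * s, 1, 0.1), p, s)     # latent variable selection indicators
R> B <- Gamma * matrix(rnorm(p * s, sd = 2), p, s)  # sparse coefficient matrix
R> Y <- X %*% B + matrix(rnorm(n * s), n, s)        # multi-response linear model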
Different variable selection priors have been proposed in the literature. For example, Jia and Xu (2007) mapped multiple phenotypes to genetic markers (i.e., expression quantitative trait loci, eQTL) using the spike-and-slab prior and hyper predictor-effect prior. Liquet, Mengersen, Pettitt, and Sutton (2017) incorporated group structures of multiple predictors via a (multivariate) spike-and-slab prior. The corresponding R (R Core Team 2021) package MBSGS is available from the Comprehensive R Archive Network (CRAN) at https:// CRAN.R-project.org/package=MBSGS. Bottolo et al. (2011) and Lewin et al. (2015b) further proposed the hotspot prior for variable selection in multivariate regression, in which the probability of association between the predictors and responses is decomposed multiplicatively into predictor and response random effects. This prior is implemented in a multivariate Bayesian hierarchical regression setup in the software R2HESS (Lewin, Campanella, Saadi, Liquet, and Chadeau-Hyam 2015a), available from https://www.mrc-bsu.cam.ac.uk/software/. Lee, Tadesse, Baccarelli, Schwartz, and Coull (2017) used the Markov random field (MRF) prior to encourage joint selection of the same variable across several correlated response variables. Their C-based R package mBvs (Lee, Tadesse, Coull, and Starr 2021) is available from CRAN (https://CRAN.R-project.org/package=mBvs). For high-dimensional predictors and multivariate responses, the space of models is very large. To overcome the infeasibility of the enumerated model space for the MCMC samplers in the high-dimensional situation, Bottolo and Richardson (2010) proposed an evolutionary stochastic search (ESS) algorithm based on evolutionary Monte Carlo. This sampler has been extended in a number of situations and efficient implementation of ESS for multivariate Bayesian hierarchical regression has been provided by the C++-based R package R2GUESS (Liquet, Bottolo, Campanella, Richardson, and Chadeau-Hyam 2016). Richardson, Bottolo, and Rosenthal (2011) proposed a new model and computationally efficient hierarchical evolutionary stochastic search algorithm (HESS) for multi-response (i.e., multivariate) regression which assumes independence between residuals across responses and is implemented in the R2HESS package. Petretto et al. (2010) used the inverse Wishart prior on the covariance matrix of residuals in order to do simultaneous analysis of multiple response variables allowing for correlations in response residuals, for more moderate sized data sets. In order to analyze larger numbers of response variables, yet retain the ability to estimate dependence structures between them, sparsity can be introduced into the residual covariances, as well as into the regression model selection. Holmes, Denison, and Mallick (2002) adapted seemingly unrelated regression (SUR) to the Bayesian framework and used a Markov chain Monte Carlo (MCMC) algorithm for the analytically intractable posterior inference. The hyper-inverse Wishart prior has been used to learn a sparser graph structure for the covariance matrix of high-dimensional variables (Carvalho, Massam, and West 2007;Wang 2010;Bhadra and Mallick 2013), thus performing covariance selection. However, these approaches are not computationally feasible if the number of input variables is very large. 
Bottolo, Banterle, Richardson, Ala-Korpela, Järvelin, and Lewin (2021) recently developed a Bayesian variable selection model which employs the hotspot prior for variable selection, learns a structured covariance matrix and implements the ESS algorithm in the SUR framework to further improve computational efficiency. The BayesSUR package implements many of these possible choices for high-dimensional multiresponse regressions by allowing the user to choose among three different prior structures for the residual covariance matrix and among three priors for the joint distribution of the variable selection indicators. This includes a novel model setup, where the MRF prior for incorporating prior knowledge about the dependence structure of the inclusion indicators is combined with Bayesian SUR models (Zhao, Banterle, Lewin, and Zucknick 2021). BayesSUR employs ESS as a basic variable selection algorithm. Models specification The BayesSUR package fits a Bayesian seemingly unrelated regression model with a number of options for variable selection, and where the covariance matrix structure is allowed to be diagonal, dense or sparse. It encompasses three classes of Bayesian multi-response linear regression models: hierarchical related regressions (HRR, Richardson et al. 2011), dense and sparse seemingly unrelated regressions (dSUR and SSUR, Bottolo et al. 2021), and the structured seemingly unrelated regression, which makes use of a Markov random field (MRF) prior ). The regression model is written as where Y is a n × s matrix of outcome variables with s × s covariance matrix C, X is a n × p matrix of predictors for all outcomes, B is a p × s matrix of regression coefficients, U is the matrix of residuals, vec(·) indicates the vectorization of a matrix, N (µ, Σ) denotes a normal distribution with mean vector µ and covariance matrix Σ, 0 denotes a column vector with all elements zero, ⊗ is the Kronecker product and I n is the identity matrix of size n × n. We use a binary latent indicator matrix Γ = {γ jk } to perform variable selection. A spikeand-slab prior is used to find a sparse relevant subset of predictors that explain the variability Table 1: Nine models across three priors of C by three priors of Γ. The precision matrix W is generally decomposed into a shrinkage coefficient and a matrix that governs the covariance structure of the regression coefficients. Here we use W = w −1 I sp , meaning that all the regression coefficients are a priori independent, with an inverse gamma hyperprior on the shrinkage coefficient w, i.e., w ∼ IGamma(a w , b w ). The binary latent indicator matrix Γ has three possible options for priors: the independent hierarchical Bernoulli prior, the hotspot prior and the MRF prior. The covariance matrix C also has three possible options for priors: the independent inverse gamma prior, the inverse Wishart prior and hyper-inverse Wishart prior. Thus, we consider nine possible models (Table 1) across all combinations of three priors for C and three priors for Γ. Hierarchical related regression (HRR) The hierarchical related regression model assumes that C is a diagonal matrix which translates into conditional independence between the multiple response variables, so the likelihood factorizes across responses. An inverse gamma prior is specified for the residual covariance, i.e., σ 2 k ∼ IGamma(a σ , b σ ) which, combined with the priors in (2) is conjugate with the likelihood in the model in (1). 
We can thus sample the variable selection structure Γ marginally with respect to C and B. For inference for this model, Richardson et al. (2011) implemented the hierarchical evolutionary stochastic search algorithm (HESS). HRR with independent Bernoulli prior For a simple independent prior on the regression model selection, the binary latent indicators follow a Bernoulli prior γ jk |ω jk ∼ Ber(ω j ), j = 1, · · · , p, k = 1, · · · , s, with a further hierarchical Beta prior on ω j , i.e., ω j ∼ Beta(a ω , b ω ), which quantifies the probability for each predictor to be associated with any response variable. Richardson et al. (2011) and Bottolo et al. (2011) proposed decomposing the probability of association parameter ω jk in (4) as ω jk = o k × π j , where o k accounts for the sparsity of each response model and π j controls the propensity of each predictor to be associated with multiple responses simultaneously. HRR with MRF prior To consider the relationship between different predictors and associate highly correlated responses with the same predictors, we set a Markov random field prior on the latent binary vector γ where G is an adjacency matrix containing prior information about similarities amongst the binary model selection indicators γ = vec(Γ). The parameters d and e are treated as fixed in the model. Alternative approaches include the use of a hyperprior on e (Stingo, Chen, Tadesse, and Vannucci 2011) or to fit the model repeatedly over a grid of values for these parameters, in order to detect the phase transition boundary for e (Lee et al. 2017) and to identify a sensible combination of d and e that corresponds to prior expectations of overall model sparsity and sparsity for the MRF graph. Dense seemingly unrelated regression (dSUR) The HRR models in Section 2.1 assume residual independence between any two response variables because of the diagonal matrix C in (3). It is possible to estimate a full covariance matrix by specifying an inverse Wishart prior, i.e., C ∼ IW(ν, τ I s ). To avoid estimating the dense and large covariance matrix directly, Bottolo et al. (2021) exploited a factorization of the dense covariance matrix to transform the parameter space (ν, τ ) of the inverse Wishart distribution to space {σ 2 k , ρ kl |σ 2 k : k = 1, · · · , s; l < k}, with priors Here, we assume that τ ∼ Gamma(a τ , b τ ). Thus, model (1) is rewritten as where u l = y l − Xβ l and β l is the lth column of B, so again the likelihood is factorized across responses. Similarly to the HRR model, employing either the simple independence prior (4), the hotspot prior (5) or the MRF prior (6) for the indicator matrix Γ results in different sparsity specifications for the regression coefficients in the dSUR model. The marginal likelihood integrating out B is no longer available for this model, so joint sampling of B, Γ and C is required. However, the reparameterization of the model (8) enables fast computation using the MCMC algorithm. Sparse seemingly unrelated regression (SSUR) Another approach to model the covariance matrix C is to specify a hyper-inverse Wishart prior, which means the multiple response variables have an underlying graph G encoding the conditional dependence structure between responses. In this setup, a sparse graph corresponds to a sparse precision matrix C −1 . From a computational point of view, it is infeasible to specify a hyper-inverse Wishart prior directly on C −1 in high dimensions (Carvalho et al. 
2007; Jones, Carvalho, Dobra, Hans, Carter, and West 2005; Uhler, Lenkoski, and Richards 2018; Deshpande, Ročková, and George 2019). However, Bottolo et al. (2021) used a transformation of C to factorize the likelihood as in Equation 8. In the transformed variables, the hyper-inverse Wishart distribution, i.e., C ∼ HIW_G(ν, τ I_s), becomes a prior on the scalar variance σ²_qt and the associated correlation vector ρ_qt = (ρ_{1,qt}, ρ_{2,qt}, · · · , ρ_{t−1,qt}), as given in (10), where Q is the number of prime components in the decomposable graph G, and S_q and R_q are the separators and residual components of G, respectively. |S_q| and |R_q| denote the number of variables in these components. For more technical details, please refer to Bottolo et al. (2021). As prior for the graph we use a Bernoulli prior with probability η on each edge E_{kk′} of the graph. The three priors on β_γ, i.e., the independence (4), hotspot (5) and MRF (6) priors, can be used in the SSUR model. MCMC sampler and posterior inference To sample from the posterior distribution, we use the evolutionary stochastic search algorithm (Bottolo et al. 2011; Lewin et al. 2015b), which uses a particular form of evolutionary Monte Carlo (EMC) introduced by Liang and Wong (2000). Multiple tempered Markov chains are run in parallel and both exchange and crossover moves are allowed between the chains to improve mixing between potentially different modes in the posterior. Note that we run multiple tempered chains at the same temperature instead of a ladder of different temperatures as was proposed in the original implementations of the (H)ESS sampler in Bottolo and Richardson (2010); Bottolo et al. (2011); Lewin et al. (2015b). The temperature is adapted during the burn-in phase of the MCMC sampling. The main chain samples from the un-tempered posterior distribution, which is used for all inference. For each response variable, we use a Gibbs sampler to update the regression coefficient vector β_k (k = 1, · · · , s), based on the conditional posterior corresponding to the specific model selected among the models presented in Sections 2.2 and 2.3. After L MCMC iterations, we obtain B^(1), · · · , B^(L) and the estimate of the posterior mean is B̂ = (L − b)^{−1} Σ_{t=b+1}^{L} B^(t), where b is the number of burn-in iterations. Posterior full conditionals are also available to update σ²_k and ρ_kl for the dSUR model and σ²_qt and ρ_qt for the SSUR model. In the HRR models in Section 2.1, the regression coefficients and residual covariances have been integrated out and therefore the MCMC output cannot be used directly for posterior inference of these parameters. However, for B, the posterior distribution conditional on Γ can be derived analytically for the HRR models, and this is the output for B that is provided in the BayesSUR package for HRR models. At MCMC iteration t we also update each binary latent vector γ_k (k = 1, · · · , s) via a Metropolis-Hastings sampler, jointly proposing an update for the corresponding β_k. After L iterations, using the binary matrices Γ^(1), · · · , Γ^(L), the marginal posterior inclusion probabilities (mPIP) of the predictors are estimated by Γ̂ = (L − b)^{−1} Σ_{t=b+1}^{L} Γ^(t). In the SSUR models, another important parameter is G in the hyper-inverse Wishart prior for the covariance matrix C. It is updated by the junction tree sampler (Green and Thomas 2013; Bottolo et al. 2021) jointly with the corresponding proposal for σ²_qt and ρ_qt|σ²_qt in (10).
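As a sketch of how the posterior mean estimators B̂ and Γ̂ above could be computed from stored MCMC output (B_samples and Gamma_samples are hypothetical names for lists of length L holding the sampled matrices; b is the number of burn-in iterations):

R> post_mean <- function(samples, b) {
+    kept <- samples[(b + 1):length(samples)]
+    Reduce(`+`, kept) / length(kept)
+  }
R> B_hat <- post_mean(B_samples, b)      # posterior mean of the coefficients
R> mPIP  <- post_mean(Gamma_samples, b)  # marginal posterior inclusion probabilities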
At each MCMC iteration we then extract the adjacency matrix G (t) (t = 1, · · · , L), from which we derive posterior mean estimators of the edge inclusion probabilities aŝ Note that even though a priori the graph G is decomposable, the posterior mean estimateĜ can be outside the space of decomposable models (see Bottolo et al. 2021). The hyper-parameter τ in the inverse Wishart prior or hyper-inverse Wishart prior is updated by a random walk Metropolis-Hastings sampler. The hyper-parameter η and the variance w in the spike-and-slab prior are sampled from their posterior conditionals. For details see Bottolo et al. (2021). The R package BayesSUR The package BayesSUR is available from CRAN at http://CRAN.R-project.org/package= BayesSUR. This article refers to version 2.0-1. The main function is BayesSUR(), which has various arguments that can be used to specify the models introduced in Section 2, by setting the priors for the covariance matrix C and the binary latent indicator matrix Γ. In addition, MCMC parameters (nIter, burnin, nChains) can also be defined. The following syntax example introduces the most important function arguments, which are further explained below. The full list of all arguments in function BayesSUR() is given in Table 2. BayesSUR(data, Y, X, X_0, covariancePrior, gammaPrior, nIter, burnin, nChains, ...) The data can be provided as a large combined numeric matrix [Y, X, X 0 ] of dimension n × (s + p) via the argument data; in that case the arguments Y, X and X_0 need to contain the dimensions of the individual response variables Y, predictors under selection X and fixed predictors X 0 (i.e., mandatory predictors that will always be included in the model). Alternatively, it is also possible to supply X 0 , X and Y directly as numeric matrices via the arguments X_0, X and Y. In that case, argument data needs to be NULL, which is the default. The arguments covariancePrior and gammaPrior specify the different models introduced in Section 2. When using the Markov random field prior (6) for the latent binary vector γ, an additional argument mrfG is needed to assign the edge potentials; this can either be specified as a numeric matrix or as a file directory path leading to a text file with the corresponding information. For example, the HRR model with independent hierarchical prior in Section 2.1 is specified by (covariancePrior = "IG", gammaPrior = "hierarchical"), the dSUR model with hotspot prior in Section 2.2 by (covariancePrior = "IW", gammaPrior = "hotspot") and the SSUR model with MRF prior in Section 2.3 for example by (covariancePrior = "HIW", gammaPrior = "MRF", mrfG = "mrfFile.txt"). The MCMC parameter arguments nIter, burnin and nChains indicate the total number of MCMC iterations, the number of iterations in the burn-in period and the number of parallel tempered chains in the evolutionary stochastic search MCMC algorithm, respectively. See Section 2.4 and, e.g., Bottolo and Richardson (2010) for more details on the ESS algorithm. The main function BayesSUR() is used to fit the model. It returns an object of S3 class 'BayesSUR' in a list format, which includes the input parameters and directory paths of output text files, so that other functions can retrieve the MCMC output from the output files, load them into R and further process the output for posterior inference of the model output. In particular, a summary() function has been provided for 'BayesSUR' class objects, which is used to summarize the output produced by BayesSUR(). 
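A minimal illustrative call, using simulated data and deliberately short MCMC settings (our own sketch; the argument names are those listed in Table 2, but the values are purely illustrative and far too small for a real analysis):

R> library("BayesSUR")
R> set.seed(1)
R> n <- 50; s <- 3; p <- 20
R> X <- matrix(rnorm(n * p), n, p)
R> Y <- X[, 1:2] %*% matrix(1, 2, s) + matrix(rnorm(n * s), n, s)
R> fit <- BayesSUR(Y = Y, X = X, outFilePath = tempdir(),
+    covariancePrior = "HIW", gammaPrior = "hotspot",
+    nIter = 2000, burnin = 1000, nChains = 2)
R> summary(fit)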
For this purpose, a number of predictors are selected into the model by thresholding the posterior means of the latent indicator variables. By default, the threshold is 0.5, i.e., variable j is selected into the model for response k ifγ jk > 0.5. The summary() function also outputs the quantiles of the conditional predictive ordinates (CPO, Gelfand 1996), top predictors with respect to average marginal Argument Description Numeric matrix or indices with respect to the argument data for the responses. X Numeric matrix or indices with respect to the argument data for the predictors. X_0 Numeric matrix or indices with respect to the argument data for predictors forced to be included (i.e., that are not part of the variable selection procedure). Default is NULL. outFilePath Directory path where the output files are saved. covariancePrior Prior for the covariance matrix; "IG": independent inverse gamma prior, "IW": inverse Wishart prior, "HIW": hyper-inverse Wishart prior (default). gammaPrior Prior for the binary latent variable Γ; "hierarchical": Bernoulli prior, "hotspot": hotspot prior (default), "MRF": Markov random field prior. mrfG A numeric matrix or a path to the file containing (the edge list of) the G matrix for the MRF prior on Γ. Default is NULL. nIter Total number of MCMC iterations. burnin Number of iterations in the burn-in period. nChains Number of parallel chains in the evolutionary stochastic search MCMC algorithm. gammaSampler Local move sampler for the binary latent variable Γ, either (default) "bandit" for a Thompson-sampling inspired sampler or "MC3" for the usual M C 3 sampler. gammaInit Γ initialization to either all zeros ("0"), all ones ("1"), MLE-informed ("MLE") or (default) randomly ("R"). hyperpar A list of named prior hyperparameters to use instead of the default values, including a_w, b_w, a_sigma, b_sigma, a_omega, b_omega, a_o, b_o, a_pi, b_pi, nu, a_tau, b_tau, a_eta and b_eta. They Maximum threads used for parallelization. Default is 1. output_* Allow (TRUE) or suppress (FALSE) the output for *; possible outputs are Γ, G, B, σ, π, tail (hotspot tail probability, see Bottolo and Richardson 2010) or model_size. Default is all: TRUE. tmpFolder The path to a temporary folder where intermediate data files are stored (will be erased at the end of the MCMC sampling). It is specified relative to outFilePath. Function Description BayesSUR() Main function of the package. Fits any of the models introduced in Section 2. Returns an object of S3 class 'BayesSUR', which is a list which includes the input parameters (input) and directory paths of output text files (output), as well as the run status and function call. print() Print a short summary of the fitted model generated by BayesSUR(), which is an object of class 'BayesSUR'. summary() Summarize the fitted model generated by BayesSUR(), which is an object of class 'BayesSUR'. coef() Extract the posterior mean of the coefficients of a 'BayesSUR' class object. fitted() Return the fitted response values that correspond to the posterior mean of the coefficients matrix of a 'BayesSUR' class object. predict() Predict responses corresponding to the posterior mean of the coefficients, return posterior mean of coefficients or indices of non-zero coefficients of a 'BayesSUR' class object. plot() Main plot function to be called by the user. 
Creates a selection of plots for a 'BayesSUR' class object by calling one or several of the specific plot functions below as specified by the combination of the two arguments estimator and type. elpd() Measure the prediction accuracy by the expected log pointwise predictive density (elpd). The out-of-sample predictive fit can either be estimated by Bayesian leave-one-out cross-validation (LOO) or by widely applicable information criterion (WAIC, Vehtari et al. 2017). See Appendix A for details. getEstimator() Extract the posterior mean of the parameters (i.e., B, Γ and G) of a 'BayesSUR' class object. Also, the log-likelihood of Γ, model size and G can be extracted for the MCMC diagnostics. plotEstimator() Plot the estimated relationships between response variables and estimated coefficients of a 'BayesSUR' class object with argument estimator = c("beta", "gamma", "Gy"). plotGraph() Plot the estimated graph for multiple response variables from a 'BayesSUR' class object with argument estimator = "Gy". plotNetwork() Plot the network representation of the associations between responses and predictors, based on the estimatedΓ matrix of a 'BayesSUR' class object with argument estimator = c("gamma", "Gy"). plotManhattan() Plot Manhattan-like plots for marginal posterior inclusion probabilities (mPIP) and numbers of responses of association for predictors of a 'BayesSUR' class object with argument estimator = "gamma". plotMCMCdiag() Show trace plots and diagnostic density plots of a 'BayesSUR' class object with argument estimator = "logP". plotCPO() Plot the conditional predictive ordinate (CPO) for each individual of a fitted model generated by BayesSUR() with argument estimator = "CPO". CPO is used to identify potential outliers (Gelfand 1996). To use a specific estimator, the function getEstimator() is convenient to extract point estimates of the coefficients matrixB, latent indicator variable matrixΓ or learned structurê G from the directory path of the model object. All point estimates are posterior means, thusγ jk is the marginal posterior inclusion probability for variable j to be selected in the regression for response k, andĜ kl is the marginal posterior edge inclusion probability between responses k and l, i.e., the marginal posterior probability of conditional dependence between k and l. The regression coefficient estimatesB can be the marginal posterior means over all models, independently ofΓ (with default argument beta.type = "marginal"). Then, β jk represents the shrunken estimate of the effect of variable j in the regression for response k. Alternatively,β jk can be the posterior mean conditional on γ jk = 1 with argument beta.type = "conditional". If beta.type = "conditional" and Pmax = 0.5 are chosen, then these conditionalβ jk estimates correspond to the coefficients in a median probability model (Barbieri and Berger 2004). In addition, the generic S3 methods coef(), predict(), and fitted() can be used to extract regression coefficients, predicted responses, or indices of non-zero coefficients, all corresponding to the posterior mean estimates of an 'BayesSUR' object. The main function for creating plots of a fitted BayesSUR model, is the generic S3 method plot(). It creates a selection of the above plots, which the user can specify via the estimator and type arguments. If both arguments are set to NULL (default), then all available plots are shown in an interactive manner. The main plot() function uses the following specific plot functions internally. 
These can also be called directly by the user. The function plotEstimator() visualizes the three estimators. To show the relationship of multiple response variables with each other, the function plotGraph() prints the structure graph based onĜ. Furthermore, the structure relations between multiple response variables and predictors can be shown via function plotNetwork(). The marginal posterior probabilities of individual predictors are illustrated via the plotManhattan() function, which also shows the number of associated response variables of each predictor. Model fit can be investigated with elpd() and plotCPO(). elpd() estimates the expected log pointwise predictive density (Vehtari et al. 2017) to assess out-of-sample prediction accuracy. plotCPO() plots the conditional predictive ordinate for each individual, i.e., the leave-oneout cross-validation predictive density. CPOs are useful for identifying potential outliers (Gelfand 1996). To check convergence of the MCMC sampler, function plotMCMCdiag() prints traceplots and density plots for moving windows over the MCMC chains. The igraph package (Csárdi and Nepusz 2006) is used for constructing the graph plots. Note that the igraph package creates the layout in a dynamic way, which is determined among other things by the size of the figure window. The layout of the plots obtained with the replication material may thus differ from those shown in the manuscript. Quick start with a simple example In the following example, we illustrate a simple simulation study where we run two models: the default model choice, which is an SSUR model with the hotspot prior, and in addition an SSUR model with the MRF prior. The purpose of the latter is to illustrate how we can construct an MRF prior graph. We simulate a data set X with dimensions n × p = 10 × 15, i.e., 10 observations and 15 input variables, a sparse coefficients matrix B with dimension p × s = 15 × 3, which creates associations between the input variables and s = 3 response variables, and random noise E. The response matrix is generated by the linear model Y = XB + E. R> plot(fit, estimator = c("beta", "gamma"), type = "heatmap", + fig.tex = TRUE, output = "exampleEst", xlab = "Predictors", + ylab = "Responses") Before running the SSUR model with the MRF prior, we need to construct the edge potentials matrix G. If we assume (in accordance with the true matrix B in this simulation scenario) that the second and third predictors are related to the first two response variables, this implies that γ 21 , γ 22 , γ 31 and γ 32 are expected to be related and therefore we might want to encourage these variables to be selected together. In addition, we assume that we know that the first and fourth predictors are associated with the third response variable, and therefore we encourage the selection of γ 13 as well. Since matrix G represents prior relations of any two predictors corresponding to vec{Γ}, it can be generated by the following code: R> G <-matrix(0, ncol = s * p, nrow = s * p) R> combn1 <-combn(rep((1:2 -1) * p, each = length(2:3)) + + rep(2:3, times = length(1:2)), 2) R> combn2 <-combn(rep((3-1) * p, each = length(c(1, 4))) + + rep(c(1, 4), times = length (3) Calling BayesSUR() with the argument gammaPrior = "MRF" will run the SSUR model with the MRF prior, and the argument mrfG = G imports the edge potentials for the MRF prior. 
The two hyper-parameters d and e for the MRF prior (6) Two extended examples based on real data In this section, we use a simulated eQTL data set and real data from a pharmacogenomic database to illustrate the usage of the BayesSUR package. The first example is under the known true model and demonstrates the recovery performance of the models introduced in Section 2. It also demonstrates a full data analysis step by step. The second example illustrates how to use potential relationships between multiple response variables and input predictors as the prior information in Bayesian SUR models and showcases how the resulting estimated graph structures can be visualized with functions provided in the package. Simulated eQTL data Similarly to Bottolo et al. (2021), we simulate single nucleotide polymorphism (SNP) data X by resampling from the scrime package (Schwender 2018), with p = 150 SNPs and n = 100 subjects. To construct multiple response variables Y (with s = 10) with structured correlation -which we imagine to represent gene expression measurements of genes that are potentially affected by the SNPs -we first fix a sparse latent indicator variable Γ and then design a decomposable graph for responses to build association patterns between multi-response variables and predictors. The non-zero coefficients are sampled from the normal distribution independently and the noise term from a multivariate normal distribution with the precision matrix sampled from the G-Wishart distribution W G (2, M ) (Mohammadi and Wit 2019). Finally, the simulated gene expression data Y is then generated from the linear model (1). The concrete steps are as follows: • Simulate SNPs data X from the scrime package, dim(X) = n × p. • Design a decomposable graph G as in the right panel of Figure 3, dim(G) = s × s. • Design a sparse matrix Γ as in the left panel of Figure 3, dim(Γ) = p × s. The resulting average signal-to-noise ratio is 25. The R code for the simulation can be found through help("exampleEQTL"). R> attach(exampleEQTL) In the BayesSUR package, the data Y and X are provided as a numeric matrix in the first list component data of the example data set exampleEQTL. Here the first 10 columns of data are the Y variables, and the last 150 columns are the X variables. The second component of exampleEQTL is blockList which specifies the indices of Y and X in data. The third component is the true latent indicator matrix Γ of regression coefficients. The fourth component is the true graph G between response variables. Throughout this section we attach the data set for more concise R code. Figure 3 shows the true Γ and decomposible graph G used in the eQTL simulation scenario. The following code shows how to fit an SSUR model with hotspot prior for the indicator variables Γ and the sparsity-inducing hyper-inverse Wishart prior for the covariance using the main function BayesSUR(). Figure 5: The estimated structure of the ten response variables is visualized by plot(fit, estimator = "Gy", type = "graph") withĜ thresholded at 0.5 (left). The true structure is shown with plotGraph(Gy), where Gy is the true adjacency matrix (right). Figure 4 summarizes the posterior inference results by plots forB,Γ andĜ created with the function plot() with arguments estimator = c("beta", "gamma", "Gy") and type = "heatmap". When comparing with Figure 3, we see that this SSUR model has good recovery of the true latent indicator matrix Γ and of the structure of the responses as represented by G. 
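The post-processing steps used in this example can be sketched as follows (assuming fit is the fitted object from the call above; the 0.5 threshold corresponds to the median probability model mentioned earlier, and the argument names follow the function descriptions given above):

R> gamma_hat <- getEstimator(fit, estimator = "gamma")   # mPIP matrix (p x s)
R> G_hat <- getEstimator(fit, estimator = "Gy")          # edge inclusion probabilities
R> selected <- which(gamma_hat > 0.5, arr.ind = TRUE)    # selected (predictor, response) pairs
R> plot(fit, estimator = "gamma", type = "heatmap")
R> plot(fit, estimator = "Gy", type = "graph")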
The function plot() can also visualize the estimated structure of the ten gene expression variables as shown in the right panel of Figure 5 with arguments estimator = "Gy" and type = "graph". For comparison, the true structure is shown in the left panel (created by function plotGraph()). When we threshold the posterior selection probability estimates for G and for Γ at 0.5, the resulting full network between the ten gene expression variables and 150 SNPs is displayed in Figure 6. Furthermore, the Manhattan-like plots in Figure 7 show both, the marginal posterior inclusion probabilities (mPIP) of the SNP variables (top panel) and the number of gene expression response variables associated with each SNP (bottom panel). after subtracting the burn-in length. The genomics of drug sensitivity in cancer data In this section we analyze a subset of the Genomics of Drug Sensitivity in Cancer (GDSC) data set from a large-scale pharmacogenomic study (Yang et al. 2013;Garnett et al. 2012). We analyze the pharmacological profiling of n = 499 cell lines from p 0 = 13 different tissue types for s = 7 cancer drugs. The sensitivity of the cell lines to each of the drugs was summarized by the log(IC 50 ) values estimated from in vitro dose response experiments. The cell lines are characterized by p 1 = 343 selected gene expression features (GEX), p 2 = 426 genes affected by copy number variations (CNV) and p 3 = 68 genes with point mutations (MUT). The data sets were downloaded from ftp://ftp.sanger.ac.uk/pub4/cancerrxgene/releases/release-5.0/ and processed as described in help("exampleGDSC"). Gene expression features are logtransformed. Garnett et al. (2012) provide the target genes or pathways for all drugs. The aim of this study was to identify molecular characteristics that help predict the response of a cell line to a particular drug. Because many of the drugs share common targets and mechanisms of action, the response of cell lines to many of the drugs is expected to be correlated. Therefore, a multivariate model seems appropriate: where the elements of B 0 and non-zero elements of B 1 , B 2 and B 3 are independent and identically distributed with the prior N (0, w). We may know the biological relationships within and between drugs and molecular features, so that the MRF prior (6) can be used to learn the above multivariate model well. In our example, we know that the four drugs RDEA119, PD-0325901, CI-1040 and AZD6244 are MEK inhibitors which affect the MAPK/ERK pathway. Drugs Nilotinib and Axitinib are Bcr-Abl tyrosine kinase inhibitors which inhibit the mutated BCR-ABL gene. Finally, the drug Methotrexate is a chemotherapy agent and general immune system suppressant, which is not associated with a particular molecular target gene or pathway. For the target genes (and genes in target pathways) we consider all characteristics (GEX, CNV, MUT) available in our data set as being potentially associated. Based on this information, we construct edge potentials for the MRF prior: • edges between all features representing genes in the MAPK/ERK pathway and the four MEK inhibitors; • edges between all features representing the Bcr-Abl fusion gene and the two Bcr-Abl inhibitors, see illustration in Figure 9(a); Figure 9: Illustration of the relationship between drugs and a group of related genes. The left panel is for the Bcr-Abl fusion gene and the corresponding related genes. The right panel is for all drugs and gene TP35 as one example with features representing all three data sources. 
The names with suffix ".GEX", ".CNV" and ".MUT" are features of expression, copy number variation and mutation, respectively. • edges between all features from different data sources (i.e., GEX, CNV and MUT) representing a gene and all drugs, see illustration in Figure 9(b). By matching the selected genes with the gene set of the MAPK/ERK pathway from the KEGG database, 57 features are considered to be connected to the four MEK inhibitors. The two genes (i.e., BCR and ABL) representing the Bcr-Abl fusion are connected with five features in the data set, which are BCR-ABL mutation, BCR gene expression, BCR copy number variation, ABL gene expression and ABL copy number variation (Figure 9(a)). In addition, there are 347 small feature groups representing the different available data sources for each of the genes in the data set, which are potentially connected to all drugs. Figure 9(a) illustrates the edges between drugs Nilotinib, Axitinib and the related genes of the Bcr-Abl fusion gene, and Figure 9(b) uses the TP53 gene as an example for how the different data sources representing a gene are related to each drug, thus linking the data sources together. Based on this information, we construct an edge list of the matrix G for the MRF prior. First, we load and attach the data. Note that in this example, we illustrate the use of the specific plot functions plotEstimator(), plotGraph() and plotNetwork(), which are called directly here rather than via the generic plot() function as in the examples above. R> data("exampleGDSC", package = "BayesSUR") R> attach(exampleGDSC) The following code chunk will run the MCMC sampler to fit the model. This represents a full analysis, which might take several hours to run with the chosen MCMC parameter values (nIter = 200000, nChains = 6, burnin = 100000) and no parallelization (maxThreads = 1 by default). Approximate results for an initial assessment of the model can be achieved with much shorter MCMC runs. Note that we use the X_0 argument for the thirteen cancer tissue types, which are included in the model as mandatory predictors that are always selected. R> hyperpar <-list(mrf_d = -3, mrf_e = 0.2) R> set.seed(6437) After fitting an SSUR model with the MRF prior, the structure of the seven drugs, G, has been learned as illustrated in Figure 10, where edges between two drugs k and k indicate thatĜ kk > 0.5. All expected associations between the drugs within each drug group are found, but some additional connections are also identified: there are edges between Axitinib and Methotrexate and between CI-1040 and both Nilotinib and Axitinib. Conclusion The BayesSUR package presents a series of multivariate Bayesian variable selection models, for which the ESS algorithm is employed for posterior inference over the model space. It provides a unified R package and a consistent interface for the C++ implementations of individual models. The package supports all combinations of the covariance priors and variable selection priors from Section 2 in the Bayesian HRR and SUR model frameworks. This includes the MRF prior on the latent indicator variables to allow the user to make use of prior knowledge of the relationships between both response variables and predictors. To overcome the computational cost for data sets with large numbers of input variables, parallel processing is also implemented with respect to multiple chains, and for calculation of likelihoods of parameters and samples, although the MCMC algorithm itself is still challenging to be parallelized. 
We demonstrated the modeling aspects of variable selection and structure recovery to identify relationships between multivariate (potentially high-dimensional) responses as well as between responses and high-dimensional predictors, by applying the package to a simulated eQTL data set and to pharmacogenomic data from the GDSC project. Possible extensions of the R package include the implementation of different priors to introduce even more flexibility in the modeling choices. In particular, the g-prior could be considered for the regression coefficients matrix B Richardson et al. 2011;Lewin et al. 2015b), whereas currently only the independence prior is available. In addition, the spike-and-slab prior on the covariance matrix C (Wang 2015;Banerjee and Ghosal 2015;Deshpande et al. 2019) might be useful, or the horseshoe prior on the latent indicator variable Γ, which was recently implemented in the multivariate regression setup by Ruffieux et al. (2020). where Var[·] denotes the variance of logarithm y i |y that can be estimated from the MCMC iterations.
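For reference, the WAIC-based estimate of the elpd in Vehtari et al. (2017), to which the variance term above belongs, has the standard form (our restatement; the notation in the original appendix may differ), where θ^(s), s = 1, ..., S, are posterior draws:

\[
\widehat{\mathrm{elpd}}_{\mathrm{WAIC}}
= \sum_{i=1}^{n} \log\!\Big(\frac{1}{S}\sum_{s=1}^{S} p\big(y_i \mid \theta^{(s)}\big)\Big)
- \sum_{i=1}^{n} \mathrm{Var}_{s}\!\left[\log p\big(y_i \mid \theta^{(s)}\big)\right].
\]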
v3-fos-license
2020-05-16T14:41:11.739Z
2020-05-15T00:00:00.000
218653729
{ "extfieldsofstudy": [ "Medicine", "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-020-63363-3.pdf", "pdf_hash": "1d81df9ca9830f4403d1309a049f0cb2aa6e3852", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41194", "s2fieldsofstudy": [ "Physics" ], "sha1": "1d81df9ca9830f4403d1309a049f0cb2aa6e3852", "year": 2020 }
pes2o/s2orc
Ab Initio and Theoretical Study on Electron Transport through Polyene Junctions in between Carbon Nanotube Leads of Various Cuts. In this study we look into the interference effect in multi-thread molecular junctions in between carbon-nanotube (CNT) electrodes of assorted edges. From the tube end into the tube bulk of selected CNTs, we investigate surface Green's function and layer-by-layer local density of states (LDOS), and find that both the cross-cut and the angled-cut armchair CNTs exhibit 3-layer-cycled LDOS oscillations. Moreover, the angled-cut armchair CNTs, which possess a zigzag rim at the cut, exhibit not only the oscillations, but also edge state component that decays into the tube bulk. In the case of cross-cut zigzag CNTs, the LDOS shows no sign of oscillations, but prominent singularity feature due to edge states. With these cut CNTs as leads, we study the single-polyene and two-polyene molecular junctions via both ab initio and tight-binding model approaches. While the interference effect between transport channels is manifested through our results, we also differentiate the contributions towards transmission from the bulk states and the edge states, by understanding the difference in the Green's functions obtained from direct integration method and iterative method, separately. Since the discovery of carbon nanotubes (CNTs) in 1991 1 , the properties of these fascinating quasi-one-dimensional, nano-scaled materials have been extensively pursued [2][3][4] . Such a trend has been enhanced further as graphene was rediscovered in 2004, which brought even more attention toward all different materials based on the honeycomb carbon structure. Later as the theoretical research turned to look at the boundaries of these materials, such as edges of graphene, graphite or nano ribbons [5][6][7][8][9][10][11] , or finite-sized CNTs of different tips [12][13][14][15] , experiments kept up as well. Scanning tunneling microscopy studies [16][17][18][19][20] on LDOS of different graphite edges have been performed, and the edge states of graphite are confirmed. However, similar experimental efforts on CNTs boundaries 21,22 are even more challenging and yet to be solidly realized. Due to the relatively well-defined lead-molecule covalent bonding and the finite transport channel involved, prospected nano-devices 23 using CNTs is an alternative and interesting choice, compared with the largely studied single-molecule junctions with metal-molecule linkages [24][25][26] . The CNTs, proposed to serve as the junction connecting the electrodes, or the electrodes themselves, or forming a heterostructured junction, give rise to peculiar properties in transport through these devices [27][28][29][30] . In recent years, molecular junctions employing CNT electrodes have been fabricated and measured by much more controlled and developed strategies. In some cases a single protein can be site-specifically attached to the leads 31 , while in others the linkage morphology, or the question of whether the linkage is formed by multiple molecules 32 , becomes important and therefore requires further confirmation. As the idea of a single-molecule transistor is gradually realized in these systems, the properties of the CNT leads and the details of the molecule-CNT contact are both crucial for further pursuit. Interference effect [33][34][35] , among all the above, should accompany the formation of multi-molecule junctions in between CNT leads, and is an example that illustrates the interplay between the two roles. 
In this study, we aim at understanding the assorted cross-cut or angled-cut semi-infinite CNTs and the transport through molecular junctions bridging such leads. On the one hand we use a one-parameter tight-binding (TB) model that describes the ppπ hopping in the CNTs' honeycomb network; on the other hand we perform ab initio calculations. Method The ab initio approach. We use the geometry relaxation process implemented in the SIESTA package 36 to optimize the CNT bulk structure, with a force criterion of 0.02 eV/Å, an energy cutoff of 200 Ryd, the double-ζ plus polarization (DZP) basis, and the Ceperley-Alder local density approximation (LDA) for the exchange and correlation functional. A rectangular supercell of 20 Å × 20 Å × a is used, where a, the lengthwise z-dimension of the unit cell, is to be optimized. With this choice of supercell, the distance between nearest tube walls is 9 Å, which allows enough vacuum to pass the bond-length convergence test. In a similar way, the structures of the junctions are then relaxed. For each junction we use a rectangular supercell of 20 Å × 20 Å × D Å, where D is the span of the junction. It contains 3 to 3.5 CNT unit cells from each lead, the hydrogen atoms that saturate the dangling bonds at the rims, and the bridging polyene molecule(s). Except for the carbon atoms in the CNT unit cells that sit farthest from the polyene molecules, which are kept fixed such that they hold the bulk CNT symmetry and bond lengths, all other atom positions are relaxed with no symmetry restriction. The relaxed junction structures are then put into the ab initio transport calculations, which are performed with the Nanodcal package 37 using the density functional LDA_PZ81 and the DZP basis. The non-equilibrium Green's function (NEGF) method 38 is adopted in this ab initio process, where the leads' self-consistent field (SCF) calculation uses the bulk CNT structure, and is followed by the SCF and transmission calculation for the open system of CNT-junction-CNT. Having done the transmission convergence test, we choose a 3-primitive-unit-cell buffer layer to sit at both sides of the scattering region, to form the linkages between the junction and the leads. Note that the Brillouin zone is sampled by a 1 × 1 × n k-point grid, i.e., with k-points along the tube axis only. Figure 1. Labeling of the sites on the rim of an angled-cut (n, n) CNT electrode, the correspondence to angle θ, and the illustration for nodes (circles with a cross inside). Here we literally use n = 8 to illustrate the labeling. The black dots and the white dots represent sites from two different graphene sublattices. Angle θ is also used to describe the relative positions of two contact sites, when two polyenes are in parallel and bridging the electrodes. The TB Hamiltonian includes the CNT leads, the polyene molecule(s), and the coupling between the molecules and the leads. We also consider the on-site energy on all carbon atoms to be a constant that equals the CNTs' Fermi energy E_F, and we set E_F ≡ 0. We calculate the surface Green's function of the electrodes with the TB Hamiltonian described above, via two different paths: (i) The iterative method: All CNT leads considered here have the semi-infinite symmetry. Based on such a symmetry, and this symmetry alone, one gets the self-consistent formula g_s = [(E + i0⁺)I − h − β† g_s β]⁻¹, where h is the Hamiltonian of the surface layer (i.e., the unit cell at the rim), β describes the coupling between the surface layer and the bulk (what is left as the surface layer is peeled off), and g_s is the surface Green's function.
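A minimal sketch of this fixed-point iteration (our own illustration in R; h and beta stand for the surface-layer and coupling blocks described above, eta is a small numerical broadening, and in practice some mixing or a renormalization-decimation scheme may be needed for fast convergence near band edges):

surface_gf <- function(E, h, beta, eta = 1e-6, tol = 1e-10, maxit = 10000) {
  z <- (E + 1i * eta) * diag(nrow(h))   # (E + i0+) I
  g <- solve(z - h)                     # start from the isolated surface layer
  for (it in seq_len(maxit)) {
    g_new <- solve(z - h - Conj(t(beta)) %*% g %*% beta)
    if (max(Mod(g_new - g)) < tol) return(g_new)
    g <- g_new
  }
  warning("surface Green's function iteration did not converge")
  g
}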
(ii) The integration method: Out of the electronic states of the infinite 2-D graphene, one finds the allowed k-point lines in the Brillouin zone for any specific (n, m) indices that describe the chirality of the CNT of interest 3. By doing so one obtains the (n, m) CNT band structure. Any specific cut of the (n, m) CNT determines a specific set of boundary conditions, namely, the nodal rim that defines the cut. From the pool of the (n, m) CNT's electronic states, we construct all possible linear combinations that give states vanishing at the nodal rim. These linearly combined states, satisfying the boundary condition of the cut, are built from pairs of wave vectors k and −k, composing the complete orthonormal basis that satisfies the angled-cut boundary conditions. Obviously the boundary conditions of the angled-cut armchair CNTs' rim are more complicated than those of the cross-cut tubes, in the sense that there are 2n non-equivalent sites appearing as nodes on the angled-cut rim, while all nodal sites of the cross-cut rim correspond to the same boundary condition. See illustrations in Figs. 1, 2 and 3. With this integration method, the Green's function thereby derived comes from the bulk's Bloch states only. In other words, evanescent waves are excluded in the integration method, while the iteration method includes everything. Our inspection via the integration method is therefore valuable in the sense that it helps to separate the contributions from the edge states and the bulk states.
Figure 4. The LDOS of the 4n carbon atoms in the surface unit cell (grey, from the iterative method), for (a) cross-cut (8,8), (b) cross-cut (9,9), (c) angled-cut (8,8), (d) angled-cut (9,9), (e) angled-cut (10,10), (f) angled-cut (12,12), (g) cross-cut (8,0), (h) cross-cut (9,0), (i) cross-cut (10,0), and (j) cross-cut (12,0), compared with the corresponding bulk unit cell DOS (orange). Edge states appear in all angled-cut (n, n) CNTs and all cross-cut (n, 0) CNTs.
Figure 5. Layer-by-layer LDOS, with panels (a)-(j) as in Figure 4. In each case the layer is sliced according to the shape of the cut, and each layer contains 4n carbon atoms, the same as a bulk unit cell. Green lines and cross symbols present results from the integration method; when in agreement with the bulk result (orange), only the integration method result is shown.
Results and Discussions
The LDOS at the rims of differently cut CNTs. The surface LDOS of selected CNTs are shown in Fig. 4. The data shown in grey are obtained from the iterative method. In order to compare with the bulk unit cell DOS (in orange), we sum up the contributions from the 4n carbon atoms (the same number of atoms as in a bulk unit cell) of the outermost layer along the cut. As the tubes are cut, bulk features are suppressed in various ways, subject to the different boundary conditions: in the cross-cut (n, n) (armchair) cases only the van Hove singularities originating from k = 0 band extrema are suppressed, while in the angled-cut (n, n) cases the boundary condition leads to entanglement among different bands, and therefore all van Hove singularities are suppressed. In the cases of even-n cross-cut (n, 0) (zigzag) CNTs, we also see the only un-suppressed singularities at E = ±t.
This comes from the dispersionless bands in the TB model, where no other k = 0 band extrema are present. Moreover, for the cross-cut (n, 0) CNTs and angled-cut (n, n) CNTs, a considerable amount of states pops up at E_F. The peak of these E_F states is wider in all angled-cut (n, n) cases, and not as singular as that in any cross-cut (n, 0) CNT case. The cross-cut (n, n) CNTs do not have such a peak. Next we investigate the layer-by-layer LDOS, at the energy E_F only. Results from path (i) and path (ii) are both shown in Fig. 5, to compare with the bulk value. It is clearly seen in (c)-(j) that the peak at E_F is due to edge states, and this is true for all angled-cut (n, n) CNTs and all cross-cut (n, 0) CNTs. While the LDOS of every cross-cut (n, 0) CNT decays monotonically to the bulk value, in the angled-cut (n, n) CNTs with n ≠ 3m (m an integer), oscillations with a three-layer cycle emerge from the decay of the edge state. As for the angled-cut (n, n) CNTs with n = 3m, the edge-state LDOS decays and converges to a single value instead. However, the oscillating LDOS is the signature feature of all cross-cut (n, n) CNTs and all angled-cut (n, n) CNTs. Even for angled-cut cases with n = 3m, the oscillations are still vaguely seen before the iterative LDOS fully converges to the single bulk value.
Transmission. With the n = 8 angled-cut armchair CNT leads, we consider C17H19, the 17-carbon polyene molecule(s), and perform the ab initio calculations for transmission through all non-equivalent one-polyene junctions (shown in Fig. 6) and selected two-polyene junctions (shown in Figs. 7 and 8), excluding two cases where the two polyenes would sit too close in reality. Depending on the contact sites, the gap span is tuned to allow the C17H19 polyene to fit in, whereas the choice of C17H19 is meant to prevent the two leads from touching at their most out-poking sites, even when the polyene is bridging their most in-tucking sites. For each junction the ab initio result is shown on top of that from the iterative method, and the contact site combinations are labeled with the index illustrated in Fig. 1. In all cases, the transmission results from both approaches reach agreement. In particular, via either approach, the two-polyene cases clearly show the transmission's dependence on the contact combination, and therefore reveal the interference effect. The ab initio central (E_F) features exhibit shifts with respect to the iterative method results. This is due to charge transfer, as discussed before in the literature 34, and maps to the TB model as a slightly negative effective molecular on-site energy. Other than this, the most obvious discrepancy mainly lies in the energy scale, as shown by the shifts between the relative off-center features of the two approaches. This is discussed in the next subsection. The two-polyene transmission is composed of contributions from the even and the odd channels, where the even (odd) channel is the even (odd) combination of the two real-space parallel polyenes. These two channels diagonalize the effective two-dimensional subspace characterizing the junction of two threads. In Fig. 9 we use the most destructive case (a) and the most constructive case (e) from Fig. 8 to illustrate how the individual contributions from the even and the odd channels are simply summed up to give the total transmission.
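The statement that the even and odd channels simply add can be checked with a minimal two-level sketch (ours, not the authors' model; the Hamiltonian, coupling values, and the wide-band treatment of the leads are illustrative assumptions rather than the parameters of the actual junctions). When the two threads and their lead couplings are equivalent, the even/odd transformation diagonalizes both the molecular subspace and the broadening matrices, so the Landauer transmission splits exactly into two independent channel contributions.

```python
import numpy as np

def transmission(E, H, gamma_L, gamma_R):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger], wide-band leads."""
    sigma = -0.5j * (gamma_L + gamma_R)          # retarded self-energy, wide-band limit
    G = np.linalg.inv(E * np.eye(H.shape[0]) - H - sigma)
    return np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real

# Two identical molecular threads (one level each) between common leads.
eps0, delta = -0.3, 0.0          # thread level and (optional) direct inter-thread coupling
gamma, c = 0.2, 0.8              # lead broadening; c encodes lead-mediated coherence
H = np.array([[eps0, delta], [delta, eps0]])
Gamma = gamma * np.array([[1.0, c], [c, 1.0]])

U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # even/odd combinations

for E in np.linspace(-1.0, 1.0, 5):
    T_total = transmission(E, H, Gamma, Gamma)
    # In the even/odd basis H and Gamma are simultaneously diagonal,
    # so the two channels are independent and their transmissions add.
    H_eo = U @ H @ U
    G_eo = U @ Gamma @ U
    T_even = transmission(E, H_eo[:1, :1], G_eo[:1, :1], G_eo[:1, :1])
    T_odd = transmission(E, H_eo[1:, 1:], G_eo[1:, 1:], G_eo[1:, 1:])
    print(f"E={E:+.2f}  T={T_total:.4f}  T_even+T_odd={T_even + T_odd:.4f}")
```

In this toy picture the parameter c plays the role of the lead-mediated coherence between the two contact sites; changing it (or the contact combination it stands for) shifts weight between the even and odd channels, which is how the interference shows up.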
Molecular orbitals and features. Note that the hopping energy t is fitted to the bulk's band feature in the vicinity of E_F. With this energy scale, we show the ab initio transmission data on top of the iterative method data. For all cases it appears that the TB polyene molecular level spacings are smaller than those in the ab initio version. To see this, we do the ab initio calculation for the polyene C17H19 in vacuum, via the SIESTA package with the same settings we use for the junctions. The HOMO and LUMO of C17H19 suggest an intra-molecular hopping that is 1.3 times the CNT hopping. As the polyenes act as the bridge in the junction, the average molecular C-C bond length increases by only 0.5%, while the standard deviation shrinks to under 22% of the vacuum value. Although the increased average and the shrunken standard deviation of the bond length should bring smaller molecular level spacings, all C-C bonds of the polyene in the junction are shorter than the bulk CNT's two different bond lengths (1.412 Å and 1.415 Å). This means that the intra-molecular hopping is still larger than the CNT hopping, even in the junction form, which explains the disagreement between the TB and the ab initio results on the energy spacings of the molecular level resonances.
Edge states from the difference between path (i) and path (ii). We use the two cross-cut zigzag CNTs, (12,0) and (13,0), to illustrate the difference between the iterative method and the integration method. In Fig. 10, results from both methods are shown for selected matrix elements of the surface Green's function, where the site indices follow the labeling illustrated in Fig. 3. The disagreement occurs in the vicinity of E_F for all cases shown. With the detailed energy dependence of the surface Green's function matrix elements readily calculated, we then consider C8H10, the 8-carbon polyene(s), as the junction, and investigate the one-polyene case and the two-polyene cases, as shown in Fig. 11. Note that the choice of a polyene species with an even number of carbon atoms is dictated by the geometry of the cut, and such a choice gives no molecular level at E_F. For the cross-cut (13,0) leads, the TB transmission results from both methods echo the gap of the bulk CNT; in other words, the edge states at E_F do not make up for the gap, but they do modify the transmission, especially near both band edges. For the cross-cut (12,0) leads, the original un-cut bulk (12,0) CNT is a semi-metal. However, the transmission in the vicinity of E_F that the integration method yields is taken away once the edge states are included. In Fig. 12 we show the transmissions of all two-polyene cases (except contact combination (1, 3), which is geometrically too close for the two parallel polyene threads) with the cross-cut (12, 0) leads. The results from the iterative method and the integration method give the transmissions with and without the edge states. Within each individual method, the results from the different contact combinations show the effect of interference.
Conclusion
We present in this article a further study of the interference effect in multi-thread molecular junctions in between CNT leads of various cuts.
To this end, we first show our calculations concerning the surface Green's functions of cross-cut and angled-cut (n, n) CNTs, as well as cross-cut (n, 0) CNTs, using both the iterative method and the integration method within the TB model. The results from both methods, in comparison with the bulk values, show the effect brought about by the formation of the different cuts. The contributions from bulk states and edge states can be differentiated by the comparison between the two methods. While the cross-cut (n, n) CNTs present no edge states but oscillations with a 3-layer cycle, the angled-cut cases exhibit both the oscillations and the edge states in the vicinity of E_F. Cross-cut (n, 0) CNTs present edge states, but no oscillations. In the follow-up calculations for transmissions through molecular junctions in between angled-cut (8,8) CNT electrodes, we compare the ab initio results with the iterative method results. In the two-thread cases, the agreement between both approaches displays the effect of interference between the even and the odd channels. The discrepancy between the two approaches mainly results from overlooking the difference between the intra-molecular hopping and the intra-CNT hopping, and is explained by our observation of the molecular bond lengths. We also present a transmission study on one- and two-C8H10 junctions in between cross-cut (12,0) and (13,0) CNT electrodes, where we focus on the comparison between the iterative method and the integration method. For the n = 3m cases, where the original bulk is gapless, the presence of edge states even takes away the transmission in the vicinity of E_F. Looking into the two-polyene cases with all possible combinations of contact sites on the (12, 0) CNT leads, we show that the interference effect is present via either path, and the comparison between the results of both paths reveals the edge-state influence in the vicinity of E_F. As the fabrication of CNT-junction-CNT systems is becoming more developed and controlled, theoretical understanding of the electronic structures at CNT edges, of the contact and molecule properties, and of the interplay between these two aspects is practically needed. Our study reveals the importance of the interference effect, through the discussion of the LDOS of CNT edges and of the interplay between the molecule and the leads via the contact selections.
Figure 12. TB transmission through cross-cut (12, 0) zigzag CNT leads bridged by two parallel 8-carbon polyenes, from the iterative method (grey) and the integration method (green), with contact site combinations θ = 4π/12 (1, 5), θ = 6π/12 (1, 7), θ = 8π/12 (1, 9), θ = 10π/12 (1, 11), and θ = 12π/12 (1, 13).
v3-fos-license
2018-12-07T13:34:06.335Z
2016-03-01T00:00:00.000
56089281
{ "extfieldsofstudy": [ "Mathematics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.astrj.com/pdf-61940-4062?filename=APPLYING%20CHAOTIC.pdf", "pdf_hash": "095b49e505b174855a8303183e2083b917fff75c", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41196", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "095b49e505b174855a8303183e2083b917fff75c", "year": 2016 }
pes2o/s2orc
APPLYING CHAOTIC IMPERIALIST COMPETITIVE ALGORITHM FOR MULTI-LEVEL IMAGE THRESHOLDING BASED ON KAPUR’S ENTROPY Segmentation is one of the most important operations in image processing and computer vision. Normally, all image processing and computer vision applications are related to segmentation as a pre-processing phase. Image thresholding is one of the most useful methods for image segmentation. Various methods have been represented for image thresholding. One method is Kapur thresholding, which is based on maximizing entropy criterion. In this study, a new meta-heuristic algorithm based on imperialist competition algorithm was proposed for multi-level thresholding based on Kapur's entropy. Also, imperialist competitive algorithm is combined with chaotic functions to enhance search potency in problem space. The results of the proposed method have been compared with particle optimization algorithm and genetic algorithm. The findings revealed that the proposed method was superior to other methods. INTRODUCTION Image segmentation as a pre-processing phase plays an important role in image processing and computer vision applications.In fact, segmentation quality has great effects on other processing steps.Without proper segmentation, an efficient algorithm cannot operate efficiently.Hence, segmentation is constantly considered as an important phase for computer vision.There are various methods for segmentation.One method is applying histogram.Histogram of an image represents the way of intensity distribution of images.One segmentation method based on histogram is thresholding.Thresholding is an important technique for performing image segmentation and is computationally efficient.The main purpose is to determine a threshold for bi-level thresholding or several thresholds for multi-level thresholding.Bilevel thresholding determines only one threshold which separates pixels into 2 groups while multilevel thresholding determines several thresholds which separate pixels into multiple groups. Thresholding based on entropy was first introduced by Pun.Pun proposed a method to select image optimal thresholds through representing new criterion of maximizing entropy between gray-levels of objects and gray-levels of image background.Another entropy-based method is Kapur's method.Kapur modified Pun's thresholding method by adding 2 gray-level probability distribution: one for background and another one for objects [1]. Almost all primary thresholding methods are able to select optimal thresholding.Using these methods for multi-level thresholding, most of them are improper in terms of performance time and increased thresholds results in exponential increase of time.One recent thresholding method is using meta-heuristic algorithms.These algorithms seek to minimize or maximize a function.Maitra and Chatterjee [4] first introduced a particle optimization algorithm for image multi-level threshold-ing.In their method, a multidimensional particle is changed into several one dimensional particles.The method prevents early convergence of particle optimization algorithm.This combined algorithm has increased efficiency of particle optimization algorithm in thresholding.The cost function used by them was based on Kapur's function.Musrrat Ali et al. [4] applied synergetic differential evolution, which is an improved version of differential evolution for thresholding, and applied fitness function based on Kapur's function.Also, algorithms such as honey bee mating and tabu search [5] have been used for the purpose. 
IMPERIALIST COMPETITIVE ALGORITHM
The imperialist competitive algorithm is an optimization algorithm introduced by Atashpaz and Lucas in 2007 [6]. This algorithm is inspired by a socio-political process and is started from an initial population. In this algorithm, every element of the population is called a country. The countries are divided into two classes: colonies and imperialists. Each imperialist colonizes and controls some countries. The core of this algorithm consists of a policy of attraction and of colonial competition. According to the policy of attraction historically applied by imperialists, such as France and England in their colonies, imperialists made efforts to destroy the culture, tradition, and language of their colonies (e.g. through building schools to teach their own languages). In this algorithm, the policy is applied by moving the colonies of an empire according to a specific relation. When a colony reaches a better position than the current imperialist, it replaces the current imperialist state of the empire. Also, the power of an empire consists of the imperialist's power plus a percentage of the average power of its colonies. Imperialist competition is another important part of this algorithm. During this process, weak empires lose their power and vanish. Finally, there is one empire which controls the world. In this situation, the imperialist competitive algorithm has reached the optimal point of the objective function and stops.
PROPOSED APPROACH
Thresholding through Kapur's method
Let there be L gray levels in the image I, ranging over {1, 2, …, L}. When the number of pixels at level i is n_i, the normalized histogram is as follows:
p_i = n_i / (n_1 + n_2 + ⋯ + n_L), i = 1, 2, …, L (1)
Kapur's method maximizes the entropy of the histogram so that the separated areas have maximum central distribution [7]. The criterion is represented as follows for bi-level thresholding:
Maximize J(t) = H_0 + H_1 (2)
where
H_0 = − Σ_{i=1}^{t} (p_i/ω_0) ln(p_i/ω_0), with ω_0 = Σ_{i=1}^{t} p_i (3)
H_1 = − Σ_{i=t+1}^{L} (p_i/ω_1) ln(p_i/ω_1), with ω_1 = Σ_{i=t+1}^{L} p_i (4)
The maximum threshold is the threshold that maximizes the previous equation. In fact, H_0 and H_1 are the entropies of the two parts. The objective of Kapur's method is to maximize this sum (maximizing the entropy).
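To make Eqs. (1)-(4) concrete, the sketch below evaluates Kapur's entropy for an arbitrary set of thresholds from a gray-level histogram (a minimal Python illustration written for this text; the function and variable names are ours, and a natural-log entropy is assumed, as in the equations above). With a single threshold it reproduces the bi-level criterion J(t) = H0 + H1, and with m thresholds it gives the multi-level objective introduced next.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's objective: the sum of class entropies for the given thresholds.

    hist       : 1-D array of gray-level counts.
    thresholds : sorted gray levels splitting the histogram into classes;
                 a single threshold reproduces the bi-level J(t) = H0 + H1.
    """
    p = hist / hist.sum()                       # normalized histogram, Eq. (1)
    edges = [0] + list(thresholds) + [len(p)]   # class j covers levels edges[j]..edges[j+1]-1
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                      # probability mass of the class
        if w <= 0.0:
            continue                            # an empty class contributes nothing
        q = p[lo:hi] / w
        q = q[q > 0.0]                          # avoid log(0)
        total += -(q * np.log(q)).sum()         # class entropy, as in Eqs. (3)-(4)
    return total

# Bi-level example on a synthetic histogram: exhaustive search for the best threshold
hist = np.bincount(np.random.default_rng(0).integers(0, 256, 100000), minlength=256)
best_t = max(range(1, 256), key=lambda t: kapur_entropy(hist, [t]))
print(best_t, kapur_entropy(hist, [best_t]))
```

For bi-level thresholding such an exhaustive search is cheap; it is the multi-level case that motivates the meta-heuristic search described in the following sections.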
Kapur's optimization criterion has been extended to multi-level thresholding. This criterion is defined as follows. Multi-level thresholding is posed as an m-dimensional optimization problem: we want to determine m thresholds [t_1, t_2, …, t_m]. The objective is to determine the maximum of the following function:
Maximize J([t_1, t_2, …, t_m]) = H_0 + H_1 + ⋯ + H_m (5)
where H_j is the entropy of the j-th class defined by the thresholds, computed as in the bi-level case.
Chaotic Imperialist Competitive Algorithm
In the proposed method, a sequence generated by chaotic maps is used in the parameters of the imperialist competitive algorithm instead of a random sequence of numbers. In this section, the initial countries are generated by repeated iteration of chaotic maps. In this algorithm, N and i denote the number of dimensions of the problem and a country in the population, respectively; varmin and varmax are the lower and upper bounds, respectively; and x_{i,j} denotes the j-th dimension of the i-th country in the population. The pseudo-code of the initial population generation algorithm using chaotic maps is as follows: for each country i (up to the population size), the first chaotic variable is initialized randomly; then, for each dimension j (j < N), a chaotic variable cm_{i,j} is generated by iterating the selected chaotic map, with MCI denoting the maximum number of chaotic iterations, and the component is set to x_{i,j} = varmin + cm_{i,j}·(varmax − varmin).
Overall Structure of the Proposed Algorithm for Image Segmentation
The proposed algorithm based on chaotic imperialistic competition for multi-level thresholding is described in this section.
Step 1: Give initial values to the ICA, including Num of Countries, Num of Imp, and Num of Col. The relationship between these parameters is as follows:
Num of Countries = Num of Imp + Num of Col (6)
Step 2: Produce the initial countries by continuous repetition of the chaotic map.
Step 3: Compute the cost function for each country based on Kapur's method.
Step 4: Select the Num of Imp strongest countries as imperialists; the rest are the Num of Col colonies. To establish the initial empires, the normalized cost of an empire can be defined as follows:
C_k = c_k − max_i {c_i}
where c_k is the cost of the k-th empire and C_k is its normalized cost. Then the normalized power of each empire is defined as follows:
p_k = | C_k / Σ_i C_i |
The normalized power of an empire determines the number of initial colonies that should be conquered by that imperialist, defined as follows, where N.C_n denotes the initial number of colonies of the n-th empire:
N.C_n = round{ p_n · (Num of Col) }
Step 5: Colonies move toward their imperialists chaotically.
Step 6: Apply the revolution operator on each colony. In each repetition, produce a random number between 0 and 1 for each imperialist. Next, the result is compared with the revolution rate. When the random number is less than the revolution rate, the revolution is performed.
Step 7: Calculate the cost of all colonies and imperialists of each empire. If, after the previous steps, there is a colony in the empire which has a lower cost than the imperialist, that colony takes control of the empire.
Step 8: Obtain the total power of all empires.
Step 9: Perform the imperialistic competition.
Step 10: Remove the weakest empire. When an empire is removed, all its colonies are removed too.
Step 11: Add one unit to the Dec value.
Step 12: When the Dec value is more than maxDec (Dec > maxDec), stop; otherwise return to Step 6.
EXPERIMENTAL RESULTS AND COMPARATIVE PERFORMANCES
Five pictures, "Lena", "Pepper", "Bird", "Camera", and "Goldhill", were used to evaluate the performance of the proposed algorithm. These test images and the corresponding histograms are shown in Figure 1. We implemented the proposed algorithm in Matlab (2014) on a computer with a 2 GHz CPU and 1 GB RAM running Windows 8.1. The size of the "Camera" and "Pepper" images is 256×256 and the size of the other images is 512×512. The most appropriate parameters of the algorithm for conducting the experiments are shown in Table 1.
Computation time and thresholds
In this section, the computation times and thresholds are also obtained using the particle swarm optimization (PSO) algorithm and the genetic algorithm. In obtaining thresholds with meta-heuristic algorithms, time is one of the major aspects: by increasing the number of thresholds, the computation time increases. Table 2 shows the optimal thresholds and computation times based on the entropy criterion. As shown in Table 2, the selected thresholds of the proposed algorithm are close to the optimal thresholds of the PSO algorithm and the genetic algorithm. Also, the computation times of the proposed algorithm are often a little longer than those of PSO and the genetic algorithm, due to the use of chaotic functions in producing random numbers.
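Before turning to the quality metrics, the chaotic initialisation used in Step 2 can be made concrete with a short sketch (our illustration; the paper does not state which chaotic map is selected, so the logistic map with r = 4 is assumed here, and all names mirror the description above rather than any published code).

```python
import numpy as np

def chaotic_initial_population(pop_size, N, varmin, varmax, mci=300, r=4.0, seed=0):
    """Generate initial countries with a chaotic (logistic-map) sequence.

    Assumption: the chaotic map is the logistic map x <- r*x*(1-x) with r = 4,
    and each chaotic variable cm_{i,j} is the value reached after mci iterations.
    """
    rng = np.random.default_rng(seed)
    population = np.empty((pop_size, N))
    for i in range(pop_size):
        cm = rng.uniform(0.01, 0.99)          # randomly initialize the first chaotic variable
        for j in range(N):
            for _ in range(mci):              # iterate the chaotic map up to MCI times
                cm = r * cm * (1.0 - cm)
            # map the chaotic variable into [varmin, varmax]
            population[i, j] = varmin + cm * (varmax - varmin)
    return population

# Example: 30 countries, each a candidate vector of 3 thresholds in [1, 254]
countries = chaotic_initial_population(pop_size=30, N=3, varmin=1, varmax=254)
```

Each country produced this way is a candidate threshold vector, which is then scored with the Kapur objective in Step 3.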
Analysis of signal to noise ratio
The popular performance indicator, peak signal to noise ratio (PSNR), is used to compare the segmentation results of multilevel image thresholding techniques [8][9][10]. The PSNR criterion has been investigated as an evaluation criterion of thresholding quality: a higher PSNR value indicates a higher quality of thresholding. We define the criterion, measured in decibels (dB), as:
PSNR = 20 log10(255 / RMSE) (10)
where RMSE (root mean-squared error) is defined as follows:
RMSE = sqrt( (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [I(i, j) − Î(i, j)]² )
where I and Î are the original and segmented images of size M×N, respectively. Table 3 shows that by increasing the number of thresholds, the signal to noise ratio increases. The results show that the proposed method often presents a higher PSNR. Also, most of the time, the genetic algorithm has lower accuracy compared with the other methods.
Cost values
The purpose of all meta-heuristic algorithms is to maximize or minimize a function, which is called the fitness function or cost function. In this study, an entropy-based cost function is used, and the purpose is to maximize this function. Table 4 shows the values of the cost function using the entropy-based method. As shown in Table 4, the cost function increases with an increased number of thresholds. Also, the values of the cost function of the proposed method were higher compared with the other methods, except for the "Goldhill" image with four thresholds.
Stability analysis
In general, meta-heuristic methods are random algorithms; each run may reveal different results. The following formula is used to analyze the stability of meta-heuristic algorithms:
σ = sqrt( (1/K) Σ_{i=1}^{K} (δ_i − μ)² )
This formula gives the standard deviation, which shows the distribution of the data. K denotes the number of runs of the algorithm (10 runs), and δ_i indicates the optimal value of the objective function in each run. Also, the average of the δ values is denoted by μ. A lower standard deviation indicates higher stability of the algorithm. Table 5 shows the standard deviations using the entropy-based method. As shown in Table 5, all standard deviations are low, which shows the higher stability of the proposed method compared with the other methods. Higher stability means that the results of different runs are closer, while lower stability means that the results are not close.
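As a concrete reading of the PSNR definition in Eq. (10), the sketch below computes RMSE and PSNR between an original image and its thresholded version (a minimal Python illustration written for this text; the simple class-mean reconstruction of the segmented image is our assumption, since the section above does not spell out how the segmented image is rebuilt).

```python
import numpy as np

def psnr(original, segmented):
    """Peak signal-to-noise ratio in dB for 8-bit images, as in Eq. (10)."""
    original = original.astype(np.float64)
    segmented = segmented.astype(np.float64)
    rmse = np.sqrt(np.mean((original - segmented) ** 2))   # root mean-squared error
    if rmse == 0.0:
        return float("inf")                                 # identical images
    return 20.0 * np.log10(255.0 / rmse)

# Example: apply two thresholds and map each class to its mean gray level
img = np.random.default_rng(2).integers(0, 256, size=(64, 64))
t = [85, 170]
bins = np.digitize(img, t)            # class index of every pixel
seg = np.zeros_like(img)
for c in range(len(t) + 1):
    mask = bins == c
    if mask.any():
        seg[mask] = int(img[mask].mean())
print("PSNR = %.2f dB" % psnr(img, seg))
```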
CONCLUSIONS
In this study, we proposed a method based on the maximum-entropy criterion to select multi-level thresholds using a chaotic imperialist competitive algorithm. The imperialist competitive algorithm is a new meta-heuristic algorithm inspired by the imperialistic competition between countries. In this algorithm, each solution is called a country. Imperialists attempt to attract colonies toward themselves, and this process continues until there is only one imperialist. This algorithm has been proved efficient for different optimization problems. The purpose of this study was to segment images using the chaotic imperialist competitive algorithm. Thereby, using chaotic functions, the efficiency of the imperialist competitive algorithm was improved: using chaotic functions for population production increased the variety of the population. Next, the new algorithm was used for image segmentation through thresholding. A Kapur-based cost function, which applies the entropy criterion, was used as the fitness function. The method was then applied to several test images, and several criteria, including time, signal to noise ratio, fitness function, and standard deviation, were used to evaluate its efficacy. The results revealed that the algorithm is superior to the other algorithms in efficacy. Also, applying chaos results in a variety of initial solutions and increased efficacy of the algorithm.
Fig. 1. Test images and corresponding histograms
Table 1. Parameters used for thresholding images
Table 2. Optimal thresholds and computation times based on entropy criterion
Table 3. Signal to noise ratio for thresholding 5 images using entropy criterion
Table 4. Value of cost using the entropy based method
Table 5. Standard deviation for entropy criterion based thresholding
v3-fos-license
2014-10-01T00:00:00.000Z
2011-12-01T00:00:00.000
2866019
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s13244-011-0138-8.pdf", "pdf_hash": "f19bbcbd6768ee6d9a546ffd2336e3a346c7cd09", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41200", "s2fieldsofstudy": [ "Medicine" ], "sha1": "05f10683b475e85bd6ad1035513c9dbdb306d56b", "year": 2011 }
pes2o/s2orc
Interventional radiology at the meetings of the German Radiological Society from 1998 to 2008: evaluation of structural changes and radiation issues Objectives Evaluation of structural changes and the weight given to radiation exposure of interventional radiology (IR) contributions at the Congress of the German Radiological Association from 1998 to 2008. Methods All IR abstracts were evaluated for type of contribution, design, imaging modality, and anatomic region. Weight given to radiation exposure was recorded as general statement, main topic and/or dose reduction. Statistical analysis included calculation of absolute/relative proportions of subgroups and ANOVA regression analyses. Results Out of 9,436 abstracts, 1,728 (18%) were IR-related. IR abstracts significantly rose to a maximum of 200 (20%) in 2005 (P = 0.048). While absolute numbers of scientific contributions declined, educational contributions significantly increased (P = 0.003). Computed tomography (CT) and magnetic resonance imaging (MRI) were the main IR imaging modalities, with growing use of CT (P = 0.021). The main body regions were vessels (45%) and abdomen (31%). Radiation exposure was addressed as a general statement in 3% of abstracts, as a main topic in 2%, and for dose reduction in 1%, respectively. During the study interval a significant growth of dose reduction abstracts was observed (P = 0.016). Conclusions IR emerged as a growing specialty of radiology, with a significant increase in educational contributions. Radiation exposure was rarely in the focus of interest but contributions relating to dose reduction demonstrated a significant growth during the study period. Main Messages • Interventional radiology emerged as a growing specialty at the German radiological congress. • Significant increments of educational and prospective research contributions could be observed. • Despite a significant trend towards computed tomography, radiation exposure of IR was rarely in the focus of interest. • Contributions related to dose reduction demonstrated a significant growth during the study period. Introduction Image-guided interventions are an acknowledged instrument in the diagnosis and therapy of a broad spectrum of diseases. Beside diagnostic procedures for harvesting histological specimens, Interventional Radiology (IR) covers a vast range of procedures such as dilatation or stenting of vessels and other tubular structures, embolisation of haemorrhages, draining of localised fluid collections, and catheter insertions to different body cavities [1]. In all cases, image guidance has become a necessary prerequisite to localise the target region, guide and document optimal deployment of the interventional device, and to evaluate success, failure or complications of the procedure. In several clinical situations, IR has proven to be a reasonable, safe, and cost-effective alternative to other therapeutic options such as open surgery [2,3]. With a growing proportion of elderly patients with higher comorbidity due to chronic diseases and, hence, fewer options for more invasive approaches, an increasing demand for IR experts and procedures is notable [1,[4][5][6][7][8]. Although interventions can principally be performed with all imaging modalities, computed tomography (CT) as a cross-sectional imaging method has several advantages as a guiding instrument. Due to its wide availability, CT imaging today is easy to achieve and available around the clock in most medical facilities. 
Moreover, CT produces non-superimposed images of anatomically complex body regions, does not interfere with most interventional devices, is increasingly resistant to motion artefacts due to rapid scanning, and is not hindered by bony structures. Thus, CT presents clear advantages for guiding interventions compared with other imaging modalities. One main disadvantage of CT is the concomitant radiation exposure, a topic that has incrementally evoked discussions about the appropriate use and concomitant risk of ionising imaging examinations in medicine-not only amongst professional health carers but also in the lay press. Being ex officio responsible in most interventional procedures with concomitant radiation exposure, interventional radiologists play the leading role in the appropriate application of ionising imaging modalities and the preservation of radiation protection. While the occupation with radiation exposure at the congress of the German Radiological Association (DRK) has been investigated with respect to general radiology and paediatric radiology from 1998-2008 elsewhere [9,10], little is known about the structure and quantity of IR contributions in scientific and educational meetings or congresses pertaining to the topics of the applied imaging modalities (especially with emphasis on CT usage) and the awareness of radiation exposure concomitant with IR procedures. The aim of the presented study was the systematic evaluation of structural changes of IR contributions and the weight given to radiation exposure related to IR at the DRK in an 11-year period from 1998 to 2008. Materials and methods The study was based on the abstracts of the scientific programs of the DRK congress from 1998 to 2008 [11] as published by Georg Thieme Verlag (Stuttgart/New York) publishers, therefore taking into account the 79th to 89th DRK. The two underlying assumptions for this investigation were that (1) the DRK accurately reflects the current state of scientific endeavour in German-speaking countries and (2) that the published abstracts contain all pertinent findings. Thus, the summarisations were considered to adequately state information on objectives, methods and results, as well as the authors′ conclusions. Moreover, any subject not expressly mentioned in the abstract should in fact not have been primary content of the investigation. Congress contributions that could not be evaluated (withdrawn contributions, missing abstract texts, etc.) were not included in the analysis. Every available abstract was evaluated according to the following variables. First, the thematic category of the contribution was documented as interventional versus noninterventional abstract. The type of contribution as scientific presentation, scientific poster, educational poster, workshop, refresher course, multimedia, highlight session, inventor's forum, radiology technician educational course and radiological technician clinical seminar was noted. Furthermore, the type of scientific study was assessed as prospective or non-prospective. All abstracts were checked for the imaging modalities used in IR as CT, magnetic resonance imaging (MRI), conventional radiography, angiography, ultrasound, and fluoroscopy. The anatomic region of IR procedure was noted as chest, heart, breasts, abdomen/pelvis, central nervous system including head/ neck, musculoskeletal system, and vascular area (encompassing arterial, venous, or lymphatic vessels). 
The weight given to the topic of radiation exposure was recorded in three categories: (1) a general statement on radiation exposure, if radiation dose was addressed in any form (e.g. concrete mention of dose-relevant terms such as "radiation dose", "radiation exposure", "radiation burden", etc. and precise specification of exposure as effective dose reports in mSv, etc.), (2) radiation exposure as main topic, if radiation dose was the primary subject of the contribution, and/or (3) radiation protection, if dose reduction was the primary subject of the contribution. Multiple naming was taken into consideration for body region, imaging modality (e.g. comparison of two imaging modalities in IR procedures) and the three dose categories. Statistical analysis was conducted with PASW Statistics, version 18.0.0 (SPSS, Chicago, IL, USA), calculating absolute and relative proportions of the investigated subgroups in the context of all contributions to the DRK from 1998 to 2008. ANOVA regression analysis was performed for: absolute numbers of all IR contributions, scientific, non-scientific and prospective IR contributions, contributions related to CT, combined non-CT ionising and non-ionising imaging modalities, and IR contributions relating to the dose categories. Significance level was set to 5%. Results Of 9,472 scientific contributions presented at the DRK during the study period, 9,436 (99.6%) were eligible for inclusion in our evaluation. The average number of published abstracts per congress per year was 858, ranging from 705 (2001) to 987 (2005). Abstracts related to IR counted for 1,728 of all abstracts in this 11-year period with a mean value of 157 IR abstracts per year (18.3%). Structure of IR contributions IR contributions covered all categories of the DRK program with 1,153/1,728 (67.9%) being scientific presentations (mean n = 105/year) and 245/1,728 (14.2%) scientific posters (mean n=22/year). Due to considerable changes in the structure of the DRK during the study period, only scientific posters and scientific presentations were noted in all DRK programs of the study period. During the study period, n=98 (7.3%) workshops (mean n=12/year), n=9 (1.7%) educational posters (mean n = 3/year), n =190 (14.2%) refresher courses (mean n=24/year), n=7 (1.2%) multimedia presentations (mean n=2/year), n=3 (2.2%) highlight sessions, n=2 (1.3%) contributions to the inventors' forum, n=15 (1.7%) radiology technician educational courses (mean n=3/year), and finally n=5 (0.8%) radiological technician clinical seminars (mean n=1/year) were IR-related. Relative proportions of scientific contributions in IR diminished with scientific IR posters being halved in 2008, but absolute numbers of all scientific IR contributions did not decline significantly. Non-scientific IR contributions demonstrated a highly significant increase from 7 to 33% with a mean value of 30 contributions per year; P=0.003 (Fig. 2). Of all IR abstracts, 166 (9.5%) showed a prospective study design with 20/245 (9.3%) being scientific posters and 146/1,153 (13.1%) scientific presentations. Based on a significant growth in absolute numbers (P=0.038), the relative proportion of prospective IR abstracts showed an increase from 4.5% in 1999 to a maximum of 14.5% in 2008. Absolute numbers remained small with a maximum of six prospective scientific posters in 2005 and 21 prospective scientific presentations in 2008. 
Imaging modalities and body regions
Over the entire period, cross-sectional imaging modalities were the main imaging modalities of IR, with CT encompassing n=320 (18.1%) and MRI n=330 (19.1%) of all IR-related abstracts, respectively. CT contributions in IR demonstrated a significant increase from 16.9% in 1998 to 23.5% in 2008 (P=0.021). Non-CT IR imaging modalities with ionising radiation were presented in 223 abstracts (12.5%), with 180 abstracts (10.2%) relating to angiography, 37 (2%) to fluoroscopy, and 6 (0.3%) to conventional radiography. Ionising non-CT IR imaging modalities declined in their proportion from 11.8% (1998) to 5.4% (2008), with a peak in 2003-2005 (26.9-32.5%) mainly attributed to angiography. Combined non-CT, non-ionising imaging modalities (i.e. MRI and ultrasound) did not demonstrate significant numerical changes, with MRI declining from 20.6% in 1998 to 15.1% in 2008 and ultrasound displaying stable contributions with a mean value of 4.6% (Fig. 3). With respect to the body regions, the principally presented locations of IR were vessels with n=760 (44.9%) and the abdomen with n=537 (30.7%). Breasts, chest including lungs, bones and muscles, and central nervous system including head and neck accounted for 137 (8.1%), 153 (8.6%), 172 (9.6%), and 146 (8.5%) abstracts, respectively. The heart region and paediatric interventions were recorded at ≤3%. While abstracts concerning vessel interventions demonstrated a drop in percentages, contributions relating to interventions of the abdomen, chest and musculoskeletal system increased in their relative proportions (Fig. 4).
Fig. 1 Absolute number of contributions in IR (All: all DRK contributions; IR: all IR contributions; Sc IR: scientific contributions in IR; Non-Sc IR: non-scientific contributions in IR)
Radiation exposure and IR
The issue of radiation exposure was addressed over the entire study period as a general statement in 56 IR abstracts (3%). Thirty-one contributions (1.6%) dealt with radiation exposure as the main topic and 22 (1.3%) were involved with dose reduction. The relative proportion of dose-related abstracts increased from a minimum of 2.4% in 2000 to a maximum of 11% in 2005. On the basis of small absolute values, a significant growth of contributions relating to dose reduction (P=0.016) could be observed, while the two other dose categories also increased, albeit non-significantly (Fig. 5). Percentages of abstracts dealing with radiation dose demonstrated a general increase for all examined dose categories, most pronounced in the category of dose reduction (Fig. 6). With respect to CT and ionising non-CT imaging modalities, general statements on radiation occurred in 20 CT-related contributions (6.5% of all CT abstracts), radiation exposure as the main topic occurred in ten abstracts (2.9% of all CT abstracts), and dose reduction themes occurred in seven contributions (1.8% of all CT abstracts). Ionising non-CT imaging modalities demonstrated higher values in all three dose categories [20 (9.6%), 15 (8.6%), and 12 (7.9%), respectively], as presented in Table 2.
Discussion
IR has proven to be a stable and appreciated topic at the DRK, the largest congress of radiology in German-speaking countries and the third largest congress in Europe. With a proportion of IR abstracts between 15 and 20% of all contributions, IR rose significantly in absolute numbers to a peak of 200 abstracts in 2005. This was primarily based upon structural changes of the DRK in the latter years, clearly emphasising educational activities for radiologists and technicians and eliciting a substantial and significant growth of educational IR contributions. Given the increasing overall demand for educated staff in radiology, this is an impressive answer to persisting voices arguing for high-level education to ensure the availability of properly trained radiologists [12][13][14][15]. In combination with the static time frame of the DRK, scientific IR contributions therefore declined only non-significantly. With 9-13% being prospective contributions, IR kept within an equal share of prospective scientific abstracts compared with general radiology [10]. The triplication of relative numbers for prospective contributions may be a calculation effect based on declining numbers of all scientific contributions and increasing numbers of prospective abstracts. Furthermore, the absolute numbers of prospective contributions remained, with evidence-based medicine postulated as the goal, quite small. This correlates with the warning comments of several authors about the amount and quality of IR literature and, moreover, demonstrates that despite requests for more research activities in IR [13,[16][17][18], only moderate increases of high-ranking studies have been reached in absolute numbers. Nevertheless, even when based on small absolute numbers, significantly more prospective research in IR has been presented at the DRK. The conflicting demands of raising the absolute numbers of high-ranking prospective studies, managing increasing clinical workloads and, on the other hand, maintaining the intensified educational activities evidently advocate against further cutbacks in the equipment, manpower and funding of IR. Image guidance is without question an essential prerequisite of modern IR procedures. Given the clear advantages of CT imaging mentioned above, the rising demand for CT examinations in general and for CT-guided IR procedures in particular is evident [20]. Based on the linear no-threshold model, low-dose diagnostic radiation exposure (<100 mSv) accounts for a low but tangible risk of carcinogenesis [21][22][23]. Recent studies even report evidence for an increased risk of solid tumour development with exposures of a magnitude of 10-50 mSv [24][25][26][27]. Effective doses of fluoroscopic or angiographic IR procedures, especially when concerning the trunk as in embolisations or transjugular portosystemic shunt implantations, can be associated with a substantially increased likelihood of clinically significant patient doses [28]. As presented by Tsalafoutas et al. [29], median effective doses for CT-guided biopsies were about 23 mSv, for radiofrequency ablations 35 mSv, for abscess drainages 16 mSv and for nephrostomies 11 mSv. The maximum effective dose of these procedures reached 57 mSv, and the unavoidable diagnostic part of the CT-guided intervention produced the lion's share of the radiation exposure. Although even these higher exposures are considered "low" (i.e. one per 1,000 individuals) according to the proposed adequate risk terms of the UK Department of Health [30] compared with the general lifetime risk of cancer development, they are definitely above the effective doses accompanying most diagnostic procedures. Moreover, the increasing demand for and sometimes repeated use of IR procedures make radiation protection an important issue.
Fig. 6 Percentages of IR abstracts relating to dose
Studies revealed sobering results on the awareness and knowledge of non-radiologist healthcare professionals concerning the magnitude of radiation exposure associated with different radiological examinations, leaving a potential question mark over their ability to balance the risk-benefit ratio for a given patient [31][32][33][34][35][36]. Thus, interventional radiologists are by all means the primary correspondents for dose issues accompanying IR procedures. Despite small absolute numbers through the years for all dose categories, the presented results of our study demonstrate an increase in the awareness of and occupation with radiation exposure, measured in percentages of abstracts dealing with this topic at the DRK during the study period, concomitant with the above-mentioned increase of IR abstracts in general. The significant increase of contributions in IR relating to dose reduction is especially encouraging. Obviously, the growing use of IR and ionising imaging modalities does not remain unanswered by interventionally acting radiologists with respect to radiation protection efforts. Nevertheless, compared with an analysis of general radiology and paediatric radiology in this 11-year period of the DRK presented elsewhere [9,10], the percentages of dose-relevant contributions were lower in IR, especially for contributions with relation to CT. Furthermore, even when considering multiple naming for these dose categories, 94% of all IR abstracts at the DRK did not mention radiation exposure at all, despite the above-mentioned increasing trend towards CT as the guiding instrument. Finally, looking at the absolute numbers of all IR contributions, of IR contributions pertaining to ionising procedures and of dose-related IR contributions (Table 2), the last category, even if growing, is clearly outnumbered.
Table 2 abbreviations: IR with IO/CT = IR contributions with ionising imaging modalities; Ment = dose mentioned; Topic = dose as the main topic; Reduct = dose reduction; IO = non-CT ionising imaging modalities
Moreover, many abstracts did not report the target variable of radiation exposure, i.e. effective dose, for the mentioned procedures (neither expressed as typical doses for a given procedure nor based on their own calculated radiation data). This could have been for several reasons. First, effective dose, as a calculated but not measurable variable, represents in itself a concept that is combined with a relative uncertainty of up to 40% [30]. Some factors may influence the calculation of effective dose, especially factors such as tissue weighting, scanning devices, region-based or organ-based calculation methods and, finally, the patient's habitus compared with the proposed body model. Additionally, for some body regions tissue weighting factors are not available. Hence, some authors may have chosen to avoid the calculation of effective doses in favour of reporting measurable variables such as the dose-length product or CT dose index. Moreover, the majority of the evaluated abstracts concerned retrospective studies. It may have been impossible or too difficult to collect these data retrospectively, especially if radiation exposure was not in the focus of interest. Nevertheless, the infrequent reporting of effective doses contradicts the intended use of this variable as a dose quantity that is easy to compare and that, with respect to all its uncertainties, links the measured radiation dose to the risk of health detriment.
Hence, it may be technically more precise to report the dose-length product or CT dose index, but the associated risk with a given procedure or for a given reference patient is not displayed. The contrast between this situation and the role of interventional radiologists as experts in radiation protection is potentially problematic. Moreover, it may be difficult to communicate to an incrementally attentive public, as issues dealt with at the DRK are frequently presented and discussed by the lay press. Given the interdisciplinary character of IR and the known turf battles around interventional medicine [1,37,38], deficiencies in the key qualification of radiation protection may ultimately heat discussions, which is why radiologists should be the primary experts and correspondents of IR. If radiologists want to maintain primacy in IR, further development of radiation protection knowledge based on intensified their own high-ranking research could be a cornerstone and unique selling proposition to succeed. There are several limitations of this study, most important being the above-mentioned assumptions, upon which the evaluations were based. The first hypothesis, that the published abstracts contain all pertinent findings, does not exclude radiation exposure as a topic to be touched upon in the contribution but is not expressively mentioned in the abstract. Nevertheless, an author considering radiation exposure important enough to be discussed at the oral presentation, but not as eminent to be mentioned in the abstract, reflects an attitude towards the subject that is in itself problematic. Secondly, the DRK may not accurately reflect the current state of all scientific proceedings in Germany or other German-speaking countries. Moreover, as the number of accepted abstracts will have been different from the number of submitted abstracts to a given subject, there may have been several factors influencing the decision process of accepting contributions for the annual meetings of the DRK, including structural changes as mentioned above or the wish to even out contributions over different topics. But again, as the largest congress in the above-mentioned countries, there will be no better platform for such an analysis concerning the handling of ionising imaging modalities and dose in IR. Furthermore, the categorisation of the abstracts was intrinsically not immune to subjectivity, but over 11 years of annual meetings this would result in a systematic error probably factoring itself out. Moreover, because of the chosen study design, other indirect forms of radiation exposure reduction could have been missed. If, for instance, a study with comparison of an ionising and non-ionising imaging method in IR results in non-inferiority of the non-ionising imaging method, then this would also be a contribution to dose reduction, but would not have been included in our analysis. Finally, the question for the appropriate proportion of abstracts in a congress programme dealing with radiation exposure and dose reduction remains open. In conclusion, IR emerged as a substantially growing specialty of radiology at the DRK from 1998 to 2008, with significant increments of educational activities and prospective research contributions. Despite a significant trend towards CT as an IR imaging modality, radiation exposure of IR was rarely in the focus of interest. Nevertheless, contributions relating to dose reduction demonstrated an encouraging and significant growth during the study period.
v3-fos-license
2016-05-18T05:41:07.082Z
2014-11-06T00:00:00.000
17620792
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/cin/2014/240828.pdf", "pdf_hash": "94d941f2a3da3fee587beeb48828300890fbd634", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41201", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "988b05e24a3bb459ca8e425c9673d329a45d2c90", "year": 2014 }
pes2o/s2orc
Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm. Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings existing in the WTM model are discussed, and the tearing approach as well as the inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is also introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize the problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness.
Introduction
Due to fierce market competition, the product design and development process is faced with a huge challenge. In the initial stage of industrialization, competitiveness mainly lay with the prices of products: only if the products were cheap and usable would they be of competitive advantage in the market. This type of competition is named cost-based competition. However, with the development of the economy, quality, time-to-market, and service turned up trumps, which led to the competition being quality based as well as time based. As a result, to succeed in this type of competition, it is necessary for most enterprises to introduce new competitive products more quickly so as to occupy global market share. It also means that new product development has become a key factor in keeping the core competitiveness. Therefore, many enterprises adopt concurrent engineering (CE) technology to support product design and development. Nevertheless, due to the existence of coupling in product design and development, it is difficult to manage this process. Particularly when task execution may produce new information flow or affect other interdependent tasks, more complex information flows among interdependent tasks will be generated. At the same time, due to the randomness of information flow, incomplete information may often be used for design decisions, which usually leads to design iteration [1]. Design iteration generally causes increases in product cost and delays in development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. Many of the traditional project management techniques (e.g., the Gantt chart, the critical path method (CPM), and the program evaluation and review technique (PERT)) only describe the sequential and parallel relationships among tasks, not the interdependent relationships. The design structure matrix (DSM) model presented by Steward [2] can express the interdependent relationships as well as the iterations induced by these relationships. It is a useful tool in concurrent engineering management and implementation.
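Since the DSM is central to what follows, a small sketch may help show how it exposes coupled (interdependent) task groups: two tasks belong to the same coupled block exactly when each depends, directly or indirectly, on the other, i.e. when they lie on a feedback loop. The Python illustration below is ours, with a made-up five-task DSM; it is not taken from the paper.

```python
import numpy as np

def coupled_blocks(dsm):
    """Group tasks of a binary DSM into coupled blocks.

    dsm[i, j] = 1 means task i depends on (receives information from) task j.
    Tasks i and j are coupled when each depends, directly or indirectly,
    on the other, i.e. when they lie on a feedback loop.
    """
    n = dsm.shape[0]
    reach = dsm.astype(bool).copy()
    for k in range(n):                      # Warshall transitive closure
        reach |= np.outer(reach[:, k], reach[k, :])
    blocks, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        block = [i] + [j for j in range(n)
                       if j != i and reach[i, j] and reach[j, i]]
        assigned.update(block)
        blocks.append(sorted(block))
    return blocks

# Hypothetical 5-task DSM: tasks 1, 2 and 4 feed information back to one
# another and therefore form one coupled block; tasks 0 and 3 do not.
dsm = np.array([
    [0, 0, 0, 0, 0],   # task 0 has no inputs
    [1, 0, 1, 0, 0],   # task 1 depends on tasks 0 and 2
    [0, 0, 0, 0, 1],   # task 2 depends on task 4
    [0, 0, 1, 0, 0],   # task 3 depends on task 2
    [0, 1, 0, 0, 0],   # task 4 depends on task 1
])
print(coupled_blocks(dsm))   # -> [[0], [1, 2, 4], [3]]
```

Ordering the tasks so that such blocks appear as compact diagonal sub-matrices is what partitioning and tearing then operate on.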
Moreover, in practical product development process, resource constraints from machine equipment, staffs, and so on should be considered, but the traditional 2 Computational Intelligence and Neuroscience methods cannot deal with this problem. Therefore, in this paper, we use DSM to identify and analyze design iteration. In current researches, only valid iterations were considered, but some invalid especially harmful ones were not studied. However, due to the existing of these invalid iterations, the whole product design and development process may not be convergent. As a result, how to avoid these harmful iterations needs further study. In this paper, we use tearing approach combined with inner iteration technology to deal with task couplings, in which tearing approach is used to decompose a large coupling set into some small ones and the inner iteration technology to find out iteration cost. The paper is organized as follows. In Section 2, we survey the previous literatures on disposal of coupled relationships. Section 3 presents the model for solving coupled task sets based on tearing approach and inner iteration technology. In Section 4, an efficient artificial bee colony algorithm (ABC) is used to search for a near-optimal solution of the model. In Section 5, the model is applied to an engineering design of a chemical processing system and some discussion on the obtained results is also given. Section 6 offers our concluding remarks and potential extensions of this research. Related Works DSM is an efficient management tool for new product development. In the past decades, many researches have shown its efficiency. Currently, DSM has been widely used in decomposition and clustering of large-scale projects [3,4], identification of task couplings and minimization of project durations [5,6], project scheduling [7][8][9][10], and so on. Because coupling of tasks is a key characteristic of product development, how to deal with couplings among tasks is a hot issue in present. Yan et al. [11,12] focused upon the optimization of the concurrency between upstream product design task and downstream process design tasks in the concurrent engineering product development pattern. First, a new model of concurrent product development process, that is, the design task group model, was built. In this model, the product and process design tasks were carried out concurrently with the whole design process divided into several stages, every two of which are separated by a design review task. The design review tasks might lead to design iterations at a certain rate of probability. Therefore, a probability theory-based method was proposed to compute the mean duration of the design task group and the mean workloads of all the design and review tasks, with design iterations taken into consideration. Then, the problem of concurrency optimization was defined mathematically, whose objective was to minimize the total costs for delay of design task group completion time and unnecessary design revision workloads. Their research proved that the cost function was convex with respect to the concurrent (or overlap) degree between design tasks and that it must have a minimum value at a unique optimum point. Huang and Gu [13,14] viewed the product development process as a dynamic system with feedback on the basis of feedback control theory. The dynamic model and its design structure matrix were developed. 
The model and its design structure matrix could be divided farther to reflect the interaction and feedback of design information. The mode and direction of the development process could be selected to satisfy constraints of process data flow and process control. A fuzzy evaluation method was presented to evaluate the performance of the dynamic development process; this allowed the development process to be optimized based on reorganizing design constraints, reorganizing design processes, and reorganizing designer's preferences. Finally, an application shows that modeling the product development process as a dynamic system with feedback was a very effective method for realizing life cycle design, optimizing the whole development process, improving the degree of concurrent, speeding information flow, and reducing modification frequency. However, due to complexity of product development, this model did not consider the currency and overlapping among tasks. Its efficiency needs further study and verification. Zhang et al. [15] constructed a new method to measure the coupled strength and to calculate the first iteration's gross workload of a different sequence of coupled tasks, thereby ascertaining the best sequence of coupled tasks based on existent research. However, this model may not correspond to real-world product development process and it is also dependent on expert's experiences. Moreover, Xiao et al. [16] adopted analytic hierarchy process (AHP) to deal with coupling tasks, which might cause quality loss. Smith and Eppinger [17,18] set up two different iteration models based on DSM. One is the sequential iteration model and the other is the parallel iteration model. The former supposed that coupled tasks were executed one after the other and rework was governed by a probabilistic rule. Repetition probabilities and task durations were assumed constant in time. The process was modeled as a Markov chain and the analysis could be used to compute lead time for purely sequential case and to identify an optimal sequence of the coupled tasks to minimize iteration time. The main limitation of this model is that how to determine repetition and rework probabilities is difficult. The latter supposed that the coupled design tasks were all executed in parallel and iteration was governed by a linear rework rule. This model used extended DSM called work transformation matrix (WTM) to identify the iteration drivers and the nature and rate of convergence of the process. WTM has been popularly used in many areas. For instance, Fontanella et al. [19] developed a systematic representation of the work transformation matrix method, with a discrete state-space description of the development process. With this representation, the dynamics of the development process can be easily investigated and predicted, using wellestablished discrete system analysis and control synthesis techniques. In addition, Ong et al. [20] developed nonhomogenous and homogenous state-space concepts, where the nonhomogenous one monitored and controlled the stability and the convergence rate of development tasks and at the same time predicted the number of development iterations; the homogenous one did not consider external disturbances and its response was only due to initial conditions. Computational Intelligence and Neuroscience 3 Xiao et al. [21] put forward a model for solving coupled task sets based on resource leveling strategy. 
However, it is hypothesized that once resources allocated to coupled task sets are ascertained, then, in all iterations' process, they no longer change. It does not exactly accord with the real product development process. So, the authors [22] further proposed an approach to analyze development iteration based on feedback control theory in a dynamic environment. Firstly, the uncertain factors, such as task durations, output branches of tasks, and resource allocations, existing in product development were discussed. Secondly, a satisfaction degree-based feedback control approach is put forward. This approach includes two scenarios: identifying of a satisfaction degree and monitoring and controlling of iteration process. In the end, an example of a crane development was provided to illustrate the analysis and disposing process. Different from the above research, we propose a method to solve coupled task sets combined with tearing approach and inner iteration technology in this paper. Its obvious advantages lie in identifying invalid iteration process and further analyzing its effects on time and cost of the whole product development process. Modeling Design Iteration Based on Tearing Approach and Inner Iteration Technology The Limitations of Classic WTM Model for Identifying Design Iteration. In the classic WTM model, the entries either in every row or in every column of WTM sum to less than one so as to assure that doing one unit of work in some task during an iteration will create less than one unit of work for that task at a future stage. Such design and development process will converge. However, in real-world product design and development process, some unexpected situations may occur. For example, there is no technically feasible solution to the given specifications or the designers are not willing to compromise to reach a solution, which represents that the corresponding design process will not converge and the entries either in every row or in every column of WTM sum to more than one. Figure 1 denotes this situation. As can be seen from it the entries in the first column sum to 1.1 (i.e., 0.4 + 0.2 + 0.5 = 1.1). This design and development process is unstable and the whole process will not converge. Tearing is the process of choosing the set of feedback marks that if removed from the matrix (and then the matrix is repartitioned) will render the matrix a lower triangular one. The marks that we remove from the matrix are called "tears" [23]. According to its definition, an original large coupled set can be transformed into some small ones through tearing approach. In doing so, these small coupled sets may easily satisfy precondition of WTM. Take the coupled set shown in Figure 1 as an example; after tearing approach, two small ones (i.e., (A, B) and (C, D)) are obtained as shown in Figure 2. We can see from Figure 2 that the entries either in every row or in every column of these two coupled sets sum to less than one and WTM model can be used in this situation. However, because tearing algorithm neglects dependencies among tasks in fact, some quality losses may be generated. Therefore, how to reduce these quality losses needs to be studied. In Figure 2, there exist many tearing results. For instance, Figure 3 shows two different results using tearing approach and diverse quality losses can be obtained, where the symbol "×" denotes dependencies neglected among tasks. 
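The convergence precondition and the tearing trade-off discussed above can be illustrated with a small self-contained sketch; the 4-task work transformation matrix and all numbers in it are hypothetical (ours, not the entries of Figures 1-3), and the "neglected coupling" total is only a crude stand-in for the quality-loss measure defined later.

```python
# Hypothetical 4-task WTM (tasks A-D).  W[i, j] is the fraction of task j's
# work that creates rework for task i.  Every column sums to 1.1, so the
# precondition of the classic WTM model is violated; tearing the set into
# two 2-task blocks restores it, at the price of neglecting the torn links.
import numpy as np

TASKS = ["A", "B", "C", "D"]
W = np.array([
    [0.0, 0.3, 0.8, 0.0],
    [0.3, 0.0, 0.0, 0.8],
    [0.8, 0.0, 0.0, 0.3],
    [0.0, 0.8, 0.3, 0.0],
])

def wtm_precondition(M):
    """Convergence precondition used in the text: every column sums to < 1."""
    return bool(np.all(M.sum(axis=0) < 1.0))

def tear(M, blocks):
    """Keep only within-block couplings; also report the total strength of
    the neglected (torn) dependencies as a crude quality-loss proxy."""
    subs, neglected = [], M.sum()
    for b in blocks:
        sub = M[np.ix_(b, b)]
        neglected -= sub.sum()
        subs.append(sub)
    return subs, neglected

print(wtm_precondition(W))  # False: the full coupled set violates the premise

for blocks in ([[0, 1], [2, 3]], [[0, 2], [1, 3]]):
    subs, loss = tear(W, blocks)
    names = [[TASKS[i] for i in b] for b in blocks]
    print(names,
          "precondition satisfied:", all(wtm_precondition(s) for s in subs),
          "neglected coupling:", round(float(loss), 2))
# Both tearings make the blocks convergent, but {A,B},{C,D} throws away the
# strong 0.8 couplings (large quality loss, cheap inner iterations), whereas
# {A,C},{B,D} only discards the weak 0.3 links (small quality loss, but the
# strongly coupled blocks iterate longer) -- the trade-off the hybrid model
# formalizes with its two objectives.
```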
According to the analysis mentioned above, the tearing approach can transform a large coupled set into several small ones, but it may bring some quality loss. As a result, quality loss is one of the important indexes when using the tearing approach to deal with coupled sets. In addition, development cost is another important index that should be considered when using the WTM model. In this paper, a hybrid iteration model for solving coupled sets is set up. In this model, two objectives, quality loss and development cost, are defined, and a constraint condition is proposed so as to satisfy the premise of the WTM model. The following section analyzes how to build this model. Modeling Design Iteration Based on Hybrid Iteration Strategy. For a coupled set, its execution time TT (total time) includes the time consumed by task transmission and interaction. Let S denote the task execution sequence obtained after tearing; the abstract model of the problem, formula (1), is to search for a feasible task execution sequence S that makes the execution time shortest. However, formula (1) is very abstract and needs further discussion. S denotes a feasible task execution sequence after tearing a coupled set, and every feasible task sequence corresponds to a certain time consumption. This relationship is expressed by formula (2), in which S_k represents the task sequence produced by the k-th tearing operation on the coupled task set and the function T(·) calculates the corresponding design and execution time of that sequence. Suppose the coupled task set can be torn in K different ways; combining this with formula (2), formula (1) can be transformed into formula (3), the aggregative time model based on task transmission and interaction, which minimizes T(S_k) over the K candidate sequences. As can be seen from this model, the shortest task transmission and interaction time identifies the optimal task execution sequence, and under this sequence the whole design duration of the coupled set becomes shortest. Moreover, the aggregative time is measured by accumulating the execution time of all the tasks. The measurement of task transmission and interaction is described by formula (4), which involves the practical transmission time; this transmission time can in turn be calculated by formula (5) as a weighted sum over the impact influences, with each influence entering through its value and its weight. According to this analysis, the model can be built based on the following assumptions [18]. (1) All tasks are done in every stage. (2) Rework performed is a function of the work done in the previous iteration stage. (3) The work transformation parameters in the matrix do not vary with time. We take formula (5) mentioned above as the first objective function, which is used to measure the quality loss of the decoupling process. The other objective function, development cost, is obtained as the cumulative sum over the whole iteration process. In addition, the constraint condition of the model requires Ω < 1, i.e., the entries in every row or in every column of each torn block must sum to less than one. Based on these analyses, the hybrid model set up in this paper consists of the two objective functions (6) and (7), subject to constraint (8), where the first objective represents quality loss and the second development cost. The sets appearing in constraint condition (8) are the small coupled sets obtained after the tearing approach, and the condition applies to each of them.
This constraint condition is used to assure that the decomposed small coupled set can converge. Artificial Bee Colony Algorithm for Finding a Near-Optimal Solution The hybrid model set up in the above section is difficult in finding out the optimal solution by conventional methods such as branch and bound method and Lagrangian relaxation method. Due to its simplicity and high-performance searching ability, heuristic algorithm has been widely used in NP-hard problems. As a new swarm intelligence algorithm, artificial bee colony algorithm (ABC) has strong local and global searching abilities and has been applied to all kinds of engineering optimization problems. In this section, the ABC algorithm is used to solve this coupled problem. Artificial Bee Colony Algorithm. The ABC algorithm is one of the most recently introduced optimization algorithms inspired by intelligent foraging behavior of a honey bee swarm. It was firstly proposed by Karaboga [24] for optimizing multivariable numerical functions. Furthermore, Basturk et al. [25] also applied ABC to function optimizations with constraints and the simulation results had shown that this Computational Intelligence and Neuroscience 5 intelligent algorithm is superior to other heuristic algorithms such as ant colony optimization (ACO) [26], particle swarm optimization (PSO) [27], and artificial plant optimization (APO) [28] in 2006. In addition, the ABC algorithm has been also used to solve large-scale problems and engineering design optimization. Some representative applications are introduced as follows. Singh [29] applied the ABC algorithm for the leaf-constrained minimum spanning tree (LCMST) problem and compared the approach against GA, ACO, and tabu search. In literature [29], it was reported that the proposed algorithm was superior to the other methods in terms of solution qualities and computational time. Zhang et al. [30] developed the ABC clustering algorithm to optimally partition objectives into cluster and Deb's rules were used to direct the search direction of each candidate. Pan et al. [31] used the discrete ABC algorithm to solve the lotstreaming flow shop scheduling problem with the criterion of total weighted earliness and tardiness penalties under both the idling and no-idling cases. Samanta and Chakraborty [32] employed ABC algorithm to search out the optimal combinations of different operating parameters for three widely used nontraditional machining (NTM) processes, that is, electrochemical machining, electrochemical discharge machining, and electrochemical micromachining processes. Chen and Ju [33] used the improved ABC algorithm to solve the supply chain network design under disruption scenarios. The computational simulations revealed the ABC approach is better than others for solving this problem. Bai [34] developed wavelet neural network (WNN) combined with a novel artificial bee colony for the gold price forecasting issue. Experimental results confirmed that the new algorithm converged faster than the conventional ABC when tested on some classical benchmark functions and was effective in improving modeling capacity of WNN regarding the gold price forecasting scheme. All these researches illustrated that the ABC algorithm has powerful ability to solve much more complex engineering problems [35,36]. In the basic ABC algorithm, the colony of artificial bees contains three groups of bees: employed bees, onlookers, and scouts. 
Employed bees determine a food source within the neighborhood of the food source in their memory and share their information with onlookers within the hive, while onlookers select one of the food sources according to this information. In addition, a bee carrying out random search is called a scout. In ABC algorithm, the first half of the colony consists of the employed bees and the remaining half includes the onlookers. There is only one employed bee corresponding to one food source. That is to say, the number of employed bees is equal to the number of food sources around the hive. The position of a food source denotes a possible solution for the optimization problem and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The initial population of solutions is filled with number of randomly generated -dimensional real-valued vectors (i.e., food sources). Each food source is generated as follows: where = 1, 2, . . . , , = 1, 2, . . . , , and min and max are the lower and upper bounds for the dimension , respectively. These food sources are randomly assigned to number of employed bees and their fitness is evaluated. In order to produce a candidate food position from the old one, the ABC used the following equation: where ∈ {1, 2, . . . , } and ∈ {1, 2, . . . , } are randomly chosen indexes. Although is determined randomly, it has to be different from . is a random number in the range [−1, 1]. Once is obtained, it will be evaluated and compared to . If the fitness of is equal to or better than that of , will replace and become a new member of the population; otherwise is retained. After all employed bees complete their searches, onlookers evaluate the nectar information taken from all employed bees and choose one of the food source sites with probabilities related to its nectar amount. In basic ABC, roulette wheel selection scheme in which each slice is proportional in size to the fitness value is employed as follows: where fit( ) is the fitness value of solution . Obviously, the higher the fit( ) is, the more the probability is that the th food source is selected. If a position cannot be improved further through a predetermined number of cycles, then that food source is assumed to be abandoned. The scouts can accidentally discover rich, entirely unknown food sources according to (9). The value of predetermined number of cycles is called "limit" for abandoning a food source, which is an important control parameter of ABC algorithm. There are three control parameters used in the basic ABC: the number of the food sources which is equal to the number of employed bees ( ), the value of limit, and the maximum cycle number (MEN). Figure 4 summarizes the steps of the basic ABC. A Novel Artificial Bee Colony Algorithm for Identity Design Iteration. The iteration model built in Section 3 is a typical NP-hard problem. Therefore, it is difficult to find out the optimal solution using conventional technologies. In the past decades, ABC algorithm, as a typical method of swarm intelligence, is more suitable to solve combination optimization problems. However, the basic ABC algorithm mentioned in Section 4.1 is only designed to solve continuous function optimization problems and is not suitable for discrete problems. As a result, in this section, we design discrete ABC algorithm to solve coupled sets and the detailed process is shown as follows. (1) Solution Representation. According to the characteristics of the problem, real number encoding is adopted. 
The solution representation is shown in Figure 5. Because the matrix in this illustration has three rows and three columns, the real numbers 1, 2, and 3 represent the corresponding rows and columns of the DSM, respectively. Figure 5 shows three different chromosomes representing three different spread patterns. (2) Population Initialization. To guarantee an initial population of sufficient quality and diversity, we use two strategies. One is to assign a randomly generated solution to every employed bee; the other is to generate a portion of the food sources from experiential knowledge, so as to describe decoupling schemes with lower quality loss or lower development cost. (3) Food Source Evaluation. In this discrete ABC algorithm, two indexes are used to evaluate a food source: one is the quality loss incurred when using the tearing approach, described by formula (6); the other is the development cost caused by the iteration process, defined by formula (7). Note that these two objectives conflict with each other: the greater the quality loss, the lower the development cost, and vice versa. The two extreme cases correspond to the maximum quality loss and the minimum development cost, as shown in Figure 6. As can be seen from Figure 6, suppose that the coupled set is composed of 5 tasks. In the first situation, if the tearing approach is not used, there is no quality loss in the development process and the WTM model is used to analyze the coupled set; however, the entries in every row or in every column must then sum to less than one in order to satisfy the premise of the WTM model, otherwise the whole development process does not converge. The other situation represents the case where the dependencies among tasks are not considered at all and the large coupled set is decomposed into five independent tasks. The development cost is then equal to the sum of these five tasks' costs, described by the execution times of the tasks; because no iterations exist in this situation, the development cost is at its minimum. The target of the ABC algorithm is to search for a feasible decoupling scheme that reduces both development cost and quality loss. In this paper, weights are used to transform the multiple-objective problem into a single-objective one so as to simplify the problem-solving process. (4) Employed Bee Phase (illustrated in Figure 7). If the neighboring solutions do not satisfy the preference constraints, the old one should be retained. Furthermore, in order to enlarge the search region and diversify the population, five related approaches based on SWAP, INSERT, or INVERSE operators are adopted to produce neighboring solutions, as follows: (1) performing one SWAP operation on a sequence; (2) performing one INSERT operation on a sequence; (3) performing two SWAP operations on a sequence; (4) performing two INSERT operations on a sequence; (5) performing two INVERSE operations on a sequence. The food sources in the neighborhood of the current position may perform differently in the evaluation process, so a suitable self-learning form should be selected. In addition, for the selection of food sources, if the new food source is better than the current one, the new one is accepted; that is, greedy selection is adopted. (5) Onlooker Bee Phase. In the basic ABC algorithm, an onlooker bee chooses a food source depending on the probability value associated with that food source.
In other words, the onlooker bee chooses one of the food sources after making a comparison among the food sources around current position, which is similar to "roulette wheel selection" in GA. In this paper, we also retain this approach to make the algorithm converge fast. (6) Scout Bee Phase. In the basic ABC algorithm, a scout produces a food source randomly. This will decrease the search efficacy, since the best food source in the population often carried better information than others. As a result, in this paper, the scout produces a food source using several SWAP, INSERT, and INVERSE operators to the best food source in the population. In addition, to avoid the algorithm trap into a local optimum, this process should be repeated several times. (7) Disposal of Constraint Condition. The constraint condition may affect the feasibility of decoupling scheme. As a result, we introduce penalty function method to dispose of constraint condition and make the scheme that does not satisfy constraint condition have a lower possibility to be selected in the next generation. Application Example In this section, a numerical example deriving from an engineering design of a chemical processing system [37] is utilized so as to help to understand the proposed approach. In this example, an engineering design of a chemical processing system has 20 tasks and detailed task information is listed in Table 1. Firstly, DSM method is used to model the dependencies among tasks; then analytic hierarchy process (AHP) is adopted to set up 0-1 DSM and partitioning algorithm is used to find out the coupled sets existing in DSM; subsequently, the hybrid iteration model proposed in this paper is introduced to deal with the decoupling problem; finally, the simulation is obtained. In the first step, according to dependency modeling technology mentioned in literature [2], the DSM model is set up as shown in Figure 8, where the empty elements represent no relationships between two tasks and number "1" represents input or output information among tasks. For example, task 1 requires information from tasks 13 and 15 when it executes. Additionally, task 1 must provide information to tasks 4, 5, 10, 14, 16, and 18; otherwise they cannot start. Nevertheless, Figure 8 only denotes the "existence" attributes of a dependency between the different tasks. In order to further reveal their matrix structure, it is necessary to quantify dependencies among tasks. Because quantification of dependencies among tasks is helpful to reveal essential features of tasks, we introduce a two-way comparison scheme [4] to transform the binary DSM into the numerical one. The main criteria of this approach are to perform pairwise comparisons in one way for tasks in row and in another way for tasks in columns to measure the dependency between different tasks. In the rowwise perspective, each task in rows will serve as a criterion to evaluate the relative connection measures for the nonzero elements in that row. It means that for each pair of tasks in rows, which one can provide more input information than the other. Similarly, in the column-wise perspective, each task in columns will serve as a criterion to evaluate the relative connection measures in that column. It also means that for every pair of tasks compared in columns, which one can receive more output information than the other. The detailed process is omitted due to the length limitation of this paper and authors may refer to literature [4] to know of this approach. 
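To show how these ingredients fit together, the sketch below gives a simplified rendering (our own, not the authors' implementation) of the permutation encoding, the five SWAP/INSERT/INVERSE neighborhood moves, greedy selection, and a scout restart based on the best food source. The fitness function is a dummy placeholder standing in for the weighted combination of quality loss, development cost and the constraint penalty of formulas (6)-(8), and the roulette-wheel onlooker phase is omitted for brevity.

```python
# Simplified discrete-ABC skeleton (our own illustration).  Solutions are
# permutations of task indices; evaluate() is a dummy stand-in for the
# paper's weighted quality-loss/cost objective with constraint penalty.
import random

def swap_op(seq):
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insert_op(seq):
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def inverse_op(seq):
    s = seq[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

NEIGHBORHOODS = [  # the five move types listed in the employed bee phase
    lambda s: swap_op(s),
    lambda s: insert_op(s),
    lambda s: swap_op(swap_op(s)),
    lambda s: insert_op(insert_op(s)),
    lambda s: inverse_op(inverse_op(s)),
]

def evaluate(seq):
    """Placeholder fitness (lower is better); the real objective would be the
    weighted sum of quality loss and development cost plus a penalty."""
    return sum(abs(task - pos) for pos, task in enumerate(seq))

def abc_step(foods, trials, limit):
    """One employed-bee pass with greedy selection; exhausted sources are
    restarted by the scout from a perturbed copy of the best source."""
    best = min(foods, key=evaluate)
    for idx, food in enumerate(foods):
        candidate = random.choice(NEIGHBORHOODS)(food)
        if evaluate(candidate) <= evaluate(food):   # greedy selection
            foods[idx], trials[idx] = candidate, 0
        else:
            trials[idx] += 1
        if trials[idx] > limit:                     # scout bee phase
            foods[idx], trials[idx] = inverse_op(swap_op(best)), 0
    return foods, trials

random.seed(1)
foods = [random.sample(range(8), 8) for _ in range(10)]   # 10 food sources
trials = [0] * len(foods)
for _ in range(200):
    foods, trials = abc_step(foods, trials, limit=20)
best = min(foods, key=evaluate)
print(best, evaluate(best))
```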
The final numerical DSM is shown in Figure 9. Subsequently, partitioning algorithm is adopted and five subprocesses have been obtained as shown in Figure 10. The first subprocess contains 3 tasks such as 3, 7, and 12, and all of them can be executed without input information from others; the second one consists of tasks 2, 9, 13, and 15, and they must receive information from the first subprocess; the third one is a large coupled set including tasks 1, 4, 5, 8, 10, 11, 17, and 18, and all the tasks are interdependent; the fourth one is a small coupled set comprised of tasks 6, 14, 16, 19, and 20, where all the tasks must depend on information from the first, the second, and the fourth subprocess. The fifth one includes tasks 16 and 19 and all the tasks are independent. As can be seen from Figure 10 block 2 is a small coupled set and the classic WTM can be used to solve this problem. However, block 1 is a large coupled set and the entries either in every row or in every column of WTM sum to more than one, so the hybrid iteration method should be used in this situation. When using the hybrid iteration model, tearing approach is applied to transform the large coupled set into some small ones and then improved ABC algorithm is used to find the optimal decoupling schemes according to measuring two objectives including quality loss and development cost as well. The related parameters of ABC algorithm are set as follows: = 10, = 20, and = 500. The simulations results are shown in Figures 11 and 12. Due to the exclusiveness of these two objectives, the best tearing result should bring the minimum quality loss and the original coupled set does not decompose. Nevertheless, the iteration process does not converge and the development process is not feasible. In addition, the minimum development cost corresponds to eight independent tasks and all relationships among tasks are not considered. The development cost can be calculated as follows: 6 + 8 + 4 + 3 + 5 + 9 + 5 + 5 = 45 (Yuan/Time). Furthermore, the effects of the double-objectives on the coupled set decomposition are analyzed. Figure 13 describes the change curves including these two objectives. We can see from it that different schemes have their own advantages. Decision makers can select different design iteration process according to practical product development requirements. For example, Table 2 displays development cost and quality loss corresponding to different decoupling schemes and design engineer can choose different strategies to decompose large coupled sets. According to different strategies, expected objectives may be achieved at the expense of the other ones. All in all, the higher the development cost is, the lower the quality loss is and vice versa. 12 Computational Intelligence and Neuroscience Conclusions In this paper, the shortcomings existing in WTM model are discussed and tearing approach as well as inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is also introduced to find out the optimal decoupling schemes. The main works are as follows: firstly, tearing approach and inner iteration method are analyzed for solving coupled sets; secondly, a hybrid iteration model combining these two technologies is set up; thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving; finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. 
The future research may focus on how to extend the model to other real-world practices. In addition, how to further improve the performance of the ABC algorithm is another issue needing to be studied.
v3-fos-license
2007-10-09T13:56:06.000Z
2007-07-27T00:00:00.000
55245666
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://escholarship.org/content/qt2t94k3vb/qt2t94k3vb.pdf?t=pazrlp", "pdf_hash": "7cdea2730f209da4d467665345f283513f9a0ea4", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41202", "s2fieldsofstudy": [ "Physics" ], "sha1": "7cdea2730f209da4d467665345f283513f9a0ea4", "year": 2007 }
pes2o/s2orc
Gauged Discrete Symmetries and Proton Stability We discuss the results of a search for anomaly free Abelian Z_N discrete symmetries that lead to automatic R-parity conservation and prevents dangerous higher-dimensional proton decay operators in simple extensions of the minimal supersymmetric extension of the standard model (MSSM) based on the left-right symmetric group, the Pati-Salam group and SO(10). We require that the superpotential for the models have enough structures to be able to give correct symmetry breaking to MSSM and potentially realistic fermion masses. We find viable models in each of the extensions and for all the cases, anomaly freedom of the discrete symmetry restricts the number of generations. I. INTRODUCTION Supersymmetry (SUSY) is widely believed to be one of the key ingredients of physics beyond the standard model (SM) for various reasons: (i) stability of the Higgs mass and hence the weak scale; (ii) possibility of a supersymmetric dark matter; (iii) gauge coupling unification, suggesting that there is a grand unified theory (GUT) governing the nature of all forces and matter. The fact that the seesaw mechanism for understanding small neutrino masses also requires a new scale close to the GUT scale adds another powerful reason to believe in this general picture. There are however many downsides to SUSY. For instance, while the SM guarantees nucleon stability, in its supersymmetric version, there appear two new kinds of problems: (i) There are renormalizable R-parity breaking operators allowed by supersymmetry and standard model gauge invariance, e.g. Here, Q, L, u c , d c and e c denote the left-chiral quark-doublet, lepton doublet, u-type, d-type and electron superfields. Combination of the last two terms leads to rapid proton decay and present limits on nucleon stability imply (for squark masses of TeV; cf. e.g. [1,2,3]): These terms also eliminate the possibility of any SUSY particle being the dark matter of the Universe. 1 Usually assumptions such as either R-parity or matter parity (cf. [4]) are invoked to forbid these couplings and rescue the proton (as well as dark matter) stability. (ii) A second, more vexing, problem is that even after imposing R-parity, one may have dimension five R-parity conserving operators such as κ ijkℓ Q i Q j Q k L ℓ /M P . Such operators also lead to rapid proton decay. In fact, present nucleon stability limits put an upper limit on κ 1121 , κ 1122 10 −8 [1]. In what follows we will refer to these operators as Q Q Q L. To understand that these problems can be attributed to SUSY, recall that the R-parity breaking terms are forbidden in the case of SM by Lorentz invariance in the standard model and the Q Q Q L is suppressed by two powers of Planck mass and hence not problematic. In this study, we focus on the problem of baryon number non-conservation and require the model to satisfy the following constraints: (i) R-parity symmetry is exact so that it prevents catastrophic proton decay; (ii) R-parity conserving dimension five proton decay operators of type Q Q Q L are either forbidden or suppressed to the desired level and (iii) the superpotential of the theory has enough structure for ensuring proper symmetry breaking down to the standard model and give fermion masses. We look for symmetries that allow all desired terms in the superpotential while forbidding the unwanted terms discussed and satisfying the anomaly constraints. 
The anomaly constraints can be related to triangle graphs involving not only the discrete symmetry and the gauge symmetries but also gravity. The reason for requiring anomaly freedom is the following: there is a general belief that non-perturbative gravitational effects such as black holes and wormholes break global symmetries of nature [5]. In effective field theory language, they are parametrized by Planck-scale suppressed higher-dimensional operators. In fact, the above-mentioned Q Q Q L type operators are a manifestation of the fact that global baryon number symmetry is broken by such effects. On the other hand, the no-hair theorem of general relativity says that any non-perturbative gravitational effect must respect the gauge symmetries. Therefore, if there is a gauged discrete symmetry in the theory that prevents the undesirable terms under discussion, they will be absent even after all non-perturbative effects are taken into account. How does one ensure that a discrete symmetry is a gauge symmetry? This problem has been extensively studied in the literature in the context of the MSSM [1,2,6,7,8,9,10], and the general procedure is to calculate the various anomaly equations involving the discrete group together with gravity and the gauge group, i.e. the vanishing of the D-g-g and D-G^2 anomalies, where D stands for the discrete symmetry group in question, g is the graviton, and G is the (continuous) gauge symmetry on which the theory is based. In the context of the MSSM, a discrete Z_6 symmetry has been identified, dubbed 'proton-hexality' in [2], that contains R-parity as a Z_2 subgroup and forbids Q Q Q L [7]. Remarkably, anomaly freedom of this Z_6 requires the number of generations to be 3 [1]. Moreover, it has been shown that (given 3 generations) this is the only anomaly-free symmetry that allows the MSSM Yukawa couplings and neutrino masses while forbidding the dangerous operators [2]. Such symmetries can be extended so as to also forbid the µ term [11]. (For an approach to ensuring proton stability by flavor symmetries see e.g. [12].) On the other hand, the charge assignment is different for different standard model representations. This raises the question of whether nucleon stability can be ensured by discrete symmetries in (unified) theories where standard model representations get combined into larger multiplets, so that the charge assignment is restricted more strongly. We therefore seek discrete symmetries ensuring sufficient proton stability in three gauge extensions of the supersymmetric standard model: (i) the left-right symmetric model [13], (ii) the Pati-Salam model [14] and (iii) SO(10). All these models incorporate the B − L gauge group, which is generally used in the discussion of the seesaw mechanism for neutrino masses [15,16,17,18,19] (for a review see e.g. [20]) and also provides one way to guarantee R-parity conservation [21,22,23]. Due to the higher gauge symmetry, which must be broken down to the SM gauge symmetry, specific terms must be present in the superpotential. This poses constraints on the discrete symmetry. One main result of the study is a connection between the order of the discrete group and the number of generations in all the cases. We give examples of viable models for all the different gauge groups. In our discussion, we follow a certain 'route of unification': we start with the left-right symmetric extension of the supersymmetric standard model, proceed via the Pati-Salam model to SO(10) GUTs, and finally comment on how our results might be used in string compactifications.
This paper is organized as follows: in section II, we discuss anomaly-free gauge symmetries ensuring proton stability in left-right models; we proceed to the Pati-Salam model in section III, and in section IV we discuss SO(10) models. We give our conclusions in section V. II. LEFT-RIGHT MODEL AND DISCRETE SYMMETRIES In this section, we discuss the left-right symmetric extension of the MSSM, i.e. the gauge group is SU(3) c × SU(2) L × SU(2) R × U(1) B−L . This amounts to breaking the B − L symmetry of the model by either a B − L = 2 triplet or a B − L = 1 doublet. Table I displays the field content, in particular the Higgs sector of the triplet model. A. Left-right symmetric models - doublet Higgs case The (wanted) superpotential is given in Eq. (3); on the other hand, the couplings listed in Eq. (4) must be forbidden (we suppress coefficients). To study the anomaly constraints in this model for an arbitrary Z_N group, we start by giving the charge assignments under Z_N to the various superfields (denoted as q_F for the field F) and writing down the anomaly constraints. The anomaly conditions for the Z_N-g-g and Z_N-G^2 anomalies involve N_g, the number of generations (the first equation makes use of a relation which follows from equation (10) of [7]). The assignments must be consistent with the superpotential (3) and have to forbid the terms in (4). We scanned over possible non-anomalous Z_N symmetries with N ≤ 12, keeping the number of generations N_g as a free parameter. Remarkably, the smallest viable N_g we found is 3, and the smallest N that works with 3 generations is 6. An example is shown in Table II:
q_L  q_Q  q_Lc  q_Qc  q_Φ  q_χ  q_χc  q_χ̄  q_χ̄c  q_S
1    1    5     5     0    0    2     0     4      0
By giving vacuum expectation values (vevs) to the fields χ c and χ̄ c, the Z_6 symmetry is broken to a Z_2 symmetry under which matter is odd while the MSSM Higgs are even. That is, we have obtained an effective R-parity which, although there is a gauged B − L symmetry, originates from an 'external' Z_6. B. Left-right models - triplet case The transformation properties of the fields under the gauge group are shown in Table I (a) and (c). In this case, the right-handed neutrino masses arise from the renormalizable couplings in the theory. We have to forbid Q^3 L and (Q^c)^3 L^c. There are many anomaly-free discrete symmetries which do the job. The interesting point is that in this case, the minimum number of generations is N_g = 2 with N = 2 for the discrete group. The smallest symmetry that works for 3 generations is Z_3.
FIG. 1: Effective proton decay operators. (a) shows the usual triplet exchange diagram [24,25]. (b) illustrates that the amplitude vanishes if the mass partner of the Higgs triplet does not couple to SM particles [27].
III. PATI-SALAM MODEL We now proceed to the Pati-Salam model, i.e. the gauge group is G PS = SU(4) c × SU(2) L × SU(2) R . The new feature of this model compared to the left-right model just discussed is that the quarks and leptons belong to the same representation (see Table III). The rest of the discussion is completely analogous to the left-right case, and in fact the solutions displayed in Table II for the discrete charges apply to the doublet case; as in the triplet version of the left-right model, there are many solutions. Note that the triplet version of the Pati-Salam model does not have proton decay due to SU(3) c ⊂ SU(4) invariance. IV. SO(10) GUT AND DISCRETE SYMMETRIES We now turn our attention to the discussion of discrete symmetries in SO(10) GUTs.
In SO(10) models, the dimension 5 proton decay operators Q Q Q L have two sources:(i) the higher-dimensional coupling [16 m ] 4 and (ii) effective operators emerging from integrating out Higgs triplets [24,25] (see figure 1). Triplet masses of the order M GUT appear to be too small to be consistent with the observed proton life time [26]. As discussed, the coefficient of the Q Q Q L operators have to be strongly suppressed. This requires an explanation of why both contributions to these operators are simultaneously small. As we shall see, such an explanation might arise from a simple discrete symmetry. The proton decay via Higgs triplet exchange can be forbidden by eliminating the mass term for the 10 multiplet (H). However one needs to make the color triplets in H heavy so that coupling unification is maintained. This can be done by introducing a second 10-plet (H ′ ) such that it forms a mass term with H but the color triplet field in it does not couple to standard model matter [27]. Therefore, from now on we will consider SO(10) models with 2 10-plets. We will discuss two classes of models: (i) where B − L is broken by 16-Higgs fields (cf. Table IV (b)) and (ii) where B − L is broken by 126-Higgs fields [28,29] (cf. Table IV (c)). We consider Abelian discrete ( N ) symmetries and require that higher dimensional R-parity conserving leading order ∆B = 0 operators, which in this case are of type 16 4 m (where the subscript m stands for matter), are forbidden as are all R-parity breaking terms while allowing all terms in the superpotential that are needed to break the GUT symmetry down to the MSSM. Our focus is on proton stability, and we leave other issues such as fermion masses and doublet-triplet splitting for future studies. A. 16-Higgs models In this class of models, one has an independent motivation for introducing a second 10-plet coming from doublettriplet splitting. Further, apart from a pair of 16 ⊕ 16-Higgses, 45-and 54-plets are required to ensure proper GUT symmetry breaking down to MSSM (cf. [30,31,32,33]). The superpotential terms that must be allowed are: )). This leads to the following constraints on the N charges: Here, we denote the N charge for a field F by q F . Next, we list the anomaly constraints, 16 (N g q ψm + q ψH + q ψ H ) + 10 (q H + q H ′ ) + 45 q A + 54 q S = 0 mod N ′ (8a) To forbid the dangerous couplings ψ m ψ m H ′ , ψ 4 m , ψ m ψ H and ψ 3 m ψ H , ψ m ψ H H, ψ m ψ H H ′ and ψ m ψ H A, the values of the N charges have to be chosen such that they satisfy the inequalities 2 q ψm + q H ′ = 0 mod N , The smallest symmetry allowing to fulfill all criteria is 6 and requires N g = 3. A possible charge assignments is q ψm = 1, q ψH = −2, q ψ H = +2, q H = −2, q H ′ = +2, q 45 = 0 (cf. tables IV (a) and (c)). This charge assignment allows for seesaw couplings and the possibility of fermion masses from couplings of type ψ m ψ m H. The allowed operator ψ m ψ m ψ 2 H contributes to both the fermion masses as well as to the seesaw. We note that the charge assignment is such that we have 3 × generation + vector-like (10) under SO(10) × 6 . The model also eliminates the dangerous proton decay operator Q Q Q L or operator of type (ψ m ) 4 /M P . We note that, although there are higher-dimensional gauge (and 6 ) invariant operators, they do not give rise to proton decay operators for the following reasons: (ii) any B − L neutral combination is also 6 invariant because it has to involve as many ψ H as ψ H fields, as only the SM singlets with B − L charge ±1 attain a vev. 
Therefore the product of the 6 non-invariant combination Q Q Q L with the B − L neutral combination of ψ H and ψ H fields cannot be 6 invariant. Consider, for instance, the operators 1 In the first operator, the expectation value of [ψ H ] 4 vanishes because of the first argument while the second operator is not 6 invariant. We also observe that operators like ψ 4 H H 2 , which would lead to small diagonal entries in the H − H ′ mass matrix, do not exist for the same reason. That is, in this model, proton decay is only due to dimension 6 operators. We refrain from spelling out the detailed phenomenological analysis of this model. However, our preliminary studies seem to indicate that one can achieve doublet-triplet splitting and realistic fermion masses while avoiding proton decay problem by extending the Higgs content. A complete analysis of these issues are defered to a future publication. Let us also comment that, like in the left-right model with doublets, the 6 symmetry gets broken by the ψ H and ψ H vevs down to a 2 which forbids W R , i.e. acts as an R-parity. This means that R-parity in this SO(10) model does not originate from B − L. B. 126-Higgs models We now discuss models where the 16 H ⊕ 16 H get replaced by 126 ⊕ 126 -the motivation being that R-parity becomes an automatic symmetry. Such models have been extensively discussed in the literature [34,35,36,37]. In our context it means that the last two of the four inequalities in Eq. (9) do not exist (see Eq. (13) below). Instead we have the following set of constraints on the charges from anomaly freedom The superpotential constraints can be decomposed in analogs of Eq. (7) 2q ψm + q H = 0 mod N , and analogs of Eq. (9) 2 q ψm + q H ′ = 0 mod N , (13a) 4 q ψm = 0 mod N . (13b) Typically in a class of these models, there are only 210 dimensional representations that need to couple to ∆ and ∆ fields among themselves as well as with 10 Higgs [38,39]. These imply that the discrete charge of 210 vanishes and also that of ∆ and ∆ are opposite. Substituting these conditions into Eq. (11), it becomes clear that if there is only a single 10 Higgs in the model (or, if q H ′ = 0), the requirement of anomaly freedom becomes rather constraining. We could only find a 8 symmetry with one generation. However, once we allow for non-trivial q H ′ , it is possible to have simple anomaly free N symmetries which satisfy all constraints, the simplest example being a 3 symmetry with the charge assignment listed in table V. The N ≤12 symmetries with non-trivial q H ′ require N g = 3 or larger. Another possible symmetry is 6 with the charge assignment listed in table IV (c). This model has the same effective structure as the 126 model of Ref. [34,35,36,37] as far as the discussion of fermion masses go (even though it has 2 10-plets, one of the 10's does not couple to fermions due to 6 charge assignments). For the same reasons as in the 16-Higgs models, the Q Q Q L type induced proton decay operators are forbidden. Again we refrain from a detailed phenomenological analysis of the way this model leads to MSSM at low energies. We close the discussion with a comment on how easily the MSSM Higgs fields emerge from the superpotential: the relevant part of the superpotential has the form: which has the right linear combination of MSSM doublets to maintain all the simple form for the fermion mass results of Ref. [34,35,36,37]. A detailed analysis of these issues will be given elsewhere. C. 
SO(10) GUTs in higher dimensions Let us now comment on implications of our findings for higher-dimensional models of grand unification, such as 'orbifold GUTs' [40,41,42,43,44,45,46], which provide a simple solution to the doublet-triplet splitting problem. In such models the dimension 5 proton decay can be naturally suppressed [42,43] (while dimension 6 proton decay is slightly enhanced [47]) since here the mass partner of the Higgs triplet has vanishing couplings to matter, as in the discussion above. However, (brane) couplings like ψ 4 m , also leading to proton decay, have not been discussed in this scheme. A reliable discussion of such operators seems hardly possible in the effective higher-dimensional field theory framework. One possible way to address this question is thus to embed the model into string theory or, in other words, to derive orbifold GUT models from string theory. The first steps for doing so have been performed in Refs. [48,49,50]. This has further lead to the scheme of 'local grand unification' [51,52,53,54], which facilitates the construction of supersymmetric standard models from the heterotic string [52,53,55,56]. Here, the two light MSSM matter generations originate from 16-plets localized at points with SO(10) gauge symmetry. Some of these models can have an R-parity arising as 2 subgroup of a gauged, non-anomalous B − L symmetry [55,56,57], like in ordinary GUTs (however, without the need for 126-plets). On the other hand, Q Q Q L operators remain a challenge [53,56]. The fact that these operators could be eliminated so easily in conventional GUTs by simple symmetries leads to the expectation that similar symmetries will be helpful in the string-derived supersymmetric standard models with (local) GUT structures. One lesson which one might learn from our analysis is that one may derive an effective R-parity and suppress Q Q Q L by a discrete (possibly 6 ) symmetry under which matter 16-plets have a universal charge. One might further hope to get insights about the origin of the discrete symmetries (which remains somewhat obscure in the 4D field-theoretic approach) in string models. These issues will be studied elsewhere. V. CONCLUSION AND COMMENTS Motivated by the beauty of the ideas of supersymmetry and unification, we have started a search for discrete symmetries that forbid proton decay operators in gauge extensions of the supersymmetric standard model. We required the symmetries to allow the standard interactions and to be anomaly free. Considering the left-right symmetric, Pati-Salam and SO(10) GUT models with various Higgs contents, we could identify (surprisingly simple) symmetries that satisfy all our criteria. In many cases, there is a connection between the anomaly freedom and the number of generations. Often, simple symmetries exist only for 3 generations (or multiples thereof), as in [1]. In the SO(10) models, our symmetries forbid dimension 5 proton decay operators. Our findings can be interpreted in the following way. Supersymmetric models with an extended or GUT symmetry are often challenged by proton decay. To rectify this, one might be forced to introduce additional (discrete) symmetries. Our examples then show that R-parity can be a consequence of these additional symmetries rather than being related to B − L. From this one might conclude that the appearance of fields with even B − L charges is not a necessity, and, for instance, 16-Higgs and 126-Higgs SO(10) models can be on the same footing. 
It is also interesting that the minimal 126-based SO(10) models become free of all dangerous proton decay operators without losing their ability to be predictive in the fermion sector once we add a simple anomaly free discrete symmetry.
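The charge scans underlying the results of sections II-IV can be organized as a simple brute-force search over Z_N assignments. The sketch below shows only the generic machinery: the operator lists and the anomaly condition in it are hypothetical placeholders rather than the actual superpotential terms and anomaly equations of the text, so it illustrates the procedure, not the paper's results.

```python
# Generic charge-scan skeleton (illustrative only).  The 'required',
# 'forbidden' and anomaly constraints below are hypothetical placeholders;
# the real conditions are the superpotential and anomaly equations in the text.
from itertools import product

FIELDS = ["L", "Q", "Lc", "Qc", "Phi", "chi", "chic", "chibar", "chibarc", "S"]

# Operators are lists of fields; their Z_N charge is the sum of field charges.
required  = [["L", "Lc", "Phi"], ["Q", "Qc", "Phi"], ["chi", "chibar", "S"]]  # hypothetical
forbidden = [["Q", "Q", "Q", "L"], ["L", "Lc", "chi"]]                        # hypothetical

def charge(op, q):
    return sum(q[f] for f in op)

def anomaly_free(q, N, Ng):
    # Hypothetical stand-in for the linear Z_N-G^2 / Z_N-g-g conditions, in
    # which the matter charges enter multiplied by the number of generations Ng.
    return (Ng * (q["L"] + 3 * q["Q"]) + q["Phi"] + q["chi"] + q["chibar"]) % N == 0

def viable(q, N, Ng):
    return (all(charge(op, q) % N == 0 for op in required)       # allowed terms
            and all(charge(op, q) % N != 0 for op in forbidden)  # forbidden terms
            and anomaly_free(q, N, Ng))

def scan(N, Ng):
    hits = []
    for qs in product(range(N), repeat=len(FIELDS)):
        q = dict(zip(FIELDS, qs))
        if viable(q, N, Ng):
            hits.append(q)
    return hits

# Example: count how many toy Z_3 assignments survive for three generations.
print(len(scan(3, 3)))
```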
v3-fos-license
2017-08-03T00:05:22.743Z
2017-05-03T00:00:00.000
25741218
{ "extfieldsofstudy": [ "Psychology", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11469-017-9768-5.pdf", "pdf_hash": "a03909c6c51bcb4612003faf133a0e9d976eb278", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41206", "s2fieldsofstudy": [ "Psychology" ], "sha1": "7f6c426b7c7551780048a5386ab317e02cee9a32", "year": 2017 }
pes2o/s2orc
Exploring Individual Differences in Online Addictions: the Role of Identity and Attachment Research examining the development of online addictions has grown greatly over the last decade with many studies suggesting both risk factors and protective factors. In an attempt to integrate the theories of attachment and identity formation, the present study investigated the extent to which identity styles and attachment orientations account for three types of online addiction (i.e., internet addiction, online gaming addiction, and social media addiction). The sample comprised 712 Italian students (381 males and 331 females) recruited from schools and universities who completed an offline self-report questionnaire. The findings showed that addictions to the internet, online gaming, and social media were interrelated and were predicted by common underlying risk and protective factors. Among identity styles, ‘informational’ and ‘diffuse-avoidant’ styles were risk factors, whereas ‘normative’ style was a protective factor. Among attachment dimensions, the ‘secure’ attachment orientation negatively predicted the three online addictions, and a different pattern of causal relationships were observed between the styles underlying ‘anxious’ and ‘avoidant’ attachment orientations. Hierarchical multiple regressions demonstrated that identity styles explained between 21.2 and 30% of the variance in online addictions, whereas attachment styles incrementally explained between 9.2 and 14% of the variance in the scores on the three addiction scales. These findings highlight the important role played by identity formation in the development of online addictions. behavior to achieve the initial mood-modifying effects; withdrawal symptoms refer to the unpleasant feeling states and/or physical effects that occur when the individual decreases or suddenly reduces their addictive activities; conflict refers to both the intrapsychic and interpersonal problems that arise as a consequence of addictive activities and conflicts with all other things in their lives such as relationships and work and/or education; and relapse refers to the unsuccessful efforts to stop engaging in the addictive behavior if the individual is trying to cease. The problematic use of social media (i.e., social networking site [SNS] addiction [Kuss and Griffiths 2017]) is another emerging technological addiction that has been argued as falling into Young's (1999) online relationship addiction category (Kuss and Griffiths 2011). However, if social networking is seen as a discrete online application, it could arguably classed under her category of 'net compulsions' along with activities such as online gambling or gaming. Griffiths' (2005) six criteria of addiction have also applied to SNS addiction, which have been operationalized by some psychometrically robust scales including the six-item Bergen Facebook Addiction Scale (BFAS; Andreassen et al. 2012) and the Facebook Intrusion Questionnaire (FIQ; Elphinston and Noller 2011). However, Griffiths (2012Griffiths ( , 2013 argued that these instruments focused on just one specific commercial SNS (i.e., Facebook) rather than on the activity itself (i.e., social networking) and proposed that researchers should develop more reliable and valid addiction scales assessing social networking. This led to the development of the Bergen Social Media Addiction Scale (BSMAS; Andreassen et al. 2016). 
To gain a better understanding of online addictions, ongoing research has identified the risk factors related to the development of pathological behaviors, as well as the protective factors that appear to prevent such pathological behaviors. Empirical studies have highlighted an interplay of factors ranging from intraindividual components to sociodemographic and personality-related characteristics (Andreassen 2015; Andreassen et al. 2013; Cerniglia et al. 2016; Griffiths 2011, 2012; Wittek et al. 2016). Among dispositional determinants, identity characteristics have been found to be both protective factors and risk factors in health-risk outcomes, such as delinquency and substance abuse (Côté and Levine 2002). Marcia's (1966) ego-identity status model states that individuals can use drugs out of curiosity, as an avoidant coping strategy, and/or as an expression of an antisocial identity (Christopherson et al. 1988). Nevertheless, little attention has been paid to the relationship between identity and online addictions. Given that Berzonsky's social cognitive perspective of identity has already been applied to research on substance addictions (Hojjat et al. 2015; White et al. 1998; White et al. 2003), it may also provide a good framework to explore online addictions. More specifically, Berzonsky (1989, 1990) conceptualized identity in terms of three styles referring to the strategy an individual uses to process, structure, utilize, and revise self-relevant information. Information-oriented individuals, characterized by a well-differentiated and well-integrated identity structure, actively seek out and deliberately process identity-relevant information to make well-informed choices. Normative-oriented individuals, characterized by conservative and inflexible attitudes, focus on the expectations, values, and prescriptions held by significant others, especially parents. Diffuse-avoidant oriented individuals, characterized by a fragmented and loosely integrated identity structure, procrastinate over identity conflicts and problems until situational demands force them to make a choice. As these stylistic strategies process information from the social reality in which individuals reside, they may, to some extent, influence the way in which individuals deal with and behave in interpersonal relationships (Berzonsky 2011). To the best of the present authors' knowledge, very few studies have focused on identity styles as predictors of internet and social networking addiction. The normative and diffuse-avoidant styles have emerged as protective and risk factors, respectively, whereas the informational style has emerged as an ambiguous factor, having been found both negatively associated with internet addiction (Arabzadeh et al. 2012; Ceyhan 2010; Tabaraei et al. 2014) and unrelated to internet addiction (Morsünbül 2014; Sinatra et al. 2016). Another important but understudied factor in predicting online addictions concerns attachment style. This implies dispositional differences in the functioning of the attachment system and reflects cognitions and emotions, thus influencing different ways of interacting with acquaintances and strangers (Mikulincer and Shaver 2007). Internet addiction has been found to be associated with insecure attachment (Lin et al. 2011; Severino and Craparo 2013), with anxious and avoidant styles (Shin et al. 2009), and with dismissive and preoccupied attachment styles (Odacı and Çıkrıkçı 2014). 
Little attention has been paid to the association between attachment styles and other forms of online addiction (e.g., internet gaming disorder and social networking addiction). Recent studies have shown the predictive role of attachment in the excessive use of Facebook and online social network sites (Rom and Alfasi 2014; Yaakobi and Goldenberg 2014). On the basis of the aforementioned findings, and given that no empirical investigation has simultaneously considered the interrelationship of online addictions with dispositional factors, the present study investigated the extent to which identity styles and attachment orientations accounted for the three types of online addiction (i.e., internet addiction, online gaming addiction, and social networking addiction). The hypotheses of the current study were formulated taking into account the aforementioned socio-cognitive approach to identity formation (Berzonsky 1990) and the model of attachment style proposed by Feeney et al. (1994). Feeney et al. (1994) developed a method to assess where each individual falls on two dimensions, views of self and views of others, each ranging from positive to negative (or high to low). Four categories of attachment were delineated: (i) secure individuals have positive views of both self and others due to the responsive caregiving received in childhood; (ii) preoccupied individuals have a positive view of others and a negative view of self and strive for self-acceptance and the approval of others; (iii) fearful-avoidant individuals have negative views of self and others, feel unloved and unlovable, and thus often avoid others to escape possible rejection; and (iv) dismissive-avoidant individuals have positive views of self and negative views of others and reject and avoid other people to maintain their high sense of self (Bartholomew and Horowitz 1991; Feeney et al. 1994). In light of these theoretical assumptions, it was expected that the online addictions would be associated (i) positively with the preoccupied, fearful-avoidant, and dismissive-avoidant attachment orientations and with the diffuse-avoidant identity style and (ii) negatively with the secure attachment orientation and with the informational and normative identity styles. Moreover, in accordance with transactional models of development (Bosma and Kunnen 2001; Grotevant 1987) and given the patterns of association between attachment and identity styles observed by Doumen et al. (2012), it was further expected that (i) identity styles would be the primary variables for predicting online addictions, and (ii) attachment styles would contribute incrementally to this prediction. Method Participants and Procedure Participants were recruited from Italian schools and universities and were voluntarily invited to participate in the study by completing a self-report offline questionnaire which took approximately 15 min to complete. A total of 712 questionnaires were collected. The mean age of participants was 21.63 years (SD = 3.90), with 381 males and 331 females. The sample was split into two age categories: those aged 16 to 19 years were classed as adolescents (N = 267; M age = 18.22, SD = 1.04; M = 137, F = 130) and those aged over 20 years were classed as young adults (N = 445; M age = 23.67, SD = 3.55; M = 244, F = 201). Socio-Demographics The questionnaire included demographic information concerning gender, age, and educational status. 
Internet Addiction The Italian version of the Internet Addiction Test (IAT; Young 1998; Fioravanti and Casale 2015) is a 20-item scale that assesses the severity of self-reported compulsive use of the internet. Each item is rated on a 5-point Likert scale ranging from 1 (never) to 5 (always), leading to total scores of between 20 and 100. Example items include "How often do you find yourself anticipating when you will go online again?" and "How often do others in your life complain to you about the amount of time you spend online?" The higher the score, the more likely someone has an internet addiction, with a score of 80 (out of 100) indicating that internet use is 'severe' and causing major problems in one's life. In the present study, the internal reliability of the scale was excellent (Cronbach's α = 0.96) and was similar to the values reported by Fioravanti and Casale (2015). Internet Gaming Disorder The Italian version of the nine-item Internet Gaming Disorder Scale-Short Form (IGDS9-SF; Pontes and Griffiths 2015; Monacis et al. 2016a) assesses the severity of IGD and its detrimental effects by examining both online and/or offline gaming activities occurring over a 12-month period. The scale comprises nine items corresponding to the nine core criteria defined by the DSM-5. They are answered on a 5-point Likert scale ranging from 1 (never) to 5 (very often), and examples of items are "Have you lost interest in previous hobbies and other entertainment activities as a result of your engagement with the game?" and "Do you feel more irritability, anxiety, or even sadness when you try to either reduce or stop your gaming activity?" Higher scores indicate a higher degree of gaming disorder. In the present study, the scale had excellent reliability (Cronbach's α = 0.96), in line with the value reported by Pontes and Griffiths (2015). Social Media Addiction The Italian version of the Bergen Social Media Addiction Scale (BSMAS; Andreassen et al. 2016) assesses experiences of social media use over the last year. It contains six items reflecting core addiction elements (Griffiths 2005). Each item is answered on a 5-point Likert scale ranging from 1 (very rarely) to 5 (very often). Examples of items are "How long during the last year have you spent a lot of time thinking about social media or planned use of social media?" and "How long during the last year have you used social media so much that it has had a negative impact on your job/studies?" In the present study, the internal consistency of the scale was very good (Cronbach's α = 0.88) and was comparable with the finding reported in the original version (Andreassen et al. 2016). Identity Styles The Italian version of the Revised Identity Style Inventory (ISI-5; Berzonsky et al. 2013; Monacis et al. 2016c) assesses three identity styles, i.e., 'informational,' 'normative,' and 'diffuse-avoidant.' The scale comprises 36 items rated on a 5-point Likert scale (from 1 = Not at all like me to 5 = Very much like me). Sample items include "I handle problems in my life by actively reflecting on them" for the 9-item informational scale; "I think it is better to adopt a firm set of beliefs than to be open-minded" for the 9-item normative scale; and "Who I am changes from situation to situation" for the 9-item diffuse-avoidant scale. The total score of each scale is computed by summing responses to the items. 
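Each of the instruments above is scored by summing Likert-type item responses, and reliability is reported as Cronbach's alpha. The snippet below is a minimal sketch of how such a total score and alpha coefficient could be computed; the DataFrame, column names, and simulated responses are illustrative assumptions rather than the study's actual data or scoring code.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items)."""
    items = items.dropna()
    k = items.shape[1]                               # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: six BSMAS-style items scored 1-5 for 712 respondents
rng = np.random.default_rng(0)
bsmas_items = pd.DataFrame(
    rng.integers(1, 6, size=(712, 6)),
    columns=[f"bsmas_{i}" for i in range(1, 7)],     # hypothetical column names
)
bsmas_total = bsmas_items.sum(axis=1)                # scale score: sum of item responses
print("total score range:", bsmas_total.min(), "-", bsmas_total.max())
print("Cronbach's alpha:", round(cronbach_alpha(bsmas_items), 2))
```

With real item-level responses in place of the simulated ones, the same function would yield the kind of alpha values reported here for the IAT, IGDS9-SF, BSMAS, and ISI-5 subscales.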
The internal consistency of the subscales was good (Cronbach's α of 0.77 for the informational style, 0.82 for the diffuse-avoidant style, and 0.62 for the normative style). These values were comparable with those reported in the study carried out by Monacis et al. (2016b). Attachment Style The Italian version of the Attachment Style Questionnaire (ASQ; Feeney et al. 1994; Fossati et al. 2003) comprises 40 Likert-type items and assesses five styles of adult attachment related to two latent factors, anxiety and avoidance, formulated by Hazan and Shaver (1987) and Bartholomew (1990). There are five styles: confidence (C; 8 items) reflects the core aspects of secure attachment, i.e., attitudes of trust and positive expectations of self and others (e.g., "I feel confident that other people will be there for me when I need them"); discomfort with closeness (DwC; 10 items) reflects the role of withdrawal in avoidant attachment as defined by Hazan and Shaver (1987) (e.g., "I prefer to depend on myself rather than other people"); need for approval (NfA; 7 items) reflects the need for acceptance and confirmation from others and characterizes Bartholomew's (1990) conceptualization of anxious attachment (e.g., "I wonder why people would want to be involved with me"); preoccupation with relationships (PwR; 8 items) reflects a dependent approach to relationships according to Hazan and Shaver's (1987) notion of anxious attachment (e.g., "I often feel left out or alone"); and relationships as secondary (RaS; 7 items) reflects Bartholomew's (1990) concept of dismissive avoidant attachment (e.g., "To ask for help is to admit that you are a failure"). Each item is rated on a six-point scale, ranging from 1 (totally disagree) to 6 (totally agree). Previous findings reported adequate internal consistency and test-retest reliability (Fossati et al. 2003). In the current study, Cronbach's alpha values for the ASQ subscales ranged from 0.68 to 0.85. Ethics The study procedures were carried out in accordance with the Declaration of Helsinki. The investigation was approved by the research team's university ethics committee. Permission to conduct the research was required from heads of schools and institutions. Written informed consent was obtained from students over 18 years and from parents or legal guardians of students aged under 18 years. Statistical Analyses Statistical analyses comprised independent-samples t tests to examine gender and age effects on the scores of the dependent variables (internet addiction, social media addiction, and online gaming addiction). Bivariate correlation analyses were performed to analyze the pattern of associations between the variables of interest. The relationships between gender and the dependent variables were assessed using point-biserial correlation coefficients. Causal relationships were examined by three hierarchical multiple regression analyses with the score of each addiction scale as the dependent variable. Age and gender were included as independent variables in the first step, the three identity styles were introduced in the second step, and the five attachment styles were added in the third step. Descriptive Analyses Mean scores and standard deviations for all variables are displayed in Table 1. With regard to gender, significant differences were found in the scores of the IGDS9-SF (t(654.737) = 10.237, p < .001) and the IAT (t(696.434) = 6.137, p < .001), whereas there were no differences in BSMAS scores. 
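The three-step hierarchical regression described in the Statistical Analyses section can be illustrated with a short sketch that adds each predictor block in turn and tracks the change in R². All variable names and the simulated values below are hypothetical stand-ins, not the study's dataset or analysis code (the original analyses were run in SPSS-style software).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data standing in for the study variables (n = 712 respondents)
rng = np.random.default_rng(1)
n = 712
df = pd.DataFrame({
    "iat": rng.normal(40, 15, n),              # dependent variable, e.g., IAT total score
    "age": rng.normal(21.6, 3.9, n),
    "gender": rng.integers(0, 2, n),
    "informational": rng.normal(30, 6, n),
    "normative": rng.normal(28, 5, n),
    "diffuse_avoidant": rng.normal(25, 7, n),
    "confidence": rng.normal(35, 6, n),
    "discomfort_closeness": rng.normal(36, 8, n),
    "need_approval": rng.normal(22, 6, n),
    "preoccupation": rng.normal(27, 7, n),
    "relationships_secondary": rng.normal(18, 6, n),
})

blocks = [
    ["age", "gender"],                                    # step 1: socio-demographics
    ["informational", "normative", "diffuse_avoidant"],   # step 2: identity styles
    ["confidence", "discomfort_closeness", "need_approval",
     "preoccupation", "relationships_secondary"],         # step 3: attachment styles
]

predictors, prev_r2 = [], 0.0
for step, block in enumerate(blocks, start=1):
    predictors += block                                   # add the next block of predictors
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["iat"], X).fit()
    print(f"Step {step}: R2 = {model.rsquared:.3f}, delta R2 = {model.rsquared - prev_r2:.3f}")
    prev_r2 = model.rsquared
```

The R² increments printed at steps 2 and 3 correspond to the incremental variance attributed to identity styles and attachment styles in the results reported below.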
In addition, significant age differences emerged between adolescents and young adults. Table 2 shows the interrelationships among all variables. Findings demonstrated that IGD, SMA, and IA were positively associated. Scores on the three addiction scales also correlated positively with age, diffuse-avoidant style, DwC, RaS, NfA, and PwR and negatively with normative style and confidence. Among the online addictions, IA was negatively correlated with the informational style, and IGD and IA were negatively correlated with gender. Internet Addiction Results indicated that Model 1 explained 8.6% of the variance in IA, F(2, 709) = 33.54, p < .01. Model 2 explained an incremental 30% of the variance in the dependent variable score, F(5, 706) = 87.35, p < .01, above and beyond the variance accounted for by socio-demographic characteristics. Model 3 explained an incremental 14% of the variance in the dependent variable score, F(10, 711) = 75.58, p < .01, above and beyond the variance accounted for by identity styles and socio-demographic characteristics. The final model explained a total of 51.9% of the variance (adjusted R² = .512), and the R² change was statistically significant at each step. The predictors were statistically significant except for the PwR attachment style. All beta coefficients are reported in Table 3. Internet Gaming Disorder Results indicated that Model 1, including gender and age, explained 20.6% of the variance in IGD, F(2, 709) = 91.70, p < .01. Identity styles added in Model 2 explained an incremental 21.2% of the variance in the dependent variable score, F(5, 706) = 101.41, p < .01, above and beyond the variance accounted for by socio-demographic characteristics. Attachment styles in Model 3 explained an incremental 13% of the variance in the dependent variable score, F(10, 711) = 85.06, p < .01, above and beyond the variance accounted for by identity styles and socio-demographic characteristics. Therefore, the final model explained a total of 55% of the variance (adjusted R² = .542), and the R² change was statistically significant at each step. The predictors were statistically significant, except for the attachment style of PwR. All beta coefficients are reported in Table 3. Social Media Addiction Results showed that Model 1 explained 6.8% of the variance in SMA, F(2, 709) = 25.67, p < .01. Model 2 explained an incremental 22.2% of the variance in the dependent variable score, F(5, 706) = 57.01, p < .01, above and beyond the variance accounted for by socio-demographic characteristics. Model 3 explained an incremental 9.2% of the variance in the dependent variable score, F(10, 711) = 43.43, p < .01, above and beyond the variance accounted for by identity styles and socio-demographic characteristics. The final model explained a total of 38.3% of the variance (adjusted R² = .374), and the R² change was statistically significant at each step. The predictors were statistically significant, except for gender and the PwR attachment style. All beta coefficients are reported in Table 3. Discussion The aim of the present study was to simultaneously explore individual differences in online addictions by taking into account identity and attachment styles. Findings demonstrated that internet addiction, online gaming addiction (i.e., IGD), and social media addiction were interrelated and predicted by common underlying risk factors. 
Hierarchical multiple regressions showed that, among the predictors, identity styles explained the greater amount of variance in the three online addictions, thus supporting their main role as protective or risk factors. The relationships between the different addictive behaviors and their independent variables (age, gender, identity, and attachment styles) are discussed in the following subsections. Relationships between Different Online Addictions Results from bivariate correlations showed high and positive associations between the three online addictions. More specifically, and as expected, internet addiction was similarly associated with IGD and social media addiction. These findings corroborate previous research (e.g., Andreassen et al. 2013, 2016; Monacis et al. 2016a; Sinatra et al. 2016) and provide empirical support for the concept of internet addiction as an umbrella construct, which comprises a wide range of online activities, such as communication via social networking sites and playing online video games. Additionally, the high and positive association between IGD and social media addiction was probably due to the fact that many adolescents and young adults play games via social networking sites (Griffiths 2014) to receive pleasure, a sense of accomplishment, and so on. This also provides a route by which social networking games could represent a gateway to other potentially problematic leisure activities (Griffiths 2015). Demographic Factors In general, both gender and age influenced online addictive behaviors. With regard to gender, being male was associated with both IGD and internet addiction. In line with other findings (e.g., Kuss et al. 2014; Andreassen et al. 2016; Monacis et al. 2016a), these results confirm males' preference for these online activities, such as playing videogames, which can involve playing alone competitively. The lack of association between gender and social media addiction in the present study is consistent with Sinatra et al.'s (2016) investigation but in contrast with other studies reporting contradictory findings, such as higher SNS addiction scores in females (Andreassen et al. 2016; Monacis et al. 2016a; Pfeil et al. 2009) or higher SNS addiction scores in males (Raacke and Bonds-Raacke 2008). Another risk factor is represented by age, since young adults seem to be more engaged in online activities. However, unlike previous findings (Andreassen et al. 2016), this positive relationship may reflect the tendency of people to engage with more online technologies and applications as they age to satisfy their personal needs. Identity Style Formation Findings showed that the construct of identity plays an important role in predicting online addictive behaviors. However, the predicted relationships were only partially confirmed, given the unexpected positive relationship between the informational style and online addictions (i.e., as informational style scores increased, online addiction levels significantly increased). Moreover, the relationship between the informational style and social media addiction appears to be stronger than the relationships between this identity style and IGD and internet addiction. The influence on overuse of social media may reflect the individual's experience of a constant urge to check social networks, which are often considered the best environment for finding new information and updates and useful tools for expressing and actualizing one's own identity, either via self-display of personal information or via connections. 
On the one hand, the informational style could be considered a significant risk factor for online addictions, along with other personality-related factors, such as extraversion, which has been found to be a significant and positive predictor of internet addiction (Zamani et al. 2011) and of social networking addiction (Wang et al. 2014). On the other hand, this result is in contrast with studies showing the protective role of the informational style in the development of online addictive behaviors (e.g., Arabzadeh et al. 2012; Ceyhan 2010; Tabaraei et al. 2014). As expected, the normative style was a protective factor for all three online addictions. Indeed, normative-oriented individuals, in internalizing significant others' expectations and values, tend to protect and conserve their own identity structure by reducing the use of technological communication. In this case, the virtual environment could represent an uncertain space characterized by a variety of identities and values. This negative relationship is in line with the findings reported by other researchers (e.g., Arabzadeh et al. 2012; Ceyhan 2010; Tabaraei et al. 2014; Sinatra et al. 2016). Finally, the diffuse-avoidant style is a risk factor in determining online addictions. Individuals with a diffuse-avoidant style tend to avoid or procrastinate over identity conflicts, probably through excessive use of the internet, social networking sites, and videogames. In other words, individuals who need to escape from real-life situations prefer to display their self-expression in open and virtual environments, characterized by the maintenance of established offline networks and by the manifold online ties that are indicative of bridging rather than bonding social capital. These findings give further support to previously reported findings (White et al. 2003; White et al. 1998; Hojjat et al. 2015). Another noteworthy observation is that, from the examination of the standardized regression coefficients of the two identity risk factors (the informational and diffuse-avoidant styles), the diffuse-avoidant style appears to have more predictive power for all three online addictions than the informational style. This stresses the potential role played by the diffuse-avoidant style as the most negative identity risk factor in the development of online addictions. Attachment Styles As expected, attachment styles are important dispositional factors in determining online addictive behaviors. Indeed, similarly to previous research, the findings of the present study generally supported the aforementioned hypotheses regarding the causal relationships between these constructs. More specifically, the secure attachment orientation negatively predicted the three online addictive behaviors. Individuals characterized by high self-esteem and enjoyment in intimate relationships and in sharing feelings with others (Bowlby 1969/1982, 1973, 1980) may be at lower risk of becoming addicted to online gaming, social networking, and the internet. The negative regression coefficients of the confidence style confirm its expected function, since it is a protective factor against addictions. However, the negative relationship between secure attachment and social networking addiction is inconsistent with other findings in which securely attached individuals, being able to manage SNSs, use them as a tool to have more social ties with others, thus increasing their sense of social belonging and interpersonal competency (Jenkins-Guarnieri et al. 2012; Oldmeadow et al. 2013; Pempek et al. 
2009). The expected positive associations were found between need for approval (referred to as an anxious style) and all three online addictions. Individuals with this attachment tendency, showing excessive desires and efforts for acceptance and being dependent on others, are more likely to use technologies to obtain approval and positive feedback, therefore putting themselves at risk of addiction. Moreover, individuals with the relationships-as-secondary style, which reflects avoidant attachment, are led by their attachment attitudes toward dismissive approaches to close relationships and tend to satisfy their need for social belonging by using the online format, which affords such a dismissive approach to close relationships by maintaining a 'safe' distance from others, thus developing at-risk addictive behaviors. On the other hand, the discomfort-with-closeness style (which also reflects avoidant attachment) represents a protective factor, since individuals unable to invest in intimacy and to share feelings, thoughts, and emotions with others tend to reduce any online activity. Consistent with other findings, this style is associated with less interest in the use of Facebook and other social networks (Oldmeadow et al. 2013). Finally, no significant associations emerged between preoccupation with relationships and addiction scores, thus contrasting with the findings reported by Schimmenti et al. (2014). This contradictory result may depend on the psychological characteristics of that earlier Italian sample of participants, who, having suffered childhood experiences of emotional, physical, and sexual abuse, tended to use the internet as a virtual retreat in order to protect themselves from feelings of loneliness and fears about real interactions. The present study is not without its limitations. First, much caution should be taken in the interpretation of these findings, given that participants were sampled on the basis of a self-selected convenience sampling strategy. Further studies should consider a more representative sample of the population in order to generalize the findings. Second, the use of a self-report questionnaire is associated with well-known biases (e.g., social desirability and recall biases). Future investigations should not only include a multi-method assessment of identity styles, attachment styles, and online addictions but also use longitudinal designs in order to better assess the directionality of the causal relationships. Taken as a whole, as far as the authors are aware, this is the first study to investigate individual differences in the interrelationships between three online addictions by integrating the theories of identity formation and attachment style. Despite the exploratory nature of the present study, it adds an innovative contribution to the existing literature. Compliance with Ethical Standards Conflict of Interest The authors declare that they have no conflict of interest. Ethical Approval All procedures performed in this study involving human participants were in accordance with the ethical standards of the University's Research Ethics Board and with the 1975 Helsinki Declaration. Informed Consent Informed consent was obtained from all participants. 
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
v3-fos-license
2022-12-31T16:05:29.374Z
2022-01-01T00:00:00.000
255299993
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://niv.ns.ac.rs/e-avm/index.php/e-avm/article/download/297/269", "pdf_hash": "51865cd4f13186bda4a624442364e2861f76988a", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41210", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "sha1": "1d56c6bde8b3b7ee40d1a831ad56174f7d279863", "year": 2022 }
pes2o/s2orc
THE MOST COMMON ANTHROPOZOONOSES IN THE REPUBLIC OF SRPSKA IN THE PERIOD 2015 – 2020 Zoonotic diseases are increasingly becoming an emerging public health threat, partially due to the risk of spillover events at the human-wildlife interface. Their potential for infecting people with exotic pathogens originating from unusual pets should not be overlooked. The aim of the study is to present and analyze the trend of zoonoses in the 2015-2020 period using the descriptive method. The source of data is reports of single cases of infectious diseases, which is in accordance with the applicable legislation governing this area. The incidence of anthropozoonoses was the highest in 2017, amounting to 16.5/100,000, while the lowest value in this period was in 2020, with 1.1/100,000. The share of anthropozoonoses in the total incidence of infectious diseases was also the lowest in 2020, with a value of 0.02%, while the highest share of this group of diseases was recorded in 2017 with a value of 1.42%. In the specified period, the three most commonly reported anthropozoonoses were Q fever, leptospirosis, and brucellosis. In 2020, the most frequently registered anthropozoonosis was toxoplasmosis, while in the previous 5 years this disease was not reported among the three most common. It is necessary to raise awareness about the presence of zoonoses in the overall incidence of infectious diseases in the Republic of Srpska, because, due to their commonly non-specific clinical picture, zoonoses are not the first to be considered in differential diagnosis. In the fight against zoonoses, a coordinated approach to "One Health" is necessary, which will enable the design and implementation of programs, policies, legislation and research in the area of public health. INTRODUCTION Six in ten human cases of infectious disease arise from animal transmission (Center for Disease Control, 2018). Fifty years ago, following the wide-scale manufacture and use of antibiotics and vaccines, it seemed that the battle against infections was being won for the human population. Since then, however, and in addition to increasing antimicrobial resistance among bacterial pathogens, there has been an increase in the emergence of zoonotic diseases originating from wildlife, sometimes causing fatal outbreaks of epidemic proportions. Zoonosis is defined as any infection naturally transmissible from vertebrate animals to humans. In addition, many of the newly discovered diseases have a zoonotic origin. Due to globalization and urbanization, some of these diseases have already spread all over the world, driven by the international flow of goods, people and animals. However, special attention should be paid to farm animals since, apart from direct contact, humans consume their products, such as meat, eggs, and milk. Therefore, zoonoses such as salmonellosis, campylobacteriosis, tuberculosis, swine and avian influenza, Q fever, brucellosis, STEC infections, and listeriosis are crucial for both veterinary and human medicine. Consequently, whenever a zoonosis outbreak is suspected, the medical and veterinary services should closely cooperate to protect public health (Libera et al., 2022). Zoonotic diseases, particularly those associated with livestock and poultry, are becoming an increasing threat to public health for various reasons. 
For example, the predictions suggest that the global human population will constantly increase and reach almost 10 billion by 2050. Consequently, this will result in higher food demand (United Nations, 2019). One Health is an effective approach for the management of zoonotic disease in humans, animals and environments. Examples of the management of bacterial zoonoses in Europe and across the globe demonstrate that One Health approaches of international surveillance, information-sharing and appropriate intervention methods are required to successfully prevent and control disease outbreaks in both endemic and non-endemic regions. Additionally, the One Health approach enables effective preparation and response to bioterrorism threats (Cross, 2018). Diagnostics plays a key role in disease surveillance. Misdiagnosis results in inappropriate treatment, or missed opportunities to prevent further disease transmission. The zoonoses discussed in this paper often present as undifferentiated febrile illnesses, and so a detailed history is key to diagnosis. More common ailments with similar symptoms are initially suspected, and diagnosis may be missed altogether in self-limiting cases (Gunaratnam et al, 2014). Large-scale zoonotic disease outbreaks will almost certainly continue to occur regularly in the future. Therefore, a better general understanding of the factors affecting variation in the severity of outbreaks is critical for the well-being of the global community (Stephens et al, 2021). Endemic zoonoses continue to be relatively neglected, often with a lack of local and international realization of the extent to which they impact human health and well-being. This is partly due to the issues surrounding local capacity and knowledge and partly because, unlike emerging infectious diseases (EIDs), they are not seen as a threat to people in the developed world. Both EIDs and endemic zoonoses, however, can be tackled using the One Health approach, which includes the identification and mitigation of human activities that lead to disease emergence and spread (Cunningham et al, 2017). In the fight against zoonoses, a coordinated approach to "One Health" is necessary, as it will enable the design and implementation of programs, policies, legislation and research in the field of public health. The aim of the study is to present and analyze the trend in zoonoses during the 2015-2020 period using the descriptive method. The source of data is reports of single cases of infectious diseases, which is in accordance with the applicable legislation governing this area. MATERIAL AND METHODS No ethical approval was obtained because this study did not involve laboratory animals. Only non-invasive procedures were used. As part of epidemiological surveillance, an analysis of the data obtained from monitoring the trend of anthropozoonoses according to European Union (EU) case definitions was performed (European Commission, 2018). EU definitions have been part of the national legislation for years now, and they are regulated by law. Using the descriptive method, the data obtained from all 54 primary health centers, as well as 10 hospitals in the Republic of Srpska, were analyzed. The data were obtained through the Notification of Infectious Diseases, which is an official and binding document for every doctor who registers and thus reports an infectious disease, as regulated by law. 
The disease reports are sent from these institutions to the Public Health Institute of the Republic of Srpska, which analyzes the data and generates official reports. The trend of anthropozoonoses in the mentioned period is described and the three most common anthropozoonoses for each year are determined. The case of each disease is classified as possible, probable, or confirmed on the basis of the national case definition criteria. Using statistical analysis with the statistical software SPSS 23, we compared the incidences of these diseases, while patient demographics were analyzed and statistically processed using the chi-squared (χ²) test. This test was used to determine whether there is a statistically significant difference between the observed frequencies of the three diseases in the observed groups and the frequencies of the same groups in the general population. RESULTS The incidence of anthropozoonoses was the highest in 2017, with 16.5/100,000, while the lowest value in this six-year period was in 2020, amounting to 1.1/100,000 (Figure 1). The share of anthropozoonoses in the total incidence of infectious diseases was also the lowest in 2020 and amounted to 0.02%, while the highest share of this group of diseases was recorded in 2017 with a value of 1.42%. In the specified period, the three most commonly reported anthropozoonoses were Q fever, leptospirosis, and brucellosis. The analysis of the collected data from epidemiological surveillance in the mentioned period showed that there was a total of 283 cases of these three diseases, and the incidence trend shows that the incidence was the highest in 2017, high in 2016, 2018 and 2019, while in 2020 the incidence was very low (Table 1). Separating the incidence for each of the three diseases individually gives the following picture: a total of 83 cases of brucellosis were reported in that period, and the incidence trend shows that the incidence of brucellosis was the highest in 2018 (2.96/100,000) and significantly lower in other seasons. In the observed period, a total of 86 cases of leptospirosis were reported, and the incidence was the highest in 2017, at 2.78/100,000, high in the 2019 season at 2.1/100,000, and significantly lower in other seasons. A total of 114 cases of Q fever were reported in the same period, with an incidence trend that was highest in the 2016 season, at 2.76/100,000, high in the 2017 (2.43/100,000) and 2019 (2.45/100,000) seasons, and significantly lower in other seasons (Figure 2). Based on the results of the χ² test (χ² = 56.993; p = 0.000), it can be concluded that there was a statistically significant difference in the total number of patients with all three diseases by sex. The number of male patients was significantly higher (p = 0.000 < 0.05) than the number of female patients, relative to the proportions of men and women in the general population, for all three diseases together as well as for each of the diseases individually (Figure 3). The data obtained from epidemiological population surveillance from 2015 to 2020 in the Republic of Srpska show that, out of a total of 283 patients with brucellosis, leptospirosis and Q fever, 155 (54.77%) lived in urban areas and 128 (45.23%) in rural areas. However, statistical analysis of the data for each of the diseases separately reveals that there are significant differences in this regard between these three diseases. 
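As a rough illustration of the χ² comparison described in the statistical analysis above (observed case counts versus the frequencies expected from the general population), a minimal sketch follows; the case counts and the population sex split used here are hypothetical, since the exact figures appear only in the paper's figures.

```python
from scipy.stats import chisquare

# Observed sex distribution of the 283 reported cases (hypothetical split for illustration)
observed = [198, 85]                      # males, females

# Expected counts if cases followed the general population sex ratio
# (a roughly 50/50 split is assumed here purely for illustration)
population_share = [0.50, 0.50]
expected = [share * sum(observed) for share in population_share]

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")
```

The same pattern, with expected counts derived from the urban/rural population split, would apply to the settlement-type comparison reported for the individual diseases below.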
For the individual diseases, the results of the χ² test for brucellosis (χ² = 1.157; p = 0.282) and leptospirosis (χ² = 2.110; p = 0.146) show that there was no statistically significant difference in the number of patients according to the type of settlement in which they lived, while for patients with Q fever (χ² = 33.778; p = 0.000) this difference is significant, as indicated by the relatively high value of the χ² statistic and the very low p value (p = 0.000 < 0.05) (Figure 4). The analysis of the regional distribution of patients, for each of the three diseases and in total, shows that the largest number of reported cases was registered in the Banja Luka region and the lowest in the Trebinje region. When it comes to the reports on the outbreaks of these three infectious diseases in the given period, one epidemic of Q fever was reported in 2016 in Banja Luka, in which 30 patients were registered, which is consistent with the incidence trend. According to data from population surveillance, the diagnosis of brucellosis, leptospirosis and Q fever was clinically established in 184 cases (65.02%) (probable cases), which is significantly more than the 99 cases (34.98%) in which the diagnosis was confirmed in the laboratory (confirmed cases). There is a similar relationship in terms of the method of establishing the diagnosis for each of the three diseases, with the number and share of clinically made diagnoses (probable cases) being highest in patients with Q fever (81 cases, 71.05%), while there were 33 confirmed cases (28.95%). The number and share of laboratory-confirmed cases was the highest in patients with leptospirosis (35 cases, 40.7%), while there were 51 probable cases (59.3%). The number and share of probable cases of brucellosis was 52 (62.65%), while there were 31 confirmed cases (37.35%) (Figure 6). DISCUSSION Communicable disease epidemiology is closely linked to pathogen ecology, environmental and social determinants, economic factors, access to care, as well as the state of country development (McMichael, 2012). Climate change continues to have both direct and indirect effects on communicable diseases, often in combination with other drivers, such as increased global travel and trade. The frequency, duration, and intensity of heat waves have increased across Europe, and the last decade was the warmest ever recorded (World Meteorological Organization, 2013). A global, integrated zoonotic disease surveillance system needs to detect disease emergence in human or animal populations anywhere in the world at the earliest time possible. An effective global, integrated zoonotic disease surveillance system requires effective surveillance at national, regional, and international levels, because information from outbreak investigations is used by human and animal health officials at all levels to implement response measures and evaluate the effectiveness of those responses. Emerging zoonotic diseases can occur at any time in any part of the world. Therefore, it is difficult to predict which pathogens may emerge, which human and/or animal populations they may affect, or how these pathogens may spread. From a growing number of experiences, the world has learned that it is crucial to detect and report emerging zoonotic disease outbreaks that occur in a single country or region. 
Early detection and reporting at the local level give the international community an opportunity to assist national authorities and implement effective response measures (Keusch et al, 2009). Q fever is a severe, zoonotic disease spread worldwide and caused by Coxiella burnetii. This disease was first described by Derrick in 1937 following an epidemic fever outbreak among employees at a slaughterhouse in Brisbane, Australia (Derrick, 1937). Q fever can manifest as an acute disease, usually as a self-limited febrile illness, pneumonia, or hepatitis. It may also occur as a persistent focalized infection with endocarditis. The main reservoirs of C. burnetii are cattle, sheep, and goats, but infections were detected in other animals such as domestic mammals, marine mammals, reptiles, birds, and ticks (Eldin et al, 2017). C. burnetii is most abundant in aborted fetuses, amniotic fluid and placenta after stillbirth or normal birth of offspring from infected mothers, and in the urine, feces and milk of infected animals. Transmission to humans most commonly occurs through inhalation of aerosolized bacteria from the placenta (delivery or abortion), feces, or urine of infected animals. Human-to-human transmission is extremely rare. Leptospirosis is a widespread bacterial zoonosis occurring most commonly in low-income populations living in tropical and subtropical regions, both in urban and in rural environments. Rodents are known as the main reservoir animals, but other mammals may also significantly contribute to human infections in some settings. Clinical presentation of leptospirosis is nonspecific and variable, and most of the early signs and symptoms point to the so-called 'acute fever of unknown origin' (Goarant, 2016; Adler et al, 2009). The implementation of the case definition is significant because all reported cases need to be categorized in the same way in accordance with international regulations (Nichols et al, 2014). The introduction of a case definition facilitated the early recognition of these diseases as well as the appropriate direction in their diagnosis and confirmation. This also enables the evaluation of the surveillance system through the analysis of the report of each individual case. Standards for good laboratory practices overlap with standards for good laboratory network operations. Good laboratory practice principles are simply applied to laboratory facilities that meet proper standards for testing, safety, and security; employ a trained and proficiency-tested staff; have standardized operating procedures, validated test protocols, and properly functioning equipment; and use a communication system that relies on common platforms and accurately and reliably reports test results in a timely manner. The Food and Agriculture Organization of the United Nations (FAO), the World Organization for Animal Health (WOAH, formerly OIE), and the World Health Organization (WHO) recognize a joint responsibility to minimize the health, social and economic impact of diseases arising at the human-animal interface by preventing, detecting, controlling, eliminating or reducing disease risks to humans originating directly or indirectly from domestic or wild animals, and their environments. 
An important aspect of efforts to mitigate potential health threats at the human-animal-ecosystems interface is early warning, supported by robust risk assessment to inform decisions, actions, and timely communication between agencies and sectors responsible for human health, animal health, wildlife, and food safety. In 2006, in response to health threats such as H5N1 highly pathogenic avian influenza (HPAI) and the severe acute respiratory syndrome (SARS), the three organizations consolidated their efforts to establish a Global Early Warning System for Major Animal Diseases Including Zoonosis (GLEWS). GLEWS became one of the mechanisms used jointly by the WOAH, FAO, and WHO to monitor data from existing event-based surveillance systems and to track and verify relevant animal and zoonotic events (FAO-WOAH-WHO, 2010). Based on the results of the study, it can be seen that the incidence of these diseases was the lowest in 2020. The cause of this drastic decline is largely the outbreak of the COVID-19 pandemic and the fact that it cast a shadow over other diseases due to the enormous burden it imposed on the health system. The pandemic certainly did not change the course of these diseases, but it did make their reporting, adequate diagnosis, and anti-epidemic action very difficult. Even aside from the pandemic, doctors who are the first to receive and treat patients do not consider zoonoses as the first option in differential diagnosis, especially because most zoonoses do not have specific symptoms at the beginning of the disease, and in the most severe clinical phase fever, malaise, headache, muscle aches, pneumonia, or even meningitis are also symptoms of many other non-zoonotic diseases. Our doctors were somewhat more cautious about zoonoses in the first few years after the catastrophic floods that hit this region in 2014, but that has changed over time due to the impact of several factors. In the observed period, out of the three most reported diseases, namely brucellosis, leptospirosis and Q fever, the largest number of reported cases were Q fever cases, with the highest incidence in 2016. The reason for this is the registered outbreak of this disease in the area of Banja Luka. Furthermore, significantly more patients fell ill in urban areas than in rural areas. These data may lead to the conclusion that contact of the urban population with villages through excursions, hiking, visits to rural families, and many other activities that bring this population into contact with rural areas allows a population without prior immunity to come into contact with Coxiella spores. In fact, close contact with the reservoir animals of this disease is not necessary; it is enough to inhale air carrying spores of the causative agent. Thus, in the area of the city of Banja Luka, there were several outbreaks of this disease in the period before 2010. A statistically significantly higher number of men than women contracted these three diseases, which is why these zoonoses have traditionally been described as "male diseases", mostly because men are more likely to engage in livestock farming, agriculture, and other activities in nature and with animals. In terms of age groups, the largest number of patients falls in the range of 20-64 years, which leads to the conclusion that these diseases affected the working population the most, namely those who come into contact with animals and their products through agriculture, livestock farming, etc. 
Based on the results of the analysis, the largest number of patients was registered in the region of Banja Luka, which is also the most populated area in the Republic of Srpska. Hospital and diagnostic capacities are the largest in this region, so the increased reporting of these diseases can be related to that fact. A significantly higher percentage of cases of brucellosis, leptospirosis, and Q fever was reported based on clinical criteria. This is certainly a weak point of the system of control over anthropozoonoses and a link that requires significant improvement. Adequate and precise diagnostics are necessary to confirm the case of any infectious disease, which makes the system of supervision and control of these diseases stronger and more reliable. Another weakness of the system is the absence of a unified electronic system for reporting infectious diseases: health institutions are not connected in an IT network that would provide a flow of information on reports of infectious diseases, outbreaks, and all other data necessary to analyze the situation or other unexpected health events. For this reason, it is impossible to get all the information about each patient, because most of the reporting and data sharing is paper-based or done via e-mail at our request. That is why establishing a network of health institutions with the Public Health Institute of the Republic of Srpska would be one of the main factors in improving and strengthening the infectious disease surveillance system. CONCLUSIONS The incidence of anthropozoonoses in the Republic of Srpska in the 2015-2020 period was the highest in 2017 and the lowest in 2020. The three most commonly reported diseases were brucellosis, leptospirosis, and Q fever. The reported cases of these three diseases were more common among the urban population; the patients were mostly male, and a majority of them belonged to the working population. The largest number of cases was reported as probable, without the microbiological confirmation stated in the case definition for each disease. It is necessary to improve the reporting of zoonoses in the Republic of Srpska in terms of case confirmation, as well as to raise awareness of the frequency and importance of anthropozoonoses among all physicians, especially those who first treat patients. ACKNOWLEDGEMENT This study is independent research by the above-mentioned authors and is not part of any financially supported project.
v3-fos-license
2020-07-30T02:02:55.905Z
2020-01-01T00:00:00.000
226611500
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/978-981-15-0614-7_70.pdf", "pdf_hash": "4cc7ac03fa63c342b513269af658c801d108e903", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41211", "s2fieldsofstudy": [ "Sociology" ], "sha1": "b2420ea6a5dbd5dce8886135d7770ab374dfa44b", "year": 2020 }
pes2o/s2orc
Normality, Freedom, and Distress: Listening to the Menopausal Experiences of Indian Women of Haryana This chapter explores variations in the experience of menopause among 28 postmenopausal women belonging to lower socioeconomic strata from the Indian state of Haryana. Singh and Sivakami base their research on in-depth qualitative interviews with the women to gauge their perceptions and experiences of menopause. They analyze the interviews thematically and identify three dominant narratives: menopause as a normal biological process, an insignificant event that goes unnoticed in the chaos of life; menopause as distress in silence, the distress arising from the intersection of poverty, gender, and patriarchy; and menopause as freedom, that is, freedom from societal restrictions and monthly distress. These narratives are distinct but often co-occur; for example, some women experience freedom after going through distress. Additionally, the authors report that participants express the need for emotional and social support during menopause and the desire to be understood rather than to be treated. Contested Meanings of Menopause The field of endocrinology, which enabled the extraction of synthetic estrogen, was established in 1930, leading to an era of interest in menopause by medical practitioners and researchers (Coney 1993). Menopause became popularly understood as a hormone deficiency disease leading to loss of femininity. The movement promoting hormone replacement therapy (HRT) in the 1960s to help women remain 'feminine forever' is highly criticized by feminists for its ageist and sexist agenda (Bell 1987; Coney 1993). Approaches to menopause within the medical community, however, have evolved from treating women and their bodies without reference to social context. Bell (1987) traces the history of medicalization of menopause in the United States and identifies three models (biological, psychological, and environmental) guiding medical practice. The biological model defines menopause as "a physiological process caused by cessation of ovarian function," emphasizing hormone deficiency and thus treatment with estrogen or HRT. The psychological model believes that women's personalities affect their symptom experience, so psychotherapy is the appropriate line of treatment. According to the environmental model, women's symptoms are the result of stresses and strains posed by changing social roles and responsibilities during midlife; it proposes that women change their lifestyle and habits to manage menopausal symptoms. All three models have identified the cause of distress during menopause as existing within women and thus advocate for medical intervention for every menopausal woman. Though the environmental model acknowledges the impact of external factors, the solution proposed is internal (Bell 1987). Feminists critique medical models for devaluing older women (Bell 1987; Coney 1993), presenting menopause as a 'hormonal deficiency disease,' and representing the menopause experience as universal and definitive, all of which have led to the promotion of HRT as the "elixir of youth" (Bell 1987; Coney 1993; Klein 1992). The use of HRT for managing menopausal symptoms became more contentious after the Women's Health Initiative (WHI) study that linked HRT use with increased risk of breast cancer, endometrial cancer, and cardiac morbidity (Hyde et al. 2010). 
The WHI study was one of the largest prospective studies assessing the long-term impact of HRT on women's health; however, it was stopped prematurely due to the alarming findings of increased risk of breast cancer and cardiac diseases among study participants (Hammond 2005). The media picked up the study findings, and physicians became highly cautious of using HRT for their patients (Lazar et al. 2007). However, the study was soon criticized in the medical, epidemiological, and social science literature for its design, sample selection, and the results not being significant (Derry 2004; Hammond 2005; Wren 2009; Gurney et al. 2014). The feminist-medical debate on the use of HRT for managing menopause resurfaced with the failure of the WHI study. However, we believe strongly that listening to women's own voices is important to further our understanding of menopause. Hence, the focus of this current study is to bring out the uniqueness in the experiences of menopause among Indian women of Haryana and to place them within the broader social context. The next section discusses the literature on menopause in India. We then explain the methodology adopted for this study, followed by a discussion of our findings and the three narratives that emerged. In the discussion section, we illuminate our findings using anthropological and sociological theories around life transitions and illness. We assert that the meanings of menopause are created by the complex interplay of social, cultural, and biological factors. Menopause in the Indian Context India, like many nations, is governed by patriarchal values where women's voices are often ignored or muted (Patel 2007; Syamala and Sivakami 2005). One area in which their experiences are silenced is the menstrual cycle. Menopause and menarche are taboos; they are rarely discussed in Indian society, as reflected by the dearth of studies on menopause in the Indian context. Studies on menopause in India have mainly focused on the differences in age at menopause and factors affecting it (Kriplani and Banerjee 2005; Syamala and Sivakami 2005; Sharma et al. 2007; Dasgupta and Ray 2009; Singh and Sivakami 2014). These differences are attributed to rural/urban residence, lifestyle factors, age, environment, education status, occupation status, and reproductive history (Stanford et al. 1987; Brambilla and Mckinlay 1989; Hill 1996; Hidayet et al. 1999; Harlow and Signorello 2000; Kriplani and Banerjee 2005; Syamala and Sivakami 2005; Dasgupta and Ray 2009; Singh and Sivakami 2014; Agarwal et al. 2018). Some studies found women who report no symptoms at all, while others report many symptoms, including hot flashes, night sweats, anxiety, irritability, loss of vision, joint pains, vaginal dryness, numbness, and tingling (Singh and Arora 2005; Bernis and Reher 2007; Dasgupta and Ray 2009; Singh and Sivakami 2014). There are two plausible explanations for this disparity: first, according to Aaron, Muliyil, and Abraham (2002), women do not report menopausal symptoms, as they link them with aging rather than menopause; and second, talking about menarche and menopause is still taboo in Indian culture. The extant studies on the perception of menopause in India indicate that menopause is seen as positive, since the end of menstrual bleeding removes the societal restrictions that come with the cultural view that menstruation is polluting (Singh and Arora 2005). The negative views about menstruation and menstruating women have appeared in the accounts of women from Haryana (Singh 2006). 
Singh's (2006) study in rural Haryana of practices during menstruation found that only 0.4% of the sample population was using sanitary napkins to manage menstrual bleeding because of the difficulty in accessing menstrual products. This created a source of monthly distress for these women. Moreover, studies on poor women from India (George 1994) have reported the difficulties women face in maintaining hygiene and managing menstrual bleeding in poor living conditions with limited space and no indoor toilets. A systematic review and meta-analysis of 138 studies on menstrual hygiene management among adolescent girls in India between 2000 and 2015 points out that adolescent girls experience menstruation as "shameful and uncomfortable" due to lack of water, sanitation, and hygiene facilities, insufficient puberty education, and poor hygienic practices to manage menstruation (van Eijk et al. 2016). The same study also showed the societal issues faced by Indian adolescent girls in the form of various restrictions, which add to the negative construction of menstruation. These difficulties construct menstruation as a monthly distress for these women, who later consider postmenopause as a "freedom from monthly tension" (Singh and Sivakami 2014). Also, because menstruation prevents them from entering places that are considered sacred or pure, menopause is considered a type of liberation in Haryana (Singh and Arora 2005; Singh and Sivakami 2014; van Eijk et al. 2016). It is important to understand the meanings of menopause as constructed by women themselves as an alternative to overreliance on the medical discourse. Some have argued that the reporting of menopausal symptoms as well as their management is based on studies of white women (Avis et al. 2001), and our policies reflect that. Policies need to be guided by "local biologies" (Lock and Kaufert 2001) to ensure effective management of menopause. This study, undertaken to amplify the voices of menopausal women from lower socioeconomic strata, was conducted in the North Indian state of Haryana. Haryana is known for its gender discrimination practices, which are reflected in the state's sex ratio, that is, the ratio of males to females in a population. Sex ratio is a powerful indicator of social health in any society (Patel 2007), and Haryana's is the lowest in India.

Methods and Participants

In 2012, we conducted in-depth qualitative interviews with 28 postmenopausal women from the lower socioeconomic strata in the state of Haryana. We recorded information related to women's socioeconomic status, educational and working status, parity, breastfeeding practices, lifestyle and eating habits, and menopause status, using a structured interview schedule as part of a larger study focusing on sociocultural differences in the age at menopause and symptom experience (see Singh and Sivakami 2014). A subset of women who identified themselves as postmenopausal and who agreed to share their menopause experience in detail were part of the qualitative narratives that form the sample participants for this study. We use pseudonyms throughout the paper to maintain privacy and confidentiality. The interviews lasted 45-60 minutes, and the first author made a detailed summary of the interviews. The first author is familiar with the local language, and thus women discussed their experience using their own words, in their own space and time. The women interviewed were between 45 and 60 years old.
They identified themselves as postmenopausal if their menstruation had stopped more than a year ago. The mean age at menopause was 47 years for urban women and 44 years for rural women. Of the 28 postmenopausal women, three women reported having had hysterectomies and thus had surgical menopause, while the rest had conventional menopause. In terms of education, 17 of the women were illiterate, 5 had completed education up to primary level, and only 6 women were educated up to senior secondary level. Four women were widowed and the rest were currently married. Almost all had children of marriageable age or were already married. Many of them were grandmothers. Six of them were currently working as unskilled labor and the rest did not work outside the home. After establishing a rapport with the women, we asked them to discuss their menopausal experience, and the topics ranged from the changes in their menstrual cycle, changes in body, managing menopause, family and work life, social support, peer comparisons, and local beliefs about menopause and its management. Their experiences ranged from having had no symptoms to suffering from a multitude of symptoms. The gamut of experiences suggested the role of social context in the uniqueness of each woman's experience. The narrative accounts of each woman were crafted from the information obtained during the interviews. We conducted narrative analysis to obtain rich insights into women's experience of menopause and identified themes and subthemes to gauge the significance of sociocultural context in the experience of menopause; we then linked them to theoretical models (Ryan and Bernard 2003). Three distinct but co-occurring narratives, discussed below, emerged from the detailed accounts of North Indian women from lower socioeconomic strata. We also read the narrative accounts repeatedly to identify five key themes grounded in the data: bodily changes or complaints, support system, visit to health provider, local beliefs about menopause and its management, and attitude toward menopause. The themes were then compared and contrasted across all interviews using constant comparison method (Corbin and Strauss 1990). We drew on sociological and anthropological theories of illness and transition to discuss the differences and similarities across narratives. Ultimately, the three distinct but co-occurring narratives that emerged were: menopause as a normal life transition, menopause as distress because it's taboo, and menopause as freedom from monthly distress and societal restrictions. We found that although these narratives co-occur as women reflect back or anticipate the future, one narrative dominated. We draw insights from anthropological and sociological theories of menopause, illness, and transition states (Kaufert 1982;Hyden 1997;Ballard, Kuh, and Wadsworth 2001) to illuminate the differences and similarities in narratives. We also compare and contrast the different narratives based on the five identified key themes. Menopause as a Normal Life Transition I do not have any issue . . . all of a sudden it stopped and I was happy. (Kamla,49,rural resident) Accounts of menopause as a normal life transition showed that women tend to normalize their symptoms by peer comparisons and by acknowledging the universality of menopause. Women used phrases like "every woman's issue" and "it's like childbirth" to normalize the experience. Every woman has to go through this, women are made to suffer, it's like child-birth you know. 
(Meena, 56, rural resident) It's over now. . . . I had to suffer from heavy bleeding for six months which every woman has to suffer. . . . I have heard that menopause happens this way only. (Jyoti,47,urban resident) Many women reported having had no symptoms and abrupt cessation of menstruation; for these women, menopause was a natural transition. They considered menopause symptom-free; they mentioned joint pains or headaches but associated those with aging rather than menopause (Singh and Sivakami 2014). Similar results have been reported by other Indian studies as well, indicating that Indian women report fewer symptoms, as they frequently link menopausal symptoms with symptoms of aging (Aaron, Muliyil, and Abraham 2002). In contrast to Western studies, where women are influenced by biomedical discourse and visit health providers to make sense of their experience and confirm their menopausal status (see Hyde et al. 2010), our study depends on "vernacular health theories": using popular beliefs and perceptions in local communities to understand illness or a life transition (Goldstein 2000) and to make sense of their bodily changes. These women depend on the affirmation of their community, as Kamla (49, rural resident) offers: "I discussed with other women in my community and they told me, it's menopause (mahina-bandh) for you. . . . . " The conception of menopause as a normal life transition also comes from the accounts of women who expressed the insignificance of menopause in their lives once their family was complete. Many women stated that they did not need the menstrual cycle past age 40, as menstruation is equated with reproduction, especially among illiterate women. As one woman said, "Why you need this? . . . Once your family is complete, you don't need it" (Gyani, 58, rural resident). Biomedical discourse also influenced women's perception of early menopause. Few who went to health providers mentioned their fear of getting uterine cancer. Most of the women acknowledged that they suffered from unmanageable, heavy, and painful bleeding during the menopause phase; however, they call it normal. They believe that if the 'bad blood' were to remain inside the body, they may develop cancer. This suggests the power of vernacular health theories in normalizing distress, which is crafted by the medical discourse of menopause (Goldstein 2000). Menopause became insignificant for those women who were busy managing the chaos of life, most notably poor women, whose struggle every day to ensure that their families get fed takes priority over reflection on their own health issues. Reena (58, urban resident), a domestic servant who did not recall her menopause experience, reflects this view: "I don't remember how it stopped . . . it's been 10 years now. . . . I think I had no complaints. . . . At that time I was busy doing my job [domestic servant]. I used to work in 10 houses and was busy whole day." Seema (53, rural resident) is another woman who failed to recall her menopause experience: "I never had time to think about menopause and all. . . . I don't even remember exactly when I stopped menstruating. . . . I was too busy managing my household chores and managing field work [agricultural fields]." Beena (49, rural resident) has a daughter of marriageable age, which in their culture is as soon as they turn 18. Beena is more concerned about the marriage of her daughter than her menopausal status. The latter has no bearing on her identity, while the former shapes her identity as a mother. 
She said, "It's a woman's issue which she has to bear. . . . Now, it's over but I have lot of other tensions. . . . I have to arrange for my daughter's wedding, she is 26 now. . . . It's getting late." Here, menopause, as a midlife transition is overshadowed by other events, and it becomes a routine and normal midlife transition, consistent with Ballard, Kuh, and Wadsworth (2001) finding that other life transitions, such as changing relationships with one's partner or children or changes in work life, often overshadow the experience of menopause. Most women in our study reported that menopause was an insignificant life event that either passed without any symptoms or without any symptoms that they remember. Those who did have unwanted symptoms normalized them by ascribing them universality, as reflected through peer comparisons and phrases such as "every woman has to suffer this" and "women are made to suffer." Menopause as Distress In this second thematic narrative, distress emerged mainly from somatic changes (heavy and painful bleeding, sleepless nights, irritability, anxiety, mood swings, and frequent headaches), often exacerbated by negative life events and an inability to share their experience with anyone. Rajni (52, urban resident) recalls her struggle with painful and heavy menstruation during her perimenopause phase: "I suffered during menopause for 2 years. . . . It was heavy and painful bleeding which kept me awake . . . [I] used to feel very hot, couldn't sleep, had frequent headaches . . . was feeling depressed. . . . . During that time my husband died . . . It was a tough time." In Rajni's account, there is no mention of depression or other body changes emerging from menopause. Rather, she reports that perimenopause became tough at the point where her husband died. That is, it was the death of her husband and her inability to share her distress with anyone that made perimenopause a difficult time for her. Sarla (51, rural resident), who is currently postmenopausal, recalls suffering and difficulties during perimenopause, particularly managing menstrual bleeding and maintaining hygiene in the presence of a male family member (her son). In her culture, it is unacceptable to discuss issues such as menarche and menopause with male members, as she mentions, Sarla's wish for someone to understand her situation points to a lack of emotional and social support in her life. Though she mentions a number of symptoms-sleeplessness, exertion, dizziness-she does not intend to seek medical intervention. Rather, she expressed the need to be understood. In Sarla's account, the narratives of freedom and distress coexist. Initially, her difficulties managing menopause and its associated symptoms in the presence of male members in her house made her feel distressed about menopause; however, once her period stopped completely, she felt liberated from monthly tensions. In her culture, menstruation or menopause is a women's issue and should not be discussed with men. We found that women who lacked a support system were more likely to feel the distress of menopause. For Sita (54, urban resident), the postmenopausal phase became very difficult because she had no one with whom to share her grief over the death of her husband. Her complaint about God highlights her helplessness and distress. She complains of frequent headaches and joint pains: Similarly, Geeta (59, rural resident) remembers her menopause experience and calls it a "suffering." 
She mentions her frequent fights with her husband and even her son. She expresses the need to be understood in the following excerpt: During menopause, I suffered a lot. . . . I had very heavy bleeding . . . which was unmanageable. . . . I tried many home-remedies . . . but nothing worked. . . . I was unable to sleep, fully exhausted . . . didn't wanted to talk to anyone, became anxious, used to get irritated over small things . . . had frequent fights with my husband . . . and even with my son. . . . They don't understand what a woman is going through . . . and you can't explain them. . . . It was a difficult time. On the other side, Gita (52, rural resident) boasts about her daughter-in-law and mentions that she is enjoying this carefree life as a grandmother: There was very heavy bleeding before I stopped menstruating . . . but that's normal . . . every woman has to go through this. . . . My daughter-in-law is very nice . . . she managed all the household chores at that time . . . she helped me a lot. . . . Now she works and I look after her kids . . . now I am free and enjoying life. Menopause as Freedom The third thematic narrative carried two meanings of freedom: freedom from the stressful management of menstruation and freedom from societal restrictions. Most women acknowledged that managing heavy and painful menstrual bleeding is difficult at an older age, thus they are happy when menstruation ends. The management is further complicated by the taboos attached to menstruation and menopause, which leads them to suffer in silence. Gita (52, rural resident) mentions, "Three to four years back it stopped. . . . It's good that it stops when you become old." Women expressed their difficulties in managing the pain and chaos of menstruation every month and thus embraced menopause as entry into a carefree phase of life. Women are happy in their new roles (mother-in-law or grandmother), which they would have found difficult with menstruation. As Gita adds, "It's good it stopped before I became grandmother as it's difficult to manage menstruation in old age." For some women, the narrative of distress and freedom co-occurred. When they reflect back on their experience during perimenopause, they call it "suffering" but they ended with phrases like "I am free now," "I am enjoying now," and "ít's over now." For these women, the narrative of freedom has emerged from the narrative of distress. For instance, Nirali, (53, urban resident) mentions, "two to three months I was bleeding heavily and then it stopped. . . . It was difficult to manage . . . it was annoying, now it's over." For most of the rural and some urban women, ending menstruation liberated them from the societal restrictions on entering sacred places and participating in sacred rituals like wedding pheras. Rekha (48, rural resident) recalls that she was barred from entering wedding mandap of her daughter. She mentions, "Even during my daughter's wedding I was menstruating and thus could not perform many rituals." Seema (47, rural resident) earns a living by organizing kirtans (spiritual gatherings). In her community, it is not acceptable that she attends a kirtan while she is menstruating, as menstruation is considered to be polluting. She is happy that she has reached menopause, as now she can plan and organize spiritual gatherings any time: "I am happy that it stopped. . . . I run a mandli (group) for performing spiritual gatherings (kirtan). . . . It was so difficult to manage when I was menstruating. . . . 
I cannot plan a kirtan when I am menstruating. . . . Now, I am free." All of these women are happy that they have attained menopause. They can freely go to the temple, plan outings, and lead a carefree life. Women were more concerned about their new roles as grandmothers, and they felt liberated from monthly distress; by contrast, Western studies report that women are more concerned about their fertility status (Nosek, Kennedy, and Gudmundsdottir 2012). Some sociological and anthropological studies have reported that Indian women enjoy greater control over resources when they enter postmenopause and when they become mothers-in-law (Kaufert 1982; Inhorn 2006; Patel 2007). Also, Indian women enjoy higher social status in post-reproductive years due to freedom from the so-called 'polluting' menstruation and power dynamics (Vatuk 1998, 289-306; Aaron, Muliyil, and Abraham 2002).

Discussion

The narratives of normality, distress, and freedom emerged from the accounts of North Indian women from the lower socioeconomic strata. Each narrative surfaced from the complex interaction of biological changes, changes in family and work life, social status changes, and cultural beliefs. Though the narratives were distinct, they co-occurred in the accounts of many women. The narratives of normality and freedom suggest the positive attitude of these women toward menopause. Our findings are reflected in earlier studies from India that examined perceptions of menopause (Aaron, Muliyil, and Abraham 2002; Singh and Arora 2005). The studies reported that Indian women embrace menopause, as they are free from the societal restrictions and the 'polluting' effects of menstruation. The narrative of normality was influenced by cultural perceptions about menopause, shaped either by women's personal experience or that of their community members (Aaron, Muliyil, and Abraham 2002). It has been found that societies that value fertility, youth, and sexual attractiveness view menopause negatively (Kaufert 1982; Khademi and Cooke 2003; Hall et al. 2016), while societies in which menopause is considered to be socially liberating embrace it (Aaron, Muliyil, and Abraham 2002; Singh and Arora 2005; Syamala and Sivakami 2005). In these settings, postmenopausal women enjoy greater self-esteem (Hall et al. 2016). Menopausal normality also emerged from the insignificance of menopause in the lives of many women who were busy managing the other chaos of life. Ballard, Kuh, and Wadsworth (2001) report similar findings from their study of menopause experience in social context. The authors argue that social events compete with menopause for attention, and significant life events often overshadow the menopause experience. We found that life events such as the death of a husband, the birth of a grandson, or the marriage of a son or daughter overshadowed the menopause experience. These major life events that reshaped family structures rendered menopause insignificant. The distress of menopause reported by studies from the West is different from the narrative of distress that has emerged in this study. For Western women, the distress primarily emerges from the anticipation of aging, losing fertility, and becoming less attractive (de Salis et al. 2018; Nosek, Kennedy, and Gudmundsdottir 2012); their accounts seem to be influenced more by the biomedical perspective (Nosek, Kennedy, and Gudmundsdottir 2012). Conversely, in our study, distress stemmed mainly from difficulty in managing heavy and painful bleeding, further exacerbated by taboos attached to menopause.
The grief and distress expressed by one of the rural women in our study was palpable, as she described being barred from her daughter's wedding mandap because she was menstruating. The distress in the lives of poor women from Haryana seems to stem from the complex interaction of patriarchy, gender, and poverty. The narrative of distress as well as freedom has its roots in the patriarchal structure of Indian society-broadly defined as the domination of women by men-as illuminated in the review of ethnographic works by Inhorn (2006). Inhorn asserts that patriarchy has a demoting effect on women's health through both the "micropatriarchy" in a doctor-patient relationship and the "macropatriarchy" in the family structure, in which men exert domination over females of the house. Patriarchy is also seen as women being discriminated against and/or abused by their husbands. There is also an age dimension, as older women exert control over younger women and girls in the household, sometimes tormenting them for being infertile or not doing household chores (Inhorn 2006). The three thematic narratives are interlinked. Narratives of normality and freedom dominated, and for most women freedom emerged from the narrative of distress. The narrative of distress, rooted in patriarchal values and practices, co-occurred with normality and freedom for many women. Women cannot share their menopausal experiences with men in the house because a woman is considered polluting while menstruating. Even the accounts of normality reflect the workings of patriarchy in the sense that women are supposed to bear the pain and show themselves as culturally competent. The narrative of freedom departs from normative and oppressive power structures, however, in the way that it challenges the dominant negative image of menopause found in medical discourses that cast menopause as disease in need of treatment. As these narratives have emerged from the accounts of postmenopausal women, we must note that the phase of menopause per se may be influential in shaping the meaning of menopause. We may expect different findings if premenopausal and perimenopausal women were also part of this narrative analysis. Furthermore, the findings are based on the accounts of women from low socioeconomic strata and a single state in India. Future research may look into women of more diverse backgrounds. In conclusion, this study identified a spectrum of menopausal experiences of low-income Indian women, whose voices are rarely heard. When we listen to women's own stories, located in the social context, we can capture what menopause actually means to women. In this case, the result found a complex interplay of social, cultural, and biological factors. From here, we can develop strategies of support that enable healthy and empowered aging that meets women's diverse needs. references
v3-fos-license
2023-01-12T16:57:34.144Z
2023-01-01T00:00:00.000
255704068
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://aetic.theiaer.org/archive/v7/v7n1/p1.pdf", "pdf_hash": "f254e7c157189d48477fafd6f18c6aa242fade1a", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41213", "s2fieldsofstudy": [ "Computer Science", "Agricultural And Food Sciences" ], "sha1": "96889b7b4a754a73ec713b0e856977a934685d94", "year": 2023 }
pes2o/s2orc
An Efficient Technique for Recognizing Tomato Leaf Disease Based on the Most Effective Deep CNN Hyperparameters : Leaf disease in tomatoes is one of the most common and treacherous diseases. It directly affects the production of tomatoes, resulting in enormous economic loss each year. As a result, studying the detection of tomato leaf diseases is essential. To that aim, this work introduces a novel mechanism for selecting the most effective hyperparameters for improving the detection accuracy of deep CNN. Several cutting-edge CNN algorithms were examined in this study to diagnose tomato leaf diseases. The experiment is divided into three stages to find a full proof technique. A few pre-trained deep convolutional neural networks were first employed to diagnose tomato leaf diseases. The superlative combined model has then experimented with changes in the learning rate, optimizer, and classifier to discover the optimal parameters and minimize overfitting in data training. In this case, 99.31% accuracy was reached in DenseNet 121 using AdaBound Optimizer, 0.01 learning rate, and Softmax classifier. The achieved detection accuracy levels (above 99%) using various learning rates, optimizers, and classifiers were eventually tested using K-fold cross-validation to get a better and dependable detection accuracy. The results indicate that the proposed parameters and technique are efficacious in recognizing tomato leaf disease and can be used fruitfully in identifying other leaf diseases. Introduction Biologically called Solanum Lycopersicon, tomato is a commonly harvested crop around the world, which is high in principle antioxidants like Vitamin 'C' and 'A' accompanying beta carotene. There is an increasing trend in the production and consumption of tomatoes throughout the globe resulting in 38.54 million tons of production for the year 2020 1 . Tomatoes can be grown in any well-drained wet soil with a www.aetic.theiaer.org information that can be recovered from a single-color component is constrained since plant leaf pictures are complicated due to the background. As a result, the feature extraction approach produces less reliable information. Therefore, the high identification accuracy of CNN has attracted many researchers. Pandian et al. [13] applied an innovative 14 layered deep CNN (14-DCNN) on a massive open dataset of leaves. Their research indicates that 14-DCNN is well suited to automated plant disease identification. A customized CNN model significantly outperformed a pre-trained model, as shown by their study. Developing an effective CNN model to get higher detection accuracy is a difficult task. Zhang et al. [14] suggested a three-channel CNN model that combines RGB color components to recognize disease in vegetable leaves. Sibiya et al. [15] utilized CNN to classify maize plants' diseases. They demonstrated the model's impact using histogram approaches. They were able to obtain an overall model accuracy of 92.85%. For identifying diseases in tomato leaf, Zhang et al. [16] investigated a few CNN architectures such as ResNet, AlexNet, and GoogleNet. The maximum accuracy of ResNet was 92.28%, outperforming other networks. In the study presented by Amara et al. [17], the LeNet CNN model was utilized to identify banana leaf diseases. Here, the authors test the model using grayscale and color images utilizing the CA and F1-scores. Ferentinos [18] used AlexNet, GoogleNet, and VGG CNN architecture to compare the classification accuracy of the leaf disease. 
The VGG surpassed all other networks with the plant, obtaining 99.53 percent disease performance. Yamamoto et al. classified tomato diseases using CNN utilizing high, low, and super-resolution to compare super-resolution accuracy to other approaches [19]. The paper's results showed that the super-resolution approach surpassed traditional methods by a great proportion in terms of detection accuracy. Durmus et al. [20] used pre-trained networks AlexNet and SqueezNet V1.1 to classify tomato plant disease. AlexNet, on the other hand, outperforms with a disease classification accuracy of 95.65%. According to the review, deep neural networks have been effectively employed for learning in plant disease detection applications. The architecture of the network, where it is critical to accurately edge weights and map nodes from the input to the output, is the primary issue involved with developing deep neural networks. To train deep neural networks, it is necessary to fine-tune their network parameters using a procedure that maps the input layer to the output layer and gets better over time. In our work, some pre-trained deep models were used as a starting point and fine-tuned it using three hyperparameters: learning rate, optimizer, and classifier. The capability to use deep models with limited sample numbers is the main benefit of such transfer learning in image classification [21]. Lastly, outstanding values of hyperparameters that contributed the most to improving detection accuracy were recorded using a fivefold cross-validation approach. A computer program can learn from data using deep learning. The learning process is the means through which the ability to conduct the classification with high precision is attained. The aim is to use pre-trained models to identify and classify ten types of plant disease using the ImageNet dataset. The classification job instructs the computer program to determine which of k categories a given input belongs. The learning algorithm is tasked with creating the function : ℝ → {1, … , }. The model allocates an input defined by a vector x to a category specified by a numerical value y when = ( ). In this study, ten different classes were used, nine of which were for leaf illnesses and one for healthy leaves.3.1. Dataset The dataset of diseased tomato plant leaves was collected from the well-known Plant Village dataset 2 . The dataset contains 56,048 images of plant leaves of 14 different species such as Apple, Blueberry, Cherry, Corn, and Grape. Among these, we chose tomato leaves in this experiment. The dataset of tomato www.aetic.theiaer.org leaf is composed of images of 9 non-identical classes and 1 healthy class as shown in Table 1 along with a brief information. The 9 diseased classes are Early blight, Bacterial spot, Leaf mold, Late blight, Septoria leaf spot, Target spot, Two-spotted spider mite, Tomato yellow leaf curl virus, and Tomato mosaic virus. There are 18,160 images in total, and 1591 of them are images of healthy tomato leaves. In the first and second part of the experiment, the dataset was split into training and testing datasets in an 8:2 ratio by randomizing pictures from the dataset based on the group label ratio. In the third part of the research, we did five-fold cross-validation. For that, the whole dataset was equally divided into five folders, where one of the folders was used for validation and the other four for training. In all cases, the images have all been downsized to the target size (64 × 64). 
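As a concrete illustration of this preparation pipeline (the 8:2 split, resizing to 64 × 64, and pixel normalization), a minimal Keras sketch might look like the following; the directory layout and variable names are illustrative assumptions rather than the authors' actual code.

```python
# Illustrative sketch (not the authors' code): 8:2 split, 64x64 resizing,
# and pixel normalization for the ten tomato leaf classes.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (64, 64)               # target size used in the paper
DATA_DIR = "plantvillage/tomato"  # assumed directory with one sub-folder per class

datagen = ImageDataGenerator(
    rescale=1.0 / 255.0,   # normalize pixel values to [0, 1]
    validation_split=0.2,  # hold out 20% of each class for validation
)

train_gen = datagen.flow_from_directory(
    DATA_DIR, target_size=IMG_SIZE, batch_size=32,
    class_mode="categorical", subset="training", seed=42,
)
val_gen = datagen.flow_from_directory(
    DATA_DIR, target_size=IMG_SIZE, batch_size=32,
    class_mode="categorical", subset="validation", seed=42,
)
```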
The dataset was normalized before being divided into training and validation sets. DenseNet DenseNet was first introduced in the paper [8]. In a feed-forward manner, it connects each layer to any other layer. This network has L(L+1) direct connections between each layer and its following layer, whereas most conventional convolutional networks have just one link in between layer and its following layer. DenseNet design provides several advantages, including improving feature propagation, relieving the vanishing gradient problem, and, most importantly, lowering the parameter count. ResNet In the paper [11] ResNet was first introduced. This architecture was proposed primarily to solve the problem of numerous non-linear layers not being able to learn identity mapping and to address the degradation problem. There are three types of layers in the ResNet model, and they are 50, 101, and 152. Among those, ResNet50 is the most efficient and effective. Thus, in this experiment, we choose ResNet50. This is a network-within-a-network design built on a large number of stacked residual units. Residual units serve as the foundation of the ResNet design. Convolution and pooling layers make up these residual units. This is kind of similar to the VGG [10] architecture but 8 times deeper. In this experiment, we loaded the pre-trained network and finally added a softmax layer in the end to perform image classification. VGG VGG [10], developed by the University of Oxford's Visual Geometry Group, placed second in the classification assignment at the ILSVRC-2014. The most astonishing feature of this architecture is that it consistently has the same convolution layer that uses 3X3 filters. We employed two of the best performing VGG architectures, VGG 16 and VGG 19, in this experiment. VGG-16 contains 13 convolution layers followed by 3 completely connected layers, whilst VGG-19 has a stack of 19 convolutional layers linked to a fully connected layer. In this case, we loaded pre-trained VGG-16 and VGG-19 weights and created an output layer with ten dimensions, which correspond to the ten tomato disease classes. EfficientNet EfficientNet [12] was introduced first to achieve more effective performance by evenly scaling width, depth, and resolution parameters utilizing a remarkably effective composite coefficient while scaling down the model. Unlike other CNN models, which employ ReLU as the activation function, this one proposes a unique activation function called Switch. EfficientNet has eight models ranging from B0 to B7. When the number of models increases the accuracy increases considerably while the quantity of estimated parameters does not increase that much. In this experiment, we have used the latest one, EfficientNet B7. www.aetic.theiaer.org The inverted bottleneck MBConc is the primary building block of the EfficientNet. Under similar FLOPS constraints, EfficientNet performs much better than most other neural network models by giving significantly better accuracy numbers. Here, we used the native model architecture to extract features for the output FC layer. Hyperparameters Tuning Hyperparameters are a set of parameters that can influence the model's learning. These parameters include the number of epochs, layers, activation functions, optimizers, learning rate, etc. The hyperparameter configuration utilized in the second half of the investigation is detailed below. 
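Before turning to the individual hyperparameters, the shared transfer-learning setup described above (a pre-trained ImageNet backbone followed by a ten-way softmax output layer) can be sketched in Keras as follows for DenseNet 121; the pooling layer and variable names are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal sketch of the stage-one transfer-learning model (assumed, not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

NUM_CLASSES = 10  # nine disease classes plus one healthy class

base = DenseNet121(weights="imagenet", include_top=False, input_shape=(64, 64, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),                   # collapse spatial feature maps
    layers.Dense(NUM_CLASSES, activation="softmax"),   # ten-way classifier head
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),  # stage-one settings
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

The same pattern applies to the other backbones (ResNet 50, VGG 16/19, EfficientNet B7) by swapping the `base` model.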
After multiple tries, the authors advanced the effective learning rate, optimizer, and the activation function to the third stage of the experiment, which is the K-fold cross-validation procedure. Learning rate A hyperparameter, the main purpose of which is to change the model concerning the approximated error every time the weights of the model are updated is known as the Learning rate. Determining a fixed value for the learning rate is strenuous. Selecting a very tiny value might lead to a lengthy training process and even can get stuck. On the other hand, choosing a too big value might lead to an unstable training process or too fast learning of a sub-optimal set of weights. While configuring a neural network it might be one of the most important hyperparameters. To combat the problem of choosing a hyperparameter manually for each given learning session in the learning rate schedule, there are various adaptive gradient descent algorithms, including Adadelta, Adam, and RMSpro. But as we have seen from this experiment, choosing a suitable learning rate is important even for those adaptive learning rates, especially while working with fewer epochs [22,23]. Optimizer Optimizer plays an important role while iteratively updating the parameters of all the layers in the training of the deep CNN model [24,25]. Optimization is quite important in training a neural network, as is responsible for reducing losses and providing the most accurate results. Gradient descent is a prominent approach for doing optimization in a neural network. This is used frequently in linear regression and classification algorithms. Moreover, the gradient descent algorithm is responsible for backpropagation in neural networks. Even though it is easy to implement and compute, it has a few drawbacks such as may often trap in local minima and requiring large memory to calculate the gradient descent of the whole dataset. In this article, we have worked with stochastic gradient descent, Adam, AdaBound, RMSProp, AdaDelta, AdaGrad, Nadam, and Ftrl to see their effect on our dataset. Activation Functions The main work of an Activation function or classifier is to sort data into labeled classes or categories. It mainly affects the outcome of deep learning models, including their performance and accuracy. The activation functions have a significant influence on the capacity and speed of neural networks to converge [26][27][28]. Moreover, activation functions help to normalize the output between -1 and 1 for any input. As weight and bias are essentially linear transformations, a neural network is simply a linear regression model with no activation function. Activation functions are available in a variety of forms, including Binary step, Linear, ReLU, Sigmoid, and many more. In the second part of the experiment, we experimented on Softmax, ReLU, SeLU, ELU, Exponential, Nadam, Softsign, Tanh, and Sigmoid. K-fold Cross-Validation K-fold cross-validation is a statistical method for measuring the ability of machine learning models. In the third part of experiment, the highest values of the hyperparameters acquired in the second part of the experiment were assessed using 5-fold cross-validation. This process aims to analyze the performance and relationship of these hyperparameters in enhancing classification accuracy. www.aetic.theiaer.org Algorithms Evaluation The CNN models considered in this study were executed in a machine equipped with Ryzen 3600x processor, AMD radeon RX 550, and 16 GB RAM. 
All code was implemented with the Keras 2.4.3 framework, written in Python 3.9.5, and executed in Jupyter Notebook. For every experiment, we used categorical cross-entropy loss and accuracy metrics for evaluation. The same layout was used for every model, and each experiment was run for 50 epochs. A dense output layer with the Softmax activation function was employed for classification. Adam was used as the optimizer, with a learning rate of 0.01. The accuracy and loss of the training and validation datasets are shown in Table 2. Furthermore, the recall, precision, and F1 scores shown are weighted averages. The average time (in seconds) taken for each epoch is also shown in Table 2. In the case of DenseNet 121, after 50 epochs we achieved an accuracy score of 99.55% in the training set and 99.12% in the validation set. The weighted averages of recall, precision, and F1 score were 0.9912, 0.9911, and 0.9911, respectively. The average time it took for each epoch to complete was 410 seconds. DenseNet 169 performed almost similarly to DenseNet 121; even though this architecture has more layers, it performed worse. The additional layers also increased the average execution time per epoch to 495 seconds. After 50 epochs it achieved an accuracy score of 94.71% and its loss was 19.89%. ResNet 50 produced the results closest to DenseNet 121. Its accuracy in both the training and validation sets was almost the same, near 98%: after 50 epochs, it achieved an accuracy of 98.92% in the training set and 98.76% in the validation set. Both VGG 16 and VGG 19 performed similarly in terms of validation set accuracy, which was close to 97%, and their average times per epoch were also nearly identical. The weighted results of precision, sensitivity, and F1 score are shown in Table 3 for each type of disease. EfficientNet B7 performed most poorly in terms of average time per epoch. Whereas the other algorithms took less than 600 seconds to complete each epoch, EfficientNet took more than double that, around 1400 seconds per epoch. Moreover, its accuracy score in the validation set was the second lowest of the bunch. Similar trends can be seen in training set accuracy, loss, precision, recall, and F1 score. The graph below (Figure 1) depicts the accuracy and loss of the models on classifying the tomato leaf diseases.

Performance Metrics Evaluation over Hyperparameters

From the above result analysis, we can see that DenseNet 121 surpassed the other pre-trained models for tomato leaf disease diagnosis. To do further analysis, we tried tweaking different parameters to find out whether the learning rate, optimizer, or activation function had an impact on the overall effectiveness of the DenseNet architecture, as depicted in Table 4. If so, what are the optimal values of the learning rate, optimizer, and classifier hyperparameters to use for the DenseNet 121 model? To that end, we first started by changing the learning rate. We started with a learning rate of 0.002 and kept gradually increasing it; as the results were getting worse, we stopped there and then kept gradually decreasing the learning rate down to 0.0009. We then selected the learning rate at which the pre-trained model performed best. Next, we tried other popular optimizers and analyzed the results. Finally, we selected the optimizer that performed best among those and tried different classifiers. Results of all these experiments are given in Table 4 below.
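Before looking at the numbers in Table 4, the staged search just described (sweep the learning rate first, then the optimizer, then the output activation) can be sketched as a simple sequential loop. The candidate lists below are assumptions based on values mentioned in the text, `build_model` is a hypothetical helper corresponding to the earlier model-construction sketch (extended to accept an output activation), and AdaBound would require a third-party Keras implementation, so it is omitted here.

```python
# Sketch of the staged hyperparameter search (assumed, not the authors' code).
import tensorflow as tf

def evaluate(build_model, optimizer, activation, train_gen, val_gen, epochs=50):
    """Train one configuration and return its best validation accuracy."""
    model = build_model(activation=activation)  # hypothetical constructor, see earlier sketch
    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy", metrics=["accuracy"])
    hist = model.fit(train_gen, validation_data=val_gen, epochs=epochs, verbose=0)
    return max(hist.history["val_accuracy"])

# Stage 1: learning-rate sweep with a fixed optimizer family (Adam).
lr_scores = {lr: evaluate(build_model, tf.keras.optimizers.Adam(learning_rate=lr),
                          "softmax", train_gen, val_gen)
             for lr in (1.0, 0.1, 0.01, 0.001, 0.0009)}
best_lr = max(lr_scores, key=lr_scores.get)

# Stage 2: optimizer sweep at the best learning rate
# (AdaBound is not built into Keras and would come from a third-party package).
optimizers = {
    "adam": tf.keras.optimizers.Adam(learning_rate=best_lr),
    "rmsprop": tf.keras.optimizers.RMSprop(learning_rate=best_lr),
    "sgd": tf.keras.optimizers.SGD(learning_rate=best_lr),
}
opt_scores = {name: evaluate(build_model, opt, "softmax", train_gen, val_gen)
              for name, opt in optimizers.items()}

# Stage 3: an output-activation sweep with the winning optimizer follows the same
# pattern, and the best configurations are finally re-checked with 5-fold splits.
```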
As we can see from Table 4 that for learning rate there is a range or a fixed point for which the algorithm performs well above which the accuracy decreases, and below which accuracy also drops. In our experiment, we observed the worst results when the learning rate was increased to 1. Here, accuracy dropped below 29%, and the F1 score was just 13%. In this study for a learning rate of 0.01, the algorithm performed best. Accuracy, in this case, was just above 99%, loss observed was 0.046, and F1 score was also above 99% mark. In the case of optimizers, seven out of nine algorithms scored more than 95%. Among them, Adabound's accuracy score was the highest. It had an accuracy score of 99.31% which was just above Adam's 99.12%. Its loss was also less than Adam's. Its precision, recall, and F1 score was 0.99506. RMSProp also performed well here, the accuracy score of which was 99.04%. Among all the classifiers tested Ftrl optimizer had the worst performance, with accuracy just above 28% the F1 score was just 0.1259. After selecting 0.01 as the learning rate and Adabound as the optimize we tested on different activation functions. Here, in total four activation functions scored more than 90%, Softmax, Softplus, Nadam, and Sigmoid. Among them, the score of softmax was the highest. Tanh scored least in terms of accuracy with just 9.85%. So, overall, we found optimum results when the learning rate is 0.01, Optimizer is AdaBound, and activation function is Softmax. The Table 4 shows 3 potential combinations of hyperparameters that produced the maximum accuracy, or 99%, in this case. Those are a combination of, (i) AdaBound optimizer and Softmax classifier with a learning rate of 0.01, (ii) Adam optimizer and Softmax classifier with a learning rate of 0.01, and (iii) AdaBound optimizer and Softplus classifier with a learning rate of 0.01. To check the authenticity of these results we further did five-fold cross validation on the dataset using hyperparameters that exhibited the highest performance metrics scores according to Table 4. The end result is shown in Table 5 below: Here, we can see that for our metrics in the case of all 5 folds the accuracy score was more than 97%, it got more than 99% accuracy in three out of five folds for learning rate 0.01, AdaBound optimizer, and softmax classifier. However, the accuracy score reached as high as 99.39% in the second fold. When we applied the same experiment for 0.001 and 0.1 learning rates, the results were much worse. Especially for the learning rate of 0.1, the accuracy was below 60%, and in three out of five cases; it was even below 30%. The other two combinations of Adam optimizer and Softmax classifier with a learning rate of 0.01, and AdaBound optimizer and Softplus classifier with a learning rate of 0.01 have not score more than 99% accuracy. Moreover, some folds of these combinations even score close to 92% accuracy. Therefore, we found that for learning rate 0.01, AdaBound optimizer, and softmax classifier the model performs best. K-fold Cross Validation The Figure 3 below depicts the model accuracy and model loss of each epoch while the model was learning from the dataset for learning rate 0.01, AdaBound optimizer, and Softmax classifier. As we can see from the 2nd diagram, the model loss did not change a lot after the 7th or 8th epoch and stayed almost the same as the train set loss. In the case of the accuracy, it fluctuated a lot before it stabilized at the 35th or 36th epoch, then it was almost as same as the train set accuracy. 
So, we can see that indeed for the learning rate 0.01, Softmax classifier and AdaBound optimizer the DenseNet performs best. Conclusions This paper analyzed networks that are based on pre-trained deep convolutional networks of DenseNet, ResNet, EfficientNet, and VGG. Here, in the first step, we compared those networks with Adam Optimizer, 0.01 learning rate, and Softmax Activation function. The highest result was achieved with DenseNet. Then a performance evaluation was done with different optimizers, learning rates, and classifiers that was affecting the results of the DenseNet. We found out that a range of learning rates between 0.001 and 0.1 gives good results where above and below are not effective. In the case of activation functions with Softmax and Softplus activation functions, the best results were obtained. When different optimizers were evaluated Adam, AdaBound, and RMSProp performed well. Here, the best overall result was observed with a 0.01 learning rate, Softmax activation function, and AdaBound optimizer. Our study reveals a significant information that there is a relationship among learning rate, optimizer, and classifier in improving detection accuracy. In the third part of the experiment, a K-fold cross-validation check further justified those parameters. Using the most effective deep CNN hyperparameters realized, this work might be extended to a variety of leaf disease detection applications. Despite the fact that this study obtained the highest detection accuracy, performance evaluation with multiple hyperparameters consumes a substantial amount of time and computer power. In the future, the convolutional neural network (CNN) pruning strategy may be explored to solve this issue.
v3-fos-license
2023-01-17T14:36:03.064Z
2015-09-26T00:00:00.000
255861682
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1186/s12936-015-0905-y", "pdf_hash": "058cef784ef75355c313a07964d08084fc5b9814", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41214", "s2fieldsofstudy": [ "Medicine" ], "sha1": "058cef784ef75355c313a07964d08084fc5b9814", "year": 2015 }
pes2o/s2orc
Spatial and space–time clustering of mortality due to malaria in rural Tanzania: evidence from Ifakara and Rufiji Health and Demographic Surveillance System sites Although, malaria control interventions are widely implemented to eliminate malaria disease, malaria is still a public health problem in Tanzania. Understanding the risk factors, spatial and space–time clustering for malaria deaths is essential for targeting malaria interventions and effective control measures. In this study, spatial methods were used to identify local malaria mortality clustering using verbal autopsy data. The analysis used longitudinal data collected in Rufiji and Ifakara Health Demographic Surveillance System (HDSS) sites for the period 1999–2011 and 2002–2012, respectively. Two models were used. The first was a non-spatial model where logistic regression was used to determine a household’s characteristic or an individual’s risk of malaria deaths. The second was a spatial Poisson model applied to estimate spatial clustering of malaria mortality using SaTScan™, with age as a covariate. ArcGIS Geographical Information System software was used to map the estimates obtained to show clustering and the variations related to malaria mortality. A total of 11,462 deaths in 33 villages and 9328 deaths in 25 villages in Rufiji and Ifakara HDSS, respectively were recorded. Overall, 2699 (24 %) of the malaria deaths in Rufiji and 1596 (17.1 %) in Ifakara were recorded during the study period. Children under five had higher odds of dying from malaria compared with their elderly counterparts aged five and above for Rufiji (AOR = 2.05, 95 % CI = 1.87–2.25), and Ifakara (AOR = 2.33, 95 % CI = 2.05–2.66), respectively. In addition, ownership of mosquito net had a protective effect against dying with malaria in both HDSS sites. Moreover, villages with consistently significant malaria mortality clusters were detected in both HDSS sites during the study period. Clustering of malaria mortality indicates heterogeneity in risk. Improving targeted malaria control and treatment interventions to high risk clusters may lead to the reduction of malaria deaths at the household and probably at country level. Furthermore, ownership of mosquito nets and age appeared to be important predictors for malaria deaths. efforts to accelerate progress towards achievement of millennium development goals (MDG) 4 and MDG 6. However, achieving these goals requires better understanding on geographical malaria distribution and factors that influence high-risk for malaria deaths. Several studies have mapped the distribution of high risk area for malaria and have identified populations at risk at continental/country level [3][4][5][6]. These studies and several others [5][6][7] have used data on prevalence of infection collated by the Mapping Malaria Risk in Africa project (MARA), while others have used hospital data to explore the burden of malaria [8,9]. However, fewer studies have investigated on the risk factors of malaria mortality using verbal autopsy data [10][11][12]. The results of these studies suggested that age, community awareness for early treatment and scale up use of mosquito net are predictors of malaria mortality. Finding from a randomised controlled trial revealed that mosquito net does not reduce malaria transmission and mortality in high transmission area [13]. 
Recognizing malaria clustering, hotspot and coldspot for malaria deaths would permit control efforts to be directed to specific geographic areas, reducing costs and increasing effectiveness [14]. Control of malaria in such hotspots might also eventually lead to elimination of deaths related to malaria. Measuring malaria burden in a community is a challenge to most developing countries including Tanzania [15,16], because most disease incidence and deaths occur outside the formal health care system [17,18], where no records are available. As a result, verbal autopsy (VA) is currently an alternative approach to determine malaria-specific death [19,20]. VA is a method used to ascertain the cause of a death based on interview with next of kin or other caregivers. This is done using a standardized questionnaire that obtains information on signs, symptoms, medical history and circumstances preceding death [21]. VA procedures have been evaluated in SSA and other countries [22][23][24]. Results from these studies concluded that VA is a reliable estimate for specific cause of deaths [24]. Moreover, other studies have evaluated the validity of VA on determining malaria specific mortality [24,25] and concluded that VA methods have an acceptable level of diagnostic accuracy at the community level. In this study, VA data generated from Rufiji and Ifakara HDSS in rural Tanzania were used. These HDSS sites are among sites established within the INDEPTH network of many Sub-Saharan African countries which provide evidence-based health information on monitoring the impact on various policies in the population [26]. Although, the two HDSS sites are small to represent the whole country, no attempt documented malaria mortality patterns and trend in Tanzania for at least 10 years period. Also, use of spatial techniques for identifying clustering for malaria specific cause of death in Tanzania is unclear. Investigating spatial and space-time clustering of malaria mortality, would provide evidence for evaluation of the impact on malaria control interventions in achievement of millennium development goals (MDG) 4 for child survival and MDG 6 (for combating HIV/AIDS, malaria and other diseases). Study area The study was carried out in Rufiji and Ifakara Health and Demographic Surveillance System (HDSS) sites. Both sites are located in the Greater Rufiji River Basin in southern Tanzania [27]. The sites are primarily rural with majority of the population relying on subsistence farming or fishing. Both sites are characterized by heavy rains from March to May. These two HDSS sites were selected because they are among the HDSS sites which continuously collect large amounts of data on defined geographical areas and longitudinal data for malaria specific cause of death in Tanzania. Based on microscopy testing for the health facility survey in 2012 [28], these HDSS sites still have high malaria prevalence of approximately 19.2 % in high season and 7.2 % in low season in Rufiji HDSS and 9.4 % in high season and 4.2 % in low season in Ifakara HDSS. Rufiji HDSS site Rufiji HDSS is situated in Rufiji District, Coast region, with 38 villages covering an area of 1814 km 2 . The Rufiji HDSS is located in eastern Tanzania 7.45°-8.03° south latitude and 38.62°-39.17° east longitude (Fig. 1). The vegetation of the HDSS is formed mainly by tropical forests and grassland. The weather is hot throughout the year and with rainy seasons. The average annual precipitation in the district is between 800 and 1000 mm. 
The population size of the Rufiji HDSS is approximately 103,503 people living in 19,315 households [29]. The HDSS is largely rural and highly populated in centres along the main roads. The Rufiji HDSS has 24 health facilities in the surveillance area (one non-government hospital, two health centres and 21 dispensaries). Ifakara HDSS site Ifakara HDSS is situated in and covers parts of Kilombero and Ulanga Districts in Morogoro Region. The Ifakara HDSS covers 25 villages (13 in Kilombero and 12 in Ulanga districts) in Morogoro region south-eastern part of the country. The Ifakara HDSS is located in eastern Tanzania 8.0°-8.58° south latitude and 36.00°-36.80° east longitude (Fig. 2). The HDSS site constitutes more than 124,000 people, living in 28,000 scattered rural households [30]. The two districts are divided by the extensive floodplains of the Kilombero River, a potentially high risk and malaria endemic area (Fig. 2). The HDSS is predominantly rural with an ethnically heterogeneous population that practice subsistence farming, fishing and small scale trading. The population of the Ifakara HDSS area is served by a network of health facilities and at the time of the study, there were 14 health facilities (two health centres and 12 dispensaries). Study design and data collection The analysis used data collected in Rufiji and Ifakara HDSS sites. Individual and yearly malaria deaths were extracted from the Rufiji and Ifakara HDSS database and January 2002 to December 2012, respectively. The two HDSS sites have consistently been recording pregnancies, pregnancy outcomes, deaths and migrations by visiting households once every 4 months since 1997 in Ifakara HDSS and 1998 in Rufiji HDSS. Household registers are used to record each of those events. All registered deaths are followed up with a verbal autopsy (VA) form by well trained field staff. Date of birth of each individual is included in the household registers and each event is recorded along with the specific date it occurred. Data credibility in HDSS was ensured at all stages of collection and processing to enhance quality. Up to 5 % of randomly selected households were visited by field supervisors for repeated interviews. Other strategies including accompanied interviews as well as surprise field visits by field supervisor. Data management was done using the household-registration system (HRS 2) with in-built consistency and range checks. Verbal autopsy procedure The WHO and INDEPTH Network [31] standardized VA questionnaire was adapted and used for data collection on causes of death. In the HDSS, deaths were captured during rounds of update. Then HDSS field interviewers visited the deceased's home after a grieving period to administer a verbal autopsy questionnaire. An interview was administered to relatives or caregivers who were closely associated with the deceased during the period leading to his or her death. The questionnaire assessed the identity of the deceased and established the sequence of events leading to death, including symptoms and signs of the illness before death. Verbal autopsy was carried out since 1998 in Rufiji HDSS and 2002 in Ifakara HDSS. The verbal autopsy forms are independently reviewed by two physicians according to a list of causes of death based on the 10th revision of the International Classification of Diseases (ICD-10). A third physician is asked to code the cause of death in the case of discordant results. 
If there is disagreement among the three physicians, the cause of death is coded as "undetermined" [32]. Causes of death (main, immediate, and/or contributing) are coded to be consistent with the ICD-10 [33]. Malaria deaths are coded as direct when malaria is the underlying cause of death, or indirect when malaria is one of several diseases leading to death but the death is attributed to a different cause [34]. Geo-referencing location of households and health facilities The geographic information available in the HDSS database included the coordinates (longitude and latitude) and altitude of the majority of the households, health facilities and villages. These were collected on-site using handheld global positioning system (GPS) receivers or tablets with in-built GPS readers at a precision of less than 10 m [35]. Twenty percent of households were not geolocated in the HDSS database; their coordinates were subsequently collected using handheld GPS receivers and mapped in a geographic information system (GIS) database. Data processing and analysis All-cause mortality data were obtained from the Rufiji and Ifakara HDSS databases for the periods 1999-2011 and 2002-2012, respectively. Individual-specific information extracted from the HDSS database includes date of birth, start of and exit from the study, age, sex, mosquito net ownership, socio-economic status and death status. Other information, such as the location of the household, health facility and altitude, was obtained from the HDSS database, having been collected by other projects carried out within the HDSS platforms; the few households with missing coordinates were geo-located directly. The nearest straight-line distance to a health facility was calculated using the spherical law of cosines, based on the latitudes and longitudes of health facilities and households; the formula is described in detail in [36] (an illustrative sketch of this calculation is given below). Nearest distance to a health facility was classified into two groups: less than 5 km, and 5 km and above [37]. Person-time at risk (person-years) contributed by each person was calculated until exit. Exit from the study was due to migration (outside the HDSS area), death or the end of the study. Where a person migrated to a different household location within the study area, time at risk was computed separately for the new location and added to the total time at risk. The outcome of interest is the death status of an individual, or the total monthly/yearly deaths for specific age groups (age group was categorized into under five, and five and above). Malaria mortality rates were calculated by dividing the number of deaths by the person-years of observation and were expressed per 1000 person-years (py). Season at death was classified into two groups, dry (June-October) and wet (November-May), according to the date of death and the corresponding season of the year in the study area. Household wealth status was constructed using the principal component analysis (PCA) method [38]. Items included in the PCA were household assets (such as animals, TV, bicycle and radio) and household characteristics (such as the type of toilet, source of drinking water, house roofing material, wall material and floor material). Finally, all households were classified into five categories: poorest, poorer, poor, less poor or least poor, according to their household wealth score. The outcome variable, death due to malaria, was defined by assigning "1" if a person died due to malaria or "0" if a person had not died due to malaria.
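As referenced above, the nearest-facility distance rests on the spherical law of cosines applied to household and facility coordinates. The snippet below is a minimal illustrative sketch of that calculation and of the <5 km / ≥5 km grouping used in the analysis; the Earth-radius constant, the function names and the plain (lat, lon) tuples are assumptions made for illustration and are not taken from the paper or from reference [36].

```python
from math import radians, sin, cos, acos

EARTH_RADIUS_KM = 6371.0  # mean Earth radius (assumed value)

def spherical_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the spherical law of cosines; inputs in decimal degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dlambda = radians(lon2 - lon1)
    # Clamp to [-1, 1] to guard against floating-point drift for nearly identical points.
    central = min(1.0, max(-1.0, sin(phi1) * sin(phi2) + cos(phi1) * cos(phi2) * cos(dlambda)))
    return EARTH_RADIUS_KM * acos(central)

def nearest_facility_group(household, facilities, cutoff_km=5.0):
    """Return the distance group ('<5 km' or '>=5 km') and the nearest distance in km."""
    nearest = min(
        spherical_distance_km(household[0], household[1], f[0], f[1]) for f in facilities
    )
    return ("<5 km" if nearest < cutoff_km else ">=5 km"), nearest

# Hypothetical coordinates, roughly within the Rufiji HDSS bounding box given in the text
print(nearest_facility_group((-7.80, 38.90), [(-7.75, 38.95), (-7.95, 39.10)]))
```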
The explanatory variables were age, distance to the nearest health facility, sex, season, socio-economic status (SES), ownership of mosquito nets, and altitude. Malaria control interventions The HDSS sites routinely collect data on mosquito net ownership (household ownership of mosquito nets). Changes in malaria treatment policy were extracted from existing publications. In August 2001, the first-line malaria treatment policy changed from chloroquine to sulphadoxine-pyrimethamine (SP) [39]. Artemisinin-based combination therapy (ACT) was introduced in January 2007 through a change from SP [40]. The start date of the IMCI interventions was April 2002 in Kilombero and Ulanga, and 1997 in Rufiji District. Figure 3 shows the coverage of these malaria control interventions over time. Modelling the relationship between malaria mortality and risk factors Statistical analysis and model building were performed using STATA software (version 11, College Station, TX, USA), using survey procedures that account for clustering and stratification. The analysis used both descriptive and analytical statistics. Rates of death due to malaria were calculated and presented for each variable. Pearson's Chi-square test was used to determine the association between the categorical explanatory variables and dying due to malaria. All variables were then individually analyzed using logistic regression with villages as random effects to account for clustering. All percentages and odds ratios reported are population-average estimates which have been adjusted to take into account the clustering at village level. Selection of variables for inclusion in the multivariate model was based on the log-likelihood ratio test, whereby a variable was retained in the model if there was statistical evidence that its presence improved the model and showed a possible association with dying from malaria (p < 0.2) in the univariate analysis [41]. The model was finally checked for the presence of interaction, and for adequacy, before being approved as final. Clustering for mortality due to malaria SaTScan™ software version 9.3, using the Kulldorff method [42], was used to identify geographical clusters with high mortality due to malaria, using a Poisson model. The package has been used by researchers [43,44] to determine the frequency or rate of occurrence of events and the extent to which such events occurred over a specified period of time within a defined area and population. The analysis was purely spatial, purely temporal or space-time. This methodology identifies clusters with higher numbers of observed cases (malaria deaths) than expected cases under spatial randomness, and then evaluates their statistical significance by gradually scanning a circular window that spans the area of study. A likelihood ratio test compares the observed deaths within the circle to the expected deaths across the entire study area to identify significant clusters of disease, providing relative risks and p values for any clusters identified [42] (a simplified sketch of this statistic is given below). The model was run with a maximum cluster size of 50 % of the total population, and p values were generated across 999 Monte Carlo replications to ensure no loss of power at the alpha = 0.05 level [42]. Two local measures of spatial association were used within ArcGIS 10.1 to indicate "where the clusters or outliers are located" and "what type of cluster and intensity is most important" [45].
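For intuition about the scanning-window test just described, the Poisson-model log-likelihood ratio for a single candidate window, together with a Monte Carlo p-value, can be sketched as follows. This is a deliberately simplified illustration, not a substitute for SaTScan™: the real software conditions on the total case count and maximizes the statistic over all candidate windows in every replication, whereas this sketch evaluates one window in isolation.

```python
import math
import random

def poisson_llr(observed, expected, total_cases, high_rates_only=True):
    """Kulldorff log-likelihood ratio for one candidate window under the Poisson model."""
    c, e, big_c = observed, expected, total_cases
    if high_rates_only and c <= e:
        return 0.0
    inside = c * math.log(c / e) if c > 0 else 0.0
    outside = (big_c - c) * math.log((big_c - c) / (big_c - e)) if big_c > c else 0.0
    return inside + outside

def monte_carlo_p(observed_llr, expected, total_cases, replications=999, seed=42):
    """Rank the observed LLR against LLRs simulated under spatial randomness."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(replications):
        # Under the null, each of the total cases falls in the window with probability e/C.
        sim_count = sum(1 for _ in range(int(total_cases)) if rng.random() < expected / total_cases)
        if poisson_llr(sim_count, expected, total_cases) >= observed_llr:
            exceed += 1
    return (exceed + 1) / (replications + 1)

# Hypothetical window: 35 observed vs 21 expected deaths out of an assumed 400 total cases
llr = poisson_llr(35, 21, 400)
print(llr, monte_carlo_p(llr, 21, 400))
```

The 35/21 figures echo a cluster reported later in the Results; the total of 400 cases is an arbitrary placeholder, so the printed p-value is illustrative only.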
Anselin Local Moran's I [45] was used to detect core clusters/outliers of villages with extreme malaria mortality rate values unexplained by random variation, and to classify them into hotspots (high values next to high, HH), cold spots (low values next to low, LL) and spatial outliers (high amongst low, HL or vice versa, LH). Local Moran's I tests the null hypothesis of absence of spatial clustering of malaria mortality in the villages of the study areas (for polygon features) when its expected value is −1/(N − 1). This method has been used in other studies to identify HIV prevalence hotspots [46,47], and malaria hotspots in particular [14]. Further, the local Getis-Ord statistic (Gi*) was used to provide additional information indicating the intensity and stability of core hotspot/cold spot clusters [47,48] for significant predictors variables. Gi* statistic identifies different spatial clustering patterns like hotspots, high risk and cold spots over the entire study area with statistical significance [48]. The statistic returns a Z score for each feature in the dataset. For statistically significant positive Z score, the larger the Z score is, the more intense the clustering of high values (hot spots). For statistically significant negative Z score, the smaller the Z score is, the more intense the clustering of low values (cold spots). High risk areas are at lower significance level in comparison to hot spots. Villages with Z scores >2.58 were considered significant at 99 % confidence level (P < 0.01) and were put in the hot spot category. Villages with Z scores between 1.65-1.96 and 1.96-2.58 were considered significant at 90 and 95 % confidence level (P < 0.10 and 0.05) and were categorized as high risk villages. Z scores <−2.58 indicated clustering of low values and were considered as cold spots [14]. The Getis-Ord Gi* index was calculated as described in [47]. Results were mapped using geographical information system (GIS) (ArcGIS, version 10.1, CA, USA). The Tanzania administrative boundaries were downloaded from National Bureau of Statistics website and added to the map as a layer. Descriptive results The analysis included 11,462 deaths that occurred in Rufiji HDSS for the period 1999-2011. Of these, more than half (51 %) were female and over two-thirds (69 %) of deaths were 5 and above years old in Rufiji HDSS. Seasonal variations in deaths were observed whereby more deaths occurred during wet season ( Table 1). As shown in Table 2, malaria deaths were 23.6 % in Rufiji HDSS. Overall, deaths among under-five children accounted for 30.6 % of all causes of death of whom 33.1 % (95 % CI: 31.2-34.9) were malaria related deaths in Rufiji. The proportional death attributable to malaria was 27.9 % in households without mosquito net at death in Rufiji. A greater proportion of under-fives died as a result of malaria compared to those aged five and above, this difference was statistically significant in Rufiji and Ifakara HDSS (Table 2). In Ifakara HDSS, a total of 9328 deaths occurred and 49.1 % were female. More than 60 % in Ifakara were aged 5 and above years ( Table 1). The malaria related deaths contributed to 17.1 % for all deaths that occurred in Ifakara HDSS during the study period. Overall, deaths among under-five children accounted for 38.4 % of all causes of death of whom 25 % (95 % CI: 23.4-26.7) was malaria related deaths in Ifakara HDSS. 
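Returning briefly to the hot-spot classification described in the Methods above, the Z-score cut-offs can be written as a small helper. This is purely an illustrative sketch: the thresholds are those quoted in the text, while the village labels and Z scores in the example are hypothetical and are not values from the study.

```python
def classify_gi_star(z_score):
    """Map a Getis-Ord Gi* Z score to the categories used in the analysis."""
    if z_score > 2.58:
        return "hot spot (99 % confidence, p < 0.01)"
    if z_score >= 1.65:
        return "high risk (90-95 % confidence, p < 0.10 or p < 0.05)"
    if z_score < -2.58:
        return "cold spot (99 % confidence, p < 0.01)"
    return "not significant"

# Hypothetical village Z scores
for village, z in {"Village A": 3.10, "Village B": 1.80, "Village C": -2.90, "Village D": 0.40}.items():
    print(village, "->", classify_gi_star(z))
```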
The distribution of malaria deaths by sex, season, socio-economic status, ownership of mosquito nets and distance of the household to the nearest health facility is shown in Table 2. In terms of socio-economic status, no evidence of inequality in malaria deaths between quintiles was found at either site. The Ifakara HDSS covers parts of Kilombero and Ulanga districts, so the clustering for Ifakara HDSS was analyzed separately for each district (Table 3; Fig. 4). In the Kilombero district part, clusters were observed in 2002-2012. Table 3 shows the purely spatial analysis and indicates one significant cluster in 2008, involving two villages (Lukolongo and Mchombe) with 35 observed malaria deaths and 21 expected cases (RR = 1.87, p = 0.046). The purely spatial scan for the entire 12-year period also identified three clusters, but the numbers of malaria deaths were not significantly different from the expected cases (Fig. 4). In Ulanga district, clusters were observed in each year from 2002 to 2012. Table 3 shows the purely spatial analysis and indicates two significant clusters: the first was observed in 2010 and involved three villages (Igota, Lupiro, Kichangani) with 27 observed malaria deaths and 17 expected cases (RR = 2.30, p = 0.023); the second was observed in 2012 and consisted of one village, Kichangani (RR = 3.44, p = 0.045), with nine observed and three expected cases. The purely spatial scan for the entire 12-year period also identified two clusters: the most likely cluster was significant and comprised three villages (Idunda, Kichangani, Igota) with the highest malaria mortality, and the secondary cluster comprised one village (Mavimba) (Fig. 4). Temporal trend Results from the purely temporal analysis for high rates showed that 1999-2002 was the most likely and significant cluster with a high malaria mortality rate (p = 0.001) in Rufiji HDSS. The number of observed malaria deaths in this cluster was 1008 against 778.22 expected cases, at a relative risk of 1.47. Temporal trend results also showed a marked, significant decrease in mortality over the study period. Spatial-temporal clusters The spatial-temporal analysis using SaTScan™ was run for Rufiji HDSS (Fig. 4) and identified a significant cluster with a relative risk of 1.46; in these areas the observed number of malaria deaths was significantly higher than the expected number, and one village was consistently observed in the significant clusters (Table 3 summarizes the malaria mortality clustering from the spatial and space-time analyses in Rufiji and Ifakara HDSS for the study period; "most likely" denotes the primary cluster with the highest likelihood and "secondary" the next cluster; LLR, log likelihood ratio; RR, relative risk). In Ifakara HDSS, the spatial-temporal analysis using SaTScan™ identified 2007-2009 as the significant period (p < 0.001) in Ulanga district, in a cluster consisting of five villages (Igota, Lupiro, Kichangani, Igumbiro and Idunda) with a relative risk of 1.66 (Table 3; Fig. 5). One village, Kichangani, was consistently observed in the significant clusters. Hotspots and coldspots of malaria mortality Villages were classified into hotspots and coldspots of malaria mortality, with significant change over time (see Additional files 1, 2). The Anselin Local Moran's I showed core hotspot clusters of villages with high malaria mortality next to other villages with high malaria mortality (HMM) and coldspot clusters of low malaria mortality next to other villages with low malaria mortality (LMM) in the study areas.
Maps depicting hotspots and high-risk areas for the significant variables were produced using Getis-Ord statistics (see Additional file 3). These analytical approaches identified similar areas as hotspots, defined as areas with statistically significant high malaria mortality, consistently located within the significant clusters identified by SaTScan™. This supports true clustering of malaria mortality, indicating heterogeneity and hotspot/coldspot patterns in risk across the study areas. Factors associated with malaria mortality Results of the univariate analysis, after adjusting for village-level clustering, are shown in Table 4. Sex, age, ownership of mosquito nets, season and altitude were significantly associated with malaria death in Rufiji HDSS, while in Ifakara HDSS age, ownership of mosquito nets and season were significantly associated with malaria death. Being aged five and above, and male sex, were associated with a significant decrease in the odds of malaria death. Also, ownership of mosquito nets and the dry season were associated with a decrease in the odds of malaria death at both sites. SES and distance to the nearest health facility were not associated with malaria death, and SES was not included in the multivariate modelling. Multivariate logistic regression analysis with an adjustment for within-village clustering indicated that age and ownership of mosquito nets were significantly associated with malaria death at both sites (Table 4). Altitude was an additional variable significantly associated with malaria death in Rufiji HDSS. Children aged under 5 years were twice as likely (adjusted OR = 2.04, 95 % CI: 1.82-2.28) to die from malaria as those aged five and above in Rufiji. In Ifakara HDSS, children under five had more than twofold increased odds of dying from malaria (adjusted OR = 2.51, 95 % CI: 2.25-2.79) compared with those aged five and above. There was strong evidence that ownership of a mosquito net had a protective effect against dying from malaria: households with a mosquito net had 43 % (adjusted OR = 0.57, 95 % CI: 0.51-0.64) and 35 % (adjusted OR = 0.65, 95 % CI: 0.57-0.74) lower odds of dying from malaria than those without a mosquito net in Rufiji and Ifakara, respectively. Discussion This study has identified villages with consistent malaria mortality clustering in the Rufiji and Ifakara HDSS sites over the study period, using SaTScan, Anselin's Local Moran's I statistic and the Getis-Ord statistic (Gi*). These villages were also identified in clusters of all-cause mortality among under-five children in the same study areas [49,50]. The clustering of malaria mortality indicates heterogeneity in risk across the study areas. Our analysis adds to the existing literature by providing evidence for targeting malaria interventions at small-scale areas; previous studies predominantly addressed malaria incidence at the continental/country level [3][4][5][6]. Our findings also indicate that malaria mortality rates started to decline from 2003 in Rufiji HDSS and from 2008 in Ifakara HDSS. Space-time clustering was observed in 1999-2002 in Rufiji HDSS and 2007-2009 in Ifakara HDSS. This decline could be attributed to various malaria intervention programmes and treatment policies, such as the change of malaria treatment policy and the implementation of integrated management of childhood illness (IMCI) [39,51] in Rufiji HDSS. The decline coincided with the change of first-line malaria treatment drugs.
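As an aside on the modelling reported above: the paper fitted its models in STATA, but the village-adjusted, population-averaged odds ratios it describes can be approximated with a generalized estimating equation (GEE) logistic model. The sketch below uses Python's statsmodels with simulated data and hypothetical column names (malaria_death, under_five, net_owned, village), purely to illustrate the structure of such a model; it is not the authors' code and will not reproduce the published estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in for the analysis dataset: one row per death record.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "malaria_death": rng.binomial(1, 0.2, 2000),
    "under_five":    rng.binomial(1, 0.3, 2000),
    "net_owned":     rng.binomial(1, 0.6, 2000),
    "village":       rng.integers(0, 38, 2000),   # cluster identifier
})

# Population-averaged logistic model with an exchangeable correlation within villages,
# one way of adjusting for village-level clustering.
model = smf.gee(
    "malaria_death ~ under_five + net_owned",
    groups="village",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Binomial(),
)
fit = model.fit()

# Adjusted odds ratios with 95 % confidence intervals
summary = pd.concat([np.exp(fit.params).rename("OR"), np.exp(fit.conf_int())], axis=1)
print(summary)
```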
In 2002, there was a change in national policy on the first-line drug for the treatment of malaria, from chloroquine to sulfadoxine-pyrimethamine (SP) [39]. The impact of the change of treatment policy from chloroquine to SP was large, given the higher treatment efficacy of SP upon its introduction [52] and the high resistance to chloroquine before it was replaced [53]. There was also exceptionally high mortality due to malaria in 2004 and 2009; a possible reason is that this coincided with resistance to SP, used for malaria treatment, in 2004 [52]. Likewise, the efficacy of the IMCI interventions has been extensively documented [54,55]. There was also a modest increase in the coverage of mosquito nets over the years (Fig. 3b). All these factors have contributed to a steady decline in malaria mortality within the Rufiji district. Other factors related to improvements in health services and access to care could also explain the decline [56,57]. In Ifakara HDSS, malaria mortality declined from 2008 for all ages and from 2009 for children less than 5 years of age. The decline coincided with the implementation of the first new anti-malarial treatment, artemether-lumefantrine (ALU) [40]. The reasons for this delayed fall in malaria mortality in Ifakara are unclear, but further examination is warranted to derive lessons for malaria control programmes elsewhere. This study used Anselin's Local Moran's I to identify hotspot/coldspot villages in the study areas. The hotspot villages identified with Anselin's Local Moran's I were consistently located in the significant clusters identified by SaTScan™ software using the Kulldorff method. Our findings are consistent with previous studies that have used GIS to analyze the malaria situation at the micro level for decision-making [14]. This study is one of the few studies that have demonstrated the use of spatial statistical tools for malaria mortality clustering [58] in two neighbouring HDSS sites located in three districts in Tanzania. Although the results may not be representative of the whole country, which comprises more than 150 districts, they offer insight into space-time clustering of malaria mortality at local scales. In addition, because these are HDSS sites, their populations are investigated more often than elsewhere in the country, and several health system interventions, including malaria interventions, have been implemented there on a research basis [59,60]. Clustering of high-malaria-mortality villages next to other high-mortality villages (HH) was observed in some villages during the study period, despite the recent decline. Villages like Mangwi and Machipi in Rufiji HDSS are relatively remote areas with high forest cover. These villages had relatively lower levels of malaria control interventions; mosquito net coverage, for example, was lower than elsewhere in the Rufiji HDSS. It was observed that by the end of 2011, about 44 % of households in Mangwi village owned at least one mosquito net. The villages identified as hotspots (HH) at both sites were the most significant ones, with a high incidence of malaria deaths in households without nets (see Additional file 3). Access to treatment is also an important determinant of the decrease in malaria mortality and needs to reach remote communities, which have an increased risk of malaria infection [11,27].
This study has also shown that households at greater distances from health facilities had an increased risk of malaria mortality; although there was no statistical evidence for this, the estimated odds ratio was substantially higher than 1 at both HDSS sites. The villages included in the spatial clusters of malaria mortality in this study correspond with those reported in previous studies [49,50], which identified spatial clusters of all-cause mortality for under-five children in the same areas. The villages found to be significantly associated with malaria mortality were repeatedly detected in both the purely spatial and the space-time analyses. This may suggest that such villages have certain underlying characteristics that predispose them to being high-risk areas for malaria mortality. A logistic regression model was therefore used to assess the household and individual risk factors for malaria mortality. Findings from this study showed that age and ownership of mosquito nets were potential factors for malaria mortality at the two sites. Age remained an important factor within the model, with children less than 5 years of age being exposed to a higher risk [58]. Other studies have shown a clear downward trend in the effect of transmission with age, which may be an effect of cumulative malaria exposure [61,62]. It has also been reported that high cumulative exposure reduces the risk of infection, especially at older ages [62]. This observation might be associated with the acquisition of malaria immunity, which is believed to increase with age, or with behavioural change in older children [63]. Children in the younger age group are more likely to sleep under insecticide-treated mosquito nets [64], which have proven to be highly protective against malaria. Therefore, at early stages of life, mosquito nets are beneficial as they might lead to fewer malaria deaths and protect children with low (or no) immunity. With time, children build up immunity and, given that malaria infection becomes significantly less common, the effect of mosquito nets on their risk of death becomes insignificant, which supports the argument that other factors contribute to driving malaria deaths in these children [65]. The study has shown that malaria deaths were statistically significantly associated with altitude in Rufiji HDSS. Our findings are consistent with previous studies, which reported much less malaria morbidity and mortality at higher altitudes [66,67]. The overall mean altitude of the Rufiji HDSS is less than 500 m [29]. A possible explanation is that temperature decreases with increasing altitude; in this regard, malaria incidence is said to decrease because of the relationship between temperature and the Plasmodium parasite [66]. The results from this study are comparable to those of other studies. The observed decline in malaria deaths in the study areas was similar to the decline in malaria deaths reported for Tanzania in the World Malaria Report 2012, which also shows exceptionally high malaria deaths in 2004 and 2009, as observed in this study [1]. The clustering analysis in this study is similar to analyses from other HDSS studies in South Africa and Burkina Faso [44,68], although this study is somewhat limited by the relatively small size of the study area. The results are also comparable to those from the rest of rural Africa and other countries, where the risk factors identified for malaria present a similar picture to other studies, as reported in Kenya and India [11,63].
Strengths and limitations The study utilized datasets from Health and Demographic Surveillance System sites, which continuously register vital demographic events in a geographically defined area. Although few studies have used VA data for investigating malaria cause-specific mortality [10][11][12], verbal autopsy has great potential for countries like Tanzania, where a large number of people die in places other than health facilities. Given the gaps in data on what is killing people because of incomplete vital registration systems [19], VA data provide evidence-based information for health system decision-makers and planners to design appropriate malaria interventions. VA provides cause-of-death information at the community level that can inform health system decisions and performance, similar to death certification in high-quality hospitals [69]. VAs also complement the health management information system, which provides data from health facilities, especially in SSA where use of health care is low [69]. This study demonstrates the identification of clusters and hotspots using Kulldorff's spatial scan statistic, Anselin's Local Moran's I statistic and the Getis-Ord statistic (Gi*), and provides strong evidence of their importance for detecting malaria mortality clustering and hotspots. The identification of significant clusters and hotspots can help policy makers and planners to focus targeted malaria control strategies on the elimination of malaria deaths. In addition, findings from Health and Demographic Surveillance System data provide information to policy makers and programme managers that can be translated into policy and practice. This study has some limitations that need to be considered in interpreting the findings. First, the presence of at least one bed net was considered a proxy for the use of bed nets in the house, as information about actual bed net use was not collected during the study. Secondly, SaTScan™ has limitations that have implications for the interpretation of results. The circular window imposed by the purely spatial scan statistic, or the cylindrical window for the space-time statistic, usually encompasses several villages with high malaria mortality rates; if a village with a low malaria mortality rate is surrounded by, or very close to, villages with high mortality, the software will include this village in the high-mortality cluster. However, this limitation does not diminish the value of SaTScan™ software in producing summarized information relative to conventional epidemiological methods of presenting results. Thirdly, there is a potential risk of misclassification of the cause of death where the sensitivity and specificity of the VA technique are relatively low; this misclassification may lead to underestimation or overestimation of malaria deaths. Fourthly, other possible factors associated with malaria deaths (e.g. anti-malarial availability), which are known to be effective in reducing malaria-related mortality, were not available in the HDSS database and were therefore not included in the study. Conclusions This study used both SaTScan™ and GIS software, which are efficient in processing large epidemiological datasets at the micro level. The study established the role of GIS in disease control, as it provided rapid and understandable results which are required for decision-making. The distribution and level of malaria deaths presented in this study reveal significant spatial variation in malaria death risks, which previous mapping studies failed to convey.
The findings have important operational relevance to the implementation of the current malaria control strategies in the study area. Targeting malaria control interventions and treatment to hotspots or high risk clusters can help to improve the reduction of malaria death at household and country level. This study has identified several major trends in malaria deaths in Ifakara HDSS over the past period, warranting further investigation. The study recommends priority control in hotspot villages and high risk areas reported, including consistent significant clustering villages in Rufiji and Ifakara HDSS to address grave malaria situation in the three districts in a cost-effective manner. Reduction in mortality due to malaria calls for more attention to be given to factors that affect malaria deaths the most, such as ownership of mosquito nets and age. Ownership and use of mosquito nets should be a continuous strategy in the study areas, particularly in the high risk areas for better malaria control and prevention of transmission.
v3-fos-license
2020-06-16T14:49:56.320Z
2020-06-16T00:00:00.000
219691083
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/s12876-020-01331-x", "pdf_hash": "90f832671a31b596d2a956b947229f32400816fd", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41219", "s2fieldsofstudy": [ "Medicine" ], "sha1": "90f832671a31b596d2a956b947229f32400816fd", "year": 2020 }
pes2o/s2orc
Prevalence of non Helicobacter pylori gastric Helicobacters in Iranian dyspeptic patients Background Non Helicobacter pylori gastric Helicobacters (NHPGHs) are associated with a range of upper gastrointestinal symptoms, histologic and endoscopic findings. For the first time in Iran, we performed a cross-sectional study in order to determine the prevalence of five species of NHPGHs in patients presenting with dyspepsia. Methods The participants were divided into H. pylori-infected and NHPGH-infected groups, based on the rapid urease test, histological analysis of biopsies, and PCR assay of ureA, ureB, and ureAB genes. The study included 428 gastric biopsies form dyspeptic patients, who did not receive any treatment for H. pylori. The samples were collected and sent to the laboratory within two years. H. pylori was identified in 368 samples, which were excluded from the study. Finally, a total of 60 non-H. pylori samples were studied for NHPGH species. Results The overall frequency of NHPGH species was 10 for H. suis (three duodenal ulcer, three gastritis, and four gastric ulcer samples), 10 for H. felis (one gastritis, three duodenal ulcer, and six gastric ulcer samples), 20 for H. salomonis (four duodenal ulcer, five gastritis, and 11 gastric ulcer samples), 13 for H. heilmannii (three gastritis, five duodenal ulcer, and five gastric ulcer samples), and 7 for H. bizzozeronii (zero gastric ulcer, two duodenal ulcer, and five gastritis samples). Conclusions Given our evidence about the possibility of involvement of NHPGHs in patients suffering from gastritis and nonexistence of mixed H. pylori infection, bacteriological testing of subjects negative for H. pylori becomes clinically relevant and important. Our findings suggest H. salomonis has the highest rate among the NHPGH species in Iranian dyspeptic patients. Background The number of discovered Helicobacter species has increased rapidly in the last decade. Up to now, more than 30 species have been characterized and well-recognized by microbiologists. H. pylori is the most recognized bacterium associated with dyspepsia in humans [1][2][3][4]. However, NHPGH species with typical spiral morphology has the ability to colonize the stomach of humans and animals. NHPGHs, by neutralizing the gastric acid, provide a suitable environment for their survival [5][6][7]. Studies have shown that NHPGHs are involved in gastritis among humans [8][9][10]. Gastritis may progress to gastric atrophy, intestinal metaplasia, and gastric cancer over time and result in precancerous lesions (similar to monoclonal lymphocytic proliferation), development of lymphoid follicles, and even primary gastric lymphoma, which develops only in some patients with gastritis. The incidence of these conditions varies relative to the multifactorial influence of host virulence and bacterial factors, which are dissimilar in different social and racial groups [11]. NHPGHs can affect the stomach environment through different mechanisms [1,11,12]. H. suis is capable of escaping the host's immune system and shows a longterm presence in the stomach [13]. These bacteria contain genes, essential for their survival in the stomach environment. Also, NHPGH species, such as H. bizzozeronii, can be distinguished from H. pylori considering their higher metabolic flexibility in terms of energy sources and chain of electron transport [14]. H. felis species promote mucosal cytokines, which play a role in the development of gastric cancer [15]. 
Previous research has reported co-infections with two NHPGH species, i.e., H. salomonis and H. heilmannii, in dyspeptic patients [16,17]. On the other hand, H. heilmannii may be more involved in the formation of gastric MALT lymphomas, compared to H. pylori. To our knowledge, H. pylori mainly covers the mucosal layer, whereas H. heilmannii invades deeply into the antral glands [18]. The lack of information about the topic of current research was frequently mentioned in national congresses and health ministry priority research list. There is no information about the prevalence of NHPGH species in Iran. In the present study, we report, for the first time, the frequency of NHPGH species in dyspeptic patients in Iran. In addition, the possible relationship between NHPGH species and histological findings was examined. Patients and sampling In this study, between March 2017 and February 2018, 428 dyspeptic patients, scheduled for upper gastrointestinal endoscopy (Mehrad Hospital, Tehran, Iran), were examined for Helicobacter infections using the Rapid Urease Test (RUT) and histological examination of stomach biopsies. According to the result of these tests, the samples were categorized into NHPGHs monoinfected (n, 60) and H. pylori mono-infected (n, 368) groups. The NHPGH mono-infected group comprised of 60 dyspeptic patients, who were studied for the prevalence of NHPGH species, while the H. pylori group included 368 dyspeptic patients, who were excluded from the study. A standard clinical pro forma was used to collect the demographic and clinical characteristics of NHPGH mono-infected patients via interviews. The study's exclusion criteria included I) receiving treatment for H. pylori, concurrent or recent antibiotic use such as metronidazole, clarithromycin, amoxicillin, tetracycline, doxycycline and other cephalosporin, II) histamine-2 receptor blocker or proton pump inhibitor (PPI) therapy and bismuth compounds in the last four weeks; III) patients with regular use of NSAID; IV) patients with severe concomitant disease and V) patients with upper GI surgery. The participants signed the informed consent forms, and the Ethics Committee of Clinical Research approved the study protocol. The flow diagram of this study is shown in Fig. 1. Histological examination The biopsy sections were embedded in 10% buffered formalin. Next, hematoxylin and eosin staining was applied to assess gastritis, while Giemsa staining was used to detect Helicobacter species. The histological patterns were classified as gastric ulcer, duodenal ulcer, and gastritis, using the updated Sydney system. DNA isolation A Qiagen Genomic DNA Extraction Kit (BioFlux, USA) was used to isolate DNA from the stomach biopsies. Thereafter, DNA was resuspended in distilled water free of RNase/DNase (UltraPure). Molecular detection of NHPGH species PCR assay was performed to detect NHPGH species, including H. salomonis, H. bizzozeronii, H. heilmannii, H. felis, and H. suis [19,20]. Table 1 shows the primer sequences for NHPGH species. The amplification reactions were performed using 1X Reaction Buffer (0.2% gelatin, 16 mM of ammonium sulfate, 67 mM of tris/ HCl, and 0.45% triton X-100), Taq DNA polymerase (one unit; Biotech International), deoxynucleotide triphosphates (200 mM each), 2 mM of MgCl 2 , oligonucleotide primers (10 pmol each), and 1 μL diluted DNA (typically a 1:10 dilution of the original sample at nearly 20-100 ng/μl); with a final volume of 50 μL. 
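The reaction set-up above is stated in terms of concentrations rather than pipetting volumes, so the per-reaction volumes in the sketch below are hypothetical placeholders; only the 50 μL total volume comes from the text. The snippet simply shows one way a master mix could be scaled for a batch of samples, with a small pipetting overage.

```python
# Hypothetical per-reaction volumes (µL) for a 50 µL PCR; not taken from the paper.
PER_REACTION_UL = {
    "10x reaction buffer": 5.0,
    "dNTP mix":            1.0,
    "MgCl2":               4.0,
    "forward primer":      1.0,
    "reverse primer":      1.0,
    "Taq polymerase":      0.25,
    "template DNA":        1.0,
}
TOTAL_UL = 50.0

def master_mix(n_reactions, overage=0.10):
    """Scale per-reaction volumes to n reactions plus an overage; template DNA is added per tube."""
    scale = n_reactions * (1 + overage)
    mix = {k: round(v * scale, 2) for k, v in PER_REACTION_UL.items() if k != "template DNA"}
    water_per_reaction = TOTAL_UL - sum(PER_REACTION_UL.values())
    mix["nuclease-free water"] = round(water_per_reaction * scale, 2)
    return mix

print(master_mix(60))  # e.g. one reaction per H. pylori-negative biopsy
```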
For each specific reaction, the amplification parameters are described below. A thermocycler (Perkin Elmer PE2400) was used to perform the reactions. An agarose mini-gel in TAE buffer (1 mM EDTA and 40 mM Tris-acetate) was used to separate the PCR products, which were then imaged under UV transillumination following ethidium bromide staining. For urease I reactions, the cycling conditions included an initial denaturation at 94°C for 3-4 min; then 35 cycles at 94°C for 10 s, 52°C for 20 s, and 72°C for 90 s; and a five-minute final extension at 72°C. In urease II reactions, the conditions were similar to those of urease I reactions with some modifications, i.e., 30 s of annealing and two minutes of extension at 42°C and 72°C, respectively. By analyzing the urease sequences from the strains and isolates, ureA gene regions that were dissimilar in H. felis, H. bizzozeronii and H. salomonis could be identified. Also, in the Type-I PCR assay, the cycling conditions included five minutes of denaturation at 94°C, followed by 35 cycles of amplification at 94°C for 10 s, 55°C for 30 s and 72°C for one minute, with a final step at 72°C for four minutes. Real time PCR Real-time PCR was performed using a Light Cycler 480 (Roche, Germany) detection system with the SYBR Green I fluorophore. Reactions were performed in 20 μl (total volume) mixtures, which included SYBR Green I PCR master mix, 5 μl of each primer at a concentration of 5 μM, and 1 μl of the template DNA. Analyses were performed with the Light Cycler 480. The following protocol was used: 50 cycles of 95°C for 15 s, 55°C for 15 s, and 72°C for 30 s [21]. A melting curve analysis was performed after every run to ensure that there was a single amplified product for every reaction (Table 1). We used real-time PCR only as a confirmatory test for the PCR results (not as a quantitative test); in other words, its main purpose was to confirm our positive findings, and it was not intended to distinguish the various species. Statistical analysis Data were analyzed in SPSS v. 16.0 (SPSS Inc., Chicago, IL, USA). The Chi-square test was used to assess the association between the presence of the five non-pylori Helicobacter species in the NHPGH-infected group. A P-value of < 0.05 was considered statistically significant. Results are expressed as mean ± standard deviation for continuous variables (e.g., age) and number (percentage) for categorical data (e.g., gender). Demographic and clinical characteristics Genomic DNA was collected from 60 NHPGH mono-infected patients, and DNA was analyzed in all subjects. The flow chart of the study is shown in Fig. 1. The NHPGH mono-infected patients' demographic characteristics are presented in Table 2. Based on the findings, there was no significant difference (p > 0.05) among histological groups (i.e., duodenal ulcer, gastric ulcer, and gastritis) with respect to age and gender distribution. Among the 60 NHPGH mono-infected patients (who were negative for H.
pylori), there was no co-infection with different NHPGH species; in other words, no more than one non-H. pylori species was observed in any patient of the NHPGH mono-infected group. Agarose gel electrophoresis of the PCR products is shown in Fig. 2. Discussion The introduction of NHPGH species has provided researchers with an opportunity to determine the relationship between these species, which can colonize the animal and human gut, in order to better understand their effects on the host [22]. Within a short period after the discovery of H. pylori by Marshall in 1983, scientists understood that other members of this group of spiral bacteria cause inflammation in the human gastrointestinal tract. The prevalence of NHPGH species in the gastric mucosa of humans and animals is diverse around the world [11,[23][24][25][26]. The main advantage of the current research was the investigation of a naïve population for which there was no information about prevalence or about a likely significant association between these strains and severe gastroduodenal diseases. In the near future, the current data may serve as a starting point for similar studies. In the present study, we applied the PCR assay to evaluate the frequency of NHPGH species in Iranian dyspeptic patients. In the literature, H. suis has been introduced as the most prevalent NHPGH species colonizing the stomach of dyspeptic patients [13,27]. According to the recent study by Nakagawa et al., H. suis was the main cause of chronic gastritis in individuals without H. pylori infection [28,29]. According to previous studies, these species have pathogenic potential due to the presence of gamma-glutamyl transpeptidase (ggt), their immune-suppressing properties, as well as outer membrane vesicles [13,30]. Although pig farming, which is recognized as an important source of infection [31], is not permitted in Iran, the frequency of H. suis was relatively high (n = 10; 16 %) among NHPGH species. Previous studies have shown that H. suis causes acute inflammation in colonized patients in comparison with H. pylori and other non-pylori Helicobacters, but such findings were not reproduced in our examination [29,32]. In line with the findings of De Cooman et al. on pork meat consumption and the high risk of contamination [20], we assume that this route of exposure may also occur in our population, although technical and experimental errors may have been the source of our observation. Nevertheless, this rate of H. suis seems high among Iranian individuals, since pork is not part of the regular diet. Indeed, knowledge of the exposure time required for transmission of the infection is still lacking, and we hope to gain better insight into this in the foreseeable future. On the other hand, H. bizzozeronii is the predominant NHPGH species in the canine stomach [11,33]. Multiple potential factors are involved in the virulence of H. bizzozeronii, including its greater metabolic flexibility, genome plasticity, and harboring of multiple methyl-accepting chemotaxis proteins [14,16]. In addition, this species has been associated with severe dyspeptic symptoms [14]. In our study, we found seven dyspeptic patients infected with H. bizzozeronii, who claimed they were not in contact with dogs (as pets), since pet keeping is not common among Iranians. Therefore, H. bizzozeronii had the lowest prevalence among NHPGH species in our study population. However, further studies are needed to confirm this finding. H.
felis infection is associated with reduced levels of interleukin-1β and tumor necrosis factor-α and increased level of interleukin-10, leading to the expression of key gastric mucosal cytokines and possibly gastric cancer [15]. The frequency of H. felis was 10 out of 60 (16%), thus we were unable to report any association between H. felis infection and histological report (P > 0.05). H. salomonis has been isolated from gastric biopsies of healthy dogs and humans. It has been also isolated from individuals infected with H. heilmannii [16,17]. In the past, many studies reported that H. heilmannii infection is an example of zoonosis and we may have worrying report out of it in humans, but our results are quite contradictory [34]. In this study, H. salomonis was the most frequent NHPGH species (n, 20; 33%), while there is no similar study from the same location in Iran. In our study, the frequency of H. heilmannii was 13 out of 60 (21%). Since the transmission pathways for H. salomonis are unclear, we need to determine the probable source and route of transmission for this NHPGH species. On the other hand, H. heilmannii has been linked to gastritis, gastric ulcers and duodenal ulcers in humans [35]. We did not find significant association between presence of H. heilmannii infection and histological findings (P > 0.05). Indeed, this species can be the cause of apoptosis and angiogenesis in gastric MALT lymphoma [36]. Further studies with emphasis on molecular experiments are necessary to explain reports of such results. Limitations and future prospects To the best of our knowledge, ours is the first crosssectional study representing an Iranian population sample investigated for the clinical relevance of the five tested non-pylori Helicobacter species. Importantly, we found no mixed Helicobacter infections among NHPGH mono-infected group. We tried to make such big sample size in order to draw a good conclusion regardless significant finding or not. We think that our survey has good results, and it can be a reference study in future surveys in this country. However, our study had two limitations. The first limitation of our study was among H. pylori infected group (368 patients who were excluded from the study) that the possible co-infection with NHPGH different species was not investigated. This was due to the limited budget of our project. In this regard, we are starting the new project based on current panel of non-pylori helicobacters co-infection among H.pylori positive group. The second limitation was the need to establish a clear causative association between non-pylori helicobacters colonization and gastritis. Also, the significance of infection with NHPGH in terms of disease development could not be determined. The implications of this study are that non-H. pylori helicobacter species infection occurs in patients with abdominal pain or discomfort similar to H. pylori infection. Conclusion Due to the difficulty associated with identification of non-pylori Helicobacters within routine laboratory tests, increased awareness of general health care and infectious diseases experts should be on the priority for decision- makers in hygiene and health in Iran. We conclude that infections with non-pylori Helicobacter species are candidates for further microbiological testing for targeted and improved clinical management. Nowadays the only fact we are confident of is that NHPGH species induce superficial inflammation in the gastric mucosa of colonized patients. 
The exact mechanism is not yet understood. However, further research is also necessary to clarify the epidemiology and pathogenesis of these mysterious bacteria.
v3-fos-license
2021-12-16T16:44:09.974Z
2021-12-12T00:00:00.000
245159144
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.ajol.info/index.php/tjpr/article/download/218657/206273", "pdf_hash": "01ac2df0da37a41cc2846f814fc4d3a983eb1eb8", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41220", "s2fieldsofstudy": [ "Biology" ], "sha1": "9c322c8b9756e4c7a87ec725f788a8e9e0b04439", "year": 2021 }
pes2o/s2orc
Puerarin mitigates acute liver injury in septic rats by regulating proinflammatory factors and oxidative stress levels Purpose: To determine the protective effect of puerarin against acute liver injury in septic rats, and the mechanism involved Methods: Eighty-seven Sprague-Dawley (SD) rats were assigned to control, sepsis and puerarin groups (each having 29 rats). Serum levels of NF-kB, TNF-α, IL-1 β, IL-6, ALT and AST were assayed. Liver lesions and levels of NO, SOD, iNOS and malondialdehyde (MDA) were measured using standard procedures. Results: Compared with the control group, the levels of NF-kB, TNF-α, IL-1β, IL-6, AST, ALT, NO, MDA and iNOS significantly increased in the sepsis group, while SOD level decreased significantly. In contrast, there were marked decreases in NF-kB, TNF-α, IL-1β, AST, ALT, NO, MDA and iNOS in puerarin group, relative to the sepsis group, while SOD expression level was significantly increased (p < 0.05). The level of p-p38 in liver of septic rats was up-regulated, relative to control rats, while Nrf2 significantly decreased (p < 0.05). The expression level of p-p38 in the puerarin group was significantly decreased, relative to the sepsis group, while the expression level of Nrf2 significantly increased (p < 0.05). Conclusion: Puerarin mitigates acute liver injury in septic rats by inhibiting NF-kB and p38 signaling pathway, down-regulating proinflammatory factors, and suppressing oxidative stress. Thus, puerarin may be developed for use in the treatment liver injury. INTRODUCTION Sepsis, a systemic inflammatory response syndrome, is one of the serious complications of life-threatening diseases such as severe infection, shock and trauma. It is a disease diagnosed frequently in hospital intensive care units. Sepsis is of sudden onset and rapid development, and it is associated with increasing morbidity and mortality [1]. Advances in medical diagnosis have led to improvements in early diagnosis of sepsis and effective treatment of sepsis patients. However, sepsis-related mortality is still high, a situation which seriously threatens the lives and health of patients. The liver is an important organ for metabolism of nutrients [2]. Moreover, it is the largest endothelial phagocytic system which activates and releases a variety of cytokines, and plays an important role in initiating multi-organ failure due to sepsis. Studies have confirmed that the liver is one of the most affected organs in the septic state. Sepsis causes early liver injury, decreased liver function and liver cell damage which result in multiple organ failure [3]. However, the mechanism involved in development of sepsisrelated liver injury is not yet clearly understood. Puerarin is an isoflavone which has been shown to improve microcirculation and protection against myocardial ischemia [4]. Studies have found that puerarin exerts hepatoprotective effect by lowering inflammatory reactions and reducing oxidation-induced damage [5]. This research was carried out to investigate the influence of puerarin on acute liver injury in rats with sepsis. EXPERIMENTAL Animals A total of 87 healthy male SD rats with mean body weight of 204±18 g were randomly selected. The rats were obtained from Zhuhai Baixiantong Biotechnology Co. Ltd [production license = SCXK (Guangdong) 2020-005; use license = SYXK (Guangdong) 2020-0229]. They were fed adaptively for 1 week at laboratory temperature of 23 ± 4 ℃ and humidity of 50 ± 12 %, in an environment with 12-h light/12-h dark photoperiod. 
This study received approval from the Animal Ethics Committee of the Affiliated Hospital of the Medical College of Ningbo University, and was performed in line with the "Principles of Laboratory Animal Care" [6]. Main equipment and reagents used The major instruments and reagents used, and their sources (in brackets), were: Establishment of rat model of sepsis Three groups of 29 rats were used: control, sepsis and puerarin groups. A longitudinal cut was made along the main central axis of the lower abdomen of each rat in the supine position under anesthesia. The abdominal cavity was opened to expose the cecum, and the lower blood vessels of the cecum were separated and ligated. The center of the cecum was pierced with a needle, and the contents of the cecum were gently squeezed so that they overflowed into the abdominal cavity. Then, the cecal incision was sutured layer by layer. In postoperative rats, drowsiness, decreased feed intake or refusal to eat, low urine output, lethargy, abscess in the abdominal intestinal duct, and bleeding and necrosis of the cecal wall were taken as evidence of successful establishment of sepsis. Rats in the control (sham) group had their abdomen opened only, without ligation and perforation. Rats in the puerarin group received the drug at a dose of 80 mg/kg, while rats in the control and sepsis groups were given normal saline in place of the drug. Rat cardiac blood (3 mL) was taken at 12, 24 and 48 h, and the serum samples obtained after centrifugation were stored in a cryogenic refrigerator at -80 ℃ prior to analysis. Biochemical analysis Serum levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were determined using an automatic biochemical analyzer (Indiko™, Thermo Scientific, USA). Histopathological examination Histopathological changes in the liver were determined with hematoxylin and eosin (H & E) staining. Following sacrifice of the rats, the liver tissues were excised and routinely processed into paraffin sections, dewaxed with xylene, hydrated with gradient alcohol, and subjected to hematoxylin staining for 15 min, followed by staining with eosin dye for 3 min. The stained sections were rinsed with phosphate buffer, dried at room temperature, dehydrated, cleared, sealed with neutral gum, and observed and recorded under a light microscope. Assessment of oxidative stress indicators Oxidative stress indicators were also determined. The levels of NO and SOD in the liver tissues of rats were determined with the nitrate reductase and xanthine oxidase methods, respectively, while iNOS and malondialdehyde (MDA) levels were determined with the thiobarbituric acid method. Determination of protein expressions The protein expression levels of p-p38 and nuclear factor E2 p45-related factor 2 (NRF2) in the liver tissues of each group were determined using Western blotting. Total protein was extracted from liver tissue, and the protein concentration of the lysate was determined with the BCA method. The protein was then resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a PVDF membrane. The membrane was incubated overnight at 4 ℃ with primary antibodies against p-p38 and NRF2, followed by incubation with an HRP-linked secondary antibody for 1 h at room temperature. The bands were subjected to ECL, and images were acquired and stored using a gel imaging system. Statistical analysis The SPSS 20.0 software package was used for statistical analysis.
Differences between two groups with respect to measurement data for serum inflammatory factors, oxidative stress, biochemical indices and other indices were statistically evaluated with the independent-sample t-test. Results of statistical analysis were considered significant at p < 0.05. Serum inflammatory factors As shown in Table 1, compared with the control group at all time points, pro-inflammatory factors in septic rats were markedly up-regulated, but the expression of these factors was markedly reduced in the puerarin group at all time points, relative to the sepsis group. Serum AST and ALT levels Compared with the control group at each time point, the levels of AST and ALT in the sepsis group were significantly increased (p < 0.05). However, AST and ALT levels in the puerarin group were significantly decreased at all time points when compared with the sepsis group (Table 2). Histopathological changes in rat liver The liver tissues subjected to histological analysis were only those from rats treated for 48 h. The hepatocytes of the control group were intact and orderly, with centered nuclei and clearly visible nucleoli; no fibrosis or inflammatory exudation was observed. In contrast, the structure of liver cells from sepsis rats was disorganized, with evidence of diffuse vacuolar degeneration, massive infiltration of inflammatory cells, and focal necrosis. Compared with the sepsis group, the structure of hepatocytes from the puerarin group was significantly improved. These results are shown in Figure 1. Oxidative stress The levels of NO, MDA and iNOS in septic rats were significantly raised at each time point, while SOD levels were markedly decreased, relative to control rats. However, the levels of NO, MDA and iNOS in the puerarin group at each time point were significantly decreased, while the level of SOD was significantly increased, relative to the sepsis group (p < 0.05). The results are presented in Table 3. P-p38 and Nrf2 expression levels in liver tissue of rats As shown in Figure 2, the expression levels of p-p38 and Nrf2 in the liver tissue of rats in the sepsis group were significantly increased relative to the control group (p < 0.05). However, the expression levels of p-p38 and Nrf2 in the puerarin group were markedly decreased relative to the sepsis group.
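The group comparisons above were carried out in SPSS with independent-sample t-tests; the snippet below is a minimal illustration of the same test in Python. The ALT values are simulated placeholders, not the measurements reported in Table 2, so the printed statistics are illustrative only.

```python
import numpy as np
from scipy import stats

# Hypothetical serum ALT values (U/L) at 24 h for two groups of 29 rats each
rng = np.random.default_rng(1)
sepsis_alt = rng.normal(loc=180, scale=25, size=29)
puerarin_alt = rng.normal(loc=120, scale=20, size=29)

# Independent-sample t-test, as described in the statistical analysis section
t_stat, p_value = stats.ttest_ind(sepsis_alt, puerarin_alt)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")  # p < 0.05 would indicate a significant difference
```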
In this study, puerarin was used to treat septic rats, with the aim of studying its effect on acute liver injury and the mechanism involved. It has been reported that sepsis may cause liver cell injury because bacterial endotoxins stimulate macrophages to produce TNF-α, IL-6 and other pro-inflammatory cytokines during sepsis, thereby inducing liver cell injury [12]. When sepsis occurs, inflammatory and coagulation processes are activated, and the two promote each other, thereby causing deficiency of blood and oxygen supply in the microcirculation. It has been shown that TNF-α is one of the most important pro-inflammatory cytokines, with the fastest response, earliest release and the most extensive cytotoxic effects, while NF-κB is a transcription factor widely found in various cells in the body. Sepsis enhances innate and adaptive immune responses by activating the NF-κB and p38 signaling pathways, thereby mediating increases in TNF-α, IL-6 and IL-1β [13]. In addition, sepsis induces the release of other inflammatory factors, leading to systemic multiple organ dysfunction. The transaminases AST and ALT are important indicators of liver function impairment. The results of this study showed that puerarin significantly mitigated liver injury caused by sepsis in rats, due to inhibition of pro-inflammatory cytokines.

Increases in the levels of oxygen free radicals trigger oxidation-induced lesions. Changes in the levels of NO, MDA, iNOS and SOD are important indices that reflect the level of oxygen free radicals in vivo. Studies have found that sepsis is often associated with dysfunction in multiple organs, including the heart, liver, kidneys and lungs, and oxidation-induced injury may be crucial in multiple organ dysfunction [14]. The production of NO is catalyzed by NOS, which exists in three forms: nNOS, iNOS and eNOS. Nitric oxide (NO) is a free radical with strong reactivity. It has been reported that in sepsis, NO inhibited the synthesis of total protein and glycogen in liver cells, with a direct impact on liver metabolic function [15]. Some scholars have reported that sepsis significantly increased the level of MDA owing to damage to the integrity of membranes and impairment of the function of membrane proteins, leading to disorders of energy metabolism [16]. Superoxide dismutase (SOD) is an antioxidant enzyme whose level is negatively correlated with the level of oxidative stress, while NRF2 is a receptor of oxidative stress and a core transcriptional regulator of the endogenous antioxidant system. The results of this study suggest that the puerarin-induced mitigation of acute liver injury in septic rats is linked to the inhibition of oxidative damage.

CONCLUSION
Puerarin alleviates acute liver injury in septic rats by inhibiting the NF-κB/p38 signaling pathway and suppressing the levels of pro-inflammatory factors and oxidative stress. Thus, puerarin may be useful for the treatment of liver injury.
v3-fos-license
2022-12-12T16:08:34.195Z
2022-12-10T00:00:00.000
254561049
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1080/14733285.2022.2153329", "pdf_hash": "b97cf630bed1dd9fd952e0aa2ac4fd139a29576b", "pdf_src": "TaylorAndFrancis", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41222", "s2fieldsofstudy": [ "Art" ], "sha1": "c4535baba3c2bf9617e8fa839b15d55306e0585a", "year": 2023 }
pes2o/s2orc
‘It's beautiful, living without fear that the world will end soon' – digital storytelling, climate futures, and young people in the UK and Ireland

ABSTRACT
This research explores two questions: how do young people imagine futures shaped by climate change and our collective response to the climate crisis, and what is the impact on young people of creatively engaging with the future? The participatory action research method of digital storytelling was adapted to explore climate futures, with thematic, visual and narrative analysis of the resulting videos. Young people articulated positive, negative and more complicated visions of the future, including counterfactuals, discontinuities, and living with loss and change. They also described a process of positive reappraisal over the course of the speculative digital storytelling workshops, with emotions about the future shifting from being predominantly negative to a more balanced spectrum including acceptance, curiosity and hope.

In September 2019, an estimated six million people participated in the Fridays for Future climate strikes around the world (Taylor, Watts, and Bartlett 2019). In London, where the author joined the strikers with his children, a sea of protesters gathered near the Houses of Parliament. Young people and their supporters demanded climate action through homemade signs: 'There is no planet B', 'You'll die of old age, we'll die of climate change', 'My future is in your hands'.

In an early review of research on the school climate strikers, Bowman reflected on the imagination of young people as they look into an uncertain future: 'Climate action is more than protest: it is also a world-building project, and creative methodologies can aid researchers and young climate activists as we imagine, together, worlds of the future' (2019, 296). This research picks up on that call, collaborating with young people to explore their hopes and fears for the future through digital storytelling. This research also responds to interest in the fields of Children's Geographies and Childhood Studies in the everyday climate activism of young people, which is shaped by their perceptions of the future (Skovdal and Benwell 2021; Spyrou, Theodorou, and Christou 2022).

Building on the traditions of participatory action research and more recent developments in narrative, visual and digital analysis, this article explores two interconnected research questions:

- How do young people in the UK and Ireland who are engaged with climate activism imagine futures shaped by climate change and our collective response to the climate crisis?
- What is the impact on young climate activists of creatively engaging with the future?

These questions are particularly relevant to climate educators interested in helping young people develop both resilience amid change and the skills to shape the future. Speculative digital storytelling is a novel participatory research method and promising environmental education practice, with the potential to offer new insights into youth perspectives on climate change while supporting young people's positive reappraisal of environmental problems.

Context
Should I tell you what the world will be like 30 years from now? Well, it can go in two ways.
Imagining climate futures Futures are of interest across the social sciences.Mische (2009) called for a 'sociology of the future', arguing that cultural sociologists should pay as much attention to future projections as they do to collective memories.In science and technology studies, Jasanoff defined sociotechnical imaginaries as: 'collectively held, institutionally stabilised, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology ' (2015, 4).Levy and Spicer (2013) identified four different climate imaginaries: fossil fuels forever, climate apocalypse, techno-market and sustainable lifestyles.Milkoreit extended this thinking into the concept of socio-climatic imaginaries, which incorporate both social and natural systems, as well as undesirable and mixed possible futures (2017).At the level of individuals and human psychology, Andrews described the combination of imagination and narrative as, 'a bridge traversing the pathway between what is known, and what can be known, between the present and possible futures' (2014,5). Futures thinking also has practical applications.In Japan, a Future Design movement has emerged in which people represent imaginary future generations in planning discussions, with the aim of 'activating a human trait called futurability, where people experience an increase in happiness because of deciding and acting to forego current benefits in order to enrich future generations' (Saijo 2020, 2).In Wales, the Well-being of Future Generations Act of 2015 established a Future Generations Commissioner to advise government bodies on sustainable development and the long-term impacts of their decisions (National Assembly for Wales 2015).The Swedish Narrating Climate Futures initiative, which is part of the Climaginaries network, created The Museum of Carbon Ruins, an immersive, speculative exhibition presenting a post-carbon 2053 (Raven and Stripple 2020).Reflecting on a collaboration between artists and academics exploring climate futures in Spain and Portugal, Galafassi et al. concluded, 'visioning then is not something one does once and for all (as in forming an image), but rather it is a continuous process of making the future present in order to discover preferences towards certain futures and taking actions in the present towards an evolving purpose ' (2018, 8). Environmental and sustainability educators have long argued for developing futures literacies, with Hicks and Holden contending, 'probable and preferable futures, scenarios, envisioning, can fruitfully be employed in the classroom to help students develop a futures perspective ' (2007, 509).Examples of creative engagement with climate futures include digital storytelling (Cunsolo Willox, Harper, and Edge 2013), participatory video (Haynes and Tanner 2015;Littrell et al. 2020;Walsh andCordero 2019), photovoice (McKenzie andBieler 2016;Trott 2019) and speculative fiction (Doyle 2020;Rudd, Horry, and Lyle Skains 2020).With respect to these 'inventive methodologies', Coleman has argued that, 'a sensory sociology of the future might be interested not only in documenting orientations or imaginations of the future, but also in probing, provoking, stimulating them ' (2017, 539). 
Climate and other environmental campaigners have been characterised as using apocalyptic imagery to motivate action, reflecting a form of future-oriented pessimism (Cassegård and Thörn 2018).Amid debates about the effectiveness of positive versus negative rhetoric, a new form of post-apocalyptic environmentalism has emergedrepresented by Extinction Rebellion, Deep Adaptation and the Dark Mountain Projectthat anticipates a future characterised by irreversible and unavoidable loss (Cassegård and Thörn 2018;Moor 2021;Friberg 2021). Recent research has also found high levels of climate anxiety among young people, with students rating negative climate scenarios as more likely than positive climate scenarios (Finnegan 2022;Hickman et al. 2021).Advocating for participatory methods and arts-based engagement with young people on climate, Trott commented, 'the arts can support critical reflection and creative expression, allowing young people to envision alternative and preferable futures and how to get there (i.e.helping us to imagine "what if?")' (Trott 2021).Verlie takes the power of story a step further in terms of how we can learn to live with climate change: 'We need stories that enable us to identify as part of climate change, and that enable us to stay with the ethical and interpersonal challenges of living with it' (2022,104). Digital storytelling in research Digital storytelling, as reflected in the International Digital Storytelling Conference, is rooted in the community arts work of the American charity StoryCenter (StoryCenter n.d.).This model of storytelling involves a facilitated workshop in which people without any filmmaking experience produce short, first-person, multimedia narratives.Digital storytelling has been adopted in a variety of educational, organisational, and developmental contexts, including the Patient Voices programme in the UK (Hardy and Sumner 2018). A systematic review of digital storytelling in research confirmed its contribution as a participatory, sensory, visual research method, especially as the visceral nature of multimedia stories 'capture sensory data not accessible via written word or interview' (Jager et al. 2017(Jager et al. , 2574)).Hogan and Pink argue that visual methodologies involving creative expression are also a means of accessing the interior world of affect, or interiority, through a 'paradigm that views inner states as being in progress, rather than ever static ' (2012, 233).After leading a digital storytelling project that shared the voices of older people in rural England living more sustainably, Gearty concluded, 'narrative and storytelling can play an important role in both action research and action learning by helping individuals not only learn through the telling of their own stories but also through their engagement in the stories of others ' (2015, 160). Narrative and visual turns This research project builds on turns in social science towards narrative and visual analysis.While the visual research methods explored by Rose primarily relate to found images and visual culture rather than research that involves the production of visual material by research subjects, her criteria for critical visual methodologies are useful reference points: 'one that thinks about the agency of the image, considers the social practices and effects of its viewing, and reflects on the specificity of that viewing by various audiences, including the academic critic ' (2016, 17). 
With narrative analysis, Riessman (2008) argues that we are able to better understand the structure of stories and intention of storytellers by treating an entire narrative as an analytical unit, rather than coding small excerpts of text (or transcripts) out of context.Reflecting on the combination of narrative and visual turns in human sciences, Riessman commented, 'the power of the camera is turned over to research participants to record images they choose, and to story their meanings collaboratively with investigators' (2008,143). Participatory visual methods allow research subjects to express multiple and ambiguous narratives related to identity (Ní Ní Laoire 2016).Digital storytelling has also been described as a 'morethan-visual' method: 'Rather than focusing on the image as data, more-than-visual methods take visual production as a situated, material, and embodied practice' (Marshall, Smaira, and Staeheli 2022).As such, digital storytelling methods provide insights through both the production process and resulting stories into the provenance, properties, meanings and affordances of multimedia data. Methodology Forty-seven secondary school students in the United Kingdom and Republic of Ireland produced digital stories through this research project.The students attended upper secondary school and were between 15 and 18 years old.Participants were primarily recruited through the UK and Irish Schools Sustainability Networks, which emerged in 2020 to support teachers and students engaged in school sustainability activities after the youth climate strikes.The researcher also conducted workshops with small groups of students at two schools in London, two schools in Berkshire, and a museum in Oxford.Twenty-eight of the students attended independent schools, eighteen attended state schools, and one was home-schooled.Demographic information was not collected during the workshops.The researcher observed that approximately two-thirds of the participants were female and one-quarter were from ethnic minorities (excluding white minorities), while recognising that there are limitations to observed aspects of identity.The research was approved by the Central University Research Ethics Committee (CUREC) at the University of Oxford, with reference number SOGE20201A-178. In advance of the research, the researcher participated in StoryCenter's certification programme for facilitators.Digital storytelling workshop materials were adapted to include information about climate science, climate change communication, and future scenarios (Climate Lab Book n.d., Climate Outreach 2020, Great Transition Initiative n.d.).As a vehicle for first-person, multimedia expression, digital storytelling tends to be a reflective process, in which the storyteller identifies and explores 'the moment of change that best represents the insight they wish to convey' (Lambert and Brooke Hessler 2020, 71).In this research, storytelling was speculative rather than reflective, with the participants encouraged to envision the world in the year 2050 and write a letter from their future self to the current self.These letters from the future were shared within each group of storytellers, and then recorded as narration for their videos. 
The facilitated digital storytelling workshops were delivered by the researcher both in-person (25% of stories) and online (75% of stories) due to COVID-19 restrictions.The two-day in-person workshop was adapted into a series of virtual sessions, which were conducted during school lunch breaks, after-school, and at the weekend.The researcher obtained an educational licence for the browser-based software WeVideo, which the participants used to edit their videos.Each workshop concluded with a screening of the participants' digital stories and a focus group conversation.At the end of the workshop, participants also revisited informed consent, especially with respect to how they wanted to share their digital stories (do not share, share anonymously, share with attribution).For consistency, all stories are referred to below using an anonymous code from DS01 to DS47 and focus group conversations use a code W1 to W8 for each series of workshops. The 47 digital stories are the primary data generated during this research.The narration was transcribed and the researcher also created a list of the visuals used in each video, with notes about music and editing style.The focus group conversations were also recorded and transcribed.Eleven videos were used in public engagement activities connected to COP26, and creators of these digital stories each wrote a short filmmaker statement, which are considered supplemental data.These files were coded in NVivo for reflexive thematic analysis, in terms of themes identified in both the audio and visuals of the stories, and narrative analysis of the structure of each story (Braun and Clarke 2006; Riessman 2008).There are methodological limitations to this study.Given the substantial time investment required to participate in the workshop, the students were self-selecting and primarily motivated by an interest in climate and storytelling.As there were a small number of participants, and independent schools were overrepresented, the participants are not representative of young people in the UK and Ireland.Demographic data wasn't systematically captured about participants, limiting any comparisons based on gender, ethnicity or other factors.As the workshops took place during COVID-19 regulations, they were delivered through a hybrid in-person and online format, which meant the digital storytelling workshops varied in format for different groups of participants.Online sessions also limited opportunities for strengthening the relationship between researcher and research subjects, or gathering data through participant observation. The digital stories are available online at http://tinyurl.com/lettersfrom2050.An animation based on the digital stories is available at https://vimeo.com/tamarackmedia/2050. Change It is up to you how much light there will be in the dark, and what remains as an anchor of positivity, hope and joy. Results The digital stories created by the research participants ranged in length from one minute and seven seconds to four minutes and 22 s.While most of the videos followed the format of a letter from the year 2050, a handful departed from the initial prompt: two students wrote poems as the narration, one student wrote a letter from the present to the future, one student produced a video without any narration, and one student wrote a personal reflection on a local rewilding project that involved the failed reintroduction of beavers. 
The digital stories utilised a range of visual materials, including stock footage and photographs, original footage and photographs created by the participants, illustrations and text.The majority of videos primarily used stock footage that was included with the WeVideo editing software.While much use of the stock footage was show-and-tell, directly illustrating the text of the script, the filmmakers also used visual metaphors, with repeated images of sunrise, sunset, clouds, and flying birds.Ten videos were completely composed of original footage, including a story with a single long shot of Brighton Pier with natural sound (Figure 1), which the filmmaker explained in her filmmaker statement. When given the brief I instantly thought of the sea and the beach as this is the place where I feel most connected to nature and where I often reflect.At the start I found the process rather overwhelming and daunting, thinking about the future in this way, and I took more of a pessimistic approach.However, as the project went on, I decided to change my stance and focus on the way we have adapted and that it isn't as bad as I may have imagined (DS08). A small number of digital stories, all created by female storytellers, included personal images, especially at the beginning and end of their videos.These were accompanied with more personalised, emotional messages from the future self to the current self.'Make sure to look after yourself, love yourself, and keep making changes, no matter how small, so you'll be able to appreciate the world as much as I am now.I love you baby girl' (DS20). The pacing of the editing varied greatly, with some videos composed of a small number of still images and video clips held on screen for a number of seconds, and other stories including fastpaced sequences where visuals only remained onscreen for a fraction of a second.Videos also included layering effects, in which more than one image appeared onscreen at the same time, for example to juxtapose imagery of the causes and impacts of climate change.Filmmakers also layered text over visuals to reinforce key ideas from their narration. Twenty of the videos had no music, with the other twenty-seven primarily using stock music available through WeVideo.The music tended to be fairly dramatic, although some videos had a more contemplative soundtrack.A small number of videos included a music transition to reflect a change in content and tone. After a close reading of the digital story transcripts and repeated viewings of the visual material of each story, twelve themes were identified (Table 1).Looking at an entire digital story as a cohesive narrative, rather than breaking the transcript and visuals into smaller units for coding and analysis, revealed patterns in terms of story structure and style.Based on this narrative analysis, nine different types of narratives were identified in the digital stories (Table 2). 
Table 1. Themes identified during reflexive thematic analysis.

Biophilic design: Many stories described a future in which people lived in harmony with nature, urban areas had been rewilded, and architecture reflected biophilic design (Kellert and Finnegan 2011). Imagery of the Supertree vertical gardens in Singapore appeared in multiple videos (Gardens by the Bay n.d.). 'Humans and nature have begun to live in harmony, with vines and trees growing magnificently, and urban life moving with them, instead of just destroying them, like in the past' (DS42).

Education: When describing climate solutions, many storytellers referenced the importance of education. In addition to environmental and sustainability education, access to education for girls was seen as an important response to climate change. 'Schools also now teach future generations how to look after the Earth and how to be mindful of everything they do and how it will affect the world' (DS46). 'Children are being educated about their upcoming responsibilities' (DS43).

Gratitude: A number of videos used natural imagery combined with narration expressing gratitude for the natural world, or specific natural places and activities. 'Please always remember to look up around you, embrace the fresh air, the smell of flowers, the snow in the winter, appreciate the clean beaches, the warm summers, and the trees around you. Because this is what you are fighting for' (DS25).

Green technology: Most of the digital stories also included some representation of green technology, primarily reflected through solar photovoltaic panels and wind turbines. Some videos also mentioned changes in transportation, including self-driving electric cars, and skyline shuttle pods controlled by apps. 'Now we have the ability to rely almost exclusively on renewable, clean energy, having invested enough in solar, wind, and other crucial sources of renewable energy' (DS06).

Individual/systems change: Many of the videos describe changes in patterns of behaviour, such as more plant-based diets, an end to fast fashion, and plastic-free shops. These individual actions were described as pathways to collective action, and also referenced as part of messages of encouragement. 'It all starts from the baby steps, the union of society, of everyone making their contribution' (DS41). 'So all the small actions that you're taking now, although they may seem insignificant at the moment, they'll play a major role in saving our planet' (DS46).

(In)equality: Many videos referenced inequality and social justice, reflecting an intersectional approach to social and environmental issues. Some stories painted a picture of increasing inequality, while other stories reflected decreases in inequality. 'In 2050 you will see the rich retreating into air-conditioned sanctums behind ever-higher walls and the poor left exposed to the ever harsher elements' (DS06). 'The unnecessary suffering has been avoided by no longer prioritising the rich and wealthy but focusing on making sure no one has too little, no one has too much' (DS42).

Government (in)action: Governments and the political process were mainly seen as obstacles to addressing the climate crisis. Some videos referenced the success of the Green Party and local government action, while others emphasised international cooperation. 'Politicians did nothing to stop climate change' (DS07). 'The fake promises governments made to act on climate change are now more than visible and the consequences can be seen globally' (DS33). 'The people you trusted to fix it failed' (DS34).

Loss and damage: Borrowing the language of the UNFCCC, most of the letters include references to and visual representations of loss and damage (hurricanes, flooding, forest fires, drought) and the resulting human suffering. 'I'm talking about droughts and floods that seem to be as often in the news, as COVID-19 is in your life right now' (DS37). 'The world is ablaze … The world has been lit with damage' (DS30).

Pollution: A number of specific stock footage clips of smokestacks and power plants were repeated, as well as smog-filled cityscapes, with references to poor air quality. The storytellers also frequently mentioned the face masks worn during COVID-19 as a regular part of life in the future due to poor air quality. There were also many references, in both visuals and scripts, to polluted waterways and plastic in the ocean. 'The unbearable air, reeking of industrial dirt and acting as glue by making my lungs sticky and ineffective' (DS37).

Protest: Another stock footage clip that appeared in most of the videos was of climate change protesters holding handmade signs. One story included a photo of the filmmaker at a school climate strike. 'After an incredible amount of protests and education, the world took it seriously' (DS45).

Refugees: A number of digital stories described climate migration as a means of exploring both global climate injustices and potential xenophobic reactions to refugees. 'At the same time, sitting in front of our borders and waiting to be let in are millions of refugees. Refugees, because of the unbearable and destructive climate in their home countries. Refugees, because the air is even hotter, drier and … Refugees, because we, society as a whole, didn't seem to care enough' (DS25).

Solastalgia: Many videos referenced destruction of the natural world and the sense of loss due to environmental change, which echoes the concept of solastalgia (Albrecht et al. 2007). Solastalgia implies a connection to place, and the feeling of loss when that place changes. 'I am still based in England, but it is not the same place that I once loved' (DS01). 'I don't mind where I live, but the world feels cold; nowhere feels like home' (DS12).

Table 2. Narratives identified during narrative analysis.

Counterfactual: '…Climate tipping points might have already been reached, causing catastrophic climate change and loss of biodiversity' (DS21). 'The world today is different from the one you once feared, the world you hated to imagine, and the world that filled you with hopelessness and frustration' (DS20).

(Dis)continuity: A number of the digital stories described a worsening near-term future followed by a more sustainable long-term future. A few of these transitions were described in terms of a world-changing event, a discontinuity between past and future, for example the 2034 London air pollution crisis (DS19). 'But then in 2035, it all changed. The biggest drought the world had ever seen swept the globe, populations halved, water supplies dwindled, but from the dust arose revolution' (DS05).

Enlightenment: A small number of stories describe a new consciousness of harmony between humans and the more-than-human world, reflecting a narrative arc towards enlightenment that is distinct from the utopian visions characterised by green politics and technology. 'There is a new culture of trust and empathetic understanding for one another. I look around and everyone seems healthier, happier, without these feelings of helplessness and chronic frustration towards climate change' (DS20).

Living-with: Echoing both Verlie (2022) and Haraway (2016), many of the stories reflect the ambiguity of a future where climate change resulted in substantial loss and suffering, but people also adapted individually and collectively. 'Things have changed. But we remain closer to a tipping point than feels comfortable. The precarious balance of two steps forward, one step back. It wasn't like in the films. There was never a point when you breathed a sigh of relief and decided that good had defeated evil. Too many natural wonders were lost due to our actions. Too many species whose fates have been restricted to the history books. Too many lives lost because of the greed and unbridled power of a few. It's hard to feel that we've won, as politicians like to claim. A ticking time bomb remains and there's lots of work to do' (DS44).

Memory: A number of the digital stories are memories from the future of specific places or experiences, with the storyteller including original footage and photographs of these local places of personal importance. 'Do me a favour and take a moment to think about all those long walks you go on with your family' (DS26). 'Do you remember this? It's your favourite hike … You remember, right? That taste of fresh air you valued so much, loved so much. Well, it's gone now' (DS02).

Revenge: A few of the stories present a narrative in which the natural world exacts revenge upon humanity. These narratives tend to present a misanthropic perspective. 'The planet I live on has begun vindicating itself of the virus that is humanity' (DS11). 'Mother Nature came for us, showing us the same mercy we have shown to her, as resolute in her cause as we were to protect money over life' (DS34). 'We exist and Earth is still our home but she is broken. We made her deadly and ruthless and relentless. She was angry. She knew no boundaries and showed no mercy' (DS24).

Utopia/Dystopia: Most of the digital stories presented either a fully positive or negative picture of the future, narratives of a world of improvement or decline. The dystopias were characterised by strong emotional language. The utopias focused on positive changes in energy, transportation, food, and architecture. One filmmaker presented a positive vision, but critiqued it. 'I am so ashamed of what we have let our world come to' (DS01). 'Climate change has ruined my life and it will ruin yours soon too' (DS30). 'I do feel like this green, state-of-the-art utopia is all a bit in your face though … Countries competing over who can create the most flamboyant green architecture, who can build a new it city, because sustainability has become the new it word' (DS18).

Through their writing and use of visuals, the storytellers presented positive, negative and nuanced visions of the future. There were narratives of destruction and decline, even collapse; dystopias illustrated with scenes of pollution, extreme weather, flooding, and fires; stories of loss and solastalgia. The students also created narratives of progress towards green utopias with visuals of green technology (wind turbines, solar panels) and discussions of new forms of transport, agriculture, and architecture. In some cases, the positive narratives reflect a new consciousness of harmony with nature, and an intersectional approach to social and environmental problems.

The participants also presented more complicated visions of the future using a variety of narrative strategies. Counterfactuals included both positive and negative futures, juxtaposing what could have happened between now and 2050 with what did happen, or contrasting present day hopes and fears with the lived experience of a future self. Stories of discontinuities (future events that mark a break between current and future trends) explain the transition between a worsening short term and a longer-term 'period of repair, of restoration, of rebuilding' (DS44). Stories also present a form of 'staying with the trouble' (Haraway 2016), in which young people are clear-eyed about loss and injustice, while articulating narratives of adaptation, resilience, and appreciation for the natural world.

Most of the digital stories included visual and script references to climate protests, a tangible form of climate action for students, especially with respect to the Fridays for Future school climate strikes. Individual climate actions were presented as a path to collective action and systems change, although young people didn't express much faith in politicians and governments taking necessary action unless under great pressure from civil society. Storytellers also presented their participation in climate action now and in the future as a means of ensuring they can live without regret, even as the climate crisis continues to unfold.

The filmmaker statements provided additional insights into the participants' motivations and creative process. One storyteller initially drafted a letter that went into great detail about a negative future, but, when it came time to create a video, he abandoned the original letter and went to film in a local natural area (Figure 2). The resulting digital story reflected the theme of solastalgia and narrative of memory, and the storytelling intent was captured in the filmmaker statement:

During my time filming, I thought showing the potential effects of global warming in an extremely personal and significant place would be really impactful. I hoped that demonstrating what could happen to my favourite place might help people watching the video understand what may happen to theirs and thus enact genuine change. I also wanted to capture the beauty of the environment and what is at risk if we don't work to change our impact on the environment (DS02).

The focus group conversations at the end of each workshop, after the participants had shared their final videos, were also an opportunity to further explore filmmaker intentions and the impact of the digital storytelling process. Students expressed satisfaction with creating their digital stories:

Student 1: I feel really proud. I was really excited to finally see the finished product … Even though I sound really weird, I'm quite happy with the way that it's coming across now.

Student 2: When you write something … you kind of visualise it in your head. And so I guess that kind of gave us the opportunity to do that. And seeing it come together, even though it was obviously not exactly the vision that I saw, it was close, and that was quite satisfying as well (W2).

In these focus groups, the storytellers also reflected on the intended impact of their videos on potential audiences. For some, the stories were an opportunity to see through their initial ideas: 'I kind of did it for myself. I didn't think about the viewer at all. I just thought about making the video' (W3). Students also spoke about storytelling as a means of cultivating empathy: 'I think I want the audience to feel empathy … and maybe do what they can to maybe prevent any further destruction' (W4).
A couple of filmmakers, both male, spoke about how they changed from a more positive vision when they initially wrote the letter from the future to a negative depiction in their final video because they thought that would be more entertaining.Students also identified the cultural influences on their depictions of the future, 'It reminded me of apocalyptic climate films like Snowpiercer, where the whole world is frozen, and other fiction films, which actually could be a reality a couple years down the line' (W2).When explaining the choice of a narrative of collapse and visual metaphors of weeds growing through cracks in the pavement (Figure 3), one participant commented, 'I feel like I've sort of mentally accepted the death of civilisation.It's gonna happen, but we can keep looking out for each other.' (W1). Students across the focus groups also spoke about the invitation to explore the future: 'We're attending loads of careers talks and things like that.A lot of "the future" is based on what you're going to do and your goals.But you never really kind of stop and think how different the world is going to be around you when you reach that point in your life' (W4).The storytellers also reflected on the strangeness of telling a story about the future in the present.For some, this involved the memories of the future captured through present day original footage.For others, this was based on using current stock footage to illustrate future climate catastrophes.'My piece has forest fires that happen in 2050, but it is literally just taken from now.And so it's … kind of weird saying, oh look, this is what happens in 2050.They are actually happening now' (W7). An exercise during the workshop involved participants choosing three words that related to how they felt about the future, which were related to the taxonomy of climate emotions described by Pihkala (2022).This happened during the initial session after exploring climate science and future scenarios, and then again during the focus group conversation at the end of the workshops.In one workshop, the main sentiment expressed changed from overwhelmed in the beginning to a combination of hopeful and ambitious at the end, which one of the participants noted in the focus group: 'I'm happy that these [hopeful and ambitious] are the two words that most people used, because it implies that there is hope, we are looking for solutions, and we are looking forward to making progress' (W2). In one of the focus groups, students discussed how their emotional responses to a future shaped by our collective response to the climate crisis changed over the course of the digital storytelling workshop. Student A: I was worried and frustrated before.I still am, but I think I'm more ambitious now and kind of optimistic that we can make a difference. Student B: I agree, it's definitely made me feel more optimistic.And something I said in my letter was about me doing actions now which will mean I don't have to live with regret in the future.And that … makes my actions now feel meaningful and worthy.While the digital stories presented positive, negative and mixed socio-climatic imaginaries, including the range of themes and narratives outlined above, the reflections of the storytellers more consistently indicated a shift from a general sense of dread about the future, to a combination of worry, acceptance and curiosity.As the discussion above illustrates, engaging with the futurecreatively, collectivelyplayed a role in this shift in perspective. 
Discussion

The primary goal of this research was to explore how young people imagine futures shaped by climate change and our collective response to the climate crisis using the research method of digital storytelling.

Despite time constraints, limited tools, and no budgets, the young people proved sophisticated storytellers, effectively combining voice, music, photographs and video, and utilising editing techniques such as pacing, transitions, and layering. The digital stories reflect a number of influences, from classroom learning and personal environmental activism, to cultural references and speculative fiction, for example referencing space travel, time travel, future memories, and apocalypse. Visual metaphors (sunrise and sunset, clouds moving across the sky, birds in flight) were left to viewers to interpret, and simultaneously communicated contemplation about the future, changes in atmospheric chemistry, and hope.

The speculative digital storytelling process and resulting stories communicate a wide variety of imagined climate futures, as illustrated by the themes and narratives identified above. This study builds on related research exploring future scenarios with young people, which found that students held mixed emotions about the future, reporting high levels of both hope and anxiety (Finnegan 2022). The digital stories add a richness to this picture, especially through the words and voices of the research participants, while similarly resisting any shared, homogenous vision of the future. Taken as a whole, the letters from the future capture the uncertainties, worries and possibilities young people face, especially those actively engaged in climate activism.

The negative visions expressed in many of the digital stories paint a bleak picture of the world in 2050: stories of dystopia and collapse characterised by pollution, loss and damage, and refugees. While in some ways these visions are specific to the climate crisis, fears of the future being worse than the present are not new. A study in the 1980s in which students wrote about a normal day in their life in the year 2000 reported expectations of 'violence, unemployment, high technology, boredom, inflation, poverty, pollution, material prosperity' (Brown 1984). A similar Australian study in the 1990s that involved young people envisioning the year 2010 found, 'most young people see the future mainly in terms of a continuation or worsening of today's global and national problems and difficulties' (Eckersley 1999, 77). Eckersley acknowledged these visions had even deeper roots: 'Apocalyptic myths about "the end of the world", which have always been part of human mythology, including most major religions (this relates especially to fears about global catastrophe, such as a nuclear holocaust)' (1999, 84). Alternatively, Hicks (1996) connected fears about the future of young people in the UK to contemporary socio-political concerns. The digital stories in this study reflect both archetypes in storytelling and climate change as a contemporary issue that is shaping perceptions of the future, although that is perhaps unsurprising given the research design and participants.
The positive solutions (presented visually and in the video narration) emphasised two themes: green technology, like renewable energy, and individual change, for example more sustainable consumer choices. Researchers have critiqued climate policies that overly rely on techno-solutionism, and the UK Climate Change Committee's Net Zero report assessed that 62% of emissions reductions would need to come from societal/behavioural changes and measures that combine societal/behavioural changes and low carbon technologies (Committee on Climate Change 2019; Nelson and Allwood 2021). Social scientists have also critiqued environmental policies and discourses that focus on behaviour change rather than systemic change related to power, culture, and social practices (Shove 2010). Climate educators interested in the practice of speculative digital storytelling may need to actively introduce concepts related to sociotechnical transitions and systems (change) thinking into the digital storytelling workshops to address these critiques.

The second concern of this research was the impact on young people of creatively engaging with the future through speculative digital storytelling. As an arts-based form of participatory action research, digital storytelling methods are acknowledged to potentially have an impact on the research participants. The digital storytelling workshops were also a form of environmental and sustainability education that creatively engaged with climate change, with potential outcomes in terms of knowledge, attitudes and behaviour. This impact was explored in the focus group conversations at the end of each workshop, where students expressed satisfaction and pride in their work. Specific skills and competencies related to digital communication were also demonstrated by the students through their multimedia productions.

Through an exercise in which the participants identified the emotions they feel with respect to the future, the digital storytelling process appears to have supported a shift from more negative to more positive emotions. While the participants were still worried about the future, they also expressed an increase in acceptance, curiosity, and hope. As Hogan and Pink (2012) noted, interiority is not static and crystallised when extracted as data, and creative research methods may provide deeper access and also influence inner states. Changes in young people's emotional responses to the future also relate to Ojala's model of climate hope that includes positive reappraisal: 'Positive reappraisal is about perceiving the threat but being able to reverse one's perspective and also activate positive emotions that can help one to face the difficult situation and deal with worry constructively' (2012, 636). Ojala's model of hope also includes trust in both self and others, and the shared experience of the digital storytelling workshop may contribute to the collective nature of climate hope. The digital storytelling process also directly responds to calls by Rousell and Cutter-Mackenzie-Knowles for more 'interdisciplinary, affect-driven and experiential approaches to climate change education' (2020, 6). The shift from negative to positive emotions is a valuable outcome in response to widespread climate anxiety among young people (Hickman et al. 2021). In addition, recent research has found a strong relationship between the emotional/cognitive concept of climate hope and action competence, which is the ability to constructively engage with future environmental challenges (Finnegan 2022).
In their handbook on digital storytelling, Lambert and Hessler reference a number of applications of story work, including environmental activism and scenario planning (2020). This research is the first reported application of speculative digital storytelling to environmental issues and could be used as a model for future digital storytelling research and practice. The personal, reflective, creative approach of speculative digital storytelling, especially using the prompt of a letter from the future, provided an opportunity to productively engage with the climate crisis, develop futures literacies, and explore the concept of being a good ancestor. Future research could use speculative digital storytelling on other environmental or social issues, as well as work with different age groups. In addition, the students primarily used the stock footage and music provided in WeVideo, and productions may have been very different if original filming and photography were required. While this model of digital storytelling workshops works best with small groups and involves a substantial time commitment, and thus is difficult to scale, there is a wide range of creative activities that can help students develop futures literacies (Miller 2018).

Closure
Use your voice to be as loud as you can; our voices are louder together. I'll leave you there, but keep your eyes open. Take care.

Young people are not experts in future scenarios, policies or technologies. Nor are they fortune tellers. However, younger generations will experience more of the future than older generations, with opportunities to individually and collectively shape it over the course of their lifetimes. The socio-climatic imaginaries young people articulate through speculative digital storytelling reflect their hopes and fears, their sense of both the possible and the inevitable. The process of creatively engaging with the future not only extends their time horizons and futures literacies, but it also provides opportunities for facing climate anxiety, positive reappraisal, and constructively engaging with the climate crisis.

For educators, and others looking to help young people develop resilience and agency, the digital storytelling process can be used as a form of creative, participatory climate education, exploring the causes and impacts of climate change across multiple disciplines, while centring the emotional response of young people to the climate crisis. The results of the digital storytelling process are also powerful tools for further education and engagement, so that work with a small group of students can extend throughout a larger school community.

Students in the UK and Ireland, especially those from a privileged background, are largely sheltered from the already unfolding impacts of climate change on frontline communities. However, exploring their worries about the future in a supportive environment can develop empathy for and solidarity with others, and appreciation for the more-than-human world. Creatively engaging with preferable futures allows young people to anchor their hopes on existing green technology, …

Figure 1. Still frame of the digital story DS08.
Figure 2. Still frame of the digital story DS02.

Figure 3. Still frame of the digital story DS39.

Student C: I feel like I'm at peace with what I am doing to be on the right side of history … I have kind of accepted, especially with thinking about it a lot more over the last three weeks, what my role is within everything. And, obviously, I'll never truly accept it, but I'm definitely a lot more accepting of the facts and reality now than I was maybe a year ago when I was just filled with anger and really frustrated about it. Whereas now I think, 'why focus on those emotions when you can do other things and use your energy better'.

Student D: I'd still say that I'm worried about it. Of course, I guess most people probably are. But I also feel like if I tried to do everything that I, as one person, can do, it might spark something in people around me if they see me doing something (W3).
v3-fos-license
2021-07-25T06:17:03.352Z
2021-07-01T00:00:00.000
236211669
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2077-0383/10/14/3064/pdf", "pdf_hash": "0bcf22bb3390c0ce7ef322f3fe95236b9bd1af67", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41224", "s2fieldsofstudy": [ "Medicine" ], "sha1": "17353ecb3741206c983aa9450df2552e75c17174", "year": 2021 }
pes2o/s2orc
Modified Technique for Wirsung-Pancreatogastric Anastomosis after Pancreatoduodenectomy: A Single Center Experience and Systematic Review of the Literature

Background: The mortality rate following pancreaticoduodenectomy (PD) has been decreasing over the past few years; nonetheless, the morbidity rate remains elevated. The most common complications after PD are post-operative pancreatic fistula (POPF) and delayed gastric emptying (DGE) syndrome. The issue as to which is the best reconstruction method for the treatment of the pancreatic remnant after PD is still a matter of debate. The aim of this study was to retrospectively analyze the morbidity rate in 100 consecutive PD reconstructed with Wirsung-Pancreato-Gastro-Anastomosis (WPGA), performed by a single surgeon applying a personal modification of the pancreatic reconstruction technique.

Methods: During an 8-year period (May 2012 to March 2020), 100 consecutive patients underwent PD reconstructed with WPGA. The series included 57 males and 43 females (M/F 1.32), with a mean age of 68 (range 41-86) years. The 90-day morbidity and mortality were retrospectively analyzed. Additionally, a systematic review was conducted, comparing our technique with the existing literature on the topic.

Results: We observed eight cases of clinically relevant POPF (8%), three cases of "primary" DGE (3%) and four patients suffering "secondary" DGE. The surgical morbidity and mortality rates were 26% and 6%, respectively. The median hospital stay was 13.6 days. The systematic review of the literature confirmed the originality of our modified technique for Wirsung-Pancreato-Gastro-Anastomosis.

Conclusions: Our modified double-layer WPGA is associated with a very low incidence of POPF and DGE. Also, the technique avoids the risk of acute hemorrhage of the pancreatic parenchyma.

In current state-of-the-art pancreatic surgery, the primary goal is to reduce major complications (i.e., pancreatic fistula and delayed gastric emptying), allowing both early recovery and early discharge of the patient. Standard PGA provides for invagination of the pancreatic stump in the gastric cavity, thus exposing the pancreatic remnant to hemorrhagic complications. The aim of this study was to retrospectively analyze the morbidity rate in 100 consecutive PD reconstructed with WPGA, performed by a single surgeon applying a personal modification of the pancreatic reconstruction technique. In addition, a systematic review of the literature is reported, comparing our technique with the existing literature on the topic.

Materials and Methods

During an 8-year period (May 2012 to March 2020) at the Department of General Surgery "Ospedaliera" at the Polyclinic Hospital in Bari (Italy), a single surgeon (L.V.) performed 212 pancreatic resections, 124 of which were PD. We retrospectively analyzed 100 consecutive PD reconstructed with WPGA over this period of time. The study was designed and conducted respecting the STROBE guidelines for observational studies. The indication for surgery was upfront resectable primary or secondary malignancy (92 patients), 6 cases of Intraductal Papillary Mucinous Neoplasia (IPMN), and 2 of chronic "mass-forming" pancreatitis. The preoperative assessment included CT scan, neoplastic markers (CA 19.9 and CEA), MRI in selected patients, and evaluation of the surgical risk according to the ASA classification (Table 1). No ASA IV patient was excluded if judged fit enough to undergo major abdominal surgery by the anesthesiologists.
Preoperative biliary drainage was not routinely placed in resectable patients if bilirubin was ≤10 mg/dl and/or jaundice had lasted for less than one week.

The primary endpoint was to analyze the incidence of POPF and DGE. According to the 2016 ISGPF classification, any measurable volume of drained fluid on or after POD 3, with an amylase level more than 3 times the upper normal limit (3 × ULN), was defined as POPF only in cases of clinical relevance. Instead, an increased amylase value in the drainage fluid without clinical consequences was defined as a biochemical leak (BL) [29]. According to the ISGPS definition, delayed gastric emptying (DGE) was diagnosed in cases needing insertion of a nasogastric tube (NGT) after POD 3 or intolerance to solid oral intake after POD 7 [2]. Secondary endpoints included overall and surgical morbidity and mortality within 90 days. We also analyzed median operative time, intraoperative blood transfusions, length of hospital stay, and reoperation and readmission rates.
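To make the endpoint definitions above concrete, the sketch below encodes them as simple decision rules in Python. It is only an illustrative reading of the quoted ISGPF/ISGPS criteria, not part of the study's analysis, and the parameter names are invented for the example.

```python
# Illustrative encoding of the endpoint definitions quoted above
# (hypothetical helpers, not the authors' analysis code).

def classify_pancreatic_leak(drain_amylase, upper_normal_limit, pod, clinically_relevant):
    """Distinguish POPF from a biochemical leak (BL), following the 2016 ISGPF wording."""
    elevated = drain_amylase > 3 * upper_normal_limit  # amylase > 3 x ULN
    if pod >= 3 and elevated:
        return "POPF" if clinically_relevant else "biochemical leak (BL)"
    return "no leak"

def has_dge(ngt_needed_after_pod3, tolerates_solids_by_pod7):
    """Delayed gastric emptying per the ISGPS definition used in the study."""
    return ngt_needed_after_pod3 or not tolerates_solids_by_pod7

# Examples with made-up values:
print(classify_pancreatic_leak(drain_amylase=900, upper_normal_limit=100,
                               pod=4, clinically_relevant=False))        # biochemical leak (BL)
print(has_dge(ngt_needed_after_pod3=False, tolerates_solids_by_pod7=True))  # False
```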
Finally, the gastric seromuscular layer was sutured to the posterior pancreatic surface. A double suture was thus performed, anastomosing the main pancreatic duct to the gastric mucosa and anchoring the pancreas to the gastric wall, protected by the gastric mucosa itself. In this way, damage to the pancreatic parenchyma by gastric acid secretions was avoided and intraluminal hemorrhage prevented. The catheter was secured with a Vicryl Rapid stitch to the anterior gastric seromuscular layer. It was subsequently exteriorized through the anterior abdominal wall under the left costal arch and fixed with an external stitch at the end of the operation (Figure 1C). Two drainage tubes were always positioned. Postoperative Care After surgery, the patients usually returned to the surgical ward. The nasogastric tube was removed on POD 1 and the patient started oral intake on POD 3. The external stent was closed as soon as possible (when the output was minimal) and removed after POD 10, during recovery or after discharge. Fourthly, studies on WPGA as the reconstruction technique after PD were selected. Studies on different procedures, such as PGA, were excluded. Lastly, the references of the included studies were searched to find additional relevant papers. The last search was performed on 14 July 2020. No language restriction was adopted. Four investigators (CB, FA, MC, SF) independently searched for papers, screened titles and abstracts of the retrieved articles, reviewed the full-texts, and selected articles for inclusion. Data Extraction The following information was extracted independently by two investigators (FA, SF) in a piloted form: (1) general information on the study (author, year of publication, country, study type, follow-up period, inclusion criteria, number of patients); (2) details of the surgical technique; (3) complications, including the number of patients with POPF (B-C), DGE, and PPH; (4) global morbidity. For each selected article, the main paper and Supplementary data were searched; if data were missing, authors were contacted via email. Data were cross-checked and any discrepancy was discussed. Results The series included 57 (57%) males and 43 (43%) females (M/F 1.32), the mean age was 68 (range 41-86) years, and the median BMI was 25 (range 18.5-39). Some of the patients had previously been treated in outlying hospitals: 19 patients underwent biliary drainage as a temporary solution to the jaundice (1 percutaneous transhepatic biliary drainage, 17 biliary endoprostheses and 1 sphincterotomy without endoprosthesis positioning because of technical problems, which caused acute iatrogenic pancreatitis). One patient was referred to our center after "palliative" GEA had been performed because he was considered unresectable. A Kausch-Whipple PD was performed in 55 cases, while 45 patients underwent a PPPD, according to the indications for PD and extension of the tumor. Standard lymphadenectomy was always associated, and involved the removal of lymphatic groups 5, 6, 8a, 12a, b, 13, 14v and 17, according to the Japanese classification [30].
Extended lymphadenectomy (removal of additional lymphatic stations such as 8p, 9, 12p, 14a, 16a2, 16b1) was necessary in 11 cases (11%) to allow complete histological staging because of intraoperative evidence of suspicious lymph nodes. In 16 patients (16%), vascular resection was necessary because of portal vein or porto-mesenteric carrefour infiltration, and 8 patients (8%) underwent "extended" procedures with "en bloc" resection of surrounding organs. Operative data are reported in Table 2: the pancreatic characteristics were classified according to the ISGPS as Type A (not-soft pancreatic texture, duct size > 3 mm), Type B (not-soft, ≤3 mm), Type C (soft pancreatic texture, duct size > 3 mm) or Type D (soft, ≤3 mm) [31]. Additionally, intraoperative patients' characteristics were classified on the basis of the Fistula Risk Score, depending on pancreatic texture, pathology, Wirsung diameter and intraoperative blood loss (see Table 3) [32]. Eight patients (8%) developed a clinically relevant pancreatic fistula (5 grade B and 3 grade C) and 8 patients (8%) suffered a biochemical leak (Table 4). Grade B pancreatic fistulas were handled conservatively with enzymatic inhibitors and by maintaining the drainage tubes until normalization of the amylase levels. One patient with a grade C fistula was reoperated and two patients died of POPF complications. Three patients (3%) presented "primary" DGE, while in four cases (4%) we observed "secondary" DGE, related to abdominal collections or POPF. All cases were managed conservatively with prokinetics and/or repositioning of the NGT for a few days. We observed only one case of prolonged DGE, which was solved with dietary re-education. We observed 32 complications in 26 patients (26% morbidity, see Table 4). Seven patients (7%) developed "isolated" abdominal collections diagnosed by CT scan and were treated conservatively with antibiotics (in case of infection signs) and/or delayed drainage tube removal. The amylase dosage on the drained fluid was negative for all of these patients. Eight hemorrhagic complications were observed (8%): none of the patients experienced early-onset hemorrhage (in the first 24 h), whereas all cases were classified as late-onset PPH since they occurred more than 24 h postoperatively [3]. Three patients developed necrotic hemorrhagic pancreatitis (3%); in one, this was associated with grade C POPF. Four cases of hemoperitoneum (4%) occurred. Two patients presented bleeding associated with grade B POPF: in one, this terminated spontaneously after treatment with percutaneous drainage, while the other was treated with embolization. Two patients were reoperated to achieve bleeding control. One patient (1%) suffered gastrointestinal bleeding due to gastric ulcer and was treated with high-dose pump inhibitors and repositioning of the NGT. Additionally, one case of biliary fistula (1%) occurred, but no cases of enteric fistula. The surgical mortality was 6% (6 patients): three patients died of necrotic hemorrhagic pancreatitis, two patients due to grade C POPF complications and one patient, who underwent segmental venous resection, died of fatal vascular thrombosis. The mean operative time was 224 (range 140-420) minutes and no intraoperative complications occurred.
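To make the grading conventions used in these results easier to follow, the sketch below encodes the ISGPS pancreas-type classes (texture and duct size) and the ISGPF distinction between a clinically relevant POPF and a biochemical leak exactly as they are quoted in the text above. It is an illustrative aid only, not part of the study's methodology, and all function and variable names are our own assumptions.

```python
# Illustrative sketch: ISGPS pancreas-type classes (A-D) and the ISGPF rule quoted in
# the Methods (drain amylase >= 3x the upper normal limit on or after POD 3;
# clinically relevant -> POPF, otherwise biochemical leak). Names are hypothetical.

def pancreas_type(soft_texture: bool, duct_size_mm: float) -> str:
    """ISGPS classification as described in the text (Table 2)."""
    if not soft_texture:
        return "A" if duct_size_mm > 3 else "B"
    return "C" if duct_size_mm > 3 else "D"

def drain_fluid_category(pod: int, drain_amylase: float, upper_normal_limit: float,
                         clinically_relevant: bool) -> str:
    """ISGPF 2016 definition as summarised in the Methods."""
    if pod >= 3 and drain_amylase >= 3 * upper_normal_limit:
        return "POPF (grade B/C)" if clinically_relevant else "biochemical leak"
    return "no fistula"

if __name__ == "__main__":
    print(pancreas_type(soft_texture=True, duct_size_mm=2.5))            # -> "D"
    print(drain_fluid_category(pod=4, drain_amylase=900.0,
                               upper_normal_limit=100.0,
                               clinically_relevant=False))               # -> "biochemical leak"
```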
Nine patients (9%) received intraoperative blood transfusions. The reintervention rate was 3%: one patient was operated because of necrotic hemorrhagic pancreatitis complications and two patients because of hemoperitoneum. The median hospital stay was 13.6 days (range 7-55). Two patients were readmitted, one for hemoperitoneum caused by a pseudo-aneurysm of the hepatic artery and one for severe wound infection with wall dehiscence. Flowchart In total, 608 papers were found on PubMed and one additional article on a double-layer PGA was retrieved from the references of the examined studies [33]. Papers were analyzed for title and abstract; 503 records were excluded (outside the scope of the review (e.g., bariatric surgery, complications of PD, gastrectomy, gastroenteroanastomosis, surgeries other than PD, pancreas transplantation, pancreatic pseudocysts, surgical palliation), use of a technique other than WPGA, PGA without details on the surgical technique, PJA, POPF, endoscopic procedures, or animal studies). The remaining 106 papers were retrieved in full-text and 99 records were excluded because of the use of a technique other than WPGA (n = 34), no data of interest (n = 22), PGA without details on the surgical technique (n = 22), PJA (n = 19), overlapping data (n = 1), or patients undergoing surgical palliation (n = 1). Finally, seven articles were included in the systematic review, since they described a Wirsung-Pancreato-Gastro-Anastomosis (Figure 2) [14,16,17,19,33,34].
Qualitative Analysis The characteristics and technical details of the included studies are summarized in Table 5 [14,16-19,34]. The articles were published between 1984 and 2017 and had sample sizes ranging from 5 to 205 patients. Two studies were conducted in Japan, one in China, two in Egypt, one in Spain, and one in the United States of America. Participants were adult subjects who underwent pancreaticoduodenectomy (PD or PPPD) and reconstruction with "double-layer" WPGA, performed without anterior gastrostomy. As regards the surgical technique, an internal stent was reported in five studies [14,16,17,19,34], while no stent was adopted by Telford et al. and El Nakeeb et al. [18,33]. Concerning surgical outcomes, rates of grade B-C POPF ranged from 0.7% to 15.5%, DGE from 4% to 21.6%, and global morbidity from 19% to 37.8%. PPH was reported in only three studies (see Table 5) [16,17,33]. The duration of follow-up was reported in only one study [33]. Discussion Pancreatoduodenectomy is a challenging surgical procedure, even for skilled surgeons. Furthermore, an expert team of surgeons is necessary to adequately manage the postoperative course and complications, which may occur in a high percentage of patients (30-50%) [1][2][3]. Pancreatic reconstruction after PD can be performed by means of several techniques, but choosing between PGA and PJA is the main issue. In 1988, Icard [36] demonstrated the anatomical, technical and physiological advantages of PGA, a technique that had been described for the first time in 1946 by Waugh and Clagett [37]. The anatomical relationship between the stomach and pancreas creates perfect conditions for a tension-free anastomosis. Moreover, the good thickness and blood supply of gastric mucosa allows good vascularization and robustness of the suture, and thus better healing of the anastomosis [15,36,38]. The vertical position of the stomach prevents stagnation of gastric secretions, and consequently, tension on the anastomosis [14,27]. The distance between the biliary and pancreatic sutures reduces the risk of damage of the biliary anastomosis by pancreatic enzymes in cases of POPF [36,39]. Finally, the acid gastric environment and absence of enterokinases inhibit complete enzymatic activation and consequent damage of the anastomosis. The most significant disadvantage of PGA is the increased risk of pancreatic hemorrhage, due to erosion of the pancreatic remnant by gastric secretions [8][9][10]. In order to prevent this life-threatening complication, the technique was modified by some authors by adding a Wirsung-mucosa anastomosis (from PGA to WPGA) [40]. The technique presented in this series is a double pancreatic suture. Since the gastric mucosa is interrupted only where the Wirsung-mucosa anastomosis is performed, the pancreatic stump is protected by the gastric mucosa against gastric acid secretions. In theory, the limit of this technique is the impossibility of performing it in cases of a markedly friable pancreatic parenchyma. Notably, when Wirsung cannulation cannot be achieved, WPGA on an external stent cannot be performed. Nonetheless, we succeeded in performing this anastomosis even in cases of a soft pancreatic parenchyma, after Wirsung duct cannulation. The catheter acted as a guide while performing the anastomosis, avoiding pancreatic duct occlusion by the sutures. Importantly, it works as internal perianastomotic drainage, so it allows early removal of the NGT. Pancreatogastric sutures were applied to anchor the pancreas to the stomach and reduce tension on the anastomosis. Whichever technique is used for pancreatic reconstruction, POPF and DGE are the most common complications after PD, which mostly affect the postoperative course and the length of stay, increasing hospital costs [41].
Several meta-analyses of randomized controlled trials (RCTs) have focused on pancreatic anastomosis, although the issue as to which is the best reconstruction method for the treatment of the pancreatic remnant after PD is still a matter of debate [10,24,[26][27][28]38,42]. Hallet et al. [38] showed that PGA is associated with a lower risk of fistula as compared to PJA, in both low-and high-risk patients (the risk reduction for POPF is 4% and 10%, respectively). Nonetheless, Guerrini et al. [10] demonstrated that PGA is associated with low fistula rates, but without reducing the overall rate of complications. Nonetheless, a recent meta-analysis on 15 RCT concluded that duct-to-mucosa pancreaticogastrostomy is associated with lower fistula rate, besides DGE syndrome, intrabdominal abscess and morbidity rate [28]. In our series, we observed 8 patients (8%) with a clinically relevant pancreatic fistula (grade B-C). In the literature, a grade B-C pancreatic fistula rate of 11 to 28.3% is reported [43][44][45][46][47]. In our series, one patient with POPF needed reoperation because of accidental pancreatic stent dislocation, and ensuing hemorrhagic shock and gastric perforation. Another patient, discharged on POD 15, was readmitted one week later because of hemorrhage due to a pseudo-aneurysm of the hepatic artery, treated with radiologic embolization. Two patients who underwent conservative treatment died because of POPF complications. DGE syndrome is another important issue after pancreatic surgery, occurring in 19 to 61% of patients [2,48]; it often delays recovery and discharge of the patient. The ISGPS (International Study Group of Pancreatic Surgery) classified "primary DGE" as cases not related to abdominal complications and "secondary DGE" as cases associated with complications like fistula or abdominal collections [2,21]. Prevention of "secondary DGE" can be achieved by avoiding abdominal complications; instead, "primary DGE" could be related to the surgical technique itself. Klaiber et al. [49] showed, in a metaanalysis of 992 patients, that pylorus-resecting PD is not superior to pylorus-preserving PD for reducing DGE. This is in contrast with a Cochrane Review [50] that states that the Whipple operation significantly reduces the DGE incidence as compared with PPPD. A recent ISGPS review of 178 studies revealed average rates of DGE and clinically relevant DGE of 27.7% (range: 0-100%; median: 18.7%) and 14.3% (range: 1.8-58.2%; median: 13.6%), respectively [51]. It is remarkable that in our series only 7 patients (7%) developed DGE. These data include 3 patients (3%) with "primary DGE" and 4 patients (4%) with "secondary DGE". In our series, secondary DGE was always related to POPF, and in some cases associated with abdominal collections. Despite these encouraging data, there is no evidence in the literature that PGA could prevent DGE syndrome [52]. Seven patients presented "isolated" abdominal collections (7%), not related to other abdominal complications (i.e., POPF). The diagnosis was made by CT scan, performed in four cases because of fever (4% abdominal abscesses) and in 3 cases for anemia (3% blood collections). All cases were handled conservatively, with no need for reoperation. We observed 8 hemorrhagic complications (8%), all of them classified as late-onset, according to the ISGPS definition [3]. 
Four patients (4%) presented hemoperitoneum and two of them were reoperated: one on POD 2 to achieve bleeding control, another one on POD 11 because of hemorrhagic shock and gastric perforation, secondary to pancreatic catheter dislocation. Two patients with grade B POPF presented late-onset hemorrhage (>24 h postoperatively) [3]: one patient was managed conservatively, and treated with percutaneous drainage since no evidence of active bleeding was found at angio-CT scan; another patient was treated with radiologic embolization of the hepatic artery pseudoaneurysm. None of the patients experienced early-onset hemorrhage, defined as PPH raised within the first 24 h postoperatively [3]. Specifically, no case of acute pancreatic stump bleeding was observed. Only one patient suffered a biliary leak, probably because in our reconstruction technique the distance between the pancreatic and biliary anastomoses protects the latter from pancreatic complications [36,39]. Six patients (6%) died of surgical complications (Table 4). One patient, who underwent segmental portal resection, developed post-operative venous thrombosis (PVT) and consequently died of acute hepatic failure. Three patients suffered acute necrotic hemorrhagic pancreatitis and died of the complications of this life-threatening condition. These data are remarkable, considering that post-operative pancreatitis could be considered a pancreatic stump complication, even with no evidence of a pancreatic fistula. Besides, hemorrhagic pancreatitis of the pancreatic stump is considered as a late-onset hemorrhagic complication in the ISGPS definition, associated with high mortality rate [3]. In fact, the ISGPS reports that post-operative pancreatic hemorrhage (PPH) accounts for 11 to 38% of overall mortality after pancreatic surgery [3]. Yekebas et al. [52] reported a prevalence of 5.7% of PPH in a series of more than 1500 patients with PJA reconstructions. Analysis of the cases of delayed PPH (after POD 6) revealed a mortality rate of 47%, while no patients with early PPH (up to POD 5) died. They concluded that the worst prognosis was associated with late-onset PPH, often related to POPF, causing erosions, pseudoaneurysms and other vascular irregularities, and consequently, life-threatening bleeding. Roulin et al. [53] reviewed the incidence of delayed PPH (more than 24 h after pancreatic surgery) in 15 articles including 7400 patients. They found an overall incidence of 1.6 to 12.3% among different studies, and evinced that half of the cases were related to pancreatic leak. An overall mortality rate of 35% was reported by Roulin et al., which was caused by hemorrhagic or septic shock, disseminated intravascular coagulation and multiple organ failure [53]. A 43% mortality rate (3 of 7 patients) was observed in our series among patients with late-onset PPH. On the other hand, no cases of early-onset PPH occurred, therefore our modified technique eliminated the risk of acute pancreatogastric hemorrhage, which is the "Achille's heel" of PGA [53]. Nonetheless, necrotic-hemorrhagic pancreatitis of the pancreatic stump remains a major clinical problem. Erosion of peripancreatic vessels is a recognized cause of delayed PPH [3,52,53]. This mechanism could explain the cases of acute necrotic-hemorrhagic pancreatitis we observed in our series, despite the absence of biochemical or clinically evident POPF. 
In fact, a "silent" disruption of the suture between the Wirsung duct and gastric mucosa could create the conditions for pancreatic damage due to gastric secretions. Comparing our results with the literature, whatever the technique used for pancreatic anastomosis, pancreatic surgery has not yet reached the goal of eliminating the risk of hemorrhagic complications after PD, and PPH remains the main cause of mortality (up to 50%) after pancreatic resection [52,54]. In any case, when comparing PGA with PJA, pancreatogastric anastomosis is associated with a higher incidence of PPH [22]. The reintervention rate was 3% (3 patients). One patient underwent abdominal toilet for necrotic-hemorrhagic pancreatitis complications but died despite reoperation. Two patients suffered abdominal bleeding and were reoperated for hemorrhagic shock. One of them presented with pancreatic stent dislocation two weeks after surgery. The median hospital stay was 13.6 (7-55) days. Non-complicated patients were discharged after 7-10 days, thanks to postoperative management based on ERAS (enhanced recovery after surgery) protocols, which are strongly recommended after pancreatic surgery [15,16]. Furthermore, we observed a low rate of major complications (Clavien-Dindo grade III-IV, see Table 6), which usually affect the patient outcome, length of stay and hospital costs [41]. The limits of our study are represented by its retrospective nature and the small number of patients and complications observed. Also, this is the personal modification of a single experienced surgeon, who started with pancreaticoduodenectomy in 1998. Therefore, we cannot state that our technique for WPGA is widely reproducible or could reduce the rate of complications if adopted by other groups of surgeons. Comparison with Available Literature on WPGA The characteristics and technical details of the Wirsung-Pancreato-Gastro-Anastomoses included in the review are described in Table 5, according to the ISGS classification and compared with our technique [35]. Telford and Mason [18] describe a WPGA without stenting. The pancreatic stent ensures the patency of the Wirsung duct when anastomosis is performed, thus avoiding its occlusion by the sutures. This is very important, especially when the duct to be sutured is small (<3 mm). Takao and Shinchi et al. [19,34] reported their experience of the same surgical division. Compared with our technique, the main differences are the transfixing suture and the internal stent. In our experience, the use of the external pancreatic stent has the advantage of permitting early removal of NGT and resumption of oral intake. In fact, it allows external drainage of pancreatic and also gastric secretions, thanks to its gastric holes. The techniques reported by the remaining authors (four of seven in our review) involve different pancreatic sutures (i.e., running sutures instead of single stitches)-see Table 5. Furthermore, none of them used external stents [14,16,17,33]. More importantly, we do not section the gastric mucosa, as all of the authors described. The hole for the duct-to-mucosa anastomosis is created when the "armed" stent is passed through the posterior gastric wall, where the mucosal integrity has to be preserved at the time of seromuscular section. This technical detail avoids redundancy of gastric mucosa and reduces the risk of discontinuity between the gastric mucosa and the Wirsung duct. 
In the reviewed papers, we found that there was often a lack of information about the study inclusion and exclusion criteria (i.e., ASA score), Wirsung diameter (median, range), pancreatic parenchyma or other characteristics that could be related to surgical results. Furthermore, Shinchi et al. [19] did not use the ISGPS definition for POPF. Finally, the follow-up period was 90 days in our series, but was not specified in most of the reviewed articles, except for El Nakeeb et al. [33]. The difference in defining postoperative complications, as well as the lack of information about patient characteristics, exclusion criteria and follow-up period, could influence data collection, and therefore, a correct interpretation of postoperative outcomes. Therefore, we avoided comparing our results to other WPGA found in the literature. On the other hand, the aim of this review was to confirm the originality of our modified double-layer Wirsung-Pancreato-Gastro-Anastomosis. Conclusions Nowadays, pancreatic surgery offers satisfactory results in terms of mortality, but morbidity after PD remains elevated, whatever the technique used for treatment of the pancreatic stump. Despite all the guidelines and recommendations, currently there is no general consensus in the literature about the "gold standard" reconstruction technique to reduce the rate of POPF, DGE and surgical morbidity after PD. In this sense, the best technique is the one that reduces the risk of complications, especially pancreatic fistula and abdominal collections, and also the cases of "secondary DGE". Our modified double-layer WPGA, in which the gastric mucosa reduces the risk of pancreatic stump damage by the gastric acid secretions, is associated with a very low incidence of pancreatic fistula and DGE. Furthermore, this technique reduces the risk of acute hemorrhage of the pancreatic parenchyma as compared with PGA. Nonetheless, the goal to avoid life-threatening pancreatic stump complications, such as necrotic hemorrhagic pancreatitis, which is probably a consequence of "sub-acute" damage, has yet to be achieved. Our results are limited by the retrospective nature of the study, besides the small number of complications observed. Further studies are needed to confirm the safety of our modified technique. Data Availability Statement: The data presented in this study are available on request to the corresponding author.
v3-fos-license
2023-06-02T15:22:57.102Z
2023-05-31T00:00:00.000
259015117
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3390/m1654", "pdf_hash": "dc18d75aef2fed9f78ac400c6ea92f902500e238", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41225", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "95712d5a6c64b82c03cba6cd11239d25544371f5", "year": 2023 }
pes2o/s2orc
Synthesis and Spectroscopic Characterization of Furan-2-Carbaldehyde-d Here, we present a protocol for the one-step synthesis of the title compound in quantitative yield using adapted Vilsmeier conditions. The product was characterized by 1H-,2H-,13C-NMR-, as well as IR and Raman spectroscopy. Spectral data are given in detail. Introduction Furfural is an important starting material for a large number of reactions. It plays an indispensable role in many synthetic pathways, e.g., for the synthesis of pharmaceuticals, dyes, or polymeric materials [1][2][3]. Since furfural can be obtained from renewable resources, the heterocycle has attracted reasonable interest in terms of green chemistry [4][5][6]. Isotope labeling is a key concept in modern organic chemistry to track compounds, understand metabolisms, or clarify reaction mechanisms [7][8][9]. Because of the stronger deuteriumcarbon bond, deuterated drug molecules can alter the metabolism and help to improve pharmacokinetic or toxicological properties. Hence, deuteration is an important research issue in medicinal chemistry as well [10,11]. In addition, 2 H-, and even 14 C-and 18 Oisotope derivatives of furfural, especially of the aldehyde group, have been synthesized before [10,12]. As early as 1973, D.J. Chadwick et al. pioneered the synthesis of all single D-labeled furfural isomers [13,14]. Besides the complete deuteration of a molecule, it is of interest to mark a specific position with an isotope label in order to track its path during a chemical transformation. Scheme 1 summarizes the known literature methods for the synthesis of furan-2-aldehyde-d (1), starting either from furfural (2), furan (3), di(furan-2-yl)ethanedione (4), or 2-(furan-2yl)-2-oxoacetic acid (5). The different synthetic strategies can be summarized as follows: (1): Starting from 2, the aldehyde is protected as a thioacetale, subsequent deprotonation by a lithium base, and quenching with D 2 O, followed by the mercury-catalyzed removal of the protecting group [15,16]; (2): Vilsmeier reaction from 3 using DMF/POCl 3 [13]; (3): cleavage of 4 by cyanide in D 2 O as solvent [13]; and (4): the decarboxylation of 5 using PPh 3 /NEt 3 in deuterated water [17]. Upon closer inspection, all the described methods suffer from demanding multiple steps, expensive starting materials, and/or a poor yield in relation to expensive materials. For the direct exchange of the aldehyde proton in 2 by deuterium to 1, no reactions were found. More recently, two methods for the direct H/D exchange via a photo-redox reaction or an N-heterocyclic carbene (NHC) catalyzed reaction were described. The scope of these methods includes a wide range of aldehyde substrates, but furfural (2) is missing in both cases [18,19]. Shinada et al. summarized methods for the formation of deuterated aldehydes [20], but they did not mention a straightforward synthesis for 1. However, some reactions using furfural-d (1) for the synthesis of deuterated compounds have been described before [17,[21][22][23]. In this report, we describe the optimized formylation of furan (3) using DMF-d7 under adapted Vilsmeier conditions to yield 1 quantitatively. Comprehensive 1 H-, 13 C-, 2 H-NMR, EI-MS, as well as vibrational spectra (IR/Raman) are presented in detail. Results and Discussion To improve the accessibility of furan-2-carbaldehyde-d (1), we optimized the Vilsmeier protocol to obtain deuterated furfural in a quantitative yield starting from furan (3). Therefore, we choose DMF-d7 as the source of formyl. 
However, DMF-d1 is commercially available as well, but at a higher price than DMF-d7. The utilized DMF-d7 had a degree of deuteration of 99.6%, determined by 1H-NMR spectroscopy. For the reaction stoichiometry, DMF-d7 was calculated as the limiting reactant, and all other components were used in excess. As a result, a gram-scale synthesis method was developed (see Section 3.2). Compared to the earlier-described Vilsmeier protocol [13], the atom economy of the used DMF-d7 was maximized, as proven by the quantitative yield (99%). The applied extraction/evaporation workup yielded a product purity of about 97% (1H-NMR). Hence, the received product can be used for subsequent reactions without further distillation purification. Overall, the new synthetic procedure increases the yield by 15%, while also accelerating the reaction protocol by avoiding a time-demanding purification procedure which was necessary in previous reports. The increase in the yield is especially important as the deuterated starting material is expensive. NMR Analysis Details The recorded 1H- as well as 13C-spectra are shown in Figure 1. The observed 1H-NMR signals are in good agreement with earlier published data [15,24]. The low abundance of byproducts (<3%) proves the excellent reaction and workup selectivity. A degree of deuteration of about 99.6% was determined by the residual aldehyde 1H-resonance at 9.64 ppm (see Figure 1, inlay). In the proton-decoupled 13C-NMR spectrum, the expected 13C/2H multiplicity was observed. Coupling constants of 1J(13C(1)-D) = 27.3 Hz and 2J(13C(2)-D) = 4.5 Hz were found. Comparison between the 13C-NMR resonances of furfural (2) and furfural-d (1) indicates a deuterium isotope shift of Δδ(C(1)) = 20.6 Hz and Δδ(C(2)) = 2.4 Hz. Comprehensive 13C-spectra were recorded from a mixture of furfural (2) and furfural-d (1) (see Figure 2). A 2H-NMR spectrum was recorded on a 20 mg/mL DMSO solution with 1% DMSO-d6 as an internal reference. The 2H resonance of 1 was observed at 9.35 ppm with a FWHM of 23 Hz.
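As an aside, the way such a deuteration degree is typically extracted from a 1H spectrum can be written down in a few lines. The sketch below is only an illustration of the integral-ratio approach, assuming integrals normalised per proton against a ring-proton reference; the numerical values are placeholders, since the actual integrals are only shown graphically in Figure 1.

```python
# Hedged sketch: estimate the degree of deuteration at the formyl position from 1H
# integrals, assuming each integral has been normalised to one furan ring proton used
# as internal reference. The numbers are illustrative, not the measured values.

def deuteration_degree(residual_cho_integral: float, ring_proton_integral: float) -> float:
    """Percent D at the formyl position = 100 * (1 - residual CHO / one-proton reference)."""
    return 100.0 * (1.0 - residual_cho_integral / ring_proton_integral)

print(f"{deuteration_degree(0.004, 1.0):.1f} % D")  # ~99.6 % D for a residual integral of 0.004
```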
Comprehensive EI-MS Spectra Analysis The comparison of the EI-MS spectra is shown in Figure 3. The degree of deuteration of the analyzed sample of compound 1 was 99.6% (1H-NMR). The expected shift of 1 Dalton was observed in the spectra. Comprehensive IR Spectra Analysis In Figure 4, the ATR-IR spectra are shown. As expected, the characteristic ν(O=C-H) bands of furfural (2) were observed between 2847 and 2715 cm−1. For the deuterated furfural 1, ν(O=C-D) bands were found between 2139 and 2080 cm−1. The exchange of H to D caused a shift of about 700 cm−1 for 1. The sp2-CH bands were found at similar frequencies in both cases. The comparison of the fingerprint regions of both compounds reveals clear differences too (see Figure 5). For furfural (2), the bands at, e.g., 1687/1668 as well as 1472/1463 cm−1 were previously assigned as vibrational modes of OO-cis- and OO-trans-furfural conformers [25]. For 1, these bands were not observed. Hence, we speculate that the rotational barrier along the C(1)-C(2) axis prevents isomerization and favors one isomer in neat material at an ambient temperature.
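The roughly 700 cm−1 displacement of the carbonyl C-H stretch upon deuteration follows directly from the change in reduced mass. Treating the C-H(D) oscillator as an isolated harmonic diatomic gives the estimate below; this is a deliberate simplification that ignores mode coupling and Fermi resonance, and it is added here only as a consistency check, not as part of the original analysis.

```python
# Back-of-the-envelope check of the H/D stretching shift using the harmonic diatomic
# approximation: nu is proportional to sqrt(k/mu), and the force constant k is assumed
# unchanged on isotopic substitution.
from math import sqrt

m_C, m_H, m_D = 12.000, 1.008, 2.014          # atomic masses in u
mu_CH = m_C * m_H / (m_C + m_H)               # reduced mass of C-H
mu_CD = m_C * m_D / (m_C + m_D)               # reduced mass of C-D

nu_CH = 2847.0                                 # observed nu(O=C-H) band, cm^-1
nu_CD_pred = nu_CH * sqrt(mu_CH / mu_CD)       # predicted nu(O=C-D)

print(f"predicted nu(C-D) ~ {nu_CD_pred:.0f} cm^-1, shift ~ {nu_CH - nu_CD_pred:.0f} cm^-1")
# about 2090 cm^-1 and about 756 cm^-1, consistent with the observed 2139-2080 cm^-1
# bands and the 'about 700 cm^-1' shift quoted above.
```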
The furfurald (1) misses the earlier-mentioned double bands at 1651 cm −1 and 1462 cm −1 and instead only shows a single band at these positions. In addition, in the case of 1, the band at 1366 cm −1 is missed. Characteristic bands for 1 were instead found at 1047 cm −1 and 712 cm −1 (see Figure 5). Comprehensive Raman Spectra Analysis Raman spectra were recorded using neat materials. In Figure 6, the received Raman spectra of furfural (2) and furfural-d (1) are shown. Characteristic C=O-H bands were found at 3125, 2855, and 2719 cm −1 . By the H/D-exchange, C=O-D bands were found at 2144, 2120, and 2084 cm −1 and are in good agreement with the expected shift of about 700 cm −1 also seen in the IR spectra. Other CH bands were unaffected by the deuteration, and thus bands were observed at the same wavenumbers for both molecules. A comparison of fingerprint bands revealed very similar spectra (see Figure 7). In the case of 2, deformation vibrations for C-H bonds were found at 1473 and 1466 cm −1 . Even here, the OO-cis/OO-trans conformation effect can be seen. For the bands at 1367 cm −1 , no equivalent band was found in the spectra for the deuterated compound 1. Hence, the Raman bands behave similarly to the IR absorption bands. The comparison of the fingerprint regions reveals only small differences. The furfural-d (1) misses the earlier-mentioned double bands at 1651 cm −1 and 1462 cm −1 and instead only shows a single band at these positions. In addition, in the case of 1, the band at 1366 cm −1 is missed. Characteristic bands for 1 were instead found at 1047 cm −1 and 712 cm −1 (see Figure 5). Comprehensive Raman Spectra Analysis Raman spectra were recorded using neat materials. In Figure 6, the received Raman spectra of furfural (2) and furfural-d (1) are shown. Characteristic C=O-H bands were found at 3125, 2855, and 2719 cm −1 . By the H/D-exchange, C=O-D bands were found at The comparison of the fingerprint regions reveals only small differences. The furfural-d (1) misses the earlier-mentioned double bands at 1651 cm −1 and 1462 cm −1 and instead only shows a single band at these positions. In addition, in the case of 1, the band at 1366 cm −1 is missed. Characteristic bands for 1 were instead found at 1047 cm −1 and 712 cm −1 (see Figure 5). Comprehensive Raman Spectra Analysis Raman spectra were recorded using neat materials. In Figure 6, the received Raman spectra of furfural (2) and furfural-d (1) are shown. Characteristic C=O-H bands were −1 Figure 5. Comprehensive ATR-FTIR spectra of the fingerprint region highlighting differences between 1 and 2. cm −1 also seen in the IR spectra. Other CH bands were unaffected by the deuteration, and thus bands were observed at the same wavenumbers for both molecules. A comparison of fingerprint bands revealed very similar spectra (see Figure 7). In the case of 2, deformation vibrations for C-H bonds were found at 1473 and 1466 cm −1 . Even here, the OO-cis/OO-trans conformation effect can be seen. For the bands at 1367 cm −1 , no equivalent band was found in the spectra for the deuterated compound 1. Hence, the Raman bands behave similarly to the IR absorption bands. Method of Synthesis In total, 2.1 g (28.4 mmol, 2.04 mL, 1.0 eq.) of DMF-d 7 , 14.5 g (213 mmol, 15.4 mL, 7.5 eq.) of furan, and dichloromethane (DCM, 20 mL) were added into a 250 mL flask equipped with a magnetic stirring bar and a dropping funnel under inert conditions (N 2 ). The flask was placed in an ice bath for 10 min before 3.9 g of (31.2 mmol, 2.65 mL, 1.1 eq.) 
(COCl) 2 in 20 mL DCM was added dropwise within 10 min. A white precipitate formed during the addition. The batch was stirred in the ice bath and allowed to warm up overnight. After 12 h, a clear, slightly reddish solution formed. The flask was cooled again within an ice bath before 50 mL of saturated Na 2 CO 3 solution was added in small portions. After the gas development ended, the reaction mixture was stirred for 10 min in the ice bath before the phases were separated and the aqueous phase was extracted 3x with 30 mL of DCM. The combined organic phases were washed 3x with about 30 mL of saturated NH 4 Cl solution and 3x with about 30 mL of brine. The organic phase was dried with 15 g of MgSO 4 and subsequently evaporated under a reduced pressure to produce 2.7 g (yield: 99%) of a slightly reddish liquid. 1 H-NMR analyses showed a purity better than 97% (see Figure 1) and a deuteration degree of 99.6%. Depending on the subsequent reaction steps, this product can be used without further purification. Bulb-to-bulb distillation was at 80 • C, 20 mbar, and ice cooling yielded about 2.55 g (26.3 mmol, 93%) of furan-2-carbaldehyde-d (1). The product was stored under inert gas and cooled conditions to prevent discoloration and decomposition. 1 Instrumental Analytics NMR spectra were recorded using a 300 MHz Avance I (Bruker, Germany) with a QNP probe head at 25 • C using standard pulse sequences. All compounds were analyzed as a 40 mg/mL CDCl 3 solution. The 2 H-NMR probe was analyzed as 20 mg/mL DMSO-d 6 solution with 1% DMSO as the internal reference. Data were analyzed using the software MestReNova V.14.3.1 (Mestrelab Research, S.L. 15706 Santiago de Compostela, Spain). For the comprehensive 13 C-Spectra, a 20 mg/20 mg mixture of 1 and 2 was analyzed (see Figure 2). The Raman spectra of the neat materials were recorded on a JASCO FT/IR-6300 Spectrometer equipped with an RFT-6000 Raman unit. The Raman spectra obtained 1064 nm laser excitation. The ATR-IR spectra of the neat materials were recorded using a JASCO FT/IR-6300 spectrometer equipped with a PIKE Technologies MIRacle Single Reflection ATR-Unit. ATR-correction was applied for all the spectra. IR-and RAMAN data were analyzed using Peak Spectroscopy Software V.4.00 (Operant LLC, Monona, WI 53716, USA). Conclusions We presented a protocol for the quantitative synthesis of furan-2-carbaldehyde-d (1) starting with furan using DMF-d 7 and (COCl) 2 . The desired product was obtained in quantitative yield (99%). The 1 H-NMR spectra revealed a deuteration degree of >99% and a purity of the product of >96%. 13 C-NMR-, ATR-IR, and Raman spectra were recorded and discussed in detail. Supplementary Materials: The supporting information can be downloaded online. Author Contributions: Conceptualization, methodology, data analysis, and writing-original draft preparation R.G. and G.A.G.; synthesis work, analytical data generation, and review and editing E.D. and J.J. All authors have read and agreed to the published version of the manuscript.
v3-fos-license
2019-04-09T13:04:12.116Z
2000-03-05T00:00:00.000
103802595
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2218-0532/68/1/41/pdf", "pdf_hash": "c5bc63e29a694672d78ca26929d12e8132714f4c", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41227", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "sha1": "ca3e63f58d8d98deeb0ad1a53aa79cbd5c6b47fc", "year": 2000 }
pes2o/s2orc
Quantitative Structure-Property Study on Pyrazines with Bell Pepper Flavor A quantitative structure-property (QSPR) study on pyrazines with bell pepper aroma is performed by means of different statistical methods, which correlate appropriate molecular descriptors with the biological activity. The different methods lead to consistent results, indicating which of the molecular properties of the compounds under consideration are significant for bell pepper flavor. These results are compared with other models. INTRODUCTION The relationship between the molecular structure of flavor compounds and the intensity and the quality of their aroma impression has received more and more interest in the past years. The conformational change, induced by binding the odor or aroma molecule to olfactory receptors, activates the adenylate cyclase cascade, leading to the opening of an unspecific cation channel by cAMP, and thus releasing an action potential [2]. However, it has been shown that only some of the odor molecules stimulate the adenylate cyclase cascade. In the meantime, in different species, inositol-1,4,5-trisphosphate (IP3) was found to be a second messenger in the olfactory signal transduction [3]. IP3 is supposed to open a specific Ca2+ channel by binding to this membrane protein [3]. On the other hand, the gap between the knowledge of the primary structure and the three-dimensional geometry of olfactory receptors is large: while the sequence of some receptors is already known, no detailed structural elucidation exists for the moment. This is a strong motivation to study the odor molecule-receptor interaction by molecular modeling approaches. In the present study, structure-flavor relationships on pyrazine-based flavor molecules with bell pepper aroma are analyzed by means of three different methods: multiple linear regression (MLR), cluster analysis and comparative molecular field analysis (CoMFA) [4]. Pyrazine-based aroma compounds show a broad spectrum of flavor impressions, reaching from earthy, nutty, roasted to bell pepper or woody. Their general structure is presented in Figure 1. Pyrazines were first identified in heated foods such as bread [5], different meats [6], baked potatoes [7], or coffee [8], where they are formed during the Maillard reaction from reducing sugars and amino acids [9], but they also occur in fresh vegetables like tomatoes, asparagus, beans, spinach [10], or in bell pepper [11]. From the analysis of the obtained regression and CoMFA [4] models, conclusions on steric and electronic requirements responsible for the bell pepper flavor are deduced. MATERIALS AND METHODS The structures of the compounds were optimized with the semiempirical AM1 method [13] implemented in the MOPAC program package [14]. For the obtained structures the following molecular properties are calculated, using the TSAR (Tools for Structure-Activity Relationships) software [15]: (i) steric descriptors: molecular volume (V) and surface (S), molecular refractivity (MR), and the Verloop parameters (L, B1, B2, B3, B4) [16] for the four possible substituents, R1 to R4 (Figure 1). As R1, always the substituent with the heteroatom is considered, except for the three compounds where the substituents do not contain any heteroatom: compound 3 (R1=methyl), compound 59 (R1=ethyl) and compound 81 (R1=ethyl). The Verloop parameter L(i) represents the maximal length of substituent i along the axis defined by the bond which connects the substituent with the heterocycle.
B(i)j (j = 1, ..., 4) denotes the width of substituent i perpendicular to this axis, and the indices are chosen in such a way that B1 < B2 < B3 < B4; the molecular refractivity, being related to the volume and to the polarizability of a compound, is not only a steric descriptor, but also gives information on whether dispersion forces are important in the interaction with the receptor or not. (ii) descriptor of lipophilicity: logP, where P is the partition coefficient of the respective compound between octanol and water; the larger P (and thus logP) is, the more hydrophobic is a compound. Within cluster analysis a distance matrix is calculated from the molecular properties, which is then used to classify samples into clusters of similar members. Cluster analysis is performed with the TSAR [15] program, using Ward clustering [18] with Euclidean distances. CoMFA analysis [4] is performed with the SYBYL software [19]. The molecules are superimposed by fitting the atoms of the heterocycle and the first atom of substituent 1 (R1). Grid sizes of 1, 2 and 3 Å and different probe atoms [sp3 C(+1), sp3 O(-1) and H(+1)] are employed for the evaluation of the molecular field. For the calculation of the electrostatic field the same AM1 charges as in MLR are used. The SAMPLS [20] variant of PLS is applied, with the cross-validation option of leaving out one compound in turn. The quality of the models is estimated by the same statistical indicators as in MLR. Multiple Linear Regression The best regression model found by two-way stepping is given as Eqn. (1). As can be seen from Table 1, the variables from Eqn. (1) show no significant correlation among each other (r2 < 0.40 indicates that no correlation exists among the variables); the significance, at the 95% level, of the individual regression coefficients is also given. The predictive power of the model is high, since r2cv is high and fairly close to r2 (0.893). A second regression model, Eqn. (2), was obtained in the same way; the values in brackets denote the standard errors of the coefficients [which for Eqn. (1) are given in Table 2]. As in the previous case, the used variables do not correlate with each other. The overall regression and the individual regression coefficients again are statistically significant, as judged by the F- and the t-values. The predictions of the biological activity with the MLR equations are shown in Table 3. The high cross-validation r2 (r2cv) values suggest that the remarkable statistical qualities of the models should not stem from chance correlation. Nevertheless, in order to exclude chance correlation, the effects of randomization on the dependent variables are analyzed: the 32 dependent variables are redistributed by a random number generator, and subsequently models are generated as previously by F-stepping variable selection. Table 4 shows that randomization causes, in all cases, the loss of correlation and statistical significance. An r2 < 0.40 (i.e. r < 0.63) indicates that no significant correlation exists among the independent variables; this is the case in five out of the ten situations, and the other five cases have values only slightly above this threshold (Table 3: actual and predicted biological effects; the CoMFA prediction stems from the best model). Another test which confirms the good statistical qualities of the regression models (1) and (2) is the relatively high stability of the r2cv to different sizes of leave-out groups, shown in Table 5.
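For readers less familiar with these diagnostics, the leave-one-out cross-validated r2 and the randomization (y-scrambling) test described above can be reproduced generically as follows. This is a schematic illustration with synthetic data, not the authors' original TSAR/SYBYL workflow; the descriptor matrix X and activity vector y are placeholders standing in for the 32 pyrazines and their descriptors.

```python
# Generic sketch of leave-one-out cross-validation (r2_cv, often called q^2 in QSAR)
# and a y-randomization test for a multiple linear regression model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))                                           # placeholder descriptors
y = X @ np.array([1.5, -0.8, 0.6]) + rng.normal(scale=0.3, size=32)    # placeholder activities

model = LinearRegression()

# Leave-one-out cross-validated r2: each compound is predicted by a model
# fitted on the remaining 31 compounds.
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
press = np.sum((y - y_loo) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2_cv = 1.0 - press / ss_tot

# y-randomization: refit after scrambling the activities; a model that is not a
# chance correlation should lose its fit (r2 dropping below ~0.40, the threshold
# used in the paper).
r2_scrambled = []
for _ in range(10):
    y_perm = rng.permutation(y)
    r2_scrambled.append(model.fit(X, y_perm).score(X, y_perm))

print(f"r2_cv = {r2_cv:.3f}, scrambled r2 range = "
      f"{min(r2_scrambled):.3f}-{max(r2_scrambled):.3f}")
```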
The results of CoMFA are summarized in Table 6. The models obtained with the three different probes [sp3 C(+1), sp3 O(-1) and H(+1)] and different grid spacings have comparable qualities, as reflected by the statistical parameters. The best model (no. 3, Table 6) has the highest predictive r2 (r2cv) and the lowest standard error of prediction (SEP). Favorable and non-favorable steric and electrostatic components of the molecular field are shown in Figure 4. In Figure 4, light grey indicates unfavourable and dark grey favourable steric regions, i.e. bulky substituents in the light grey zone will diminish, and in the dark grey region will increase, the biological activity (bell pepper flavor). A corresponding picture for electrostatic interactions indicates the region(s) where a stronger negative field (light grey), or a stronger positive field (dark grey), increases the biological activity. Eqn. (1) is in better agreement with CoMFA concerning the steric situation (Fig. 4), while Eqn. (2) is in better concordance with CoMFA concerning the electrostatic situation (Fig. 4). Three of the four steric regions which appear to be important according to CoMFA (Fig. 4) are also reflected in the MLR equations; only the unfavorable contribution of bulky groups in the region of substituent R4, as predicted by CoMFA, is not reproduced by the MLR equations. In a similar fashion, Eqns. (1) and (2) suggest that increased negative charges on atoms C3 and C6 are of advantage for the bell pepper flavor, because the AM1 charges on C3 and C6 are negative for all compounds and have negative regression coefficients. This situation is more or less in agreement with the CoMFA picture (Fig. 4). However, the favorable negative electrostatic field in these regions seems to result from both the ipso and the substituent atoms. The favorable effect of a positive electrostatic field in the region of substituent R1 resulting from CoMFA is also in agreement with Eqn. (2): since the charge on the first atom of R1 can be positive or negative, the more positive it is, the larger its contribution to "bell pepper flavor" will be. However, one has to keep in mind that the MLR models consider only the first atom of the substituent, while CoMFA takes into account the whole substituent. The differences of the two methods, 2D-QSAR and CoMFA, stem obviously from the differences in the approaches used. In summary: a bulky, rather long-shaped substituent R2 [correlation with L(2)] is favorable for bell pepper flavor, which suggests the existence of a binding pocket for this substituent; an increased electrostatic field in the regions of atoms C3 and C6 (and the substituents R2 and R4, respectively) is advantageous for bell pepper aroma impression; and the substituent R1 should not be too bulky, since larger substituents than the methoxy group appear to be unfavorable for the biological activity. Masuda and Mihara [23] stressed the importance of substituent R2 for bell pepper aroma, suggesting that larger substituents (up to 6-9 C-atoms) favor the aroma impression. This is in agreement with both approaches (MLR and CoMFA). They further propose that besides the hydrophobic interaction stemming from the alkyl group R2, hydrogen bonding between the nitrogen atoms of the pyrazine nucleus and the heteroatom as donors on one hand, and acceptors from the receptor-pocket on the other hand, should be important for bell pepper flavor. Although we have no direct evidence for hydrogen bonding in our models, a favorable negative electrostatic field in the vicinity of the pyrazine nitrogen and the heteroatom of R1 is in agreement with the hydrogen bonding hypothesis. A more general model (including pyrazines, pyridines and thiazoles) has been proposed by Rognon and Chastrette [24].
It is presented in Figure 5 and will be briefly discussed. The bulky group at C2, with a volume between 34 Å3 and 85 Å3, is R2 in our notation. However, R2 is supposed to consist of two substructures, G1 and G2, with G1 lying in the N-C2-C3 plane. G2 is preferentially a branched alkyl group. The sp2-nitrogen is assumed to form a hydrogen bond with a donor from the receptor. The substituent at C3, -X-G3, is R1 in our notation. It is supposed to be smaller (volume between 13 Å3 and 34 Å3) than R2. Dimensions and positions of G1, G2 and G3 are postulated to be relevant parameters for bell pepper flavor. The steric requirements of the model are, more or less, in agreement with our results (bulky R2, less bulky R1). However, in our models no substructures G1 and G2 in R2 are identified, since they do not exist in the pyrazine derivatives used. CONCLUSIONS The MLR models developed on the basis of 16 pyrazines with bell pepper and 16 pyrazines with no bell pepper flavor have high predictive power, as reflected by the cross-validation r2 (r2cv). The independent variables are uncorrelated, and thus permit conclusions on the parameters important for bell pepper flavor. The results from the MLR models are in good agreement with CoMFA and identify steric and electronic requirements for the bell pepper aroma impression of pyrazine molecules. Moreover, the results are in good agreement with other models.
v3-fos-license
2019-02-08T12:12:25.073Z
2017-08-01T00:00:00.000
86868401
{ "extfieldsofstudy": [ "Sociology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1016/j.childyouth.2017.07.016", "pdf_hash": "3109fbfcc984d86d6501ee67546bd2926bf281ab", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41228", "s2fieldsofstudy": [ "Sociology", "Law" ], "sha1": "cb4d8d20605afb784617085489d4f1787a3df478", "year": 2017 }
pes2o/s2orc
Recalcitrance, compliance and the presentation of self: Exploring the concept of organisational misbehaviour in an English local authority child protection service This article examines how social workers reinterpreted certain legal requirements to meet their organisation's performance targets. Using an ethnographic approach, I combine organisational misbehaviour theory and Goffmanesque conceptions of dramaturgy to explore the regional activity of one team in a statutory agency. I argue that singly neither misbehaviour theory nor dramaturgical performances account for our understanding of why workers respond differently to organisational changes in a neo-liberalist environment. This study differs from current literature by shifting emphasis away from workers either resisting or conforming with organisational directives on to the ways in which individuals and collectives devise methods which instead give the appearance of co-operation. I demonstrate how workers disguised their resistance in an attempt to achieve potentially unachievable objectives and in turn avoid disciplinary action. I conclude by suggesting that applying Goffman to studies of organisation can advance scholars' understanding of how certain individuals respond to change and might come to be defined as loyal and compliant. This approach can also encourage discussions relating to the concept of recalcitrance and whether it is developed, and enforced, by those in powerful positions on the basis of their own desire to be well regarded by others. Introduction Studying organisational misbehaviour is a feature in organisations' literature which has grown in popularity in recent years. However, in studies of social work it is a relatively unidentified and unexplored form of resistance (Carey and Foster, 2011;Wastell, White, Broadhurst, Peckover, Pithouse, 2010). Although human relations scholars widely recognise that misbehaviour is endemic in organisations, in social work it is sometimes not always seen for what it is. This may be because revealing the extent of misbehaviour is not an easy task to undertake. It involves an exercise of detection, identification and making particular definitions of what the behaviour is (Ackroyd and Thompson, 1999). One scholar who dedicated his attention to exploring the (mis)behaviour of people was Erving Goffman (1959Goffman ( -1982. In his seminal study, The Presentation of Self (1959), Goffman's attention was drawn particularly towards the performances that individuals 'put on' in social situations which were supported in 'the context of a given status hierarchy' (Lemert and Branaman, 1997: xlvi). As a sociologist Goffman was inherently interested in how the self, as a social product, depended on validation awarded and withheld in accordance with the norms of a stratified society (Manning, 2002). Goffman (1959) developed the theory of impression management whilst carrying out anthropological fieldwork in the Shetland Isles. He found that communication between individuals took the form of the linguistic (verbal) and non-linguistic (body language). These gestures were employed between individuals when in interaction with others. By observing the local crofter culture closely, Goffman discovered that individuals who overcommunicated gestures were trying to reinforce their desired self, whilst those who undercommunicated gestures were detracting from their desired self (Lewin and Reeves, 2011). 
Impressions of the self were therefore managed actively by individuals during their social interactions, a process which Goffman termed 'impression management', and in order to be seen as credible they relied on the intimate cooperation of more than one participant. The presentations that individuals performed were undertaken in two distinct areas: the front region and the back region (Goffman, 1959). In the front region, Goffman observed performances as more formal, restrained in nature. Whereas in the back region, performances were more relaxed and informal and thus allowed the individual to step out of their front region character. However, Goffman also felt that individuals used the back stage to prepare for front stage performances. Each region therefore has different rules of behaviour, the back region is where the show is prepared and rehearsed; the front region is where the performance is presented to another audience (Joseph, 1990). Goffman's contributions to organisational theory have been hailed 'substantial, significant and stylish' (Clegg, Courpasson and Phillips, 2006: 144) and his recent return to the disciplinary space of organisational theory has provided researchers with the tools to explore a variety of scenes relating to misbehaviour within the occupational community (McCormick, 2007). Goffman's framework has also been applied widely across healthcare research such as medicine (Lewin and Reeves, 2011), nursing (Melia, 1987) and oncology (Ellingson, 2005). However, although often loosely referred to, Goffman's frameworks for conceptual analysis in studies of social work are less well incorporated (Hall, Slembrouck, Haigh and Lee, 2010). The purpose of this article, therefore, is to demonstrate how a Goffmanesque perspective of organisational misbehaviour can provide an interdisciplinary understanding of how broader social and institutional orders can affect individuals in the children's social work setting. By combining Goffman with misbehaviour theory, I present a symbolic interactionist account which theorises why different members of a social work agency dealt with managerialist directives in a particular way. I argue that organisational misbehaviour differs in meaning according to the position, location and perspective of the actor. Organisations are made up of individuals who negotiate issues that they encounter in different ways depending on the appearance they want to give. Goffman (1959) recognised that impressions tend to be treated as claims or promises which have a moral character because they involve a multitude of standards pertaining to politeness, decorum and exploitation. To understand the crux of everyday social interactions we need to explore the 'moral lines of discrimination' that blur what is seen, or is purposefully overlooked (Goffman, 1959: 242). These moral lines of discrimination were what drew my attention to the misbehaviour I observed in the Child and Family Agency (CFA), the organisational setting of this study which was situated in England. The term "just nod and smile" became a popular colloquial term when senior management announced that the service was soon to expect an Ofsted (Office for Standards in Education) inspection. This announcement came shortly after they had revealed that redundancies were also going to take place due to a sudden government reduction in resources. 
As senior managers became concerned that team performances were not going to meet the standards expected to achieve a 'good' or higher rating, team managers started to feel that they needed to impress their seniors by reaching certain performance targets if they were to avoid involuntary redundancy. What followed was a general belief that as long as targets were achieved the methods chosen to achieve them were not of importance. This in turn conjured a growing belief amongst social workers that they should comply with top-down directives if they were to receive promotion or, conversely, avoid punishment. Yet, in busy teams, when the demands to support families are tactically subordinated to pressures which help to reduce 'workflow', identifying and meeting the needs of the child is a task which is often overlooked (Broadhurst, Wastell, White, Hall, Peckover, Thompson, Pithouse and Davey, 2010:16). The neo-liberal context The context in which local authority, or statutory, social work is now practised has changed considerably from the 1980s through to the present day. Largely influenced by Taylorism, many statutory social work management practices have aligned with the ideology that care work is best performed if the productivity of practitioners is closely examined (Bissell, 2012). This is because managerial practices have developed over time to reduce local government spending and improve service delivery (Jones, 2015). Both Schofield (2001) and Briscoe (2007) have contended that this bureaucratic approach has provided social workers with professional autonomy and shielded them from political fads. Yet critics of this process have argued that whilst this approach can free people from arbitrary rule, it can also interlock them into an official hierarchy which can be deskilling and authoritarian (Clegg et al. 2006). The dominant discourse of care in the community has become redundant as social workers now have to work in accordance with managerialist agendas which focus heavily on paperwork and performance targets (Broadhurst et al. 2010;Gibson, 2016;Wastell et al. 2010;White et al. 2008). The impact of bureaucracy has led to a number of intra-agency conflicts as social workers often feel that their professional values have been sacrificed for the benefit of protocols and standardised services (Author, 2017;Bissell, 2012). Arguably, instead of social workers delivering quality care for those in need, workers frequently find they are enacting a cutbacks policy agenda and, in effect, injecting neo-liberalism into the lives of service users and communities (Baines and van de Broek, 2016). In recent decades, neo-liberal ideology has been pursued by dominant political parties within Britain and the implications of this capitalist rationality for social work have been profound (Ferguson, 2004). Furthermore, as required by the Education and Inspections Act (2006), the role of Ofsted has also changed. Ofsted has become responsible for not only inspecting the performances of schools but also those of statutory agencies delivering social work. Although Ofsted is only one part of the neo-liberal system, it plays an important part as its findings are reported to Parliament. The outcomes can have serious consequences for local authorities, as those which do not perform well are often criticised for poor managerial leadership and face the prospect of becoming a trust and losing control of their children's services (Jones, 2015). 
Although reforms to social work have always been an integral part of its history, in recent years this ever increasing top-down direction and regulation has contributed to an intensification of organisational restructure and an over standardised response to the varied needs of children (Jones, 2015;Munro, 2011). Indeed, a recent briefing entitled, "Do it for the child and not for Ofsted" which is critical of social workers resentment towards completing paperwork, demonstrates how Ofsted inspectors believe social workers have lost sight of the child when in the midst of completing standardized assessments (Schooling, 2017). This is the context in which the CFA department was situated at the time this study took place. All of the factors outlined above had a noticeable impact on the department as it became evident that in attempting to navigate external pressures, internal discursive confusion amongst frontline workers and managers ensued. This was even more pronounced when the agency heard it was due an Ofsted inspection as managerial attention became excessively focused on the process rather than the practice of social work. Understanding organisational misbehaviour It is widely accepted that organisational misbehaviour is constructed within discursive contexts but it is also recognised that individuals are able to negotiate and shape these contexts in different ways (Ackroyd and Thompson, 1999;Broadhurst et al. 2010;Carey and Foster, 2011). In fact, Lipsky (1980: xii) argued that policy on the ground rarely bears any resemblance to the formal public policy enacted, mainly because 'street level bureaucrats' will interpret it to establish routines and strategies that help them cope with uncertainty and work pressures. Howe (2009), however, disputed Lipsky's argument as he felt that social workers' discretion had been curbed as the power they once had shifted into alignment with the framework of the legal and managerial authority that now governed their practice. In a neo-liberal context where organisations require social workers to comply with their expectations and standards, it is hardly surprising that practitioners feel they have to do what is necessary to align with their institution's directives if they are to avoid managerial scrutiny. Sociological literature is rich in examples of how the ability to perform, or comply, effectively in some capacity is apparent in settings or situations where competence is a desirable outcome (McLuhan, Pawluch, Shaffir, Hass, 2014). Edgerton's (1967) concept of the "cloak of competence", or the presentation of a competent self, has been an enduring theme in studies of professional or occupational socialization that focus on how new recruits acquire the skills, values and attitudes expected of those in the profession (Hughes 1958;Kleinman 1984). However, it has been noted that the cloak of competence has often been translated into the 'cloak of conformity', serving to jeopardise innovation and creative potential of professionals during meetings and at work (Puddephat, Kelly and Adorjan, 2006). Yet the desire for workers to conform may do more than stifle innovation especially when they find they are persistently scrutinised. For example, in his ethnography of a local authority, Gibson (2016: 125) found that children's social workers who were capable of keeping up with the administrative requirements were seen to be "doing a good job" whereas those who resisted, or could not keep up, were policed through shame and humiliation tactics. 
This naming and shaming process not only served to defend the institutional expectations but also deterred workers from taking part in any form of deviation. However, other studies in social work have found that there is a fine line between competence and recalcitrance as workers demonstrated their competence by complying with organisational directives, whilst simultaneously displaying acts of resistance. Such situations again relate to the administrative expectations of front line workers to meet the demands of the Integrated Children's System (ICS) (see White et al. 2010). However, in these cases, rather than wholly comply or resist, social workers and team managers developed deflection strategies to deter the high number of child protection referrals turning into assessments. Creative techniques such as 'signposting' were employed where referrers were redirected to another service (see Broadhurst et al. 2010) or 'strategic deferment' which involved putting cases on hold while more information was obtained (see Pithouse, Broadhurst, Hall, Peckover, Wastell and White, 2009). These simple methods were designed to create an appearance that the work-force was competent and in control despite the fact that in reality workers were struggling to find the time to deal with their open cases. So far, the studies which have focused on children's social welfare departments have questioned whether professional discretion, or indeed subversion as a tactical device, is compatible with the relational world of practice as social workers endeavour to appear competent in the neo-liberal context. Yet, in adults' social work, Carey and Foster (2011: 585) interviewed social workers who purposefully used their position to bend 'the rules' to help the service user rather than just meeting the needs of the system. Some even went as far as using a "cloak of incompetence" (see McLuhan et al. 2014) and minimized their displayed level of competence by "whistle blowing" to the local media about planned cuts to support services [seemingly via an anonymous fax], encouraging informal carers to challenge local authority decisions to refuse support services or encouraging service users to exaggerate or provide false information when applying for support services (Carey and Foster, 2011: 588). However, not all participants were inspired by such acts of altruism, as some admitted to using deviant behaviours simply to relieve boredom from overexposure to regulation, bureaucracy and resentment towards patronising colleagues, managers or higher professionals. In summary, organisational misbehaviour is not as straightforward as it may initially seem, as it presents in different guises depending on where the performer is situated and what kind of performance is desired. Although these performances appear to emerge from the interactions between the organisation as a directive system and the self-organisation of its workers, they are further exacerbated by wider contextual issues which affect the way in which the social worker and the agency function. In the current social work context, exercising professional discretion appears to be continuously compromised as a result of increased bureaucracy, surveillance and monitoring. Those who comply, or operate inside the constraints of rules, do so to appear competent and to avoid being shamed (Gibson, 2016). 
However, the other argument, that practitioners are still able to use their own discretion when negotiating and implementing formal policy (Lipsky, 1980), is apparent as we see social workers covertly 'bending the rules' or overtly 'ignoring the rules' (Carey and Foster, 2011;Broadhurst et al. 2010;Pithouse et al. 2009). In the next sections, I want to explore how the phrase "just nod and smile" arose within the CFA department and was employed to signify to social workers that they should accept and agree with the organisational directives even if they disagreed. However, although the term was used in a similar manner to that of the "cloak of competence" (see Edgerton, 1967), as it foregrounded the worker's competence and concealed their incompetence, it was also used to disguise a form of tactical resistance to the agency's standards and expectations. Introducing the case and method This paper is based on data drawn from a yearlong ethnographic study of a safeguarding children and families social work statutory agency. At the time this study began, the Conservatives and Liberal Democrats had just been elected to form a coalition government and all local authorities across the country were subsequently faced with having to reduce their spending (Jordan, 2011). The CFA agency dealt with both child in need (low-level intervention) and child protection referrals (when a child is at risk of significant harm). All the managers at the CFA, from the Assistant Team Manager tier up through the managerial hierarchy to the Assistant Director, were qualified social workers. The CFA consisted of four safeguarding teams which had in total 36 social workers, ten middle managers (team managers and assistant team managers) and three senior managers (two service unit managers and one assistant director). The West Team consisted of 7 social workers, 2 senior practitioners, 1 Assistant Team Manager and 1 Team Manager. Post qualification experience ranged from 1 to 10 years. Data collection and analysis The aim of the larger study was to explore how organisational culture affected the social interactions of workers within a social work department. Although data was collected from all four safeguarding teams, for the purpose of this paper, due to limited space, I will focus on the findings from one of these teams which I will refer to as the West Team. This particular team was chosen for this paper to explore why individuals from the same team responded differently to the same managerial directive. A multi-method ethnography was used to analyse the way in which different social workers interacted with the workplace discourse at CFA. As in Goffman's work on The Presentation of Self (1959) and Stigma (1963), a variety of documentary sources enabled him to see incongruity in certain situations and, as a result, develop insights, metaphors and hypotheses as to why these may have occurred. The main ethnographic approach used was that of participant observation as this method allowed for the exploration of participants' activities, beliefs, meanings, values and motivations and, in doing so, the development of an understanding and interpretation of the members' social world (Hammersley & Atkinson, 2007). Participant observation allows the researcher to focus on the less explicit aspects of organisational life which can often include the kind of phenomenon that is only apparent in the back stage regions of an agency such as jokes, complaints and arguments. The West Team was observed in the CFA for a total period of 630 hours. 
In order to be immersed in the field and yet maintain a sense of free thought and movement, I adopted an observation-orientated fieldwork role which enabled me to pay close attention to dialogue in informal and formal meetings. As well as observing interactions between social workers and their managers in the office, my observations also included team meetings, ad hoc meetings and a team building day. During this time I made detailed observational notes and also tried to capture the contextual features of spoken interaction. This enabled me to record 'bodily orientation and tone of voice' which is important when trying to understand behaviours and self-presentational displays (Goffman, 1981: 127). My observations were supported with additional resources such as semi structured interviews and document analysis (policies and procedures; emails and case notes). I carried out in-depth interviews which lasted from 1 to 2 hours with five social workers on the West Team and one manager. I also interviewed two senior managers who oversaw the work of all the teams within the department. Interviews were developed from my own observations and were focused on understanding the individual's interpretation of events, their sense of self and the team dynamics. All interviews were taped and transcribed. At the time of this study I worked as an Out of Hours social worker (emergency duty cover during nights and weekends) for the same organisation but in a different building to that of the CFA. My position within the authority proved to be useful because although I was considered an 'insider' to the social work setting and members of the CFA were familiar with who I was, I was also seen as an 'outsider' because I was not a member of the teams I was observing. I was what Hammersley and Atkinson (2007: 90) have referred to as a 'marginal native'-where the researcher can gain both 'inside' and 'outside' perspectives of both front and back stage regions of the West Team. However, a limitation of this approach was that I soon realised that the findings were more emotionally active than I had originally anticipated (see Author, 2013). Both Hammersley and Atkinson (2007: 90) have warned that the marginal native needs to always retain 'a sense of social and intellectual distance' from the field setting if they are to avoid 'becoming' affected. In order to develop into a 'marginal reflexive ethnographer' I used meetings with my research supervisor as means of gaining the required analytic space. The field notes, documents (emails and case notes) and interviews were transcribed and uploaded onto NVivo, a software assisted data management and analysis tool. I was particularly interested in how the team of social workers at the CFA interpreted and responded to the senior managerial directive that was perhaps seen as the cause of the conflict. As recommended by Charmaz and Mitchell (2001) a modified grounded theory method was used to analyse the ethnographic data which enabled me to explore particular key incidents and use memos to develop common themes and categories across the data produced from the whole study. Different situations occurred regularly across the department. In order to deepen my analysis and explore alternative meanings, I coded key incidents as they emerged. This process involved breaking down the data into units, which usually consisted of a few sentences. Code labels were used for field notes, interviews and documents which were developed from reading and re-reading the data. 
Once initial coding had taken place, this led to the development of broader descriptive terms which were later used to produce themes and categorise the data. At this stage, key categories were identified and named for example 'resisting', 'complying'. These variants helped shape the preliminary analytic framework but later I returned to the whole dataset and used focused coding. This was in part to be rigorous with the analysis but also to explore why an inconsistency between members of the same team had occurred. Drawing from Katz's (1982) method of analytic induction I compared the differences between the different individual's situations to deepen my analysis. Each shift required a reanalysis and reorganisation of my data. In the findings part of this paper, I also draw from dramaturgical and misbehaviour theory perspectives to examine the emerging themes and to ensure that the interpretations are clearly grounded in these theoretical perspectives. By moving back and forth between the data, the analysis and the relevant theories I have thus gradually developed an empirical framework for what follows (see Hammersley and Atkinson, 2007). Ethical approval was granted by University [name]. To conceal and protect the identity of participants, names have been changed. Changing landscapes When this study began the agency was experiencing new changes and although social workers were aware there would be "cuts" it was not until they received an email from the Assistant Director that they became fully informed of the extent of these cuts. An email arrived today telling staff that no more children are to come into care because the [local authority] has gone £5 million over budget. It said "if we do not reduce spending we will have to look elsewhere to recoup our losses". This comment seems to have created panic as the rumours suggest that redundancies are on the horizon. (Field notes, Day 5). Although social workers pride themselves in attempting to empower, discuss and resolve issues (Ferguson, 2011), this ideology was not always apparent in the CFA and it was instantly observable that this email had a significant impact on the social work department. It was sent by a senior organisation leader without any prior discussion of this serious issue. Although the email appeared to have been sent with the aim of highlighting to all staff that the CFA had suddenly accrued a very large debt, it was interpreted by Debbie, the team manager of the West Team, as a "veiled threat" because she feared that all managers' jobs would be at risk if the debt was not reduced. As each team had two managers, a team manager and an assistant team manager, the belief was that it would be easier for managers to be released from their posts than social workers. It was not long after this email was sent that it was then announced that the organisation was due to expect an Ofsted visit. As the date of the Ofsted inspection drew nearer senior managers informed team managers from each team that they would receive an individual rating which would be awarded following close examination of individual and team performances. Drawing from Goffman, I will explore the crucial and discrepant roles of the performers of the West Team. Goffman (1959) made it clear that when establishing where performances take place, one needs to clarify the reference point of a particular performance and the function that the place happens to serve at that time for a given performance. 
In the West team therefore, the front region will refer to the heart of the office where senior management would circulate when they visited the team. This front region would become a back region when the audience was not present. It became a place where a tone of informality would prevail. Negotiating new territories As managers started to become concerned with how their performance would be measured and interpreted by their audience (senior managers and Ofsted inspectors), a number resorted to using different tactics to ensure that social workers would turn assessments around on time. In the next extract Beth explains one method which was used by her manager: Me: A star chart? Beth: Yes, a star chart was put up last week by Debbie so we can see who is meeting targets on time. Those of us who complete an assessment on time, get a gold star. Those who don't get a red one. If you get one red star then you have a meeting with the manager, two red stars then you're sent to [service unit manager] and could face disciplinary procedures. Me: What? Beth: Yes, it's bullshit, it's patronising and demoralising. We don't sit on these assessments for fun. I'm way over my recommended allocation already. (Beth, 8 years qualified) Debbie, Beth's manager, was a team manager and the mother of a three year old. She told me that her reason for using the 'Star Chart' method was because it worked well with her son. However, it also served another purpose as it enabled Debbie to maintain face in front of senior managers. By showing deference for and affirming their objectives, which specifically required teams to reach performance targets within timescale, Debbie presented her 'self' as competent and turned the office into a field of strategic gamesmanship (see Goffman, 1959). Debbie brought the back stage into the front region by placing the Star Chart in the main office for both her team and the audience. Debbie's Star Chart was seen as a coercive performance tactic by her team, one which named and shamed those social workers who were failing to meet targets whilst praising those who did. This tactic acted as a "cloak of competence" in that it allowed Debbie to still appear competent despite the performances of her 'failing' staff (see Edgerton, 1967). It also concealed the lack of support Debbie was offering her social workers because rather than trying to reduce her team's caseloads with deflection techniques (see Broadhurst et al. 2010;Pithouse et al. 2009), social workers found their case allocation had increased. Debbie's tactic in turn served to divide her workers as some accepted it and others challenged it. The Star Chart may have highlighted how many social workers were meeting targets within timescale but for Beth it did not take into consideration other impeding factors that were affecting those who were not, such as: time constraints, rising caseloads and other daily unexpected emergencies that practitioners have to deal with. Just nod and smile: an individual approach Beth later told me that she had voiced concerns to Debbie that her focus on reaching targets was being "prioritised over spending quality time with families". However, there were others in the team who rather than challenge the party line developed their own strategies: Kenny: ...at first Tina came here as an agency worker and then I find out she has been made permanent and promoted to senior practitioner without being interviewed which a lot of us are not happy about. 
When Beth was complaining about it she said "I can't believe they've done that. It was never advertised. She has just literally been offered a senior prac post on a plate". Well I started laughing. I said "You know why they gave her that, don't you? It's 'cos she just nods and smiles". (Kenny, 10 years qualified) It was around this time that the term "nod and smile" became a popular colloquialism within the agency. It referred to the way in which management expected front line staff to toe a particular party line. In this instance, Kenny used the term to describe how a former agency worker, Tina, was promoted to senior practitioner because she did meet performance targets without challenging management directives. The gold stars on the office wall openly praised Tina for her performance and showed senior managers when they visited the team that it was possible to achieve desired targets despite the struggles other social workers were known to face. The credibility of performances, however, depends on the segregation of social space because although the 'front region' was where the desired performance was provided, in the 'back region' the suppressed facts about Tina were revealed. This knowledge created conflict amongst some of the team. It was well known within the team that when Tina carried out an assessment she took a support worker with her on the visit. While Tina talked to the family, the support worker would make notes and on return to the office would type up the assessment. Tina would then read the assessment and sign it off. Yet as members of the team often reminded me, the role of the support worker was to implement the plans created by the social worker not to act as a personal assistant to the social worker. Also, legally, social workers are expected to personally complete assessments so that their own appropriate training and knowledge can be used to analyse the family's situation carefully (Working Together, 2015). Nonetheless, in the CFA, meeting the requirements of the organisation often came before the needs of the family and Debbie promoted Tina as she could be trusted 'to perform properly' (Goffman, 1959: 95). And by discreetly promoting Tina, Debbie confirmed to Beth and Kenny that it did not matter how you carried out your assessments, because if you did complete assessments within timescale, you would receive praise and recognition. In contrast, Beth and Kenny felt that they were overlooked for promotion, most probably because they were failing to fulfil what was expected of them. Instead of toeing the party line, Beth and Kenny regularly challenged their managers and their organisation's ideology. The "nod and smile" term gained more levy within the team after Beth was suspended. Beth had accrued 30 days of TOIL (time off in lieu) for all the overtime she had generated in recent months. However, after receiving two red stars, her extra work was not acknowledged. Instead she was told by Debbie that she needed to meet with the Service Unit Manager because there were concerns about her fitness to practise. Beth refused to go to the meeting. She told Debbie that she would be able to catch up on her assessments if her caseload was reduced and she was given the opportunity to complete her assessments. When Debbie did not agree to this proposition, Beth informed Debbie that she was going to use her accrued TOIL to complete her work. She then walked out of the office and went home. 
Beth was later informed that her actions were considered to be representative of gross misconduct and she was subsequently suspended. After losing a good colleague, Kenny became disenfranchised with the team's objectives and in a team meeting had a disagreement with the assistant team manager, Mark, about how social work practice should be conducted. It was during this disagreement that Kenny announced his distaste for both Tina and Debbie's inappropriate practice. This disagreement continued by phone and email after the meeting concluded. Kenny informed me that one evening, Mark emailed him and warned him, "Your cards are marked". This comment annoyed Kenny and so he forwarded it to all of the senior managers and the Assistant Director of the organisation in the hope that they would follow the matter up with Mark and Debbie. However, Kenny did not hear back from anyone. A few weeks later he was suspended from his post for allegedly not following correct procedure when undertaking a section 47 investigation about a child at risk of significant harm (see Children Act 1989). An overall objective of any team is to appear credible and competent but to maintain that appearance it requires the whole team to over-communicate some of the facts and undercommunicate others. These 'facts', or team secrets, are often concealed from the audience as they pertain to the intentions and strategies of a team (Goffman, 1959: 141). Yet the impression that Debbie wanted to give could only be deemed credible if all members concealed the secrets of the team. When Kenny revealed what was happening back stage to senior managers he broke the team loyalty rules and was seen as a 'traitor' or 'turncoat' (Goffman, 1959: 164). It was because of his performance, because he did not "nod and smile", that Kenny believed he had been suspended. Just nod and smile: A team approach The remaining team members had observed the interactions with Beth and Kenny over the previous few weeks. The impression and understanding fostered by Beth and Kenny's performances, and those of other managers, had saturated the back region and positioned the others in a situation which forced them to contemplate their next move. Although they were unhappy with the way in which Beth and Kenny had been treated, they were also fearful that they would be suspended next if they challenged their manager's practice. With Ofsted inspectors' arrival expected at any time soon, the atmosphere in the agency was particularly anxious as senior managers took a more aggressive approach towards ensuring that social workers completed their child protection visits on time. In this next extract, Jane, another senior practitioner from the West Team, explains to me how she and the others devised a plan together that would ensure they completed visits to the children on their child protection plans within timescale to avoid receiving their 'summons'. Jane: Our summons is like what we get at the end of each month if our team under performs. We get a list from [name of senior manager] summoning those who haven't completed their CP (child protection) visits within timescale to the office. Me: No way, that's like you're at school. Jane: It's worse than that. If you get called in more than once you're out so we've started covering for each other so no one gets called. I download all the CP visits that are outstanding one week before the month's end and then one of us does them all in one day and we cover for that person while they're out. 
Me: Have you thought of talking to someone about this? Jane: We've talked to the union about what's been going on but they are no use, they don't understand what it means. It's just easier to nod and smile. (Jane, 5 years qualified) Statutory provisions dictate that children who are subject to a Child Protection Plan should be visited at least once every four weeks (Children Act, 1989). This is one of the performance indicators that Ofsted examines during an inspection and therefore an area that is of concern for senior management in the local authority. With all social workers struggling to meet this target, senior managers had started calling in those who did not reach it to discuss reasons why they had not. This meeting was referred to as "The Summons" and represented the gravity of the situation because if social workers were called more than once then they were threatened with suspension for practice issues. Goffman (1959) has suggested that an important element of team collusion is found in the system of secret signals through which performers can surreptitiously receive or transmit pertinent information. These staging cues typically come from, or to, the director of the performance who in this case was Debbie. The West Team were fully aware that 'aggressive face-work' was at play as both Beth and Kenny had challenged this protocol and were suspended (Goffman, 1959: 90). To prevent this from happening to the rest of the team, Jane, came up with a strategy that would ensure the remaining members of the team could carry out child protection on time. This form of team collusion meant that although the child was seen by a social worker, it was not always the same social worker who was allocated to the case. Although this should not have been agreed to by senior managers, it was a strategy that no one from that organisational tier had yet, apparently, picked up on. It was nonetheless a method that the team manager Debbie was aware of but which she later informed me she had turned "a blind eye" to because it met "everyone's needs". By this she meant the needs of senior managers and her own performance targets. As a 'go between' Debbie was in the position where she was aware of her team's secrets but because they fostered a good impression front stage, she was willing to overlook them as they produced mutually agreeable outcomes for all involved (Goffman, 1959: 103). Apart from, perhaps, the children who were subject to the child protection plans. Discussion My main objective in this paper has been to illustrate how a Goffmanesque perspective of organisational misbehaviour can provide an interdisciplinary understanding of the way in which broader social and institutional orders can affect individuals. Individually, conceptual driven understandings of organisational misbehaviour and dramaturgy cannot account for why certain behaviours arise in teams or why individuals desire the need to be well regarded. In combination however, with the support of an ethnographic approach, a more comprehensive exploration of organisational dynamics has provided nuanced explanations of why particular social interactions take place in given regions of a social work agency. This study contributes to the field of social work in many ways. First, despite the theories of Goffman (1959) being written some years ago it is evident that his work is still valuable and significant when applied to the organisational setting in which social work is situated today. 
The individuals he spoke of are recognisable in this agency as social workers have demonstrated that they are able to negotiate and shape different contexts through impression management. However, it became apparent that although all team members recognised that meeting the required organisational directives within timescale was impossible, practitioners addressed the issue in different ways. As a result, binary contrasting roles emerged within Debbie's team which positioned social workers as either resistant or compliant. Those who resisted were seen by management as non-compliant and unmanageable. Yet those who preferred not to overtly challenge organisational directives used their discretion, either individually or collectively, to reinterpret the rules so that they could achieve targets and impress management. However, this practice was not without consequence. To address the needs of the organisation, practitioners and managers resorted to a Machiavellian form of identity management to present their selves as competent. Although this approach enabled one to advance her career and others to avoid punishment, their actions had an adverse effect on the families receiving the service. This point leads to the second contribution of the study which incorporates and extends the literature on organisations and misbehaviour in social work. In contrast to the findings of Carey and Foster (2011) where social workers used their skills to ignore the rules and help service users, the actions of these practitioners had negative consequences for the families they were working with. The dramaturgical aspect of Goffman's theory demonstrated that regions, and regional behaviour, played crucial parts in the (in)visibility of organisational social work practice. In the front stage, it seemed as if legal framework requirements were being met and children and families were receiving the service they were entitled to. It was only back stage that the truth was known, and practitioners were able to conceal these activities from view with the use of 'props' and 'illusions' (Goffman, 1959: 114). The two examples provided in this paper demonstrate that in both cases, despite social work targets being reached, families were not receiving the service they deserved and furthermore, they were not even aware of it. Although Pithouse et al. (2009) and Broadhurst et al. (2010: 365) identified that team managers were 'fudging it' by taking short cuts that would protect their social workers from further burden, in this context we have seen managers depart from working with social workers to only protecting those who will conform with their desired image of competence. However, while presenting a "false front stage" persona appeared important for those who attempted to meet organisational directives (Puddephatt, 2006: 85), adopting this strategy was not only detrimental for those receiving a service but also for the cultivation of a congruent culture. Rather than adopting a coactive power approach (see Clegg et al. 2006) and discussing the issues the team faced together, the team divided and a climate of mistrust and suspicion became dominant features of everyday activity (Author, 2017). These findings extend Gibson's (2016) work by revealing how practitioners sacrificed their values and ethical principles to avoid being named and shamed. The third contribution adds to debates on organisational misbehaviour and how the perspective of the actor is affected according to their position and location. 
Although the discussion so far has been critical of the language used by social workers and its purpose in practice, it has failed to mention how the "cloak of competence" (Edgerton, 1967) can conceal misbehaviour and dupe those who are more focussed on process rather than practice. In this case, Ofsted's impending visit meant that members of management became focused on ensuring statutory duties were completed within timescale rather than the way in which these tasks were carried out. Situated in a culture controlled by audits and technology, the team manager, Debbie, used her professional discretion to overlook her social workers' misbehaviour so that they could collectively meet statutory obligations and her role within the agency would be secure. The level of competence displayed by Debbie and her team impressed senior managers as well as Ofsted inspectors as the department passed the inspection with a 'good' grade. This narrow view of social work practice cultivated the belief that managerial control over workers leads to good performance outcomes, providing the worker followed superior cues at face value, kept in line and exercised tact (see Thompson, 1977). Furthermore, these incongruent practices were endorsed by Ofsted inspectors, most likely because they too have adopted and fostered the neo-liberal discourse which focuses heavily on paperwork and performance targets (Broadhurst et al. 2010;Wastell et al. 2010;White et al. 2008). Ofsted's inspection would have falsely reported to Parliament that patent and standardised services could be delivered despite limited resources. Yet the story that was not told, was that these services were not being delivered in accordance with the expectations outlined in certain legal frameworks and procedures. Therefore, the 'moral lines of discrimination' that occurred in the CFA blurred what was known, with what senior managers purposefully overlooked (Goffman, 1959: 242). The dominant discourse of care in the community was contested when practice became heavily focused on appearing competent and meeting performance targets (see White et al. 2008). It was only after the Ofsted results had been published that senior managers addressed the concerns raised in Kenny's email. Shortly after inspectors left, Debbie announced to the team she had been offered voluntarily redundancy and would be leaving with immediate effect. Beth and Kenny's suspensions were revoked but although both were asked to return to the CFA department neither did. Beth went travelling and Kenny accepted voluntary redundancy and left. Conclusion It has been widely acknowledged that the neo-liberalist context within which social work is situated has serious ramifications for organisational culture, practice and services (Ferguson 2004;Jones, 2015). This study has contributed a different angle to the debate by moving from the macro to the micro-level, and using Goffman's (1959) dramaturgy theory to explore how social workers inside a local authority service are affected by and respond to the demands of a performance culture. By analysing the data through a dramaturgical lens a more intimate insight of intra-agency performativity has emerged and in turn, revealed how front and back stages were used by management to present idealized lines and exert expressive control. 
These messages have important implications for social work organisations because they highlight how certain external factors influence intra-agency practice and subsequently contribute to the belief that deviant behaviours need to be resorted to if social workers are to survive in the workplace. This important distinction demonstrates that encouraging workers to toe a particular party line may actually have little benefit in improving productivity or quality of service but it will have a detrimental effect on the service received by children and their families. This particular insight must be brought back to centre stage especially when considering Ofsted publications. Schooling (2017) recently argued that social workers, and organisations, need to re-focus on the needs of the child and not the needs of Ofsted. But as the findings in this study demonstrate, social workers were not resentful of the paperwork, they were concerned about what would happen to them if they were not able to complete the paperwork within timescale. If social work is to re-focus on the needs of the child then serious consideration needs to be given to the impact a performance culture has on practice. This raises a further implication for social work, especially with regards to language. Practice is mediated by language and interaction which in turn, produces inferences about what to do, to what extent and what should happen next (Hall et al. 2010). The use of a colloquial term such as "just nod and smile" was a powerful signifier as it demonstrated how certain inconspicuous sayings can socialize workers into adopting particular stances within a team: do as you are told or face the consequences. Part of the problem, in this instance, was that social workers felt they inhabited subordinate positions within the organizational hierarchy. Rather than provide a safe space for practitioners to reflect on dilemmas and concerns, managers implemented aggressive performance strategies. These not only altered team relationships but prevented social workers and managers from gaining insights into the ways in which practice was being carried out. With social workers trying to impress their seniors and their seniors seeking to impress Ofsted inspectors, few paused to consider how the term "just nod and smile" had inadvertently affected the lives of children and their families.
v3-fos-license
2018-05-21T22:38:44.947Z
2018-05-01T00:00:00.000
21720821
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1996-1944/11/5/825/pdf", "pdf_hash": "e882590a506ca3e438c02519f9fcfe770074687b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41230", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "e882590a506ca3e438c02519f9fcfe770074687b", "year": 2018 }
pes2o/s2orc
Combined Effects of Set Retarders and Polymer Powder on the Properties of Calcium Sulfoaluminate Blended Cement Systems This study investigates the effects of set retarders on the properties of polymer-modified calcium sulfoaluminate (CSA) and Portland cement blend systems at early and long-term ages. The fast setting of the cement blend systems is typically adjusted by using retarders to ensure adequate workability. However, how the addition of retarders influences the age-dependent characteristics of the cement blend systems has rarely been investigated. This study particularly examines the effects of retarders on the microstructure and strength development of polymer-modified CSA and Portland cement blend pastes and mortars from 2 h to 90 days. The macro- and microstructural properties are characterized by compression testing, powder X-ray diffraction, mercury intrusion porosimetry, and scanning electron microscopy with energy dispersive spectroscopy. The test results reveal that the use of retarders delayed the strength development of the cement blend systems at the very early age by hindering the production of ettringite, which was cumulative with the delaying effect of the polymer, but it increased the ultimate strength by creating denser and finer pore structures with the evolution of hydration products. Introduction Calcium sulfoaluminate (CSA, Ye'elimite) cement is a promising binder alternative to Portland cement. It can be manufactured from calcium sulfate, limestone, and bauxite at a kiln temperature of about 1250 °C, which is about 200 °C lower than that of Portland cement [1,2]. Thus, the manufacturing of CSA cement gives off less CO2 than Portland cement; the production of 1 ton of Portland cement generates about 800 kg of CO2 [3][4][5]. With respect to composition, CSA cement is usually composed of many mineral phases such as anhydrite, belite, calcium aluminate ferrite, and gehlenite [6]. Besides such mineral phases, the manufacturing process of CSA cement clinker can incorporate many industrial by-products such as blast furnace slag and fly ash [4,7]. Besides the aforesaid environmental advantages, one important feature of CSA cement is the rapid setting and hydration at the early age between 2 and 12 h, occurring with low shrinkage and limited self-stressing [8]. Thus, CSA cement concrete may be incorporated in many structures exposed to severe weather conditions requiring the rapid setting and strength development of concrete, such as tunnel linings, bridge decks, and airport runways. Taking advantage of the rapid hardening quality of CSA cement systems, our research group recently developed an effective method for upgrading existing ballasted railway tracks into concrete slab tracks [9,10]. This method involves the on-site injection of fresh CSA and Portland cement blend mortar into voids between existing ballast aggregates. At the early stage of developing the method, two critical problems were identified [9,11]. One was that the durability properties such as freeze-thaw resistance were inferior due to defects in the interfacial transition zones (ITZs) between reused ballast aggregates and the mortar. The other was associated with workability such that the quick hardening permitted too short a construction time after mixing [11], which would cause the formation of many cold joints between discrete mortar placements. The durability and workability issues were treated by adding an acrylic redispersible polymer and set retarders, respectively. 
The addition of the polymer effectively refined the microstructure of the ITZs so as to improve the durability properties; this is the main topic of the companion paper [11]. Moreover, it was deemed that the formation of polymer films during the hydration of cement particularly enhanced waterproofness, resistance to chloride ion penetration, freezing-thawing resistance, and chemical resistance [11][12][13][14]. The addition of the set retarders delayed the setting time, as was similarly observed with the use of the polymer in the companion paper, in which rheology tests showed a slower growth of viscosity over time at higher polymer ratios [11]. However, the effects of the retarders on the microstructure and strength development of CSA and Portland cement blend mortars in the process of cement hydration were not clearly understood, especially in the presence of the polymer. In this regard, the study of the microstructure of cement-based materials is important, because it has a direct influence on their durability and service properties in general [15][16][17][18]. In general, set retarders are categorized into two types: one has only a set-retarding function, while the other has not only a set-retarding function but also a water-reducing function, which improves workability by reducing the water demand or plasticizing the mixture [19,20]. With the latter type of retarder, a denser, more ordered, and finer cementitious matrix can be achieved, resulting in an increased ultimate strength compared with an un-retarded mixture [20][21][22]. Some preceding studies examined the effects of set retarders on the macroscopic strength development of Portland cement concrete [22][23][24][25][26]. However, microstructural characterizations aimed at clarifying the mechanisms of retarders were scarcely attempted, especially for CSA and Portland cement blend concrete. Given the aforesaid concerns, the main purpose of this study is to investigate the effects of set retarders on the microstructure and strength development of CSA and Portland cement blend pastes and mortars at early and long-term curing ages (that is, 2 h to 90 days). In particular, the combined effects of retarders and polymer were scrutinized. To achieve the goal, a series of microstructural analysis techniques were employed including elemental analysis (EA), X-ray fluorescence (XRF), powder X-ray diffraction (XRD), scanning electron microscopy (SEM) with backscattered electron imaging (BSE), elemental energy dispersive spectroscopy (EDS), and mercury intrusion porosimetry (MIP). Materials The cement used in this study is a premixed blend of 60 wt.% CSA-based cement containing gypsum and lime and 40 wt.% Type I Portland cement as per ASTM C150 [27] with ground granulated blast furnace slag (GGBFS). The CSA and Portland cement blend with GGBFS, set retarders, redispersible polymer powder, and quartz sand were used to produce various mixtures of cement pastes and mortars (Figure 1). Two types of set retarders were employed: citric acid and zinc acetate. The zinc acetate functions only as a set retarder, and the citric acid functions as both a set retarder and a water reducer. For Portland cement, citric acid delays the early-age hydration by hindering the dissolution of C3S and C3A from the cement particles [28][29][30]. 
For CSA cement, however, the carboxylic acid groups (-COOH) within citric acid made use of the Ca2+ ions from the cement to create the precipitation of Ca-complexed carboxylic acid compounds (that is, calcium citrate chain) around the surfaces of cement grains [28]. The precipitated acid compounds form hydrophobic barrier layers that hinder the very early reaction of ye'elimite [31], which causes a delayed setting and hydration [28]. In addition, the hydrophobic barrier layers improve the dispersibility of anhydrous cement grains, resulting in a finer and denser hydrated cementitious matrix [20][21][22]. In contrast, zinc acetate defers the setting and hydration by producing protective films of insoluble hydroxide around C3S in cement grains for Portland cement [24,32]. Thus, zinc acetate was employed separately to delay the setting and hydration of C3S in Portland cement, whereas the retardation of ye'elimite in CSA cement was dependent on the citric acid. The polymer powder used as a filler was acrylic and made from condensation resin, and it consisted of 100% solid content with a bulk density of 540 kg/m3. The particle size distribution of the quartz sand ranged from 0.22 to 0.70 mm. Detailed characterizations for each raw material were conducted using various analysis techniques including EA, XRF, and XRD, which are presented in Section 3.1. Mix Proportions A total of ten mix proportions were tested as shown in Table 1: (1) five mixtures of mortars; and (2) five mixtures of cement pastes with no sand. The main test variables are the presence of set retarders, the amount of polymer powder, and the curing age of the samples. The portion of each set retarder was the same in all the mixtures excluding the two without retarders (M-0-N and B-0-N); retarder A (citric acid) and retarder B (zinc acetate) were 0.16% and 0.12% of the weight of the cement, respectively. The portion of polymer powder varied from 0 to 10% of the weight of the cement. In the mortar mixtures, the amount of the sand corresponds to 42.0% of the weight of the mixture except the polymer and retarders. As for the curing age of the samples, the tests were conducted on samples cured for 2 h to 90 days. In the mixture label, the first letter (M or B) stands for a mortar or cement paste, the following number signifies the percentage of polymer powder, and the last letter (N or Y) stands for the absence (N) or presence (Y) of set retarders. The water-to-cement ratio was 0.38 in all the mixtures, and no plasticizer was used. 
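To make the proportioning rules above concrete, the following is a minimal illustrative sketch (in Python) of how batch quantities could be derived from the stated ratios for an assumed 1 kg of blended cement. It does not reproduce the authors' Table 1, and the reading of the 42.0 wt.% sand fraction as a fraction of the cement-water-sand mixture (i.e., excluding polymer and retarders) is an assumption made here for illustration.

```python
# Illustrative batch calculation for the mix rules described above (not the authors'
# actual Table 1). Basis: an assumed 1 kg of the blended cement.

def batch_quantities(cement_kg=1.0, polymer_ratio=0.05, with_retarders=True,
                     w_c=0.38, sand_fraction=0.42):
    water = w_c * cement_kg                       # water-to-cement ratio of 0.38
    polymer = polymer_ratio * cement_kg           # polymer: 0-10% of cement weight
    # Assumed interpretation: sand = 42.0% of (cement + water + sand), solved for sand
    sand = sand_fraction / (1.0 - sand_fraction) * (cement_kg + water)
    retarder_a = 0.0016 * cement_kg if with_retarders else 0.0  # citric acid, 0.16% of cement
    retarder_b = 0.0012 * cement_kg if with_retarders else 0.0  # zinc acetate, 0.12% of cement
    return {"cement": cement_kg, "water": water, "sand": sand,
            "polymer": polymer, "retarder_A": retarder_a, "retarder_B": retarder_b}

# Example: a mortar with 5% polymer and both retarders (label style "M-5-Y")
print(batch_quantities(polymer_ratio=0.05, with_retarders=True))
```

Under these assumptions, one kilogram of cement would call for roughly 0.38 kg of water, about 1.0 kg of sand, 50 g of polymer powder, 1.6 g of citric acid, and 1.2 g of zinc acetate.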
Sample Preparation In preparation of the samples, at first polymer powder was dry-mixed with cement blend (or a combination of cement and quartz sand) by a mechanical stirrer. Then, the dry mixture was poured into a mixing bowl containing water in which the set retarders were dissolved in advance. The mixing continued about 10 min at a speed of 100 rpm. Finally, the plastic mixture was placed into various molds for microstructural analyses and compression tests. Then, each mold was compacted on a mechanical vibration table for a few seconds to eliminate entrained air bubbles during the mixing and placing. Excessive vibration was carefully avoided to prevent the separation of polymer powder and/or the bleeding of water. After 24 h of curing, all hardened samples (hardened mortars and cement pastes) were demolded. Then, the curing of the hardened samples continued in an air-conditioned room with a relative humidity of 60 ± 5% and a temperature of 20-25 °C until testing. As for the preparation of the test samples for both the MIP and SEM, the hardened cylindrical sample with a 20 mm diameter and 30 mm height was cut into pieces with a specific dimension using a low-speed diamond saw. To perform the microstructural characterizations at each curing age, further hydration was halted using acetone or isopropanol as solvents. The corresponding volumetric ratio between the sample (powder or bulk solid) and the solvent (acetone or isopropanol) was less than 1/240. For the first three days of solvent exchange, the solvent was replaced every 24 h. Compression Test Compression tests were performed on the mortar mixtures in Table 1. At least three cubic samples with sides 50 mm long were tested for each mixture. The compression tests followed ASTM C109 [33]. Additionally, various microstructural analyses were conducted to characterize the raw materials and hydrated cement pastes (HCPs) in Table 1 in both quantitative and qualitative manners. XRD and XRF The XRD patterns of the raw materials and HCPs were measured to explore constituent phases and to identify crystalline phase transitions and hydration products that might occur over the curing period.
The XRD tests were performed on powdered samples using a high-power X-ray diffractometer (Rigaku, Tokyo, Japan) that utilizes the emission of an incident Cu-Kα radiation beam (λ = 1.5418 Å) with a 2θ scanning range of 5-70° at room temperature. For all the microstructural characterizations, the powder samples were solvent-exchanged by acetone for approximately 14 days to halt further hydration and were dried in a desiccator with a constant vacuum pressure of about 4 kPa (30 mmHg) for 24 h to remove any remaining solvent [34]. The oxide compositions of the raw materials were quantified using a wavelength dispersive XRF spectrometer (Bruker S8 Tiger, Billerica, MA, USA). Regarding XRD and XRF, the powder samples were prepared using a pestle by hand. The maximum particle size is approximately 80 µm because the powdered samples were filtered through an 80 µm sieve before each test.
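As a rough check that is not part of the original analysis, Bragg's law shows which lattice spacings the stated 5-70° 2θ scan with Cu-Kα radiation actually probes:

```python
# d-spacing window of the reported scan, via Bragg's law: lambda = 2 * d * sin(theta).
import math

wavelength = 1.5418                    # angstrom, Cu-K-alpha (as stated above)
for two_theta in (5.0, 70.0):          # scan limits in degrees
    d = wavelength / (2.0 * math.sin(math.radians(two_theta / 2.0)))
    print(f"2-theta = {two_theta:4.1f} deg -> d = {d:.2f} angstrom")
# -> roughly 17.7 angstrom at 5 deg down to 1.34 angstrom at 70 deg, which spans the
#    characteristic reflections of the phases discussed in Section 3 (ettringite to belite).
```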
In the XRF test, the pressed pellet samples were prepared in cylinder dies using a mechanical press without a binder. EA The weight percentages of carbon, hydrogen, sulfur, and oxygen of the polymer powder were estimated using an EA (Flash 2000, Thermo Fisher Scientific, Cambridge, UK). The sample weight was approximately 1 mg. The average test result was obtained from five independent measurements. MIP The pore volumes and pore size distributions of the HCPs were evaluated using a MIP (Auto Pore IV 9500, Micromeritics, Norcross, GA, USA). For each mixture, multiple cubic samples with 5 mm long sides were cut from the cores of hardened cylindrical samples for the MIP tests. The cubic samples were solvent-exchanged by isopropanol for 4 weeks to stop further hydration and subsequently dried out in the same condition as done for the XRD (Rigaku, Tokyo, Japan) samples. Preceding studies [34,35] reported that isopropanol was more effective than acetone in preserving the relatively small pores and microstructures of the solid samples. The pore size distribution was determined by measuring the amount of intruded mercury for each of the pore sizes ranging from 360 to 0.003 µm in diameter; the corresponding pressure of the mercury intrusion was increased from 0 to approximately 60,000 psi. The density, surface tension, and contact angle of mercury were set as 13.534 g/mL, 485 dynes/cm, and 130 degrees, respectively. SEM/EDS The morphologies of the HCPs were examined using an ultra-high resolution field emission SEM (Hitachi S-4800, Tokyo, Japan) with EDS. For the selected HCPs in Table 1, at least three sliced samples with 3 mm thickness were cut and prepared from hardened cylindrical samples. The sliced samples were solvent-exchanged as done for the MIP samples, and were fixed in a cold-mounting with epoxy resin. The top surface of each sliced sample was ground to remove about 1-mm thickness and was polished using 6 µm, 3 µm, and 0.25 µm diamond suspensions in a row. Bulk-sectioned samples were used for all HCPs in this study. Before the SEM tests, all polished samples were coated by an osmium film to minimize charging, and to raise the contrast of obtained images. Characterization of Raw Materials The chemical oxide composition of the cement blend obtained from the XRF tests is given in Table 2. The cement blend contains a higher portion (that is, 11.8%) of aluminum oxide than Portland cement, which typically includes approximately 2.5-5% of aluminum oxide [36]. This accelerates the setting and hydration of the cement blend with the presence of plentiful calcium sulfate. The measured XRD pattern of the cement blend is displayed in Figure 2. The cement blend reflected a great portion of minerals. Among them, anhydrite (ICDD PDF no. 00-006-0226), ye'elimite (ICDD PDF no. 00-016-0335), alite (C 3 S) (ICDD PDF no. 01-086-0402), and belite (C 2 S) (ICDD PDF no. 00-033-0303) were found as the major phases, which is in accordance with large calcium oxide, sulfur oxide, silicon oxide, and aluminum oxide contents in the XRF (Bruker S8 Tiger, Billerica, MA, USA) results (Table 2).
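Referring back to the MIP settings above, the pore diameters reported by the porosimeter follow from the Washburn relation; the hedged sketch below (not the authors' code) evaluates that relation with the quoted mercury parameters and reproduces the stated 360 to 0.003 µm window.

```python
# Washburn relation used by MIP instruments: D = -4 * gamma * cos(theta) / P.
import math

gamma = 0.485                  # N/m, i.e. the quoted 485 dynes/cm surface tension
theta = math.radians(130.0)    # quoted mercury contact angle
PSI_TO_PA = 6894.76

def pore_diameter_um(pressure_psi: float) -> float:
    """Equivalent cylindrical pore diameter (in micrometres) intruded at a given pressure."""
    p = pressure_psi * PSI_TO_PA
    return -4.0 * gamma * math.cos(theta) / p * 1e6

print(round(pore_diameter_um(60_000), 4))   # ~0.003 um at the maximum quoted pressure
print(round(pore_diameter_um(0.5), 0))      # ~360 um near the lowest pressures
```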
In CSA cement-blended products, calcium sulfate is appended in the form of anhydrite or gypsum to adjust initial hydration reactions related to the early-age strength [37]. Preceding studies [29,38] showed that the availability of calcium sulfate (CS) over ye'elimite (C 4 A 3 S) controls the rate and quantity of hydration. However, it is often needed to slow down the setting time of CSA cement blend so that the mortar or concrete can retain a sufficient workability before placing. Thus, the role of set retarders is crucial in many applications of CSA cement blend. The measured XRD patterns of the two organic retarders used in this study are presented in Figure 3, which are (a) citric acid (ICDD PDF no. 00-015-0985) and (b) zinc acetate (ICDD PDF no. 00-033-1464). Table 3 summarizes the oxide composition of the acrylic redispersible polymer powder from the XRF tests, as well as the elemental composition of the polymer powder from the EA analyses. The EA results suggest that about 71.1% of the polymer powder is composed of elemental carbon. Compressive Strength Regarding the initial setting of the mortars in Table 1, the mortar without retarders (Mixture M-0-N) quickly hardened within about 20 min of curing. However, the other mortars with retarders spent more curing time (at least 1 h) to harden, which would effectively improve the workability before placing it in actual construction. Figure 4 and Table 4 compare the compressive strengths of all mortar cases in Table 1, for curing ages of 2 h to 90 days. Of the cases with no polymer, the strength of the mortar without retarders (M-0-N) was approximately 25.8 MPa at 2 h of curing, which is about 41.6% of the strength at 90 days. In contrast, the mortar with retarders (M-0-Y) at 2 h showed a 36.8% lower strength than M-0-N. This confirms a typical delay in the strength development at the early age due to the retarders. However, the strengths of M-0-Y and M-0-N became similar at 1 day of curing, and furthermore, the former exhibited a higher strength than the latter after 7 days. The strength ratio between M-0-Y and M-0-N was the largest at 28 days of curing (approximately 1.10). The superior strength of M-0-Y to M-0-N at the long-term age was likely due to the effects of the retarders that increased the dispersibility of the cement grains, their specific surface areas, and their accessibility of water, which resulted in a finer, more ordered, and denser hydrated cementitious matrix [39][40][41].
Among the cases with retarders, an increase in the polymer amount induced a reduction in the mortar compressive strength at all the curing ages (Figure 4). This implies that the use of a higher polymer ratio caused a longer delay in the hydration of the cement blend in the presence of the retarders. Especially at 2 h of curing, the mortar with 10% polymer ratio (M-10-Y) showed a 50.9% strength reduction compared to the mortar with no polymer (M-0-Y). Moreover, the delaying effect of the polymer added to that of the retarders at the very early age, so that the strength of M-10-Y at 2 h was only approximately 8.0 MPa. Companion samples with no retarders [11] and several previous studies [42,43] also observed the delayed setting and strength development due to the use of redispersible polymer powders. It is supposed that this phenomenon was primarily attributed to entrained air by surfactants contained in the polymer powder, retention of water by micelles in the polymer emulsion, and partial or complete covering of cement hydrates by polymer films [42,43]. As the curing progressed, the strength ratio between M-10-Y and M-0-Y increased from 0.49 at 2 h to 0.89 at 90 days. Even though the addition of the polymer caused a strength reduction in the mortars with retarders, those with a polymer ratio not more than 6% had a strength higher than or similar to Mixture M-0-N since 28 days of curing (Figure 4). This emphasizes the aforesaid effect of the retarders that caused the increased strength of the mortar at the long-term age.
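The relative strengths quoted in this section can be cross-checked with a few lines of arithmetic; the sketch below derives the implied absolute values from the stated percentages alone (it is not a reproduction of Table 4, whose measured values may differ slightly from these rounded estimates).

```python
# Consistency check of the quoted relative strengths (illustrative only).
s_m0n_2h = 25.8                           # MPa, M-0-N at 2 h (stated)
s_m0n_90d = s_m0n_2h / 0.416              # 2 h value is ~41.6% of 90 days  -> ~62 MPa
s_m0y_2h = s_m0n_2h * (1.0 - 0.368)       # M-0-Y is 36.8% lower at 2 h     -> ~16.3 MPa
s_m10y_2h = s_m0y_2h * (1.0 - 0.509)      # M-10-Y is 50.9% lower than M-0-Y -> ~8.0 MPa

print(round(s_m0n_90d, 1), round(s_m0y_2h, 1), round(s_m10y_2h, 1))
print(round(s_m10y_2h / s_m0y_2h, 2))     # ~0.49, the 2 h strength ratio quoted above
```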
The strength increase in the mortars became much slower after 1 day of curing, and there were only small changes of strength after 60 days of curing (Figure 4). For the five mortar mixtures, the strength ratio of 1 and 90 days ranged between approximately 49.6% and 57.7%, which are much higher than those of typical Portland cement concretes. At 90 days of curing, the compressive strengths of the four mortars excluding M-10-Y converged to approximately 62-64 MPa (Figure 4). Additionally, the strength of M-10-Y is expected to be comparable with the others at the long-term age exceeding 90 days; M-10-Y displayed a slow but steady growth of strength from 60 to 90 days (Figure 4). The strength convergence of the mixtures with different polymer ratios at the long-term age was likely attributed to the formation of a more refined load-carrying pore structure by the use of a higher polymer ratio, regardless of the inherent inferior strength of polymer powder [42]. More detailed discussions of the effects of the polymer on the hydration and microstructure of cement blend systems can be found in the companion paper [11]. Porosity With regard to the effect of the retarders, the total porosities of B-0-Y at 1 and 60 days of curing were 2.0% and 3.8% lower than those of B-0-N, respectively (Table 5). This is likely because the retarders increased the dispersion and specific surface area of anhydrous cement grains, which facilitated a finer and more ordered hydrated cementitious matrix after the first day of curing [40,41]. The lower porosity of B-0-Y is in accordance with the higher strength of M-0-Y compared with M-0-N after 1 day of curing (Figure 4). As for the effect of the curing age, the total porosity decreased in both mixture B-0-N and mixture B-0-Y as the curing progressed. In contrast, the average pore diameters of both B-0-N and B-0-Y significantly increased at the curing age of 60 days from 1 day. It is noted that the volume of pores bigger than 50 nm in diameter (that is, macropores) increased as the curing progressed from 1 day to 60 days, whereas the volume of pores smaller than 50 nm in diameter (that is, micropores) decreased (Figure 5). This might be possible because the cement pastes without a polymer suffered considerable drying shrinkage during the air-curing process, compared with those containing the polymer powder, as observed in the authors' preliminary study [9,10], which might cause the coalescences of micropores into macropores [44].
Figure 6 compares the pore size distributions of the HCP samples with the retarders and different polymer amounts (Mixtures B-0-Y and B-10-Y) at the curing ages of 1 and 60 days. In general, the sample with a higher polymer ratio had a smaller porosity at both 1 and 60 days (Figure 6a and Table 5). The sample with 10% polymer ratio (B-10-Y) at 60 days of curing exhibited the least porosity among all the samples in this study, which signifies that the retarders and polymer together had synergistic effects in refining the pore structures of the cement pastes. In the authors' companion study, the sample with only 10% polymer and no retarders showed a total porosity of 21.9% and an average pore diameter of 32.6 nm at 60 days of curing [11]. Contrary to the samples without polymer (B-0-N and B-0-Y), B-10-Y showed a continuous decrease in the average pore diameter from 1 to 60 days of curing (Table 5). A majority of pores of B-10-Y were macropores at 1 day of curing (Figure 6), indicating a slower strength development as discussed in Section 3.2. This is likely because the surfactants of the polymer powder entrained more air bubbles and amplified the average pore size at the early age [42,43]. After 60 days of curing, however, the vast majority of pores in B-10-Y were micropores (Figure 6). Hydration Phase Evolution XRD tests were conducted to investigate the effects of the retarders on the hydration process and the products of the cement pastes at different curing ages, with or without the polymer. In Figure 7, the XRD pattern of mixture B-0-Y at 1 day of curing displays the reflections of both hydrated and unhydrated mineral compounds. These include ettringite (ICDD PDF no. 00-037-1476), alite (ICDD PDF no. 01-086-0402), belite (ICDD PDF no. 00-033-0302), anhydrite (ICDD PDF no. 00-006-0226), monosulfate (ICDD PDF no. 01-083-1289), ferrite (ICDD PDF no. 01-070-2765), ye'elimite (ICDD PDF no. 00-016-0335), and portlandite (ICDD PDF no. 00-044-1481). Although all the different cement pastes developed similar types of hydrated mineral phases, their proportions at a given curing age were significantly influenced by the presence of the retarders (Figures 8-10). Figure 8 presents the effects of the retarders on the XRD patterns of the cement pastes without a polymer (B-0-N vs. B-0-Y) at curing ages of 3 h, 1 day, and 60 days. In Figures 8 and 9, "A" stands for anhydrite, "E" for ettringite, "M" for monosulfate, "Y" for ye'elimite, "C3S" for tricalcium silicate (alite), and "C2S" for dicalcium silicate (belite) (note that the y-axis values of the XRD patterns are square-rooted, and they are scaled such that the highest peak has the same height in all the different diffractograms).
At 3 h of curing, the use of the retarders significantly inhibited the growth of ettringite; B-0-Y showed very small peaks of ettringite with high peaks of both ye'elimite (restrained by citric acid) and anhydrite. This is in accordance with the delayed strength development of M-0-Y at the early age of 3 h. At day 1 of curing, B-0-Y showed a significant growth of ettringite compared with B-0-N, which agrees with the strength test results that M-0-Y attained a similar strength to M-0-N at day 1, and became stronger than M-0-N after 7 days (Figure 4). At 60 days of curing, mixtures B-0-N and B-0-Y presented quite similar XRD patterns, which also accords with the similar strength results. In the HCP samples without a polymer, monosulfate was barely detected at 3 h of curing, but it became apparent at 1 day of curing (Figure 8), which was possibly made from calcium sulfate (such as gypsum). The growth of monosulfate peaks in B-0-Y was more obvious than that in B-0-N. Winnefeld and Lothenbach [1] reported that monosulfate started to form after about 1-2 days of curing in their CSA cement-blended pastes when the formation of ettringite became less active. From day 1 to day 60, there were little changes of monosulfate in the HCP samples (Figure 8). Figure 9a,b present the effects of the polymer on the XRD patterns of the cement pastes in the presence of the retarders. In general, the effects of the polymer were similar to those in the mixtures with no retarders, which are discussed in the companion paper [11]. The XRD pattern of B-10-Y at 3 h barely reflected the ettringite phase with high peaks of ye'elimite and anhydrite (Figure 9a). This was due to the combined effects of the retarders and polymer that additively delayed the hydration of the cement blend system at the very early age.
At 1 day of curing, both B-0-Y and B-10-Y exhibited a rapid growth of ettringite peaks (Figure 9a). Meanwhile, regardless of the polymer ratio, the ye'elimite phase was markedly consumed within the first day of curing. Furthermore, at 60 days of curing, there were no noticeable differences in the XRD patterns of the samples with different polymer ratios (Figure 9b). This appears to reflect that the strengths of the mortar samples with different polymer ratios showed convergence at the long-term stage even in the presence of the retarders (Figure 4). Morphological Transition The SEM tests were conducted to identify the effects of the retarders on the morphological transitions of the polymer-modified cement blend systems at different curing ages. The compositions of hydrated or unhydrated products were measured by the EDS analyses at selected locations. For a more detailed understanding of the hydration of the cement blend systems, the relationship between the Al/Ca and Si/Ca ratios, measured in a randomly selected and representative zone of 10 × 10 μm 2 by the EDS spot analyses, was examined for the C-S-Hs of each sample (Figure 10 and Table 6). As the curing progressed, mixture B-0-N showed a clear trend of more Al uptake in the C-S-Hs with the increasing Si/Ca ratio. At 60 days of curing, B-0-Y had a slightly more Al uptake on average than B-0-N. This is likely to suggest that the samples with the retarders underwent a similar or slightly more progression of hydration at the long-term age. However, mixture B-10-Y exhibited little change of the Ca/Si ratio from 28 to 60 days of curing (Figure 10b), with a lesser Al uptake than mixture B-0-Y at a given curing age. This implies that, regardless of the presence of the retarders, the use of a higher polymer ratio led to a lower Al uptake of C-S-Hs of the cement blend systems; refer to the companion paper [11] for the cases with no retarders. The BSE images of mixture B-0-N at 2 h of curing are given in Figure 11. In Figures 11-16, "A" stands for anhydrite, "C4AF" for tetracalcium aluminoferrite, "C3S" for tricalcium silicate, "D" for dolomite, "C2S" for dicalcium silicate, and "E" for ettringite. A substantial amount of unhydrated phases such as cement clinker, anhydrite, C2S, C3S, and C4AF were identified at 2 h. Additionally, there were significant footprints of hydrated phases in complex forms (indicated as "Al-Si rich" in Figure 11) containing both the ettringite (AFt) and amorphous alumina phases. After 1 day of curing, B-0-N exhibited a fast growth of ettringite (Figure 12). In addition, monosulfate phases (indicated as "AFm-rich" in Figure 12), which were not seen at 2 h of curing, became visible after 1 day of curing. After 60 days of curing (Figure 13), there were far fewer unhydrated phases than at the early ages, as identified by the XRD tests (Figure 8). From the early age of 2 h, mixture B-0-N also developed Al-rich calcium silicate hydrates (C-S-Hs); the average calcium-to-silica (Ca/Si) ratio was 1.68, 1.47, and 1.30 at 2 h, 1 day, and 60 days, respectively (Table 6).
For reference, the Ca/Si molar ratio of C-S-Hs in Portland cement pastes ranges between 1.2 and 2.3 after 1 day to 3.5 years [45,46]; the ratio is primarily governed by the type of binder, water/binder ratio, and curing age. At 60 days of curing, the average Ca/Si ratios of C-S-Hs in B-0-N and B-0-Y were very similar, which were 1.30 and 1.26, respectively (Table 6). These similar Ca/Si ratios seem to reflect similar degrees of hydration, which was evident from the convergent strengths of M-0-N and M-0-Y at the long-term age.
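For readers reproducing this kind of EDS analysis, the atomic Ca/Si and Al/Ca ratios discussed here follow from elemental compositions in a standard way; the sketch below shows a conversion from weight percent using made-up spot values, not the measured data of Table 6.

```python
# Atomic ratios from EDS weight-percent values (placeholder numbers, for illustration only).
ATOMIC_MASS = {"Ca": 40.078, "Si": 28.085, "Al": 26.982}

def atomic_ratio(wt_percent: dict, numerator: str, denominator: str) -> float:
    """Atomic ratio numerator/denominator from the weight percentages of one EDS spot."""
    return (wt_percent[numerator] / ATOMIC_MASS[numerator]) / (
        wt_percent[denominator] / ATOMIC_MASS[denominator])

spot = {"Ca": 30.0, "Si": 16.0, "Al": 4.0}          # hypothetical wt.% for one C-S-H spot
print(round(atomic_ratio(spot, "Ca", "Si"), 2))     # Ca/Si of this hypothetical spot
print(round(atomic_ratio(spot, "Al", "Ca"), 2))     # Al/Ca of this hypothetical spot
```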
Mixture B-10-Y at 28 days of curing also contained both the unhydrated and hydrated phases (Figure 15). B-10-Y at 60 days of curing showed no significant change of morphology from 28 days, showing the same types of unhydrated and hydrated phases (Figure 16). At 60 days, the Al-rich C-S-Hs of B-10-Y had a higher average Ca/Si ratio of 1.59 than those of B-0-N and B-0-Y (that is, 1.30 and 1.26). This supports that the hydration progress of B-10-Y was hindered by the presence of the polymer. This finding also substantiates the lowest strength development of Mixture M-10-Y at all curing ages (Figure 4). Conclusions In this study, five mixtures of cement pastes and five mixtures of mortars were tested to investigate the effects of set retarders on the hydration, microstructure, and strength development of polymer-modified CSA blended cement systems at curing ages from 2 h to 90 days. Overall, the use of retarders proved to be an effective method for improving workability for the fast-setting cement blend systems without sacrificing and rather even enhancing the ultimate strength. The findings and conclusions of this study are summarized as follows: 1. The XRD and SEM results show that the growth of the hydrated phases (for example, ettringite, monosulfate, C-S-Hs) was substantially restrained with the retarders at the very early age (that is, 2-3 h). This likely occurred because the retarders formed hydrophobic barrier layers surrounding anhydrous mineral phases (for example, anhydrite, ye'elimite, C 3 S, C 2 S, C 3 A) in the finely dispersed cement grains. Moreover, the delaying effect of the retarders cumulatively added to the delaying effect of the polymer. 2. The use of the retarders increased the ultimate strength of the cement blend systems at the long-term age. Even with a polymer ratio up to 6%, the mortars with the retarders showed higher compressive strengths than the mortar without both retarders and polymers after 28 days of curing.
This was likely to happen because the retarders created a finer and denser hydrated cementitious matrix, as observed in the MIP results, by increasing the dispersibility of cement grains, their specific surface areas, and the accessibility of water to them. 3. Despite variations in the polymer ratio, the compressive strengths of all the mortars with retarders tended to converge at the age of 90 days. This reflects the formation of a more refined pore structure with a higher polymer ratio that compensated for the inherently weak strength of the polymer powder itself, as well as the formation of a monolithic co-matrix between the cement hydrates and polymer phases. The authors will examine the combined effects of the set retarders and polymer powder on the ITZs in the near future. 4. At the curing age of 60 days, the sample with retarders and 10% polymer exhibited both the smallest porosity and average pore diameter among all the HCP samples. This highlights that the combined use of retarders and polymer had a synergistic effect in refining the pore structures of the cement blend systems. 5. According to the EDS spot analyses, the paste with the retarders at 60 days of curing showed slightly more Al uptake on average than the paste with no retarders. This supports the compression test results, in which the mortar with the retarders showed higher strengths than the mortar without retarders after 28 days of curing.
v3-fos-license
2022-08-07T15:01:50.818Z
2022-01-01T00:00:00.000
251370235
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://thesai.org/Downloads/Volume13No7/Paper_41-Mining_Educational_Data_to_Analyze_the_Students_Performance.pdf", "pdf_hash": "75c9bf43762ceba87414660bbb72adc84ad6e823", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41233", "s2fieldsofstudy": [ "Education", "Computer Science" ], "sha1": "b3ebcf4ef17b479b80fc508b8193812080712139", "year": 2022 }
pes2o/s2orc
Mining Educational Data to Analyze the Student's Performance in TOEFL iBT Reading, Listening and Writing Scores — Student scores in TOEFL iBT reading, listening, and writing may reveal weaknesses and deficiencies in educational institutions. Traditional approaches and evaluations are unable to disclose the significant information hidden inside the student's TOEFL score. As a result, data mining approaches are widely used in a wide range of fields, particularly education, where they are recognized as Educational Data Mining (EDM). Educational data mining is a prototype for handling research issues in student data which can be used to investigate previously undetected relationships in a huge database of students. This study used EDM to define the numerous factors that influence students' achievement and to create observations using advanced algorithms. The present study explored the relationship among university students' previous academic experience, gender, student place and their current course attendance within a sample of 473 (225 male and 248 female). Educational specialists must find out the causes of student dropout in TOEFL scores. The results of the study showed that the model could be suitable for investigating important aspects of student outcomes; the present research used the Statistical Package for the Social Sciences (SPSS V26) for both descriptive and inferential statistics and multiple linear regressions to improve their scores. I. INTRODUCTION Over the last decade, test developers and experts have focused much of their time and attention on developing a theoretical view of language ability in order to better understand the nature of language proficiency, as well as developing and applying more sophisticated statistical tools to analyze language tests and test takers' performance in order to best tap these issues [1]. However, language testing research shows that language aptitude is not the only factor influencing test takers' performance. Almost all screening processes in academic environments, from seeking college admission to applying for an exchange student program, require the applicant to present TOEFL iBT or other standard English language test scores. The TOEFL iBT is the internet-based Test of English as a Foreign Language. Language testing is largely concerned with whether the results effectively reflect test takers' underlying ability in a certain area in a given testing setting [2]. After graduation, English proficiency is necessary for developing career options and attaining aspirational goals in the workplace [3]. The Educational Testing Service (ETS) commissioned a recent survey study and found a strong link between high English proficiency and the income of young professionals (full-time workers in their 20s or 30s) across all major industries. This higher income allows them to put more money into improving their English abilities, which are "a vital instrument for success in today's world". Test takers bring personal factors, such as education level, gender, and place, to the testing scenario, and these can all affect their performance [4]. But these construct-irrelevant elements are regarded as potential causes of test bias, which might cause the acquired results to be unrepresentative of the underlying skill that a language test is attempting to assess. As a result, a thorough assessment of the likely effects of such factors is worthwhile.
Taking these factors into account, along with the popularity of the TOEFL iBT as a proficiency exam worldwide, this study aims to determine the effects of education level, gender, and test place on TOEFL iBT listening, reading, and writing results. II. LITERATURE SURVEY Test fairness is a challenging topic in the literature when it comes to language testing. Debates about test fairness aim to create tests free of discrimination and contribute to testing equity [5,6]. When students with the same language ability perform differently on a test, it may be called discriminatory. When the substance of the test is discriminatory to test takers from certain groups, other criteria such as education level, gender, and test place play a role. The test's requirements may have different impacts on test takers from different groups; test taker factors such as education level, gender and place can all contribute to test bias. These factors can impact a test's validity and lead to measurement mistakes. As a consequence, in the design and development of language exams, decreasing the impact of these factors, which are not part of language competence, is a top objective [7]. The association between TOEFL score and GPA was shown to be positive and statistically significant; however, it was less for engineering students than for students in other professions and for engineering courses than for nonengineering courses. In logistic regressions of CAE pass rate and graduation rate, the TOEFL score was also statistically significant, showing an increased probability of success with a higher TOEFL score. However, model goodness-of-fit values were low, showing that many students defied overall trends in their performance [8]. According to a previous survey, a mixed ANOVA was used to answer the following study questions: Is there a significant difference between pre and post TOEFL test scores for male and female students? Is there an interaction between male and female students' pre and post TOEFL test scores? According to those findings, there was a substantial change between pre and post TOEFL exam scores, but no significant variation between genders. Furthermore, no correlation was found between male and female students' pre and post TOEFL test scores [9]. In agreement with past research, there was a relationship between overseas students' academic performance and their language skills, academic self-concept and other factors that influence academic achievement. That research looked at first-year international students enrolled in undergraduate business programs at a Canadian English-medium institution. The following data was gathered on the students: grades in degree program courses, annual GPA, and EPT scores (including sub scores). Students also filled out an academic self-concept measure. In addition, instructors in two obligatory first-year business courses were interviewed regarding the academic and linguistic requirements in their courses and the profile of successful students to acquire additional information about success in first-year business courses [10]. On the other side, the purpose of another study was to determine whether there was a significant difference in the capacity of male and female students to respond to factual and vocabulary-in-context questions on a TOEFL-like reading comprehension test. The results of reading comprehension tests taken from twenty-one male and twenty-one female students in the English Education Program were used for secondary data analysis.
Through the use of random sampling, samples were chosen. Utilizing an independent sample t-test, data were evaluated [11]. On the other hand, another study examined the self-efficacy of university students in responding to TOEFL questions in relation to gender and participation in TOEFL courses; it used a descriptive design with a total sample of 200 university students from two large institutions who are majoring in both English and non-English fields [12]. III. PROPOSED METHODOLOGY After reviewing data and determining the research aim and objectives, this paper examines the effects of characteristics such as education level, attendance, and student gender on students' scores in TOEFL iBT reading, listening, and writing using data mining approaches. This study's techniques and data preparation procedures are discussed below. A. Dataset The data for this study came from 473 students. Arabic is one of their first languages. 473 students in total took the TOEFL. The study enlisted the participation of 225 male and 248 female students (Table I). B. Data Preparation All preparation activities were applied to the raw data to create the final dataset (the data that was entered into the design tool). The dataset's variables were prepared to generate the models needed in the next phase. The students were taught a variety of English language skills, including a TOEFL preparation session, during the rigorous English language program. The TOEFL scores of the students were used as the research tool. At the end of the course, students take the TOEFL (paper-based test). Students were in class for five hours a day and were given TOEFL-related assignments. Listening, grammar/structure, and reading are the three skills that make up the TOEFL score. The TOEFL score ranges between 310 and 677. This study aims to determine the effects of education level, gender, place, and attendance on TOEFL iBT listening, reading, and writing results. Fig. 1 depicts a framework for predicting student success. First, the data on student performance is fed into this system. This student data set has been preprocessed to eliminate noise and make the data set more consistent. The input data set is then subjected to various SPSS statistical analyses. Next, data analysis is carried out. Finally, different algorithms' categorization results are compared. IV. MODEL AND ALGORITHM Likewise, gender is another factor that is usually studied, but there is a lack of good research to identify whether male and female language learners have significantly different TOEFL results. From a psychological standpoint, there are numerous variables related to gender [13]. In general, females are believed to be more successful in language learning than males. Therefore, many scholars in language acquisition studied how gender disparities can affect students' language learning proficiency. For instance, ten studies found that female students were superior to male students in reading comprehension, while five studies found that male students were superior [14]. The authors of [15] also undertook a quantitative study to see if there are any gender differences in TOEFL scores and found no significant differences. The Educational Testing Service (ETS), on the other hand, came to a different result. According to that survey, female pupils are more advanced than male students [16]. Females, for example, outperformed males in writing and reading, though the difference was minor.
On the other hand, male students performed higher in terms of listening and comprehension, as well as vocabulary proficiency [17]. Additionally, a standardized English language assessment examination, such as the Test of English as a Foreign Language (TOEFL), is required at most English-language colleges and universities. However, because there are few standardized evaluation measures for all candidates, English proficiency ratings are occasionally utilized for purposes other than evaluating the "abilities of non-native English speakers to use and understand English." However, in the absence of standard ranking techniques for all candidates, the TOEFL score may be used as a stand-in for those criteria; the TOEFL score is occasionally employed as a predictor of how well a potential student will perform at a university. Even when the TOEFL is not used as the main measure of academic success, minimum TOEFL score requirements are frequently enforced. Despite the fact that the underlying English-language communication abilities that TOEFL scores represent may be significantly more important to academic performance in specific areas, TOEFL score minimums for admission frequently do not vary among academic majors or fields of study. Requiring the same minimum TOEFL score regardless of a student's selected major may lead to the exclusion of otherwise talented students from academic programs where academic achievement is not contingent on language competence [18]. For example, an increased TOEFL score is less correlated with academic success in engineering students than in other college students (possibly because English communication skills largely determine academic success in those other areas). It may therefore be reasonable to adopt TOEFL score entry requirements that are more lenient for engineering applicants, especially those who can show enough preparation through means other than a TOEFL score. Despite the fact that course enrollment has tripled in the past 10 years, little is known about the impact of test environment and attendance on learning. According to a recent study of college students, course attendance and the student's place have an impact on the examination scores. Therefore, differences in student accomplishment between groups should be viewed with caution. This study adds to the body of knowledge by addressing a recurring problem of earlier research: determining the impact of various classroom test conditions on exam scores. The features of test environments are rarely described in previous research. This study compares test scores from students who took examinations off-campus with test scores from students who were called back to school for probationary exams within a semester [8]. V. EXPERIMENTS AND RESULTS The analysis of this paper was done using the statistical package for social sciences (SPSS V26) for both descriptive and inferential statistics. In this work, ANOVA was used as a statistical analysis method. Because this study examines the significance of group differences, it uses an ANOVA statistical model with a continuous dependent variable (TOEFL scores) and categorical independent factors. Because this study tries to observe interactions involving gender differences, ANOVA is the most appropriate statistical procedure among the numerous varieties of ANOVA [19]. Pre and post TOEFL scores are within-subject factors, while male and female are between-subject variables.
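As an illustrative outline only (the study itself used SPSS), the pre/post-by-gender design described above could be examined with paired and independent t-tests as follows; the file name and column names are assumptions made for the sketch.

```python
# Rough scipy equivalent of the pre/post-by-gender comparison (not the authors' workflow).
import pandas as pd
from scipy import stats

df = pd.read_csv("toefl_scores.csv")   # assumed columns: gender, pre_score, post_score

# Within-subject effect: did scores change from pre- to post-test overall?
t_time, p_time = stats.ttest_rel(df["post_score"], df["pre_score"])

# Interaction proxy: do pre-to-post gains differ between male and female students?
gain = df["post_score"] - df["pre_score"]
t_int, p_int = stats.ttest_ind(gain[df["gender"] == "M"], gain[df["gender"] == "F"])

print(f"pre vs post: t = {t_time:.3f}, p = {p_time:.4f}")
print(f"gain by gender: t = {t_int:.3f}, p = {p_int:.4f}")
```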
To address the first study question, the statistically significant mean difference between pre and post TOEFL scores will be studied. After that, we will look at the statistically significant mean difference between male and female TOEFL scores. The impacts will then be compared between the TOEFL scores of males and females. Table II provides the descriptive statistics. Furthermore, the results of the multiple regression were reported, and it can be noticed that all variables have a significant positive effect on the total score since (P<0.001); as a result, the null hypothesis is rejected, and the alternative hypothesis is accepted in Table IV. In Table IV, the assumptions of this study were examined using multiple regression analysis. On the other side, the impact of demographic variables on the students' overall scores is studied in this section. Finally, the normal distribution test was done utilizing skewness and kurtosis tests to choose between parametric and nonparametric testing (Table VI) [20]. In Table VI, the values of skewness and kurtosis for the score were within the range of ±2, indicating that the total score was normally distributed, according to the normality statistics. First hypothesis: there is a significant difference in total scores regarding the gender of the students. The independent-samples t-test is the appropriate parametric test because gender is a categorical variable with two independent categories. In Table VII, some descriptive statistics of the total score according to each category are given. From Fig. 3 it can be concluded that the average score of females (487.49) was greater than that of males (462.04). In addition, Levene's test for equality of variances was done and found that the variances were equal since (F = .449, p > 0.05). The results of the independent-sample t-test show that there is a significant difference in total scores between males and females since the P-value is less than 0.05 (t = −3.961, p < 0.001) (Table VIII). Moreover, the second hypothesis: there is a significant difference in total scores regarding the attendance of the students. Since the students' attendance is a categorical variable with more than two independent categories, the suitable parametric test is the analysis of variance (ANOVA) test. In Table IX, some descriptive statistics of the total score according to each category are given. In Table X, the results of the ANOVA test show that there is no significant difference in total scores between the number of attendees since the P-value is greater than 0.05 (F = .151, p > 0.05). As well, the third hypothesis: there is a significant difference in total score regarding the place of the test. Since the place of the test is a categorical variable with two independent categories, the suitable parametric test is the independent-samples t-test. Table XI shows some descriptive statistics of the total score according to each category. In Table XII, Levene's test for equality of variances reveals that the variances were equal since (F = .475, p > 0.05). The results of the independent-sample t-test show that there is a significant difference in total scores between Cairo and Sheikh Zayed since the P-value is less than 0.05 (t = −2.848, p < 0.01).
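A minimal sketch of the battery of tests reported in Tables VII-XII (Levene's test, independent-samples t-test, and one-way ANOVA) is given below; the column names and group labels are assumed, and the snippet is not the authors' SPSS output.

```python
# scipy versions of the reported tests, under assumed column names.
import pandas as pd
from scipy import stats

df = pd.read_csv("toefl_scores.csv")   # assumed columns: total, gender, attendance, place

male = df.loc[df["gender"] == "M", "total"]
female = df.loc[df["gender"] == "F", "total"]
print(stats.levene(male, female))                      # equality of variances
print(stats.ttest_ind(male, female, equal_var=True))   # hypothesis 1: gender

groups = [g["total"].values for _, g in df.groupby("attendance")]
print(stats.f_oneway(*groups))                         # hypothesis 2: attendance (ANOVA)

cairo = df.loc[df["place"] == "Cairo", "total"]
zayed = df.loc[df["place"] == "Sheikh Zayed", "total"]
print(stats.ttest_ind(cairo, zayed))                   # hypothesis 3: test place
```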
Subsequently, the fourth hypothesis states that there is a significant difference in total scores with respect to the level of education. Since the level of education is a categorical variable with more than two independent categories, the suitable parametric test is the analysis of variance (ANOVA). Table XIII gives descriptive statistics of the total score for each category, and Fig. 6 shows that students' average scores differed across levels of education. Table XIV shows that the ANOVA test finds a significant difference in total scores across the level of education (F = 8.407, p < 0.001). Finally, the fifth hypothesis states that there is a significant difference in the TOEFL parts (Listening, Grammar, and Reading) with respect to gender. Since gender is a categorical variable with two independent categories, the suitable parametric test is the independent-samples t-test. Table XV gives descriptive statistics of the TOEFL parts for each category. Levene's test for equality of variances was then conducted: for Listening the variances are unequal (F = 7.566, p < 0.01), whereas for Grammar (F = 0.007, p > 0.05) and for Reading (F = 1.870, p > 0.05) the variances are equal. The results of the independent-samples t-tests in Tables XVI and XVII show a significant difference between males and females in listening scores (t = −3.082, p < 0.01), in grammar scores (t = −3.900, p < 0.001), and in reading scores (t = −3.716, p < 0.001). This study examined the TOEFL results of 473 students in relation to how much time they spend studying, their educational level, gender, course attendance, and test location. As expected, the TOEFL scores improved from pre- to post-test, and the change was statistically significant. In this survey, significant differences were found by educational level, gender, and test location, but not by attendance. Furthermore, there was a relationship between male and female students' pre- and post-TOEFL scores. As a result, the study's findings offer students useful information, and TOEFL educators can point out that the more time a student devotes to learning, the higher their TOEFL score is likely to be. The results also aid program designers in class design by giving them a sense of what students who are preparing for the TOEFL can expect. Because many students apply to universities each year, generalizing these TOEFL scores to the general population should be done with caution.
QCD parton showers and NLO EW corrections to Drell-Yan We report on the implementation of an interface between the SANC generator framework for Drell-Yan hard processes, which includes next-to-leading order electroweak (NLO EW) corrections, and the Herwig++ and Pythia8 QCD parton shower Monte Carlos. A special aspect of this implementation is that the initial-state shower evolution in both shower generators has been augmented to handle the case of an incoming photon-in-a-proton, diagrams for which appear at the NLO EW level. The difference between shower algorithms leads to residual differences in the relative corrections of 2-3% in the p_T(mu) distributions at p_T(mu)>~50 GeV (where the NLO EW correction itself is of order 10%). Introduction At high energy hadron colliders, studies of Drell-Yan (DY) like processes are of great importance. They are crucial for the understanding of QCD and electroweak (EW) interactions in hadron-hadron collisions. Drell-Yan processes have large cross sections and clean signatures in the detectors. They are used for monitoring of the collider luminosity and calibration of detectors. DY is a reference process for measurements of EW boson properties at hadron colliders. Combination of accurate experimental measurements of these processes with elaborated theoretical predictions allows the extraction of parton density functions (PDFs) in the kinematical regions which were not yet accessed in DIS experiments. DY processes provide an important background to many other processes studied at hadron colliders including searches for Higgs scalar as well as for W ′ and Z ′ bosons in particular. All this gives a strong motivation to have an advanced high precision theoretical description of DY. The experimental precision of DY measurements at the LHC can reach the 1% level. That means that the accuracy of the theoretical predictions needs to be even higher. For this reason it is obvious that QCD and electroweak radiative corrections should be taken into account. This article presents the results of application of parton shower algorithms to a hard process that was calculated with electroweak radiative corrections in the SANC system [1,2]. The showering procedure was applied to the Drell-Yan processes: pp → (W + ) → l + ν l (γ) + X, pp → (γ, Z) → l + l − (γ) + X, (1.1) where X represents hadrons and l is one of e, µ, τ . The parton shower algorithms implemented in the general-purpose Monte Carlo generators Pythia8 [3] and Herwig++ [4] were used for these processes. It is worth noting that these two programs use essentially different parton shower algorithms: Pythia8 uses an evolution scheme based on transversemomentum ordering [5] and Herwig++ uses the coherent branching algorithm based on angular ordering of emissions in the parton shower [6]. Earlier the combination of the effects due to parton showers (PS) and due to EW radiative corrections for charged current process was considered in [7,8]. In those studies an interface between HORACE [9,10] and fortran event generator Herwig [11] was developed. Here, we present studies in which the SANC generator [12,13] is used for the treatment of complete NLO EW corrections with interfacing it to the Herwig++ v2.4.0 and Pythia8 v.130 generators to apply parton showers. The paper is organized as follows. In the next sections we describe the chain of simulations and other topics associated with the calculation procedure. In section 2 the relevant features of the SANC MC event generators are described. 
Section 3 discusses aspects of the parton showers that are added by Pythia8 or Herwig++. Numerical cross checks and results are given in section 4. In section 5 the obtained results and prospects are discussed.
The Drell-Yan processes in SANC
The SANC system [12,13] provides tools for calculating the differential cross sections of the Drell-Yan processes taking into account the complete (real and virtual) O(α) electroweak radiative corrections. Here we give a brief summary of the main properties of this framework. All calculations are performed within the OMS (on-mass-shell) renormalization scheme [14] in the R_ξ gauge, which allows an explicit control of the gauge invariance by examining the cancellation of the gauge parameters in the analytical expression of the squared matrix element. We subdivide the total EW NLO cross section of the Drell-Yan process at the partonic level, for observables X (X = (x_1, ..., x_n) is a generic observable which is a function of the final-state momenta), into four terms,
  σ = σ^Born + σ^virt(λ) + σ^soft(λ, ω̄) + σ^hard(ω̄),
where σ^Born is the Born-level cross section, σ^virt is the contribution of virtual (loop) corrections, σ^soft corresponds to soft photon emission, and σ^hard is the contribution of hard (real) photon emission. The terms with the auxiliary parameters ω̄ (the photon energy which separates the phase spaces associated with soft and hard photon emission) and λ (the photon mass which regularizes infrared divergences) cancel after summation, and the differential EW NLO cross section for infrared-safe observables does not depend on these parameters [15][16][17]. The tree-level diagrams for the DY process are shown in figure 1 for neutral and charged currents. Examples of the diagrams corresponding to the electroweak NLO component for neutral and charged currents are shown in figures 2 and 3, respectively. For real photon emission we separate the contributions from initial- and final-state radiation and their interference in a gauge-invariant way. In the case of photon emission off the virtual W we introduce a splitting of the W-boson propagators. The so-called on-shell singularities, which appear in the form of logarithms log(ŝ − M_W^2 + iε), can be regularized by the W width [18], replacing log(ŝ − M_W^2 + iε) → log(ŝ − M_W^2 + i M_W Γ_W). Electroweak NLO radiative corrections contain terms proportional to logarithms of the quark masses, log(ŝ/m_{u,d}^2). They come from the initial-state radiation contributions, including hard, soft and virtual photon emission. Such initial-state mass singularities are well known, for instance, in the process of e+e− annihilation. However, in the case of hadron collisions these logs have already been effectively taken into account in the parton density functions (PDFs). In fact, in the procedure of PDF extraction from experimental data, the QED radiative corrections to the quark line were not systematically subtracted. Therefore current PDFs effectively include not only the QCD evolution but also the QED one. Moreover, it is known that the leading-log behaviours of the QED and QCD DGLAP evolution of the quark density functions are similar (proportional to each other). Consequently one gets an evolution of the PDF with an effective coupling constant α_s C_F + α Q_i^2, where α_s is the strong coupling constant, α is the fine structure constant, Q_i is the quark charge, and C_F is the QCD colour factor. We will use here the MS-bar subtraction scheme; the DIS scheme may be used as well.
A solution described in [19] allows one to avoid double counting of the initial quark mass singularities contained in our result for the corrections to the free-quark cross section and the ones contained in the corresponding PDF. The latter should be taken in the same subtraction scheme. The MS-bar subtraction is applied at fixed (leading) order in α to the quark density q(x, M^2), the parton density function in the MS-bar scheme computed using the QED DGLAP evolution. The differential hadronic cross section for the DY processes (1.1) is then given by the convolution
  σ^{pp→ℓℓ'X} = Σ_{q1 q2} ∫ dx_1 ∫ dx_2  q̄_1(x_1, M^2) q̄_2(x_2, M^2) σ̂^{q1 q2 → ℓℓ'}(x_1, x_2),
where q̄_1(x_1, M^2) and q̄_2(x_2, M^2) are the parton density functions of the incoming quarks modified by the subtraction of the quark mass singularities, and σ̂^{q1 q2 → ℓℓ'} is the partonic cross section of the corresponding hard process. The sum runs over all possible quark combinations for a given type of process (q_1 q_2 = ud̄, us̄, cd̄, cs̄ for CC and q_1 q_2 = uū, dd̄, ss̄, cc̄, bb̄ for NC). In our calculations we used fixed factorization scales M^2 = M_W^2 for CC and M^2 = M_Z^2 for NC. The effect of applying different EW schemes in the SANC system is discussed in [13]. In the current study we use the G_µ-scheme [20], since it minimizes the EW radiative corrections to the inclusive DY cross section. In this scheme the weak coupling g is related to the Fermi constant and the W-boson mass by
  G_µ/√2 = g^2/(8 M_W^2) (1 + Δr),
where Δr represents all radiative corrections to the muon decay amplitude [21]. Since the vertex term between charged particles and photons is proportional to g sin θ_W, one can introduce an effective electromagnetic coupling constant, which is evaluated from this relation in a tree-level approximation by setting Δr = 0. The total NLO electroweak corrections to the total charged-current DY cross section for 14 TeV pp collisions are estimated to be about −2.7% in the G_µ-scheme and can reach up to 10% for the differential cross section in certain kinematical regions [12,13]. The EW NLO calculations for the DY processes were performed semi-analytically with the aid of the FORM symbolic manipulation system [22] and employ the LoopTools [23] and SancLib [1] libraries for the evaluation of scalar and tensor one-loop integrals. The analytical expressions for the different components of the differential EW NLO cross section for DY processes are realized within standard SANC Fortran modules, which are used in our Monte Carlo event generators of unweighted events.
Photon-induced contributions
At O(α) there is a non-zero probability to find a quasi-real photon inside one of the colliding protons. This brings an additional QED contribution to the EW corrections, the so-called inverse bremsstrahlung. The complete set of O(α) photon-induced contributions for both NC and CC Drell-Yan processes was evaluated in [24]. The charged-current results for this component were given by S. Dittmaier and M. Krämer in the proceedings of the Les Houches workshop [25]. The results for the neutral current were presented in [26], using an approach which implies an effective resummation of the top-quark one- and two-loop corrections in the LO cross section via s_W renormalization, with the two-loop function ρ^(2) given in [27]; the coupling constant α_{G_µ} is correspondingly rescaled by the ratio of the two values of s_W^2 in this approach. Table 1 presents a comparison of SANC results with [26] for the photon-induced contribution, using the corresponding input parameters, without these key differences in the calculation schemes taken into account.
In SANC the corresponding effects are considered as a part of the first-(and higher) order radiative corrections. This comparison shows that although the size of the photon-induced contribution can differ by 10% or more between the two approaches, this corresponds to at most a per mille level difference in the total lepton pair cross section. It is therefore smaller than the aimed for accuracy and we do not need to consider this difference in further detail. The fixed-order diagrams corresponding to the processes are shown in figures 4 and 5. The inverse bremsstrahlung component for the hadronic cross section can be written in a standard way: where f q (x 1 , M 2 ) and f γ (x 2 , M 2 ) are the parton density functions for quark and photon respectively. The quark mass singularity subtraction is performed for this contribution in analogous way to the processes with two quarks in the initial state. The photon induced channels are explicitly included in the SANC event generators. The corresponding correction value defined as δ γq = σ γq /σ 0 , where σ 0 is the tree level process cross section, is below the percent level for the total cross section, but reaches several percents in certain kinematic regions. The corrections for muon-neutrino pair transverse mass and µ + transverse momentum distributions in the charged current process pp → µ + ν for δ γq are shown in figure 6. The corrections for µ + µ − invariant mass and µ + transverse momentum distributions in the neutral current process pp → µ + µ − for δ γq are shown in figure 7. The large corrections for µ + transverse momentum in the charged current process in the region of p T (µ + ) > M W /2 is due to the recoil of a virtual W . The inverse bremsstrahlung contributions can be of resonant and non-resonant type. The latter have the incoming photon coupling to leptons and require a special colour flow interpretation in the code used to apply QCD parton showers to the hard process. As a workaround one can write an event entry for such contributions as a 2 → 3 process without internal structure. The resonant component can be treated in a standard way indicating a Z boson as a virtual propagator. Parton Showers In contrast to the fixed-order calculations described above, parton showers rely on an iterative (Markov-chain) branching procedure to reach arbitrary orders in the perturbative expansion. By keeping the total normalization unchanged, the shower explicitly conserves unitarity at each order, generating equal-magnitude but opposite-sign real and virtual corrections. Each branching step is based on universal splitting functions that capture the leading singularities of the full higher-order matrix elements exactly. Subleading terms can usually only be taken into account approximately, and hence different shower models (and "tunings") can give different answers outside the strict soft/collinear limits. Still, in practice, parton showers are reasonably accurate even for finite emission energies and angles, as long as the characteristic scale of each emission is hierarchically smaller than that of the preceding process (strong ordering). As such, they are complementary to the fixed-order truncations discussed above, which are accurate only in the absence of large hierarchies. Several different shower formulations have been developed. 
In Herwig++ and Pythia8, which we shall be concerned with here, the shower approximation is cast in terms of evolution equations using DGLAP splitting kernels, which nominally capture only the leadinglogarithmic (LL) behaviour of higher perturbative orders. To further improve the accuracy, parton showers incorporate a number of improvements relative to the naive leading-log picture; 1) they use renormalization-group improved couplings by shifting the argument of α s for shower emissions to α s (p ⊥ ), thereby absorbing the β-function-dependent terms in the one-loop splitting functions into the effective tree-level ones, 2) they approximately incorporate the higher-order interference effect known as coherence by imposing angular ordering either at the level of the evolution variable (Herwig++) or in the construction of the shower phase space (Pythia8), 3) they enforce exact momentum conservation to all orders, albeit in different ways between the two (different "recoil strategies"), and 4) both programs include at least some further corrections due to polarization effects. The resulting approximation is thus significantly better than "pure LL", although it cannot be formally claimed to reach the NLL level. Prior to the writing of this paper, the initial-state showers in both Herwig++ and Pythia8 included photons as emitted particles, but not as evolving ones. Interfacing of the photon-induced subprocesses to Herwig++ and Pythia8 therefore required a certain modification of these codes. In the two following subsections, we briefly summarize the main properties of these modifications, for each program respectively. Processes with incoming photons in Pythia8 For the Pythia8 implementation of incoming photons, we re-use the existing backwardsevolution framework for gluons, with the modification that there is no photon self-coupling and replacing the q → g coupling and colour factor byᾱ = e 2 q α em /2π for q → γ. For future reference, we here summarize the steps specific to the photon backwards-evolution. Denoting the Pythia8 evolution variable p 2 ⊥evol (see [28]), the evolution equation for a photon-in-a-proton is cast as a standard Sudakov evolution, with subsequent p ⊥evol "trial" scales generated according to an overestimate of the physical evolution probability, obtained by solving the trial evolution equation, where∆ is the trial shower Sudakov, representing a lower bound on the probability that there are no branchings between the two scales p 2 ⊥now and p 2 ⊥next , R is a random number distributed uniformly between zero and one, and the EM coupling used for the trial emission isᾱ =ᾱ(ŝ), withŝ being the CM energy of the two incoming partons (specifically, it is an overestimate of α(p 2 ⊥ ), which will be imposed by veto, below). The trial z integral,Î z , is defined byÎ where x γ is the momentum fraction carried by the incoming photon and the z limits are defined in [28]. Solving eq. (3.1) for p 2 ⊥next , we get Given a trial p ⊥next value obtained from this equation, we may now generate a corresponding trial z value according to where R ′ is a second random number distributed uniformly between zero and one, and the z limits are the same as those used in eq. (3.2). We have now obtained an importance-sampled pair of trial (p ⊥next , z) values. The quark flavour q is chosen with probability proportional to e 2 q f q (x γ /z, p 2 ⊥next ). 
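To make the trial-and-veto structure of this backward evolution concrete, here is a minimal, illustrative Python sketch. It is not the Pythia8 implementation: the overestimate constants alpha_hat and Iz_hat, the PDF accessor pdf(flav, x, Q2), and the weight function true_over_trial (which would bundle the coupling-argument, PDF and z-dependent correction factors of the physical acceptance probability) are schematic placeholders.

import math, random

def next_trial_scale(pT2_now, alpha_hat, Iz_hat):
    # Solve the trial no-emission probability Delta_hat(pT2_now, pT2_next) = R for pT2_next,
    # assuming a constant overestimate density c = alpha_hat * Iz_hat per unit ln(pT2).
    R = random.random()
    return pT2_now * R ** (1.0 / (alpha_hat * Iz_hat))

def sample_trial_z(z_min, z_max):
    # Sample z from a trial density proportional to 1/sqrt(z), as described in the text.
    s_min, s_max = math.sqrt(z_min), math.sqrt(z_max)
    s = s_min + random.random() * (s_max - s_min)
    return s * s

def pick_flavour(x, Q2, pdf):
    # Choose a (anti)quark flavour with weight e_q^2 * f_q(x, Q2); the flavour list and
    # charges here are schematic.
    eq2 = {"u": 4/9, "ubar": 4/9, "c": 4/9, "d": 1/9, "dbar": 1/9, "s": 1/9}
    weights = {f: eq2[f] * max(pdf(f, x, Q2), 0.0) for f in eq2}
    r = random.random() * sum(weights.values())
    for f, w in weights.items():
        r -= w
        if r <= 0.0:
            return f
    return "u"

def backward_evolve_photon(x_gamma, pT2_hard, pT2_cutoff, pdf,
                           alpha_hat, Iz_hat, z_min, z_max, true_over_trial):
    # Returns (flavour, pT2, z) for an accepted gamma <- q reconstruction, or None if no
    # branching is found above the cutoff (the photon is then taken from the beam remnant).
    pT2 = pT2_hard
    while True:
        pT2 = next_trial_scale(pT2, alpha_hat, Iz_hat)
        if pT2 <= pT2_cutoff:
            return None
        z = sample_trial_z(z_min, z_max)
        flav = pick_flavour(x_gamma / z, pT2, pdf)
        # Veto step: accept with the ratio of physical to trial probability (capped at 1).
        if random.random() < min(true_over_trial(flav, x_gamma, z, pT2, pdf), 1.0):
            return flav, pT2, z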
Since the overestimates used for the importance sampling are not quite identical to the physical distributions we wish to obtain, the second step of the algorithm is to employ the veto algorithm and accept only those trials that lie inside the physical phase space with a probability, where the coupling factor translates the argument of α em in the manner mentioned above, the first PDF factor corrects the factorization scale used in the photon PDF to the new evolution scale, the second PDF factor corrects both the x and Q 2 of the would-be parent quark (or antiquark) to their correct post-branching values, and the last factor (the z factor) corrects for the form of P (z) used for the trial generation (the factor √ z, which may seem to complicate matters unnecessarily, arises since we use a factor 1/ √ z in the trial generation to suppress the high-x bump on valence quark distributions, which could otherwise lead to the trial generator not overestimating the physical splitting probability). If no acceptable branching is found above the global initial-state shower cutoff (cf. the documentation of Pythia8's spacelike showers [28]), the photon is considered as having been extracted directly from the beam remnant. Also note that the maximum expressed by eq. (3.2) could be violated, yielding P > 1 in eq. (3.5), if the photon PDF exhibits any thresholds or other sharp features. Further work would be needed to properly take into account such structures. We note, however, that the code forces the PDF to be bounded from below, so that a vanishing PDF should result in warnings, not crashes. The (yet higher order) possibility of a fermion backwards-evolving to a photon has not yet been included in this framework. The net effect is therefore only to allow the initialstate shower off an incoming photon to reconstruct back to a quark or antiquark, but not the other way around. Processes with incoming photons in Herwig++ The physics implementation of processes with incoming photons is very similar in Her-wig++ to that already described for Pythia8, so we do not go into as much detail. However, the practical implementation is somewhat different. Table 2: Event generation conditions referred to in the text as cut1. When the Herwig++ parton shower is presented with a hard process with an incoming photon, it calls a PreShowerHandler of the IncomingPhotonEvolver class, which is specially written for this purpose. It generates a step of backward evolution from the photon to an incoming quark, in exactly the same way as described above. In particular, the transverse momentum of the q → qγ vertex is required to be smaller than the scale of the hard process, as in Pythia8. However, this backward evolution step is required to be generated with a probability of unity and if no backward step is generated above the infrared cutoff, or if it is generated outside the allowed phase space, then the evolution scale is reset to the hard scale and it loops back to try again. In a very small fraction of events it can happen that, due to mismatch between the hard process (SANC) and parton shower (e.g. in parton mass values and hadron remnant treatment) no backward step is possible. In such cases an EventException is thrown. Having generated a backward step, the IncomingPhotonEvolver modifies the hard process to include it. That is, it replaces a γ → X event by the corresponding q → qX event, which the rest of Herwig++'s parton shower machinery operates on as normal. 
The quark line is correctly labeled as colour-disconnected from the rest of the hard process, so the colour coherence inherent to Herwig++'s shower ensures that it only radiates with opening angles smaller than the q → q scattering angle. The IncomingPhotonEvolver has one parameter that may be of interest to users: minpT, the minimum transverse momentum generated for the q → qγ vertex. All of the plots shown below were generated with the default value of 2.0 GeV.
Cross Checks and Validation
Several cross-check simulations were performed in order to verify the new implementation for the processes with incoming photons. The simulations included two steps: i) hard event generation using the SANC MC generator for the charged-current (CC) and neutral-current (NC) cases, and ii) addition of the parton showers using the Herwig++ or Pythia8 generators. At the generation step the event selection shown in table 2 was applied. Events which satisfied these cuts were written in the Les Houches event format (LHEF [29,30]) for further processing. In the second step the generators Pythia8 and Herwig++ were run with the following non-default configuration. In order to speed up the simulation without significantly influencing the final results, multiple interactions and hadronization were turned off in both programs. The QED component of initial- and final-state radiation was disabled to avoid double counting of the radiative corrections, which are calculated in the SANC generator in the complete EW NLO approximation. In Herwig++, kinematic constraints on photon momenta were set less strictly than the defaults: k_T(γ) > 0.0 and |η_γ| < 10. In order to avoid edge effects in the distributions after the parton showers were applied, the kinematic constraints were strengthened as shown in table 3.
Table 3: Selection criteria applied after the showering procedure, referred to in the text as cut2. For pp → W+ → l+ ν_l (γ) + X: M_inv(µ+, ν_µ) > 20 GeV, |η(µ+, ν_µ)| < 2.5, p_T(µ+, ν_µ) > 20 GeV.
The lower limit on the invariant mass of the leptons was not changed; this does not lead to edge effects, since the transverse-momentum constraints on the W/Z decay products indirectly increase the actual threshold for M_ℓℓ by a factor of 2. The physics setup corresponding to the LHC conditions used in this study is specified in [12,13]. The electroweak scheme for the calculations was chosen to be the G_µ-scheme. As parton distribution functions the MRST2004QED set [31] was used, since it allows the photon-induced contribution to be taken into account. The factorization scale was set to M_Z for the neutral-current case and to M_W for the charged current.
Numerical Results
The results presented in this section were obtained with a statistics of 7 × 10^7 events for each channel (CC and NC), calculated in both the LO and the EW NLO approximation. The data produced with the wide selection criteria (cut1) were then subjected to the selection cut2, with an efficiency of ∼ 50%. Table 4 shows the effect of this selection on the inclusive cross section and on the electroweak NLO correction values. Here δ denotes the relative correction, δ = (σ_NLO/σ_LO − 1) × 100%. Expressions like "HP SANC + PS Herwig++" denote the case when the hard-process (HP) data were generated with the SANC generator and then processed with Herwig++ to apply parton showers (PS). The first and second rows of the table show the generator-level cross sections calculated with the SANC generator before the parton shower algorithms were applied, in the cut1 and cut2 conditions, respectively.
The third and fourth rows show the effects of the cut2 selection when parton showers are applied via Herwig++ and Pythia8. To compare the parton shower algorithms in Pythia8 and Herwig++ it is convenient to introduce the parameter
  R_X = (dσ_Pythia8/dX) / (dσ_Herwig++/dX),
where dσ_Pythia8/dX denotes the differential cross section in an observable X calculated with parton showers applied by Pythia8, and analogously for Herwig++. Figures 8-13 show distributions for various observables obtained after the cut2 selection. Each figure contains three rows of plots: the differential cross sections themselves (top row), the electroweak K-factor, defined as usual as K = σ_NLO/σ_LO (middle row), and the R_X value (bottom row). The distributions show that R_X can differ from unity by up to 10% for p_T (right columns in figures 8, 9, 11, 12) and by several percent for other observables. The difference between the shower algorithms is most noticeable in the p_T distributions. Nevertheless, the R_X distributions in M_inv(µ+µ−) and M_T(µ+ν_µ) are practically flat and differ from unity by only 2-3%. It should be emphasized that the prescription presented in this paper only concerns incorporating the first order of EW corrections into a shower framework. In particular, the description of QCD corrections is still handled only with leading-logarithmic precision and does not include any matching to higher-order QCD matrix elements (see, e.g., [32,33]). Thus, the description of vector boson plus jets can only be expected to be correct for jets with p_T ≪ m_Z (representing the bulk of the cross section). For harder jets, differences between Herwig++ and Pythia8 reflect the uncertainty associated with QCD corrections beyond LL. Further work would be required to include QCD matrix-element corrections in this region. The right columns of the plots in figures 8 and 11 show the muon transverse momentum in the pp → µ+ ν_µ + X and pp → µ+ µ− + X processes. Although the radiative corrections are washed out by the parton showers in the peak region, they reach up to 15% for higher p_T values. The difference between the parton shower algorithms for the EW corrections is mildly noticeable at p_T > 50 GeV, where the K-factors for Pythia8 and Herwig++ diverge by at most 2%. The small bends in the 20 GeV region are edge effects that appear in the showered-event selection and play no role in the physics of the process. A similar behaviour can be seen in the Z/W transverse-momentum distributions in figures 9 and 12: the K-factors deviate by up to 4%. The (µ+µ−) invariant mass and (µ+ν_µ) transverse mass plots in figures 10 and 13 show no significant effects.
Summary
An interface between the SANC matrix-element generator and the Herwig++ and Pythia8 parton shower Monte Carlo codes has been presented. As part of this work, the new possibility of backwards evolution of photons has been added to both the Herwig++ and Pythia8 initial-state showers. Several numerical cross checks have been performed, with reasonable results. The addition of parton showering gives a natural smearing effect on the EW K-factor distributions in the Drell-Yan process. The lepton p_T distributions are mostly affected in the Z and W peak region. The remaining difference between the Pythia8 and Herwig++ showering algorithms was another focus of this study. The comparative plots included show that the difference in the differential cross section can reach up to 10% for certain observables.
We expect that this could be further reduced by extending the prescription presented here to include a matching to fixed-order QCD matrix elements for vector boson plus jets. Since the completion of this work, two implementations of electroweak corrections to W boson production in the POWHEG framework have appeared [34,35], combining both EW and QCD corrections. However these works do not take into account effects of photon-induced processes. We consider that the implementation into a general-purpose electroweak tool like SANC has advantages for precision EW studies, given the importance of EW scheme-dependence in radiative corrections and the need for consistent scheme implementations between different processes. Nevertheless, it is clear that the POWHEG framework is an extremely powerful tool in describing and combining hard QCD and parton shower corrections consistently, and we look forward to making detailed comparisons between the results of these implementations and our own.
Regional Climate Change Adaptation Policy Network in Southeast Florida
Timothy Kirby 1, Dr. Michael Sukop 1, Dr. Jessica Bolson 1, Dr. Adam Henry 2, Nancy Schneider 3, Lauren Ordway 4
1 Florida International University, 2 University of Arizona, 3 Southeast Florida Regional Climate Change Compact, 4 Institute for Sustainable Communities
Motivations and Objectives
• Social Network Analysis (SNA) offers new leverage for answering standard social and behavioral science research questions by giving precise formal definition to aspects of the political, economic, or social structural environment (Wasserman & Faust, 1994)
• SNA focuses on relationships among social entities, and on the patterns and implications of those relationships (Wasserman & Faust, 1994)
• Actors and their actions are viewed as interdependent rather than as independent, autonomous units
• Relational ties (linkages) between actors are channels for the transfer or "flow" of resources (either material or nonmaterial)
• Network models focusing on individuals view the network structural environment as providing opportunities for, or constraints on, individual action
• Network models conceptualize structure (social, economic, political, etc.) as lasting patterns of relations among actors
• In this research, we use SNA to study the professional activities of organizations and individuals who work on education, advocacy, and the scientific and/or policy dimensions of climate change (e.g., sea level rise in South Florida) to understand how these organizations and their representatives collaborate with one another
• Together with the Southeast Florida Regional Climate Change Compact, we seek to identify points of centrality among these organizations and professionals
• The results of this investigation are intended to inform policy and support activities to enhance and extend collaboration between local climate-engaged organizations
• The original dialogue around the survey was to determine the Compact's role in climate change issues in South Florida
Data Collection
• Potential respondents for the web-based survey (Qualtrics) were generated from the Southeast Florida Regional Climate Change Compact's immediate list of collaborators
• The survey was created using Dr. Adam Henry's previous Risk Professionals and Organizations survey as a template
• After sending out our initial seed, each respondent was asked to nominate up to ten people/organizations from their collaborator list for us to send the survey to
• Nominations from each of the respondents were sorted into seeds based on the time the initial e-mail was sent
• Data were imported from Qualtrics and sorted for analysis in Excel by creating separate Node and Edge .csv files
Data Analysis
• The large network graphs were visualized and analyzed with the open-source software Gephi (see the illustrative sketch at the end of this poster text)
• Node and Edge .csv files were imported into Gephi and the data sorted according to the software specifications
• Node sizes were generated proportional to their degree (number of connections)
• Graph spatialization was created using the Fruchterman-Reingold (with a decided area of 4000) and Force Atlas 2 (with 10 scaling) algorithms
• The final rendering and centrality measures were created by calculating the Average Weighted Degree of the nodes and using that value to rank the nodes
• Nodes were labeled and graphs finalized according to personal visual specifications
• Graphs for other survey data were generated using the Qualtrics data visualization tool
Discussion
Strategic Challenges in Stakeholder Networks: A Case for Climate Change Adaptation Collaboration
• Zdziarski and Boutillier (2016) argue that a three-way integration of resource dependence theory (RDT), social network analysis, and stakeholder theory offers important insights into options for maneuvering networks and addressing strategic challenges in gaining access to resources controlled by stakeholders
• Resource Dependence Theory: dependence on "critical" and important resources influences the actions of organizations, and decisions and actions can be explained depending on the particular situation (Nienhüser, 2008)
• Stakeholder Theory: the decisions and actions of organizations depend on external and internal social actors (i.e., stakeholders) that have a stake in the actions of the organizations; by either being affected by or being able to affect the actions of organizations, certain stakeholders are able to control resource access (Freeman, 1984)
• Social Network Analysis: while identifying resources is important, successful governance requires identifying the organizational capacity of the policy network to deploy and exploit its resources (Amit & Schoemaker, 1993; Hill & Jones, 1992; Makadok, 2001; Teece, Pisano, & Shuen, 1997)
Conclusions/Further Research Initiatives
• Though the development and subsequent discussions of the survey diverged from this focus, preliminary analysis notes that the Compact plays a central role in climate change issues in South Florida
• Preliminary analysis positions the Compact as having global centrality within the network and correlates with Qualtrics data on engagement with the Compact
• Preliminary network analysis of NOAA, FIU, UM, Broward and Miami-Dade Counties, and CLEO correlates with survey data on the Sources of Information and Target Audience survey questions, bolstering assumptions of local centrality
• This work was funded by NSF Sustainability Research Network (SRN) Cooperative Agreement 1444758
• We are also indebted to the Southeast Florida Regional Climate Change Compact and the Institute for Sustainable Communities for their time and support
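The centrality step described in the Data Analysis bullets can be illustrated outside Gephi. The following is a minimal sketch, not the project's Gephi workflow: the file name "edges.csv" and its columns ("source", "target", "weight") are assumptions, and Gephi's Force Atlas 2 layout has no direct networkx equivalent, so only a Fruchterman-Reingold-style layout is shown.

import pandas as pd
import networkx as nx

edges = pd.read_csv("edges.csv")                 # hypothetical edge-list export
G = nx.Graph()
G.add_weighted_edges_from(edges[["source", "target", "weight"]].itertuples(index=False))

# Weighted degree per node, used here to rank and size the nodes
wdeg = dict(G.degree(weight="weight"))
ranking = sorted(wdeg.items(), key=lambda kv: kv[1], reverse=True)

# Fruchterman-Reingold spatialization via networkx's spring_layout (an approximation of
# the poster's Gephi layout, not a reproduction of it)
pos = nx.spring_layout(G, weight="weight", seed=42)

print(ranking[:10])   # the ten most central organizations by weighted degree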
Forest Dwellers ’ Perception on Climate Change and Their Adaptive Strategies to Withstand Impacts in Mizoram , North-East India We studied the perception of forest-dependent communities on climate change with its associated risk and their adaptation strategies in Mizoram, Northeast India. A total of 360 respondents (household heads) were randomly selected from 24 villages across the three different agro-climatic zones prevalent. The community perceived awareness of climate change phenomena in the region with a positive correlation between age, education and occupation of the respondents. The overall perception of climate change in temperature was medium (0.49), while low for change in precipitation (0.26) and seasonal durability (0.23). The community showed overall low score of perception on risk of climate change (0.10) where risk on livelihood and socio-economic factors was higher than risk to environment or forest. Perception on impact of climate change was high for forest abiotic ecological factors (0.66) and flora and fauna (0.62), while medium on livelihood of forest-dependent communities (0.44). The majority (more than 75%) of the respondents agreed that human beings are involved and responsible for climate change. Adoption of adaptive strategies to cope climate change ranged from 0.07 to 0.91, amongst which zero tillage, use of traditional knowledge, forest fire prevention, soil and water conservation techniques, agroforestry practices and social forestry are popular. However, rain water harvesting and investments for crop insurance were adopted on low scores clearly implied by the educational and socio-economic status of the farmers in the majority. The study brings out the knowledge and perceptions to climate change by forest-dependent communities and their adaptive strategies to cope had been assessed. The How to cite this paper: Sahoo, U.K., Singh, S.L., Sahoo, S.S., Nundanga, L., Nuntluanga, L., Devi, A.S. and Zothansiama, J. (2018) Forest Dwellers’ Perception on Climate Change and Their Adaptive Strategies to Withstand Impacts in Mizoram, North-East India. Journal of Environmental Protection, 9, 1372-1392. https://doi.org/10.4236/jep.2018.913085 Received: November 7, 2018 Accepted: December 10, 2018 Published: December 13, 2018 Copyright © 2018 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). 
http://creativecommons.org/licenses/by/4.0/ Open Access Introduction Climate change is generally recognized as a major issue having negative impacts on the earth's geological, biological and ecological systems.The trend of global warming has been observed at an increase in global average temperature by 0.8˚C since 1900 [1].Forest ecosystems are integral to the global biogeochemical cycles and act both as sources and sinks of greenhouse gases (GHGs), which exert significant influence on the earth's climate.Impacts of climate change on forests change tree species composition, growth and productivity [2], affecting changes in forest area and competition between species [3], and damage causing natural disturbances.It was also recognized that protective functions of forests get affected by climate change as well.Climate change is expected to exacerbate the vulnerability of forest tribes and communities especially with adverse impact on their livelihood [4] [5].There are limited livelihood opportunities of the forest dwellers and marginal farmers [6] [7] [8] and therefore these people mostly draw on various non-timber forest products such as bamboo, cane/rattan, broom grass, anchiri (rhizome of Homalomena aromatica) and ethnomedicine related products for income generation.Forests are particularly sensitive to climate change, because the long life-span of trees depends on environmental stability.Unlike in agriculture, adaptation measures for forestry need to be planned well in advance expecting changes in the forests' growing conditions.Because the forests regenerated today will have to cope with the future climate conditions of at least several decades, often even for more than 100 years [9].The impact of climate change on forest and biodiversity has recently become an issue of increasing importance [10].Though there have been several reports on climate change on agriculture [11], studies pertaining to forest and biodiversity are somewhat limited. Climate change not only directly affects the species diversity, but may also interacts with other human stressors especially in fragile/hilly ecosystems like Mizoram causing disruptions to food webs and other ecological services.It is believed that climate change affecting the forests will continue resulting in a change in many of the services that the forest ecosystem is able to provide.Once a decline in forest ecosystem services sets in, the ability of forest-dependent people to meet their basic needs for food, clean water and other necessities declines, deepening poverty, deteriorating public health and increasing social conflict [12] [13].The forest dwellers, adjacent farmers and even considerable pro- change as they are dependent on its natural resources like forests for their livelihood, socio-economic and ecological well-being [14]. A great number of studies have been done on farm level adaptation to climate change across different countries [15]- [23].In India, farmers' perception of climate change and their adaptation strategies are evident in works done on dry forests of Kalakad-Mundanthurai Tiger Reserve [24], two villages of Uttarakhand [25], in four villages of Maharasthra and Andhra Pradesh [26], in dry land Tamilnadu [27], in Chotanagpur plateau of eastern India [28], in Himalayan landscape [29] [30], West Bengal [31].These studies reveal that farmers' adapta- 3). 
Data Collection Hypothesizing that mean temperature and precipitation would vary across agro-climatic zones across the state, which may have some impact on the outcome of Note: Increasing (+) and decreasing (−) trends significant at 95% level of significance are marked with "*" sign (compiled from source [32] and additional data set from published sources).the study; it was decided to select study villages from different districts in the state and cover all the three agro-climatic zones (Table 1).A total of 24 villages (6 in tropical, 12 in sub tropical and 6 in temperate sub alpine zone) were selected for the study.Secondary data was obtained by reviewing literature from both published and unpublished sources.Reconnaissance survey and primary data collection was done in all 24 selected villages across the three agro-ecological zones during 2017-18.The primary data were collected through structured interviews using questionnaires [34] and interviews were conducted face-to-face in a very friendly environment.Additional information was gathered from key informants through focus group discussions.Purposive sampling method was used for selection of the area under study, and random sampling method was used for selection of the respondents.To ensure adequate representation, the selection of respondents was done randomly taking 15 heads/households in each of the twenty four selected villages.A total of three hundred and sixty respondents were interviewed.The representative of the sample was initially tested using descriptive statistics to see the mean, mean and variance differences, stochastic dominance if any. Pre-tested well structured close ended personal interview schedule was designed.The questionnaire included the socio-economic profile of the respondents like age, gender, education, occupation, awareness about phenomena related to climate change, perception of the impact of climate change on forest ecosystem, perception of the impact of climate change on livelihood and adopted measures related to climate change.The knowledge questions were tested against the responses of "yes" (1) and "no" (0) indicating that the respondent having or not having awareness/knowledge on the specific question.The opinion statements were tested against a three point Likert scale [35] with responses of "agree" (+1), "undecided" (0) and "disagree" (−1).The adaptation practices were assessed against a three point Likert scale with 0, 1 and 2 scores for responses of "don't know", "not adopted" and "adopted", respectively. Statistical Analysis The collected data from questionnaires were processed, classified and tabulated. Both descriptive (frequency, per cent, mean, and standard error) and analytical (correlation) statistics were performed using Microsoft EXCEL 2007 and SPSS (Version 17).Pearson correlation was calculated between various demographic attributes and the respondents' awareness level on knowledge of climate change across all the 24 villages adjacent to the forests. 
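The scoring and correlation analysis just described can be illustrated with a minimal Python sketch. This is not the authors' Excel/SPSS workflow: the input file name, the "know_" column prefix for knowledge items, and the assumption that age, education and occupation are already numerically coded are all made purely for illustration.

import pandas as pd
from scipy import stats

# Hypothetical input: one row per respondent, knowledge items coded yes = 1 / no = 0,
# opinion items coded +1 / 0 / -1, adaptation items coded 0 / 1 / 2.
df = pd.read_csv("survey_responses.csv")

# Awareness score per respondent as the mean of the knowledge items
knowledge_cols = [c for c in df.columns if c.startswith("know_")]
df["awareness_score"] = df[knowledge_cols].mean(axis=1)

# Pearson correlation between (numerically coded) demographic attributes and awareness
for attr in ["age", "education", "occupation"]:
    r, p = stats.pearsonr(df[attr], df["awareness_score"])
    print(f"{attr}: r = {r:.2f}, p = {p:.3f}")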
Demographic Attributes and Awareness Level on Climate Change Demographic attributes of the study population (360 respondents) in 24 villages across three agro-climatic zones in Mizoram are presented in Table 3.In all agro-climatic zones, more than 50% of the respondents occur in the age groups of 30 -40 and 40 -50 years.Maximum respondents had secondary level (41.1%) variability observed over time, like many other places could nevertheless be attributed to human activity directly or indirectly [36].All the respondents were aware of climate change though with different levels (low, medium and high) and significant positive correlation with the age and education.Similar studies also reported that education had significantly impacted the perception on climate change [37] implying that climate change happenings might have different perceptions with variations in educations, all things being equal.Higher and technical education brings about exposure to new areas and access to information on improved technologies; hence more awareness campaigns needs to be organized as more than 60% of the respondents were only up to secondary level. Awareness and educational exposure trainings also need to be focused on lower age groups as the level of awareness is low to medium in respondents aged below 60 years, and moreover, they form the most vulnerable section of stakeholders not ready to negotiate/address climate change mitigation and adaptation in years to follow. In the present study most concerns were raised by forest dwellers on climatic variability are increasing temperature, irregularity in rainfall, dry spell during summer and decreasing winter.Similar observations have been observed by farmers' elsewhere [38] [39] [40].Majority of the respondents have no doubt in believing that human activity is playing significant role in climate change [41] and the level of perception is related to education [42].We, however, encountered a variation in perception across the agro-climatic zones, the lowest level of perception for humid mild tropical zone.Like many other reports, majority of the farmers' opined increased temperature bring changes to forest composition and increased incidents of pests and diseases.Climate also impacted health, crop and livestock and forest dwellers' income.However, they are unwilling to move out of the forest believing that nature as savior [31].The perception of the respondents on decrease of forest area match with the Forest Survey of India records [33].The causes of decrease in forest and woodland cover were perceived to be population pressure and increasing demand for fuel wood and construction materials.This implies that there will be decrease in forest products such as wood and NTFPs and forage.The decrease in forest cover would negatively impact the environment and reduce the ecosystem services such as soil and water conservation ability, carbon stock, pollination services etc. [43].Increasing run-off, as a result of intense but short duration rainfall, is causing poor accumulation of water in soil, thereby resulting in the drying up of water pools [30].Early ripening of crops is observed by these people, which is consistent with early onset of flowering and growing season; and further proliferation of weeds and pests are simultaneously observed in wild plants both by the local people and scientists [29]. 
Forest Dwellers' Perception of Climate Change Overall perception of the forest dwellers vis-à-vis events due to change in tem-perature is medium as the score is 0.49 (Figure 5).This is because 85% of the respondents perceived increase in average day and night temperature including mildness in winter (76.1%) and warming of winds (61.9%).However, the overall perception of the community of the change in precipitation related phenomena and of the change in regularity and duration of seasonal events is low with average perception scores of 0.26 and 0.23 respectively (Figure 5).But when considering some individual statements related to change in precipitation, interestingly almost all the respondents believed that rainfall get unpredictable day by day (85.0%) with changed intensity and pattern (76.1%) but generally arriving late (92.2%) and withdrawing early (71.9%) over the past few decades.Considering the individual statements relating to abnormalities of seasons, it is found that almost all of the respondents apprehended (more than 80%) climate related hazards like unpredictable seasons and believed that droughts, floods and dry spells have increased with proportionate decrease in the duration of winter.The current perception of the villagers of climate change impacts compared to the past 5 years is presented in Figure 6.Majority of the respondents perceived increase in the intensity of sun's heat (84.2%), followed by increased frequency of river drying up (65.7%) and increase in incidence of human diseases (64.3%). Also, a majority of the respondents perceived a decrease in intensity of rainfall (40.6%).Based on the average perception score of 0.10 after assigning scores to different risk perception statements, it is found that the level of perception of the community is low (Table 6) as majority of the people in developing countries have limited knowledge of physical processes leading to climate change [44].The community's perception on livelihood and socio-economic risks were higher than on risk to environment or forest.Perception scores with high risk as a result of climate change were observed in increasing poverty with reduced income (0.92), affecting livelihood (0.89) and creating social inequality between rich and poor (0.72).Respondents also associated risk of climate change on personal level with increase susceptibility of serious disease (0.60) as reported by studies from other developing countries [45].However, respondents believed climate change will not lead to increase in superstitious belief in God (−0.85); no one will migrate from their forest lands (−0.74); and starvation will not occur (−0.40).This clearly defines the indigenous people perceptions of love for their land, forest and their ecosystem as a whole and belief in the nature as their savior, whereby they express unwillingness to leave their land and migrate to earn or in a foreign land.Respondents also perceived the threat potential that climate change will might bring catastrophic impacts on biodiversity (0.52) while there may not be heavy inundation of forest land (−0.38). 
Impact of Climate Change on Forest Resources and Biodiversity The level of perception on impact of climate change on forest ecology in terms of abiotic ecological factors, flora and fauna records average score between 0.66 and 0.62 respectively.And the impact of climate change on livelihood of forest-dependent communities is medium whose average perception score is 0.44 (Table 7).Considering the specific impact statements on forest abiotic ecological factors, high impacts were observed in decrease in forest area cover (0.68), decrease in flow of streams/river (0.78), quick drying of water bodies (0.81) and quick drying of seasonal streams (0.87).Climate change had medium impacted the intensity of flash flood (0.49), decreased forest litter (0.50), decreased soil fertility (0.51) and increased soil erosion (0.61).Climate change highly impacted forest flora and fauna regarding to early ripening of fruits/seeds (0.69), changes in tree phenology (0.76), increased incidence of weed invasion (0.78) and change/decrease in fish species in forest river (0.84), whereas medium impact were observed in decrease in pollinator population (0.31), increase in mortality (0.40), decrease in forest biodiversity (0.55) and increase in insects, pests and diseases (0.62).Majority of the respondents perceived that climate change highly impacted their livelihood in terms of decrease in fish catch (0.90); increase in livelihood dependency on forest (0.78) and decrease in quantity fodder collection (0.73).Climate change had medium impact on livelihood by reduced intensity of grazing (0.48) and decrease in collection of ethno-medicinal plants (0.44).However, the villagers responded a merge perceived decrease in collection of fuel wood (0.03), NWFPs (0.06) and edible forest products (0.11) as it was considered forest is the only source that fulfils people's domestic energy needs and other requirements since these resources were traditionally freely accessible to them.The future availability of various resources would nevertheless be determined by the degree of climate change [10]. Adaptation Response to Climate Change Based on the survey, 31.9% of the respondents opined climate change is caused partly by nature and partly by human, followed by entirely human (30.3%) and mainly human (17.5%) as presented in Figure 7.A quarter of the respondents (24.7%) showed no concern of climate change, while the remaining showed concerned in the order: very much concerned (46.7%) > fairly concerned (25.5%) > concerned (3.1%).Considering the adaptive capacity based on awareness-adoption mean scores of each adaptation options (0.07 -0.91), the adaptive The need of the hour is therefore to adapt measures which will be people-centric and transparent and will be able to respond to short term risks.Besides, adaptation measures are required to address the present state of limitations and uncertainties about climate change impacts on forests and forest-dependent communities, in order that improved management and policy measures for wise/intelligent adaptation to climate changes are made sustainable. U . K. Sahoo et al.DOI: 10.4236/jep.2018.9130851374 Journal of Environmental Protection portion of underprivileged population are particularly at risk due to climate Figure 1 . Figure 1.Study sites and demarcation of different agro-climatic zones of Mizoram. Figure 2 . Figure 2. Monthly rainfall departure trend in Mizoram for 1951-2017 (compiled from source [32] and additional data from published sources). Figure 3 . Figure 3. 
Figure 3. Forest cover change rate in different forest types in Mizoram (compiled from source [33]).
Figure 4. Pearson correlation coefficients of various demographic attributes with the awareness level on statements of climate change in the study population. Values with ** indicate significance at p < 0.05.
Figure 5. Respondents' perception of changes in climatic parameters.
Figure 6. Perceptions of impacts of climate change compared with the last 5 years.

It is equally important to assess the risks and their management by the community after evaluating the perceptions of climate change, as such a study can further provide important guidelines for designing and implementing adaptive responses and policies [19] [20]. Based on the widely varying adaptive capacity responses (0.07 to 0.91), the forest-dependent communities have adopted one or a combination of adaptation options. Individual attitudes to risk and policy and institutional barriers are factors that influence decision making vis-a-vis climate change adaptation, even when scientific awareness of the required degree and scope is made available [46]. The average adaptive capacity of the farmers in all agro-climatic zones to cope with climate change is medium, which might be due to their continued dependence on natural resources and constraints in socio-economic sectors [47] [48]. However, indigenous farming knowledge and technology may help in conserving natural resources and combating climate change [43] [49] [50]. Agroforestry and perennial plantations could be major strategies to combat climate change and to enhance the resilience of forest-dwelling communities to climate impacts. Agroforestry has a dual potential to address climate change issues: for example, greenhouse gas mitigation through carbon sequestration and sustainable adjustment to changing conditions [16]. As most of the respondents were farmers with very low incomes and poor economic conditions, they lack expertise despite their willingness to adapt to the effects of climate change. This could be the reason why farmers avoid adopting some strategies, in spite of being aware of them, given the financial investment and technical know-how involved. This clearly indicates that institutional interventions are necessary for capacity building, such as training, in-field demonstrations, financial support and appropriate government policy. This will bring about sustainable human-environment interaction through suitable adaptation strategies and improve livelihoods. North-east India in general, and Mizoram in particular, has been lacking effective climate change and adaptation-related policies, particularly for the forestry sector. Besides, the state has very low technological and financial capacity for adapting to climate change. Nevertheless, farmers have been coping using their traditional methods [26]. However, these are not sufficient. Forest dwellers' knowledge, perceptions and adaptations may be used to complement specific policies that address their concerns and support long-term planning for climate-sensitive resources. Diversified livelihood options should be explored through indigenous agroforestry farming methods integrated with animals for income generation, food production and social security.
On the basis of the local perceptions reported, adaptive measures should be formulated regarding cropping patterns, phenology, shifts in the distributional ranges of species, and human diseases. Forest resources should be effectively managed with strategies that address land degradation and the loss of biodiversity and ecosystem services, strategies that are both adapted to anticipated climatic conditions and valued by local communities.

Forest is the most important land use in Mizoram, occupying over 86.27% of the geographical area of the state. It provides numerous supporting, provisioning, regulating and cultural services to mankind. Given that climate change is a reality and that existing socio-economic processes such as ongoing shifting cultivation, deforestation, forest fragmentation and other forms of habitat loss, population growth and urbanization cannot be ignored, climate change would lead to significant changes in the delivery of such services. Increasing temperature, changes in rainfall patterns, and more intense and more frequently occurring extreme events will continue to affect future service provision and the livelihoods that depend on forests. Some site-specific impacts of climate change involving non-commercial products are expected to place additional stresses on people with limited adaptive capacity. Forest structure is changing at a faster rate; many of the mixed and broadleaved forests are now being converted into single-storied bamboo forests, destroying wildlife habitat and losing much of their capacity to provide services to mankind. Repeated forest fires have been accelerating the loss of plant and animal diversity from these ecosystems. Adaptation strategies to climate change depend on local resources and contexts. Given the communities' dependence on agriculture, community-based forest enterprises and non-wood forest products (NWFPs), among others, it is expected that the impact of climate change on the forest-dependent communities of Mizoram will create serious livelihood problems. Assessing the vulnerability to and potential impacts of climate change on forests and livelihoods in Mizoram is therefore critical to its development and to effective climate management measures. However, our understanding of the deteriorating climate crisis and its impacts on forest dwellers' livelihoods, perceptions and adaptation strategies is inadequate; a better understanding would provide location-specific insights and generate additional information for relevant policy and decision making. Despite advances in physical and biological research, assessment of climate change from a socio-economic perspective is essential to prepare a roadmap for capacity building of local communities and for effective adaptation and mitigation strategies. Therefore, an attempt was made in this study to identify perceptions and other socio-economic factors influencing the adaptation strategies pursued by forest-dependent communities in response to climate variability. In addition, the study aims to identify the coping and adaptation measures adopted by communities within the different agro-climatic zones of Mizoram.
The State of Mizoram experiences cyclonic storms, cloudbursts, hailstorms and landslides annually owing to its geo-climatic conditions. Mizoram receives abundant rainfall during the monsoon period, but long dry spells in the post-monsoon season and steep hillsides result in minimal underground water retention, leading to dry perennial water sources.

Table 1. Geographic and characteristic features of the agro-climatic zones in Mizoram.
Table 2. Seasonal changes in the climate of Mizoram between 1951 and 2017.
Table 3. Demographic attributes of the study population (n = 360).

However, a major portion of the respondents did not understand and were new to statements regarding climate change awareness such as environmental pollution (30.3%) and variations in seasonal durability (32.8%). In all the agro-climatic zones, knowledge of climate change was fairly medium, with an average score of 4.31 out of 8 (Table 5). The level of awareness varied significantly among the different age groups; however, it did not differ significantly between the agro-climatic zones. The oldest respondents had an awareness level of 6.62 and the youngest group the lowest (2.53). The average awareness level values in the low, medium and high categories were 2.00, 3.96 and 6.44, respectively, out of 8. The relationship between various demographic attributes and the level of awareness among the study population is depicted in Figure 4. Significant positive correlations were observed between age, educational and occupational groups and the level of awareness. Though not significant, the level of awareness was negatively correlated with gender, indicating that females have less awareness than males in the study population. Climate change is nevertheless a global environmental threat. Northeast India in general, and Mizoram in particular, is much less developed than mainland India but has diverse agro-climatic zones and hilly terrain; thus, the region as a whole is prone to different kinds of climatic shocks.

Table 4. Awareness level score (out of 8 statements) on knowledge of climate change in different agro-climatic zones of the study population. ± Standard error of mean; different letters a, b, c, d indicate significant differences at p < 0.05; ACZ-I: Humid Mild Tropical Hill Zone; ACZ-II: Humid Mild Sub-tropical Hill Zone; ACZ-III: Humid Temperate Sub-alpine Hill Zone.
Table 5. Distribution of respondents on the basis of different awareness statements of climate change in the study population (n = 360).
Table 6. Perception of the risk of climate change by respondents in different agro-climatic zones of the study population.
Table 7. Perception of the impact of climate change by respondents in different agro-climatic zones of the study population.

The adaptive capacity of the community in responding to climate change is at a medium level, with an average mean score of 0.69 (Table 8). High levels of adoption were observed for zero-tillage practices (0.91), indigenous traditional knowledge for insect control (0.90), forest fire prevention activities (0.88) and pre-monsoon dry

Table 8. Distribution of respondents according to knowledge-adoption statements.
v3-fos-license
2022-02-05T16:34:26.459Z
2022-01-31T00:00:00.000
246544172
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2079-6374/12/2/87/pdf", "pdf_hash": "573a42a333d9b3ba3f9c76b74cb65bd05c135bc5", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41240", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "31478fd08eb824dcefbb3ca9f80567141b2343ca", "year": 2022 }
pes2o/s2orc
Molybdenum Disulfide-Based Nanoprobes: Preparation and Sensing Application The use of nanoprobes in sensors is a popular way to amplify their analytical performance. Coupled with two-dimensional nanomaterials, nanoprobes have been widely used to construct fluorescence, electrochemical, electrochemiluminescence (ECL), colorimetric, surface enhanced Raman scattering (SERS) and surface plasmon resonance (SPR) sensors for target molecules’ detection due to their extraordinary signal amplification effect. The MoS2 nanosheet is an emerging layered nanomaterial with excellent chemical and physical properties, which has been considered as an ideal supporting substrate to design nanoprobes for the construction of sensors. Herein, the development and application of molybdenum disulfide (MoS2)-based nanoprobes is reviewed. First, the preparation principle of MoS2-based nanoprobes was introduced. Second, the sensing application of MoS2-based nanoprobes was summarized. Finally, the prospect and challenge of MoS2-based nanoprobes in future were discussed. Introduction As a powerful tool, a sensor has been employed to analyze chemical/biological molecules coupled with different detection methods, such as fluorescence, electrochemistry, electrochemiluminescence (ECL), colorimetry, surface enhanced Raman scattering (SERS) and surface plasmon resonance (SPR). To improve the analytical performance, many signal amplification strategies have been introduced into the construction of sensors, including DNA amplification technology, DNA walker, enzyme-assisted signal amplification and nanoprobes [1][2][3][4][5]. With the rapid development of nanomaterials, the nanoprobe has been considered as a promising signal amplification strategy to improve the performance of sensors. Since gold nanoparticles (AuNPs) were introduced into the construction of nanoprobes [6,7], different kinds of nanomaterials have been extensively employed to construct nanoprobes due to their high surface area, excellent electrical and optical properties, high catalytic ability, excellent chemical stability and easy functionalization [8][9][10][11][12], such as noble metal nanoparticles [13,14], metal oxides [15], graphene and its derivative [16,17], transition metal dichalcogenides [18][19][20], and so on. The outstanding properties of nanomaterials allowed nanoprobes to easily load a large number of recognition and signal units, which can efficiently amplify the detection signal. Furthermore, the high biocompatibility of nanoprobes paves a way to analyze target molecules in vivo. MoS 2 is an emerging material star, which is a member of transition metal dichalcogenides. Due to its typical graphene-like layered nanostructure, MoS 2 is also a potential candidate to construct the ideal nanoprobe due to its unique physical, chemical, and electronic properties, such as a large surface area, high conductivity, excellent quenching activity, accepted Raman enhancement effect and easy functionalization [21]. The recognition units Preparation of MoS 2 -Based Nanoprobes Generally, a MoS 2 nanosheet can load chemical/biological recognition units and signal molecules to form a nanoprobe via physical adsorption, chemical bond and noble metalmediated methods, respectively [22]. It should be noted that MoS 2 -based nanoprobes prepared by different methods exhibited different advantages and disadvantages, which is listed in Table 1. 
For a given sensing application, a suitable nanoprobe coupled with the right analytical technique often brings better analytical performance, such as higher sensitivity, better selectivity and longer storage stability.

Table 1. Advantages and disadvantages of the different preparation methods of MoS2-based nanoprobes.
Physical interaction [23-27].
Chemical interaction: stable, but the binding molecule needs to be modified and there are few choices of binding molecules [28,29].
Noble metal nanoparticle-mediated: simple, facile and stable, with a wide variety of binding molecules and enhanced properties, but with a complicated preparation process [30-36].

Physical Interaction

A MoS2 nanosheet possesses a graphene-like layered nanostructure with a large surface area. As a result, it easily and nonspecifically adsorbs chemical or biological molecules via van der Waals forces and electrostatic interactions. Notably, a MoS2 nanosheet also exhibits different affinities towards single-stranded (ss) and double-stranded (ds) DNA. Based on these properties, MoS2-based nanoprobes, including DNA-MoS2, aptamer-MoS2 and peptide-MoS2 probes, have been designed. For example, Zhu et al. first developed a fluorescence sensing platform by adsorbing DNA on the surface of a MoS2 nanosheet as a nanoprobe [23]. A general platform for the construction of sensors was developed by combining the different affinities of the MoS2 nanosheet towards ssDNA and dsDNA with its high fluorescence quenching efficiency. Five years later, Zhu and co-workers explored the possibility of constructing MoS2-based fluorescence nanoprobes by adsorbing hairpin DNA [24]. Besides DNA, rhodamine B isothiocyanate (RhoBS) and antibodies can also be loaded on the surface of the MoS2 nanosheet to form nanoprobes via physical adsorption and hydrophobic interactions, which can be used to determine silver ions and Escherichia coli by fluorescence and SPR methods, respectively [25,26].

Chemical Interaction

Recognition and signal units assembled on the MoS2 surface via chemical interactions are another efficient way to form MoS2-based nanoprobes. A popular method is to bind recognition and signal units to MoS2 via classical thiol-metal coordination (typically Mo-S coordination). A typical example was given by Li et al., who designed a MoS2-based fluorescence nanoprobe for caspase-3 activity detection and imaging of cell apoptosis by efficiently conjugating two peptides with polydopamine-decorated MoS2 nanosheets [28]. Since poly-cytosine (poly-C) DNA was proved to be a high-affinity ligand for 2D nanomaterials [37], Xiao et al. [29] constructed a MoS2-based nanoprobe by assembling poly-C-modulated diblock molecular beacons on the MoS2 surface. Experimental results suggested that the length of the poly-C block could efficiently affect the analytical performance of the nanoprobe by regulating the surface density [29].

Noble Metal Nanoparticles-Mediated

As we know, noble metal nanoparticles have excellent advantages, including high catalytic activity, high electrical conductivity, large surface area and excellent biocompatibility, and they have been widely used in sensing fields [38,39]. MoS2 nanosheets have been proved to be an ideal substrate for hybridization with noble metal nanoparticles [40,41]. As a result, the synergistic effect of noble metal nanoparticle-decorated MoS2 nanocomposites brings faster electron transfer, higher catalytic activity, higher quenching efficiency and larger loading capacity, making them promising candidates for the construction of nanoprobes.
As a result, the designed nanoprobe not only retains the inherent characteristics of the hybrid element, but also brings better performance and enlarges its application fields. For instance, Su and co-worker prepared AuNP-decorated MoS 2 nanocomposites (MoS 2 -AuNPs) to construct electrochemical nanoprobes for biological molecules' detection with accepted results due to the signal amplification [30,32]. The recognition and signal units can efficiently co-immobilize on the MoS 2 surface via noble metal-mediated nanoparticles, such as an Au-S bond. Inspired by these exciting results, other noble metal nanoparticles were also successfully supported on the surface of molybdenum disulfide to construct a high-performance nanoprobe for sensing application [42][43][44]. MoS 2 -Based Nanoprobes for Sensing Applications MoS 2 -based nanoprobes can efficiently amplify the analytical performance due to their large loading amount, excellent electron transfer ability, high fluorescence quenching ability, and high Raman enhancement effect. As we know, different detection methods possess their inherent advantages and disadvantages (Table 2). Therefore, MoS 2 -based nanoprobes coupled with suitable analytical methods is a best way to construct sensors for obtaining high-performance target molecules' detection. Herein, the recent progresses of MoS 2 -based nanoprobes coupled with electrochemical, ECL, colorimetric, SERS, fluorescence, and SPR methods is summarized (Table 3). Electrochemical Sensors MoS 2 -based nanoprobe is a promising candidate to construct electrochemical sensors due to its high conductivity and high loaded capacity. To further improve the electronic properties of MoS 2 -based nanoprobes, the introduction of noble metal nanoparticles into nanoprobes has become a popular method. Therefore, gold nanoparticles (AuNPs), platinum nanoparticles (PtNPs), silver nanoparticles (AgNPs), and Au@AgPt nanocubes have been selected to form MoS 2 -based nanocomposites, which were further used to construct high-performance nanoprobes. For example, Su et al. used AuNPs-decorated MoS 2 nanocomposites to construct nanoprobes [32]. They utilized [Fe(CN) 6 ] 3−/4− and [Ru(NH 3 ) 6 ] 3+ as signal molecules to design a dual-mode electrochemical sensor for microRNA-21 (miRNA-21) detection. As shown in Figure 2a, the MoS 2 -based nanoprobes can efficiently amplify electrochemical responses by differential pulse voltammetry (DPV) and electrochemical impedance spectroscopy (EIS). Notably, the detection limit of this sensor obtained from EIS (0.45 fM) is lower than that obtained from DPV (0.78 fM), which is ascribed to the unique properties of 2D nanoprobes. This exciting finding opened a new way to construct electrochemical sensors. After three years, the same group developed a MoS 2 -based multilayer nanoprobe by using a DNA hybridization reaction ( Figure 2b). Compared with a classical MoS 2 -based single-layer nanoprobe, the designed electrochemical sensor showed an ultrawide dynamic range (10 aM-1 µM) and ultralow detection limit (38 aM) for miRNA-21 detection. The big structure of a MoS 2 -based multilayer nanoprobe and a large amount of negative DNA loaded on a multilayer nanoprobe both greatly hindered the electron transfer between [Fe(CN) 6 ] 3−/4− and the electrode surface, leading to the impedance value of this sensor obviously increasing with the addition of trace miRNA-21 [46]. 
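The detection limits quoted throughout this section come from calibration experiments in the cited papers. As a generic illustration only, the sketch below estimates a limit of detection from a linear calibration curve using the common 3σ/slope convention; all numbers are invented, and the cited studies may define their detection limits differently.

```python
# Generic 3*sigma/slope estimate of a limit of detection (LOD) from a
# linear calibration curve. All values below are illustrative only.
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])        # analyte concentration (arbitrary units)
signal = np.array([0.02, 0.13, 0.24, 0.55, 1.08])  # measured sensor response

slope, intercept = np.polyfit(conc, signal, 1)     # least-squares calibration line

blank_replicates = np.array([0.020, 0.018, 0.023, 0.021, 0.019])
sd_blank = blank_replicates.std(ddof=1)            # standard deviation of the blank signal

lod = 3.0 * sd_blank / slope
print(f"slope = {slope:.3f}, LOD ~ {lod:.3f} (same units as conc)")
```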
To further amplify the detection performance, Bai's group coupled a MoS 2 -based nanoprobe with enzyme-assisted target recycling amplification to sensitively analyze the Sul1 gene. Due to the synergistic effect of two amplification strategies, the developed electrochemical sensor can determine 29.57 fM Sul1 gene with high selectivity [47]. Similarly, Ji et al. designed an electrochemical sensor for Pb 2+ analysis based on a MoS 2 -based nanoprobe and hemin/G-quadruplex DNAzyme [33]. The specificity of a DNAzyme combined with the high conductivity of MoS 2 -AuPt nanocomposites means this sensor has a lower detection limit for Pb 2+ analysis (38 fg mL −1 ). A MoS 2 -based nanoprobe has been also employed to construct electrochemical immunosensors. For example, Li et al. constructed an immunosensor by using CeO 2 -MoS 2 -Pb 2+ -Ab 2 as a signal probe [36]. Ingeniously, Pb 2+ can adsorbs and aggregates on the surface of a CeO 2 -MoS 2 nanocomposite, which can not only anchor antibodies, but also generate and enhance electrical signals. This novel design of a MoS 2 -based nanoprobe achieved the purpose of the sensitive detection of CEA. To further improve the analytical performance, Su et al. [31] constructed an enzyme-assisted signal amplification strategy for carcinoembryonic antigen (CEA) analysis by taking the advantages of MoS 2 -AuNPs nanocomposites and the catalytic activity of enzymes (Figure 2c). In this work, MoS 2 -AuNPs can not only accelerate electron transfer due to its high conductivity, but also can load a large number of enzymes and antibodies to achieve multiple signal amplification. Therefore, the proposed immunosensor detected down to 1.2 fg mL −1 CEA with high selectivity and good stability. Similarly, Gao et al. developed a signal probe by combining gold@palladium nanoparticle-loaded molybdenum disulfide with multi-walled carbon nanotubes (Au@Pd/MoS 2 @MWCNTs) to efficiently analyze the hepatitis B e antigen (HBeAg) [48]. With the addition of HBeAg, a classical sandwich immunosensor was formed (Figure 2d). The introduced signal probe contained Au@Pd nanoparticles, which can efficiently catalyze hydrogen peroxide (H 2 O 2 ) to generate high electrochemical signal. Therefore, the sensor got a low detection limit of 26 fg mL −1 with the help of signal probe amplification. Other MoS 2 -based electrochemical nanoprobes were also used to detect cardiac troponin I, HBsAg, and CEA due to their outstanding signal amplification effect, respectively [49,50,74]. ECL Sensors A few layers of MoS 2 the nanosheet possess a direct bandgap and a large surface. These properties made the MoS 2 -based nanoprobe a potential candidate to construct electrochemiluminescence (ECL) sensors. Usually, a MoS 2 -based nanoprobe is used as a co-reaction promoter to efficiently amplify the detection signal, called a "signal-on" detection mechanism. An example was offered by Li et al., who constructed a ECL sensor for mucin 1 (MUC1) analysis by coupling a target recycling signal amplification strategy and a MoS 2 -based nanoprobe [52]. The prepared MoS 2 nanoflowers can heavily load N-(aminobutyl)-N-(ethylisoluminol) (ABEI)-decorated AgNPs as signal amplifiers, which can catalyze ABEI-H 2 O 2 to improve the detection intensity. As shown in Figure 3a, the added MUC1 triggered the signal amplification process, leading to the designed ECL aptasensor having a wide linear range (1 fg mL −1 to 10 ng mL −1 ) and low detection limit (0.58 fg mL −1 ) for MUC1 determination. 
Another ECL MoS 2 -based nanoprobe was constructed by MoS 2 @Au nanocomposites [53]. With the assistance of exonuclease IIIdriven DNA walker, a sensitive ECL sensor was developed for 8.9 pM sialic acid-binding immunoglobulin (Ig)-like lectin 5 analysis. A MoS 2 -based nanoprobe was also used to construct "signal off" ECL sensors by utilizing the high quenching ability of MoS 2 nanostructures. For example, Yuan and coworker reported a ECL sensor for concanavalin A (Con A) determination according to the signal-off sensing mechanism [54]. The as-prepared MoS 2 nanoflowers highly quenched the ECL signal of the Ru complex, making the ECL response decrease with the increasing ConA concentration, ranging from 1.0 pg mL −1 -100 ng mL −1 (Figure 3b). According to the quenching properties of MoS 2 -based nanoprobes in ECL sensing application, several ECL sensors were constructed for beta-amyloid (Aβ), CA19-9 antigen and human epididymal specific protein 4 detection, respectively [55][56][57]. All experimental data suggested the introduction of MoS 2 -based nanoprobes can efficiently improve the analytical performances, such as linear range, detection limit, analytical time, etc. Colorimetric Sensors Previous works proved that MoS 2 nanostructures have peroxidase mimicking activity with high chemical and thermal stability [74]. For example, Zhao et al. found that sodium dodecyl sulfate-conjugated MoS 2 nanoparticles (SDS-MoS 2 NPs) can efficiently catalyze a 3,3,5,5-tetramethylbenzidine (TMB) and hydrogen peroxide (H 2 O 2 ) reaction strategy, exhibiting peroxidase-like activity for the detection of glucose [75]. To improve the peroxidase-like activity of MoS 2 nanostructures, the formation of MoS 2 -based nanocomposites is a universal method. These nanocomposites offer the opportunity to develop high-performance colorimetric nanoprobes due to their better catalytic activity, such as MoS 2 -carbon nanotubes [76], MoS 2 -g-C 3 N 4 [58], MoS 2 -graphene oxide [59], MoS 2 -Au@Pt [77], etc. According to this concept, Peng et al. used a MoS 2 -graphene oxide (MoS 2 -GO) nanocomposite instead of a biological enzyme to colorimetricly detect H 2 O 2 and glucose [59]. The synergistic effect of MoS 2 and graphene oxide made this designed colorimetric sensor analyze H 2 O 2 and glucose in serum samples by the naked-eye (Figure 4a). Compared with graphene, noble metal nanostructures hybridized with a MoS 2 nanosheet can bring outstanding peroxidase-like activity. A typical example was offered by Su and co-workers, who designed a colorimetric sensor for cysteine analysis based on a MoS 2 -Au@Pt nanoprobe [77]. The enzyme-mimicking activity made this sensor show a wide linear range and low detection limit for cysteine detection. Moreover, this colorimetric sensor can determine cysteine in medical tables. Similarly, Singh et al. utilized the highlyefficient peroxidase-like activity of Fe-doped MoS 2 nanomaterials to colorimetricly detect glutathione in buffer and human serum [34]. The satisfactory results further proved the excellent application of MoS 2 -based nanoprobes in the colorimetric sensing field. Besides peroxidase-like activity, another reason for MoS 2 -based nanocomposites in the colorimetric sensing application is the high catalytic activity. By utilizing this property, Su et al. constructed a colorimetric nanoprobe by assembling an anti-CEA on the surface of a MoS 2 -AuNPs nanocomposite [30]. 
The assembled amount of anti-CEA greatly influenced the catalytic activity of the MoS2-AuNPs nanocomposite, which can be used to recognize and detect CEA by catalyzing the reaction of 4-nitrophenol (4-NP) and sodium borohydride (NaBH4). Based on the solution color and absorption intensity, the developed sensor can analyze 5 pg mL^-1 to 10 ng mL^-1 of CEA with high selectivity (Figure 4b). This potential colorimetric sensing application inspired more researchers to synthesize different kinds of MoS2-based nanocomposites with high catalytic activity, such as AuNP- or PtNP-decorated, Ni-promoted MoS2 nanocomposites [78] and multi-element nanocomposites composed of noble metal nanoparticles, polyaniline microtubes, Fe3O4 and MoS2 nanosheets [79].

SERS Sensors

As a graphene-like 2D layered nanomaterial, a MoS2 nanosheet also exhibits an excellent Raman enhancement effect due to the chemical enhancement mechanism [80]. When decorated with noble metal nanoparticles, the synergistic effect of chemical and electromagnetic enhancement gives MoS2-noble metal nanohybrids an even better Raman enhancement effect. Therefore, MoS2 and its nanocomposites are often employed as SERS-active substrates to construct sensors for target molecule detection [81,82]. Besides serving as SERS-active substrates, MoS2-based nanohybrids have also been used to construct nanoprobes for sensing applications. For example, Jiang et al. [64] developed a MoS2-based immunosensor for carbohydrate antigen 19-9 (CA19-9) detection by using a MoS2 nanosheet as the SERS-active substrate and a MoS2 nanoflower as the SERS tag (Figure 5). As expected, this sandwich design exhibited a desirable enhancement effect on CA19-9 analysis, resulting in a wide linear range (5 × 10^-4 to 1 × 10^2 IU mL^-1) and a low detection limit (3.43 × 10^-4 IU mL^-1). More meaningfully, this designed immunosensor showed acceptable results for CA19-9 detection in clinical patient serum samples, in agreement with the conventional chemiluminescent immunoassay. Similarly, Medetalibeyoglu et al. also reported a sandwich-type immunosensor for CEA detection by using 4-mercaptobenzoic acid-assembled, AuNP-decorated MoS2 nanoflowers (MoS2 NFs@Au NPs/MBA) as the SERS tag [65]. Coupled with a Ti3C2Tx MXene-based SERS-active substrate, this immunosensor detected as little as 0.033 pg mL^-1 of CEA, with high selectivity, stability and repeatability. More interestingly, a MoS2-based SERS nanoprobe is also a powerful tool for label-free SERS imaging. For example, Fei et al. offered an example of a MoS2-based nanoprobe for SERS imaging in living 4T1 cells [66]. Experimental results suggested that a MoS2-based nanoprobe may be a promising alternative because of its intrinsic vibrational bands in the Raman-silent region of cells.

Fluorescence Sensors

The tunable layer thickness of the MoS2 nanosheet leads to an indirect-to-direct bandgap transition, which generates excellent optical properties. In particular, its outstanding quenching ability towards organic dyes suggests that a MoS2 nanosheet can be employed as a nanoquencher to construct fluorescence sensors. Zhu et al. gave an early example of a fluorescence sensor for targeting DNA and other small molecules by using a MoS2 nanosheet as a sensing probe [23].
The different affinity of the MoS 2 nanosheet towards ssDNA and dsDNA makes the labeled 5-carboxyfluorescein (FAM) close to or far from the surface of the MoS 2 nanosheet, resulting in the fluorescence signal recovering with the formation of dsDNA (Figure 6a). This exciting finding inspired more and more researchers to develop fluorescence sensors for target molecules' detection by using MoS 2 -based sensing nanoprobes. A typical design is coupling an aptamer with a MoS 2 -based nanoprobe to analyze nucleic acids, proteins, thrombin, metal ions, kanamycin, ochratoxin A, and so on [67,[83][84][85]. For example, Kong et al. utilized the high-efficient quenching ability of a MoS 2 nanosheet to develop a fluorescence sensor for prostate specific antigen (PSA) analysis [68]. The structure of the aptamer was changed with the recognition of the PSA, leading to the aptamer-PSA product releasing from the MoS 2 nanosheet and the fluorescence recovering. Under optimal conditions, this designed sensor can detect as low as 0.2 ng mL −1 of PSA with high selectivity. To further improve the analytical performance, several signal amplification strategies coupled with MoS 2 -based nanoprobes were introduced into the construction of fluorescent sensors. For example, Xiang et al. reported a fluorescence sensor for streptavidin (SA) detection by coupling exonuclease III (Exo III)-assisted DNA recycling amplification with MoS 2 -based nanoprobes [69]. As shown in Figure 6b, probe 1 was not degraded by Exo III because of the binding of SA and biotin. Subsequently, the protected probe 1 hybridized with probe 2, which can be digested by Exo III. The continually released FAM led to a strong fluorescence signal due to the signal amplification, producing a low detection limit of 0.67 ng mL −1 for SA detection. Similarly, Xiao et al. combined duplex-specific nuclease (DSN)-mediated signal amplification with MoS 2 -based nanoprobes to develop a fluorescence for microRNA (miRNA) detection [24]. In the presence of miRNA, molecular beacons adsorbed onto the MoS 2 nanosheet changed to DNA-RNA heteroduplexes and were released from the MoS 2 nanosheet due to the hybridization reaction. The formed DNA-RNA heteroduplexes were digested by the DSN and the target miRNA was released to trigger the next hybridization reaction. Under optimal conditions, this sensor showed a wide dynamic range (10 fM-10 nM), low detection limit (10 fM) and high selectivity for let-7a analysis. In the same year, Xiao et al. also constructed a poly-cytosine (poly-C)mediated MoS 2 -based nanoprobe coupled with a DSN signal amplification strategy for miRNA detection [29]. The introduction of a unique poly-C tails design led to a lower detection limit (3.4 fM) than classical molecular beacon-loaded MoS 2 -based nanoprobes. Other signal amplification strategies have also been introduced into the construction of fluorescence sensors based on MoS 2 -based nanoprobes, such as catalytic hairpin assembly (CHA), a hybrid chain reaction (HCR), rolling circle amplification (RCA), etc., [86][87][88][89][90]. A MoS 2 -based fluorescence nanoprobe is also a potential tool for the detection of intracellular biomolecules due to its excellent biocompatibility, such as ATP, microRNA, etc., [91][92][93]. For example, Ju and co-worker assembled a chlorine e6 (Ce6) labelled ATP aptamer onto a MoS 2 nanoplate to develop an intracellular nanoprobe for ATP detection and imaging based on the favorable biocompatibility [94]. 
It was noted that this designed MoS2-based nanoprobe not only sensitively and selectively analyzed ATP in living cells, but could also achieve controllable photodynamic therapy. Inspired by this exciting work, Li et al. immobilized two peptides onto a polydopamine (PDA)-functionalized MoS2 nanointerface to construct a fluorescence nanoprobe for caspase-3 activity detection [28]. Caspase-3 is activated during cell apoptosis, leading to the cleavage of a peptide labeled with a fluorescent dye and triggering "turn-on" fluorescence imaging. With this design, the developed fluorescence biosensor showed a lower detection limit of 0.33 ng mL^-1 compared with some previous reports. For trace biomolecule analysis, Zhu et al. developed an ultrasensitive fluorescence sensor for intracellular miRNA-21 detection and imaging based on MoS2 nanoprobes by assembling three Cy3-labelled molecular beacons onto MoS2 nanosheets [95]. As shown in Figure 6c, the added miRNA-21 triggered a CHA reaction to form "Y"-shaped DNA structures carrying multiple Cy3 molecules. This interesting design achieved an ultralow detection limit (75.6 aM) for miRNA-21 detection compared with a general strand displacement-based strategy (8.5 pM). The excellent analytical performance was also proved by the intracellular imaging of miRNA-21 in human breast cancer cells.

SPR Sensors

MoS2 and its nanocomposites have been considered ideal substrates for the construction of SPR sensors due to the unique properties of the MoS2 nanosheet, such as high charge carrier mobility and easy functionalization with noble metal nanoparticles [25,96]. As expected, MoS2-based SPR sensors are widely used for the rapid, label-free detection of biomolecules and for real-time, in-situ monitoring of biological reactions. For example, Chiu et al. assembled carboxyl-functionalized MoS2 sheets (MoS2-COOH) onto a gold surface to construct an SPR immunosensor for monitoring a bioaffinity interaction [95]. Experimental data showed that the SPR angles were amplified by the MoS2-COOH chip, being almost 1.9-fold and 3.1-fold higher than those of MoS2 and traditional SPR chips, respectively, at a bovine serum albumin (BSA) concentration of 14.5 nM. Unfortunately, most of these works focused on the development of MoS2-based SPR substrates. To explore the potential application of a MoS2-based nanoprobe in the SPR sensing field, Wang and co-workers developed an SPR biosensor for microRNA-141 (miRNA-141) analysis based on MoS2-AuNPs nanocomposites [73]. As shown in Figure 7, a classical sandwich structure was formed in the presence of miRNA-141. The localized plasmon of the AuNPs supported on the MoS2 nanosheets readily generated electronic coupling with the Au film. As a result, an ultralow detection limit of 0.5 fM for miRNA-141 detection was obtained due to this signal amplification effect. Moreover, this designed SPR biosensor exhibited high selectivity for the determination of miRNA-200 family members.

Conclusions and Perspective

During the past decade, MoS2 as an emerging material has attracted increasing interest for the construction of MoS2-based nanoprobes due to its inherent advantages, including large-scale preparation, a tunable bandgap, excellent biocompatibility, easy functionalization with inorganic/organic groups, and outstanding optoelectronic properties.
With the introduction of MoS2-based nanoprobes, sensors coupled with different analytical methods have been successfully employed in environmental monitoring, food safety, biochemical analysis, disease diagnosis, and even homeland security. With the assistance of MoS2-based nanoprobes, the developed sensors exhibited high sensitivity, selectivity, and stability for the detection of chemical and biological molecules. Although great advances in sensing applications have been achieved, MoS2-based nanoprobes still face some challenges in practical application. First, the high-quality, large-scale preparation of MoS2 nanosheets and their nanocomposites remains to be solved; it is the basis for constructing high-performance MoS2-based nanoprobes. High-quality MoS2 nanosheets generally yield high-performance nanoprobes, and controllable, large-scale preparation of MoS2 nanosheets can ensure the repeatability of MoS2-based nanoprobes. Second, the recognition or signal amplification unit should be efficiently assembled onto the MoS2 nanosheet and its nanocomposites, because the assembled amount and spatial configuration of these units greatly affect the analytical performance. Third, the preparation mechanism of MoS2-based nanoprobes should be further studied, as it is important for designing highly efficient nanoprobes for the construction of sensors. Finally, the best combination of MoS2-based nanoprobe and detection method is another important parameter for obtaining better analytical performance. We believe that MoS2-based nanoprobes will eventually find practical applications in the future through our joint efforts.
v3-fos-license
2024-04-28T06:17:03.492Z
2024-04-26T00:00:00.000
269408049
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-2534158/latest.pdf", "pdf_hash": "4e9ae026f8636e8087033c2d016ce7c840290650", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41241", "s2fieldsofstudy": [ "Sociology" ], "sha1": "fd1135f9950cac4ec507a418b3291824e87d6c52", "year": 2024 }
pes2o/s2orc
The Purpose of Internet Use and Face-to-Face Communication with Friends and Acquaintances among Older Adults: A JAGES Longitudinal Study

Whether and what type of internet use increases face-to-face communication (FFC) remains unclear. We aimed to investigate the mode of internet use that increases FFC among older adults after three years.

Background

Social isolation, the objective state of having few social relationships or infrequent social contact with others, has become a serious public health issue. Life transitions and events in old age (e.g., retirement; loss of spouse, partner, or friends; migration to or from children; disability; or loss of mobility) are likely to affect older adults and are important risk factors for social isolation [1]. The prevalence of social isolation is high worldwide, at 24% in the USA [2], 10-43% in North America [3], and 20% in the United Kingdom, as reported in a worldwide study of age, gender, and cultural differences in loneliness [4]. In Japan, the prevalence of social isolation increased from 21% before the coronavirus disease 2019 (COVID-19) pandemic to 28% after the pandemic [5]. With increasing longevity and aging populations worldwide, social isolation among older adults is expected to increase further. Social isolation is associated with increased mortality [6], cognitive decline [7,8], cardiovascular disease [9], abuse [10], and depression [11]. Given the negative impact of social isolation on health and well-being, several countries, including the United Kingdom and Japan, have implemented policies for the prevention of and action against isolation and loneliness and have appointed Ministers for Loneliness. Tackling social isolation and loneliness is considered a primary strategy for promoting global health and well-being.

Social interaction with friends is an important component of social isolation [12]. Social interactions can be classified into face-to-face communication (FFC) and non-face-to-face communication (non-FFC), such as the use of letters, telephone, e-mail, and social networking services (SNS). In a cross-sectional study, non-FFC achieved the same level of communication effectiveness as FFC [13]. In another cross-sectional study, FFC had a moderating effect on loneliness and happiness similar to that of non-FFC [14]. In a longitudinal study, individuals who engaged in both FFC and non-FFC had a lower risk of mental health decline than those who engaged in non-FFC alone [15]. In another longitudinal study, individuals who engaged in FFC and/or non-FFC with friends, neighbors, and workmates had a lower risk of new long-term care insurance certification than those who did not interact [16]. An interventional study suggested that FFC with acquaintances is related to better well-being in older adults [17].

With technological advances in the past 10-15 years, there has been growing interest in internet-based interventions for social interaction [18][19][20][21]. According to a survey by the Ministry of Internal Affairs and Communications in Japan in 2021, approximately 60% of older adults in their 70s and 28% of older adults aged 80 and over used the internet [22]. Some interventional studies suggest that the use of the internet for communication may increase the frequency of FFC with friends or acquaintances [23,24]. Some of the possible mechanisms are as follows: internet use may strengthen social support networks by crossing social and spatial barriers [25], increase social contact and reduce loneliness [26], and enrich and complement telephone and face-to-face social participation [27].
There are some inconclusive aspects of the relationship between internet use and FFC. First, the FFC and non-FFC groups were not adequately differentiated. For example, although internet use for communication, including social media, has increased social contact, the definition of social contact includes both FFC and non-FFC, and the two are not distinguished. Second, the purpose of internet use is expected to affect people's behavior, health, and well-being differently. Although internet use for communication has been associated with increased social networks [26, 28, 29], higher psychological well-being [30], and lower levels of social isolation [31], problematic SNS use can also increase social isolation and negatively impact relationships [28,32]. Although internet use for informational purposes has been associated with higher well-being [30,33] or reduced loneliness [29], it may undermine existing social networks and further increase loneliness [29]. Internet use for instrumental purposes has been associated with increased well-being [33] and better quality of life (QOL) among older male adults [34].

Whether and what type of internet use increases FFC remains unclear. Therefore, we conducted a longitudinal study assessing the association between the purpose of internet use in 2016 and the frequency and number of FFCs among older adults aged 65 years and over in 2019. We hypothesized that internet use for communication increases the frequency of FFC.

Sample

We used data from the 2016 and 2019 waves of JAGES. The JAGES is a repeated nationwide population-based gerontological cohort study in Japan that focuses on the social determinants of health and functional disability. The JAGES is a self-administered questionnaire survey of adults aged 65 years or older who are independent in both physical and cognitive function and who are not certified as eligible for benefits under the long-term care insurance system [35,36]. A census was conducted for all residents in municipalities with fewer than 5,000 eligible residents, while random sampling was used in large municipalities with 5,000 or more eligible residents. The 2016 survey consisted of a common set of questions and eight modules, and the participants were randomly assigned to one of the eight modules. One of the eight modules included a section on the purpose of internet use.

Figure 1 shows a flowchart of participant inclusion and exclusion. Among the 22,295 participants in 34 municipalities (response rate 70.2%) who responded to the module that included questions on the purpose of internet use, 12,656 participants could not be followed because they had been certified as eligible for long-term care insurance benefits, had died, did not respond to, or did not consent to the 2019 survey. Among the 9,600 participants who responded to the 2019 survey, 866 were excluded because of gender discrepancies, age discrepancies, reduced activities of daily living (ADL), or missing values of ADL. Finally, 8,734 participants (47.6% male, mean age 73.1 years) were included in the study.
Measurements

The Frequency and Number of FFC with Friends or Acquaintances

In response to the question, 'How often do you meet with your friends or acquaintances?', participants selected one of the following options: "almost every day," "twice or three times a week," "once a week," "once or twice a month," "several times a year," and "not at all." We created binary variables by integrating them into "more than once a week" and "less than once a week" based on a previous study [37]. As a sensitivity analysis, we assessed the number of friends or acquaintances. In response to the question, 'How many friends or acquaintances have you met in the past month? Count the number of times you met the same person as one,' participants selected one of the following: 'one to two,' 'three to five,' 'six to nine' and 'ten or more.' We classified them as "one or more," "three or more," "six or more" or "ten or more," each as a binary variable of "yes" or "no."

The Frequency and Purpose of Internet Use

First, we asked participants how often they had used the internet in the past year. Participants selected one of the following: "almost every day," "two or three times a week," "several times a month or less," or "do not use at all." The respondents who selected "almost every day," "two or three times a week," or "several times a month or less" were further asked about the purpose of their internet use: "communicating with friends and family," "LINE (a messaging application widely used in Japan, Taiwan, Thailand, and Indonesia), Facebook and Twitter," "searching for information other than health or medical care," "searching for information on health and medical care," "searching maps and traffic information," "purchasing products and services," and "bank transactions, stock and securities trading." Participants could select more than one purpose when they engaged in several. Each category was analyzed as a binary variable: "yes" or "no." Participants who answered "no" included both those who had never used the internet and those who had used the internet but not for the relevant purpose.
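The outcome and exposure variables above are all derived by dichotomizing categorical survey items. The snippet below is a minimal sketch of that recoding step, assuming hypothetical column names and response labels; it is not the project's actual data-management code (the analyses themselves were run in Stata).

```python
# Sketch of dichotomizing the survey items described above.
# Column names and labels are hypothetical stand-ins for the JAGES items.
import pandas as pd

df = pd.DataFrame({
    "meet_friends_2019": ["almost every day", "several times a year",
                          "once a week", "once or twice a month"],
    "internet_freq_2016": ["almost every day", "do not use at all",
                           "two or three times a week", "several times a month or less"],
    "purpose_communication": [1, 0, 1, 0],   # 1 = selected, 0 = not selected
})

weekly_or_more = {"almost every day", "twice or three times a week", "once a week"}
df["ffc_weekly_2019"] = df["meet_friends_2019"].isin(weekly_or_more).astype(int)

df["internet_user_2016"] = (df["internet_freq_2016"] != "do not use at all").astype(int)

# Purpose variables stay binary; non-users are coded 0 for every purpose,
# matching the "no" category described in the text.
df["use_for_communication"] = df["purpose_communication"].where(df["internet_user_2016"] == 1, 0)

print(df[["ffc_weekly_2019", "internet_user_2016", "use_for_communication"]])
```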
Control Variables

We adjusted for a series of demographic, physical, psychological, and social factors in 2016. Age and sex were included as demographic factors [38]. The relationship between social isolation and gender is slightly more noticeable among males than females of all ages [21]. There is also a gender gap in internet use [39], and the frequency of internet use decreases with age [40]. We analyzed age as a continuous variable and sex as a binary variable: male or female. Decreased physical function and comorbidities are risk factors for decreased social interaction [38]. Older adults with limited activity and comorbidities due to physical problems may face barriers to accessing the internet [41]. Conversely, older adults with physical problems may use the internet to seek information or communicate with others. We adopted a 5-item self-report measure of instrumental activities of daily living (IADL-5) as an indicator of IADL (score range: 0-5) [42]. We created a binary variable of no decline (5 points) versus decline (0-4 points). The comorbidity question covered 17 conditions, including atherosclerotic diseases and major medical diseases such as cancer, dementia, musculoskeletal diseases, and sensory system diseases. Participants were considered to have comorbidities if they reported any of them, and we created a binary variable for the absence or presence of comorbidities. Depression is associated with decreased social interaction [43] and may also be associated with decreased internet use [44]. We assessed depressive symptoms using the Geriatric Depression Scale (GDS), which consists of 15 questions, with higher scores indicating more depressive symptoms [45]. Participants with a GDS score of 5 or higher were considered to have depression [46]. Self-rated health (SRH) is a subjective measure of health status. SRH has been associated with increased social interaction [47] and internet use [48,49]. We also created a binary variable for SRH. Participants responded to the question, "How is your current health status?"; (1) "excellent" or (2) "good" was considered good, while (3) "fair" or (4) "poor" was considered poor. Socioeconomic status, marital status, and living arrangements contribute to social interaction [38] and are also associated with poor access to the internet [40]. Therefore, we created binary variables for educational attainment (9 years or less vs. 10 years or more), occupational status (currently employed or unemployed), equivalized household income (less than 2 million yen vs. 2 million yen or more), marital status (married or unmarried), and living arrangement (living alone or not). Social interaction at baseline was also related to social interaction at follow-up: the frequency of meeting friends or acquaintances in 2016 may be related to the frequency of interaction in 2019, and social interaction is associated with internet use [50]. We therefore created the binary variables "more than once a week" and "less than once a week." Low social support is associated with higher social isolation [51]. Receiving emotional support is expected to promote internet use among older adults [44]. Internet use may not only promote emotional support but also maintain and strengthen existing relationships with geographically distant friends or acquaintances [52]. We asked participants, 'Do you have someone who listens to your problems and frustrations?', and created a binary variable of "yes" or "no" for this question.
Statistical Analysis

Since the incidence of the outcome was more than 10% in our analysis, the odds ratio from logistic regression would not approximate the risk well [53]. We therefore used modified Poisson regression models to calculate the cumulative incidence ratio (CIR) and 95% confidence interval (CI) for meeting friends or acquaintances more than once a week. The purposes of internet use were simultaneously included in the modified Poisson regression model.

As internet use is interrelated with FFC [26, 50, 54, 55], we hypothesized that the frequency of FFC in 2016 would influence the association between internet use and FFC frequency in 2019. We created product terms for the frequency of meeting friends or acquaintances in 2016 and the purpose of internet use. The product terms were added to the modified Poisson regression analysis, and the CIR and 95% CI were calculated. The purposes of internet use were simultaneously included in the modified Poisson regression model.

Two sensitivity analyses were conducted. First, we assumed that some older adults met only a few friends and acquaintances frequently. We replaced the outcome with a binary variable for the number of friends or acquaintances met in a month. We asked, "How many friends or acquaintances have you met in the past month? Count the number of times you met the same person as one." The number of friends or acquaintances in 2019 was categorized as "1 or more," "3 or more," "6 or more," or "10 or more," each as a binary variable of "yes" or "no." We analyzed the data in two ways, with and without a decrease in the number of friends or acquaintances met in a month. First, we tested the probability of increasing the number of friends or acquaintances from zero to one or more, two or fewer to three or more, five or fewer to six or more, and nine or fewer to ten or more between 2016 and 2019, respectively. Second, we tested the probability of maintaining at least one, three, six, and ten friends or acquaintances between 2016 and 2019. We conducted a modified Poisson regression analysis and calculated the CIR, 95% CI, and P value. Second, we modified the statistical analysis methods to verify their robustness. We conducted a multiple regression analysis with the frequency of meeting friends or acquaintances treated as a continuous variable in the following order: "almost every day," "a few times a week," "once a week," "once or twice a month," "a few times a year," and "not attending at all." We calculated the coefficients, 95% CIs, and P values.

We conducted multiple imputation using the multivariate normal method, assuming that all the data were missing at random. Data were missing for 1.8% of the purpose-of-internet-use variables in 2016, 16.8% for income in 2016, 14.3% for the GDS in 2016, and 14.1% for employment status in 2016. Missing data for the other variables were less than 10%. We created 20 imputed datasets and combined the effect estimates using Rubin's rule [56].

All data were analyzed using STATA 17.0 software (STATA Corp. LLC, College Station, TX, USA). Continuous variables are expressed as mean (standard deviation [SD]), and categorical variables are reported as percentages.
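As a rough illustration of the modified Poisson approach described above (a Poisson model for a binary outcome with a robust sandwich variance, so that exponentiated coefficients can be read as cumulative incidence ratios), the sketch below uses Python's statsmodels rather than the Stata software used in the study; the variable names are hypothetical and the covariate list is abbreviated.

```python
# Minimal sketch of a modified Poisson regression (Poisson GLM + robust
# sandwich standard errors) for a binary outcome. Variable names are
# hypothetical; the study itself used Stata 17.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "ffc_weekly_2019": rng.integers(0, 2, n),
    "use_for_communication": rng.integers(0, 2, n),
    "ffc_weekly_2016": rng.integers(0, 2, n),
    "age": rng.normal(73, 5, n),
    "female": rng.integers(0, 2, n),
})

model = smf.glm(
    "ffc_weekly_2019 ~ use_for_communication + ffc_weekly_2016 + age + female",
    data=df,
    family=sm.families.Poisson(),
)
result = model.fit(cov_type="HC1")  # robust (sandwich) variance estimator

cir = np.exp(result.params)         # cumulative incidence ratios
ci = np.exp(result.conf_int())      # 95% confidence intervals
print(pd.concat([cir.rename("CIR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```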
Results

Compared with internet non-users, older adults who used the internet tended to be younger and female, to be without comorbidity, to have less IADL decline, higher equivalized household incomes, higher educational attainment, better SRH, and fewer depressive symptoms (GDS), and to be married and living with someone. The most common purpose of internet use was communicating with friends and family (71.0%), while only 13.8% of internet users used SNS (Table 1).

Table 2 shows the characteristics of older adults who used the internet for each purpose in Japan in 2016. Internet use for communication and for SNS was more common among females and those with a higher frequency of meeting friends or acquaintances in 2016. Internet use for communication was more common among those with a low income or those living alone. Compared with other purposes, SNS use was more common among those who were single, currently employed, or receiving emotional support. Internet use for health information was more common among those without comorbidities than were other purposes. Internet use for banking and for stock and securities trading was more common, compared with other purposes, among males, those with a lower frequency of meeting friends or acquaintances in 2016, those with higher incomes and higher education levels, and those living with someone.

The modified Poisson regression models showed that internet use for communication with friends or family in 2016 was associated with an increased probability of meeting friends or acquaintances more than once a week in 2019 (CIR: 1.08; 95% CI: 1.01-1.16; P = .029; reference: internet non-users or older adults who did not use the internet for the relevant purpose), while no other purpose, including SNS, in 2016 was associated with an increased probability of meeting friends or acquaintances more than once a week in 2019 (Table 3).

We created interaction terms because the frequency of meeting friends or acquaintances in 2016 may have interacted with the purposes of internet use and the control variables in 2016. A statistically significant association was found only for the product term of internet use for communication and the frequency of interaction with friends or acquaintances in 2016 (Supplementary Table 1).

Table 4 shows the association between the purpose of internet use and the frequency of meeting friends or acquaintances in 2019, stratified by the frequency of meeting friends or acquaintances in 2016 (modified Poisson regression model). Among those who met friends or acquaintances less than once a week in 2016, internet use for communication was associated with a statistically significant increase in meeting friends or acquaintances more than once a week in 2019 (CIR: 1.20; 95% CI: 1.04-1.39; P = .014; reference: older adults who did not use the internet for communication). Among those who met friends or acquaintances more than once a week in 2016, internet use for communication was not associated with a statistically significant increase in meeting friends or acquaintances more than once a week in 2019 (CIR: 1.05; 95% CI: 0.97-1.13; P = .237).
Discussion
Internet use for communication with friends or family in 2016 increased the frequency and number of FFCs with friends or acquaintances in 2019, especially among those whose frequency and number of FFCs with friends or acquaintances were low in 2016. These results are consistent with previous studies showing that internet use for communication is associated with improved social relationships [18, 33, 57]. The findings of this study are valuable because they suggest, in a longitudinal study in an Asian country, that non-face-to-face communication over the internet may increase FFC.
In our study, internet use for communication with friends or family members was associated with increased FFC with friends and acquaintances in 2019. Several previous studies have found that internet use for communication may improve social relationships and interactions. In a qualitative study, online content facilitated communication and enriched social engagement and FFC [27]. Internet communication enhances existing social relationships and complements, induces, and facilitates FFC [28, 58, 59], and it makes it easier to meet new people [57]. Older adults use the internet to maintain and strengthen existing social relationships with family and friends and to enhance social support [60]. In an observational study, internet use for communication was associated with increased social contact [26]. Our results are consistent with those of these previous studies. Similar results would not be expected for other purposes of internet use, such as SNS, informational use, and instrumental use.
The findings of our study suggest that internet use for communication may increase FFC with friends or acquaintances, particularly among older adults who had less frequent FFC with friends or acquaintances in 2016. When older adults have fewer social relationships, they subjectively interpret the differences and discrepancies among them and feel lonely, which undermines network building, fosters unrealistic desires and high expectations of relationships, and makes it difficult to cope with stress, consequently intensifying the dilution of social relationships [61]. In short, they become trapped in a vicious cycle of diluted social relationships and loneliness. Internet use enables individuals to overcome social and spatial barriers [25]. Internet use also reduces social isolation among older adults by connecting them with the outside world, providing social support, enabling participation in activities of interest, and increasing self-confidence [62]. Consequently, the vicious cycle of loneliness and diluted social relationships can be halted. We consider these to be part of the mechanisms that promote social relationships, especially among those with less frequent FFC.
Our study showed that SNS use was not associated with increased FFC with friends or acquaintances. Some possible reasons for this are as follows. First, only 13.8% of internet users used SNS in our study, which may have resulted in insufficient statistical power. Second, internet use for communication and SNS use overlap and cannot be completely separated. Third, existing SNS applications such as Facebook and Twitter mainly target the younger generation and do not take into account the needs of older adults [20]. Fourth, the impact of SNS on FFC is debatable: in an observational study, problematic SNS use was associated with increased perceptions of social isolation among older adults [32], and SNS use for more than one hour per day was associated with reduced health among older adults [63]. In a review assessing the impact of communication technologies on social interaction, some studies reported that SNS increased social interaction, while others did not [23].
Our study had several limitations. First, we were unable to fully establish the causality of these relationships. Although we addressed reverse causality by stratifying by the frequency of FFC at baseline and adjusting for a series of potential confounders, unmeasured confounders might still exist. Second, the type of online communication used was not identified. Some participants may talk online, others may send an e-mail or message, and others may use the chat function of an SNS. It remains unclear whether specific types of online communication can be used to increase the frequency of FFC. Third, the questions on SNS referred to the names of the applications and did not refer to the content of the services. Understanding the usage rate and content of SNS applications in each country is also essential. The most common SNS application used in Japan is LINE [64], which is primarily used for chatting and telephone/video calls. However, a cross-sectional survey conducted in Japan in 2016 [65] showed that only 23.8% of older adults in their 60s used LINE and that the usage rate is expected to decline with increasing age; this should not significantly affect our conclusions.
Conclusions
We used three years of longitudinal data from many municipalities across Japan to investigate the relationships between the purpose of internet use and the frequency and number of FFCs with friends or acquaintances.
v3-fos-license
2014-10-01T00:00:00.000Z
1995-04-01T00:00:00.000
9626461
{ "extfieldsofstudy": [ "Business", "Medicine" ], "oa_license": "pd", "oa_status": "GOLD", "oa_url": "https://ehp.niehs.nih.gov/doi/pdf/10.1289/ehp.95103s39", "pdf_hash": "a10dabe6fd643324f0f4b0e2cc9656b0aa790ce0", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41242", "s2fieldsofstudy": [ "Biology" ], "sha1": "a10dabe6fd643324f0f4b0e2cc9656b0aa790ce0", "year": 1995 }
pes2o/s2orc
The role of specimen banking in risk assessment. The risk assessment process is described with a focus on the hazard identification and dose-response components. Many of the scientific questions and uncertainties associated with these components are discussed, and the role for biomarkers and specimen banking in supporting these activities are assessed. Under hazard identification, the use of biomarkers in defining and predicting a) biologically adverse events; b) the progression of those events towards disease; and c) the potential for reversibility are explored. Biomarker applications to address high-to-low dose extrapolation and interindividual variability are covered under dose-response assessment. Several potential applications for specimen banking are proposed. Background As citizens, we are all concerned that our health, and the health of our children could be compromised or endangered by exposure to toxic chemicals and other potential health hazards in the air we breathe, and in our food and drinking water. Public concern over this potential for harm resulting from exposure to environmental pollutants has led to a demand for protection from environmental risk, either real or imagined. This demand has prompted public health officials, environmental scientists, and regulatory agencies to pursue processes to define, explain, and mediate environmentally related health risks (1,2). The U.S. Environmental Protection Agency (U.S. EPA) has responded to public demand by adopting a paradigm that was proposed initially by the National Academy of Sciences (NAS) (3). This approach to risk assessment provides a format and data for estimating the potential adverse health effects of human exposures to environmental hazards that, in turn, provides the cornerstone to risk management decisions ( Figure 1). In this article, selective components of this risk assessment process are described as well as some of the The views expressed in this paper are those of the authors and do not necessarily reflect the views or policies of the U.S. Environmental Protection Agency. The U.S. government has the right to retain a nonexclusive, royalty-free license in any, and to any, copyright covering this paper. Address correspondence to Dr scientific questions and uncertainties that accompany these components. The role for biomarkers and specimen banking in this process will be assessed. The discussion focuses on issues related to human exposure and health assessment rather than the broader role of biomarkers in toxicology (e.g., mechanistic studies in animals). Although much has been written on the contribution of biomarkers to risk assessment, the challenge to participants in this symposium is to define the potential contributions to be gained from specimen banking activities. Although this paper is focused on risk assessment, it should be recognized that information required for this process is also critical for numerous other collateral and interrelated activities and actions that support risk management decisions. Biomarkers and specimen banking may also contribute to these activities, and, in fact, their role may be even more apparent in these other contexts. For example, information derived from activities that may be referred to as monitoring (exposures) or surveillance (health status and trends) are essential to establishing baseline (reference) values, directing pollution prevention options, assessing the efficacy of corrective actions, or anticipating/detecting emerging environmental problems. 
Such data when combined with risk assessment activities may also help define and prioritize the legitimate environmental risks for the public. This approach can, in turn, ensure that public and private attention, expertise, and resources are directed appropriately. An appreciation of the interplay between these collateral activities and risk assessment should be incorporated into defining the role and criteria for biomarkers and specimen banking activities purported to support these processes. For this paper, biomarker will be defined "as any measurable biochemical, physiological, cytological, morphological, or other biological parameter obtainable from human tissues, fluid, or expired gases, that is associated (directly or indirectly) with exposure to an environmental pollutant" (4). The Risk Assessment Process Simply defined, risk assessment is the attempt to understand the relationship between human exposures and potential health effects. This understanding requires identifying the factors that result in human exposure and then defining the cascade of events that must occur to create a health risk. This analysis entails delineating the pharmacokinetic and pharmacodynamic processes that govern this cascade. As presented in Figure 2, biomarkers research has separated historically this continuum into exposure and health compartments. More contemporary efforts have acknowledged that biomarkers of dose can serve as the common denominator for linking these Environmental Health Perspectives events. Future efforts should approach understanding these events from a continuum perspective. The NAS risk assessment paradigm defines four components that can be overlaid on this continuum ( Figure 3 Conventionally (and conveniently) biomarkers can be defined by three, interrelated categories, namely, biomarkers of exposure, effect, and susceptibility that can be related to the components of the risk assessment process. These relationships are discussed in the following sections. Since much of this symposium is devoted to human exposure, the focus of this paper will be primarily on biomarkers as they relate to hazard identification and dose-response assessment. Biomarkers and Hazard Identification Hazard identification defines whether an agent can cause an adverse effect and its relevance to human health and disease. This evaluation examines all available data including human, test species, and in vitro data with close scrutiny to dose-response and Do-R =spon. Aseesemen dose-effect relationships. Conclusions as to the hazard potential for a given agent are based upon a weight-of-evidence summation. There are several issues that must be addressed in hazard identification in which human biomarker data could contribute (Table 1). Perhaps most critical is understanding the biologic significance of biomarker(s) of effect whose occurrence can be measured at very low exposure levels. Because of increasing evolution and sophistication in measurement methods and instrumentation, changes in baseline levels can be detected more readily. Given an appropriate study design, these changes may be found to be statistically significant. Less certain is whether these biomarkers represent an adverse, or potentially adverse event, i.e., their value in predicting human health disease or dysfunction. (For the remainder of this paper, "disease" will be used to imply either a dysfunctional state or actual disease.) 
An example of a success story is blood lead levels in children and the demonstrated relationship to neurotoxicity that has provided key information for the existing lead standard. On the other hand, the biologic significance of moderate changes in plasma and red blood cell levels of acetylcholinesterase (AchE) remains uncertain. Although widely accepted as a biomarker of exposure to certain dasses of pesticides, the role of peripheral levels of AchE in predicting toxicity to the central nervous system (CNS) is less dear. Ideally, biomarker(s) of effect should provide insight into current health status and, if present, the stage of the disease. An understanding of the potential for reversibility associated with a decrease or discontinuation of exposure is equally important. Biomarkers of reversibility however, must distinguish between true recovery (absence of pollutantinduced effects) and the failure to detect adverse effects as a result of adaptation or biologic compensation which may mask existing impairment. There is also a need to understand the relationship between the current biomarker and silent processes that may underlie the [eventual] appearance of disease. The common denominator for a biomarker of effect that provides information on the latency, stage and progression, and reversibility is an understanding of the putative mechanisms of the disease under study. Potentially, biomarkers may also play a role in determining if a threshold exists, and, if so, what level of exposure is necessary to exceed that threshold and pose a health risk. Equally important is whether this threshold varies for different populations (e.g., young vs adult, rural vs urban). Biomarkers that can identify/distinguish these populations may impact dramatically health risk assessment and risk management decisions. Biomarkers and Dose-Response Assessment Critical to any risk assessment is an understanding between exposure (dose) and the occurrence/magnitude of adverse effects. Ideally, this relationship would be approximately linear (i.e., increasing risk with increasing exposure). However, depending on the target and its inherent properties to respond to toxicity (e.g., repair), a matrix of exposure and effects scenarios is more likely ( Table 2). The situation becomes more complicated when an individual operates concurrently under more than one exposure situation (e.g., chronic, low-level exposure with periodic high excursions) and experiences multi-chemical exposures. The use of biomarkers of dose and pharmacokinetic modeling offers great promise for better defining exposure (dose)-response relationships. Henderson et al. (5) have proposed that a suite of biomarkers be employed to reflect recent as well as past and, potentially, cumulative exposures. Such a suite would accommodate varying rates of disposition (e.g., different half-lifes) of the parent compound, its metabolites, and any other surrogate markers that reflect an interaction between the agent and a biologic target (Figure 4). Yet, as seen in Figure 5, even an accurate estimate of dose may not predict effect status. Again, an understanding of the pharmacokinetic behavior of an agent must be synthesized with hypotheses/insights into the processes and mechanisms of the disease in question to provide biologically plausible dose-response assessments. 
Although such biologically based models are desirable, such approaches have, to date, had limited application, primarily focused on cancer and favoring a no-threshold hypothesis (i.e., the interaction of a single molecule in a single cell will result in an adverse effect). The prospect of nongenotoxic carcinogenic mechanisms has suggested that thresholds may, in fact, exist for certain environmental carcinogens. A threshold is assumed to exist for most noncancer health effects. That is, there is a range of exposures from zero to some finite level that can be tolerated with essentially no adverse effect. These assessments most often rely on defining a no observable adverse effect level (NOAEL) or a lowest observable adverse effect level (LOAEL) from the available data. This estimate is then adjusted downward by application of a series of uncertainty factors (Table 3), which produces what is now widely termed a reference dose (RfD) or reference concentration (RfC), depending on whether the exposure route is oral or inhalation, respectively. For purposes of this paper, discussion will focus on the uncertainties associated with high-to-low dose extrapolation and with across-human (interindividual) variability. Irrespective of whether biologic modeling or an RfD/RfC approach is employed, uncertainties associated with these two factors will be present. Biomarkers may have a substantial role in determining the necessity and/or magnitude of these uncertainty factors.
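The RfD/RfC derivation described above amounts to dividing the point of departure by the product of the applicable uncertainty factors. The sketch below writes this out with purely illustrative numbers: the specific factors and magnitudes in the paper's Table 3 are not reproduced here, and the 10-fold defaults shown are only commonly used conventions for animal-to-human and interindividual extrapolation.

```latex
% Generic form of the reference dose calculation (illustrative values only).
\[
  \mathrm{RfD} \;=\;
  \frac{\mathrm{NOAEL}\ \text{(or LOAEL)}}
       {\mathrm{UF_{animal\text{-}to\text{-}human}}\times
        \mathrm{UF_{interindividual}}\times \dots \times \mathrm{MF}}
\]
% Worked example with hypothetical numbers: a NOAEL of 5 mg/kg/day and two
% default 10-fold uncertainty factors give
\[
  \mathrm{RfD} \;=\; \frac{5\ \mathrm{mg\,kg^{-1}\,day^{-1}}}{10\times 10}
             \;=\; 0.05\ \mathrm{mg\,kg^{-1}\,day^{-1}}.
\]
```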
High-to-Low Dose Extrapolation
Although human health data for the exposure situation of concern are what is desired, the majority of the data on which human risk assessments are based are derived from test species or from humans in elevated exposure settings (e.g., occupational). The risk assessor is then required to determine risk for individuals operating in environments characterized by much lower exposures. The shape and slope of the dose-response curve below the range of observed data must therefore be assumed. The tendency historically has been to accept these assumptions, with research then focused on identifying biomarkers of mechanism and dose present at low exposure levels. This approach implicitly assumes a linear relationship between dose and risk. Perhaps a more systematic approach would be to identify biomarkers nearer the dose range of the experimental data and then progress in a descending, steplike fashion toward the human exposure range of concern. Thus, initial efforts would compare biomarkers from humans with exposures closest to the experimental data (Figure 6: overlay of a possible human exposure distribution on the extrapolated dose-risk curve, in which the band demarcated by vertical lines represents a human subgroup with the highest exposures and the question mark reflects uncertainty as to the shape of the actual curve below the observed data). Such data would most often come from the occupational setting. High-to-low dose extrapolations may assume initially that the highest exposed individuals are biologically representative of the general population and differ only in terms of exposure. Clearly, other factors pose limitations to this overall generalization. For example, the healthy worker effect in occupational settings may produce exposure-response data that underestimate health risk for the general population even at lower exposures. Conversely, if the highest exposed also represent groups with compromised health status (e.g., the poor, the elderly), extrapolation may overestimate the effect(s) for the general population.
Interindividual Variability
The examples presented above may be considered a subset of the many factors associated with interindividual variability in response (i.e., biomarker) in a given environmental setting. Other terms often used to account for interindividual variability are differences in sensitivity or susceptibility of the individual or subpopulation to a specific environmental insult. Whether these phenomena, in fact, reflect the same underlying biologic processes is debatable and certainly has ramifications for the interpretation of biomarker data. However, this question could serve as the basis for the entire symposium and will not be addressed in this paper. The major premise is that, although individuals may experience similar environmental exposures, individual differences in pharmacokinetic or pharmacodynamic processes may greatly influence the dose that reaches the target site and/or the degree of response. A number of factors, including age, diet, and health status, will obviously influence these processes. Increased or decreased responsiveness (susceptibility) may also be acquired, wherein previous exposures sensitize the individual to subsequent exposures. An immunologic basis is likely for this phenomenon. However, genetic predisposition seems to be the major determinant. For example, inherited differences in metabolic capabilities (e.g., polymorphisms for activating/deactivating enzymes) can greatly influence the concentration and maintenance of the biologically effective dose at the target site. Similarly, genetic differences in repair or compensatory mechanisms, reserve capacity, and other biologic processes may influence the magnitude of the toxic response. The existence of interindividual variability in response implies that the individuals at greatest health risk may not be synonymous with those who experience the greatest exposures. The interplay between these two distributions is not well understood (Figure 7: interplay between exposure and responsiveness population distributions). Biomarkers that provide such insights will greatly assist efforts to quantify human risk estimations.
The Role of Tissue Banking
Based upon the preceding discussion, biomarkers of exposure, effect, and susceptibility would appear to have major roles in improving risk assessments. How the retention and preservation of these samples (specimen banking) may further enhance these estimations is less clear. Moreover, the application will usually be retrospective, i.e., banking specimens today that may improve, refine, or reaffirm a risk assessment addressed in the future. Such an application places a tremendous burden on the population sampling design for a specimen bank, since it is critical that the individuals/groups sampled today be representative of the exposed population in which disease is observed in the future. Some potential, interrelated applications can be offered that have implications for hazard identification and dose-response assessment. a) Reaffirm biologic significance/predictive validity: This application requires retrospective comparisons, namely, determining the relationship between previously obtained biomarker(s) and current exposure/health status. The ability of specific biomarkers to predict disease progression and reversibility may also be ascertained.
Such evaluations would allow greater confidence to be placed on predinical, low dose biomarkers as the basis for a risk assessment in the absence offrank disease. b) Provide historical baseline (reference) values: The ability to ascertain whether an agent has elevated health risk can be strengthened by comparison to concurrent control values which have been placed in the context of historical values. This comparison of control cohorts may allow for the discrimination and quantification of temporal versus pollutantinduced changes in a given health measure. c) Reassess mechanistic hypotheses: As noted previously in this paper, identifying and understanding pharmacokinetic and pharmacodynamic mechanisms are key to ensuring more biologically sound risk assessments. Specimen banking may allow the retrospective testing of hypotheses regarding putative mechanisms for diseases, especially those with long latencies. Again, the current and future cohorts must be similar enough to allow such linkages to be valid. This application is facilitated if the specimens were obtained on the actual target tissues (e.g., lung, liver, etc.), or if concentrations in biologic fluids have been demonstrated to truly reflect target dose. d) Confirming exposure-dose effects linkages under changing exposure scenarios: As exposure conditions of a population or subgroup change over time a corresponding change in health status (i.e., biomarker values) should occur if previously hypothesized associations and attendant risk estimations are valid. e) Identifying new high risk groups: Factors that may elevate the risk to an environmental pollutant for certain individuals or subgroups (e.g., increased exposures; increased susceptibility) will impact the risk assessment for that agent. Banked specimens may provide the referents to aid such identification. Conclusions The concept of tissue banking is to provide for the long-term storage of biologic specimens. The premise is that a bank of tissue samples, collected and archived appropriately, provides scientifically preserved and documented samples for retrospective and prospective cohort studies. This resource, by providing human material, would also seem to hold great promise for reducing many of the uncertainties associated with assessing the health risks associated with exposure to environmental pollutants. The design and implementation ofa specimen bank should benefit from participation of diverse areas within the scientific community (e.g., epidemiologists, toxicologists, industrial hygienists, statisticians, risk assessors, etc). This broad input is critical to determine whether a design for specimen banking can be developed that will accommodate divergent interests/ needs within the public health community. To that extent, the compatibility of risk assessment needs relative to other applications will require further exploration.
v3-fos-license
2016-05-17T21:01:52.675Z
2013-01-01T00:00:00.000
754806
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://europepmc.org/articles/pmc3907155?pdf=render", "pdf_hash": "fd2b60da664563e828ff82aad6f5029a0a1ca17c", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41243", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "50c2b0c068b3630334163c236fcfde44276b8a29", "year": 2013 }
pes2o/s2orc
Excess CD 40 L does not rescue anti-DNA B cells from clonal anergy CD40L, a member of the tumor necrosis factor (TNF) ligand family, is overexpressed in patients with systemic lupus erythematosus and in lupus mouse models. Previously, we demonstrated that B cells producing pathogenic anti-Sm/RNP antibodies are deleted in the splenic marginal zone (MZ), and that MZ deletion of these self-reactive B cells is reversed by excess CD40L, leading to autoantibody production. To address whether excess CD40L also perturbs clonal anergy, another self-tolerance mechanism of B cells whereby B cells are functionally inactivated and excluded from follicles in the peripheral lymphoid tissue, we crossed CD40L-transgenic mice with the anti-DNA H chain transgenic mouse line 3H9, in which Ig λ1+ anti-DNA B cells are anergized. However, the percentage and localization of Ig λ1+ B cells in CD40L/3H9 double transgenic mice were no different from those in 3H9 mice. This result indicates that excess CD40L does not perturb clonal anergy, including follicular exclusion. Thus, MZ deletion is distinct from clonal anergy, and is more liable to tolerance break. Takeshi Tsubata ( ) Corresponding author: tsubata.imm@mri.tmd.ac.jp Aslam M, Kishi Y and Tsubata T. How to cite this article: Excess CD40L does not rescue anti-DNA B cells from clonal anergy [v2; ref 2014, :218 (doi: ) status: indexed, ] http://f1000r.es/2rs F1000Research 2 10.12688/f1000research.2-218.v2 © 2014 Aslam M . This is an open access article distributed under the terms of the , which Copyright: et al Creative Commons Attribution Licence permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Data associated with the article are available under the terms of the (CC0 1.0 Public domain dedication). Creative Commons Zero "No rights reserved" data waiver This work was supported by MEXT KAKENHI Grants Number 23390063 (TT) and 24790464 (YK). Grant information: The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: No competing interests were disclosed. 17 Oct 2013, :218 (doi: ) First published: 2 10.12688/f1000research.2-218.v1 28 Jan 2014, :218 (doi: ) First indexed: 2 10.12688/f1000research.2-218.v2 Referee Status: Introduction Antibodies to nuclear antigens such as DNA and the RNA-related Sm/RNP antigen are characteristically produced in patients with systemic lupus erythematosus (SLE), a prototype of systemic autoimmune diseases, and play a role in the development of this autoimmune disease [1][2][3] . How B cells reactive to nuclear antigens are regulated has been extensively studied using transgenic (Tg) mice expressing auto-antibodies against DNA and RNA components, especially the anti-DNA H chain-Tg mouse lines 3H9 and 56R 4-9 . Studies using these auto-antibody-Tg mice demonstrated that self-reactive B cells that produce autoantibodies to nuclear antigens are deleted by apoptosis (clonal deletion) 10 , are functionally inactivated (clonal anergy) 11 or change antigen specificity by immunoglobulin (Ig) V gene replacement (receptor editing) 8,12 , in the bone marrow before they migrate to the peripheral lymphoid organs. These self-tolerance mechanisms appear to be involved in the prevention of autoantibody production in normal individuals. CD40 is a member of the tumor necrosis factor (TNF) receptor family expressed in immune cells including B cells and dendritic cells 13 . 
Upon interaction with its ligand, CD40L (CD154), expressed mainly by activated T cells, CD40 transmits survival and activation signals in B cells 13,14 . In both patients with SLE and in SLE mouse models, CD40L is overexpressed by T cells and ectopically expressed in B cells [15][16][17] , and this excess CD40L expression appears to play a role in development of SLE, as treatment with antagonistic anti-CD40L antibody markedly reduces the severity of the disease in both humans and mice 18 . Using CD40L/56R double transgenic mice expressing both the anti-DNA H chain 56R and CD40L in B cells, we previously demonstrated that anti-Sm/RNP B cells are regulated by a novel tolerance mechanism in peripheral lymphoid tissue, i.e., deletion in splenic marginal zone (MZ), and that the MZ deletion is perturbed by excess CD40L 19 . In 56R mice, B cells that produce anti-Sm/RNP antibody appear in the splenic MZ, and are subsequently deleted there by apoptosis. When 56R mice are crossed with CD40L-Tg mice in which CD40 signaling is constitutively generated in B cells 20 , MZ deletion of anti-Sm/RNP B cells is perturbed, resulting in autoantibody production 19 . As anti-Sm/RNP antibodies are implicated in the pathogenesis of SLE 1,2 , MZ deletion appears to be important for preventing the development of SLE through deletion of pathogenic self-reactive B cells. Hence, a defect in MZ deletion by excess CD40L 19 could play a role in the development of lupus by inducing the production of pathogenic anti-Sm/RNP antibody. Some self-reactive B cells including a part of anti-DNA B cells are silenced by clonal anergy in which B cells persist in the peripheral lymphoid organs but are unresponsive to antigen stimulation. Anergized self-reactive B cells are excluded from follicles or the MZ of the spleen. Instead, they are located in the red pulp and the T cell zone of the spleen, especially in the border between the T cell zone and the follicles, and undergo apoptosis 21 . Ig λ1 L chain + B cells in the anti-DNA H chain-Tg mouse line 3H9 are reactive to DNA and are anergized 9,22,23 . Previously, we demonstrated that excess CD40L does not induce anti-DNA antibody production in 3H9 mice 19 . Nonetheless, it is possible that anti-DNA B cells in these mice are rescued from anergy by excess CD40L but are tolerized by some other mechanism as there are multiple tolerance mechanisms at different developmental stages of B cells. To address whether excess CD40L can reverse the anergy of self-reactive B cells, we crossed CD40L-Tg mice with 3H9 mice and examined the percentage and localization of Ig λ1 + B cells. Our results demonstrated that excess CD40L does not expand anergized anti-DNA B cells or reverse their follicular exclusion, indicating that excess CD40 does not reverse anergy of self-reactive B cells. As excess CD40L does perturb MZ deletion of anti-Sm/RNP B cells, clonal anergy appears to be distinct from MZ deletion, although both of them induce apoptosis of self-reactive B cells in peripheral lymphoid tissue. Mice The conventional Tg mouse line expressing the H chain of the anti-DNA antibody 3H9 on the BALB/c background 5 was a kind gift of Dr. M. Weigert (The University of Chicago). We previously generated CD40L-Tg mice on the C57BL/6 background 20 . CD40L-Tg mice were crossed with 3H9-Tg mice to generate wild type, 3H9-Tg and CD40L/3H9 double Tg mice on (BALB/c × C57BL/6)F1 background. Mice were genotyped by PCR reaction using specific pairs of primers for the CD40L and 3H9 transgenes, respectively 14,19 . 
All mice were housed and bred at our specific pathogen-free facility. Groups of 3 mice were kept in conventional shoebox-type polycarbonate cages, which were changed every 7 days. All animals were provided with food (CE-2, CLEA Japan, Inc.) and water ad libitum and were maintained on a 12-hour light/dark cycle. All procedures followed the guidelines of Tokyo Medical and Dental University for animal research and were approved by Institutional Animal Care and Use Committee, Tokyo Medical and Dental University. For each of 3H9, CD40L and CD40L/3H9-Tg mice, 3 or 4 female mice at 11-34 weeks of age were analyzed by flow cytometry and immunohistochemistry, respectively. Mice were euthanized by cervical dislocation after CO2-induced unconsciousness and whole spleens were removed aseptically. Flow cytometry Spleens were finely minced over a wire mesh, and spleen cells were collected and suspended in PBS containing 2% FCS (Cell Culture Bioscience, Japan) and 0.1% NaN 3 (Nacalai Tesque, Inc., Japan). Amendments from Version 1 We thank the reviewers for their helpful comments. We have revised our manuscripts according to the suggestions, which has strengthened our paper. Specifically, we provided additional information relating to the experimental animals (including information on the number of mice we used in each group), added additional graphs in Figure 1 to show statistical analysis comparing groups, and replaced Figure 2C with a more representative picture. We also modified the introduction to stress that our current results are not easily foreseeable by our previous publication and that this work is a significant addition to the literature. See referee reports REVISED that in wild type mice (p < 0.05 and 0.01, respectively), but λ1 + cells constitute the majority of the λ + cells in these mice as well as wild type mice ( Figure 1B lower panels). Excess CD40L does not reverse follicular exclusion of anergized anti-DNA B cells To address whether excess CD40L reverses follicular exclusion of anergized anti-DNA B cells, we examined spleen sections of 3H9 and 3H9/CD40L Tg mice using an anti-λ antibody but not anti-λ1 antibody, as the anti-λ1 antibody did not yield any staining when used for immunohistochemistry. Nonetheless, most of the B cells stained with anti-λ antibody in these mice express λ1 ( Figure 1B, lower panels) and are thus reactive to DNA. In wild type spleen, λ + cells were located mostly in the follicle where B220 + B cells accumulate ( Figure 2A). In contrast, λ + cells were found mostly in the border between T cell zone and follicle, T cell zone and red pulp in both 3H9 and CD40L/3H9 spleens ( Figure 2B and C), indicating that anti-DNA B cells are excluded from follicles in CD40L/3H9 mice as well as 3H9 mice. Thus, excess CD40L does not reverse follicular exclusion of anergized anti-DNA B cells. Discussion In this study, we crossed CD40L-Tg mice with anti-DNA H chain-Tg 3H9 mice, and demonstrated that the percentage and location of λ1 + anergic anti-DNA B cells are not altered in CD40L/3H9 double Tg mice compared to those in 3H9 mice, suggesting that anergy of λ1 + anti-DNA B cells is not reversed in CD40L/3H9 mice. This result is consistent with our previous finding that CD40L/3H9 mice do not produce anti-DNA antibodies in sera, whereas another anti-DNA H chain Tg mouse line 56R in which self-reactive B cells are deleted at MZ did produce autoantibodies in sera when crossed with the same CD40L Tg mice 19 . Thus, excess CD40L does not perturb anergy of anti-DNA B cells. 
Statistical analysis Spleen cells were compared between animals. Statistical analysis of data by two-tailed Student t test was performed using Prism 5.0 software (GraphPad). p < 0.05 was considered statistically significant. Results Excess CD40L fails to expand anergized anti-DNA B cells To address whether excess CD40L perturbs clonal anergy of anti-DNA B cells, we crossed CD40L Tg mice with 3H9 mice, because B cells expressing Ig λ1 L chain and 3H9 H chain are reactive to DNA and show characteristics for anergy including follicular exclusion, shortened life span and failure of antibody production 23 . When spleen B cells of wild type, 3H9 Tg and CD40L overexpressing CD40L/3H9 double Tg, mouse lines were analyzed by flow cytometry, the percentage of λ1 + B cells in total B cells expressing a B cell marker B220 was markedly reduced in 3H9 mice compared to wild type mice ( Figure 1A) (p < 0.01) as λ1 + B cells in 3H9 but not wild type mice are self-reactive. The percentage of λ1 + B cells in CD40L/3H9 double Tg mice was equivalent to that in 3H9 mice, indicating that excess CD40L does not expand anergized anti-DNA B cells. When we used anti-λ antibody that reacts to multiple Ig λ chain subtypes such as λ1 and λ2, the percentage of total λ + cells were only slightly higher than that of λ1 + cells in wild type, 3H9 and CD40L/3H9 mice ( Figure 1A and B upper panels), suggesting that most of the λ + cells express the λ1 subtype in all these mice and are thus reactive to DNA. This is confirmed by determining percentage of λ1 + cells in total λ + cells. In both 3H9 and CD40L/3H9 mice, the percentage of λ1 + cells in total λ + cells is slightly reduced compared to A and B) and Ig λ chain (B), and B220 + cells were analyzed by flow cytometry. Percentages of λ1 + cells in total B220 + cells (A), percentages of λ + cells in total B220 + cells (B, upper panel) and percentages of λ1 + cells in total λ + cells (B, lower panel) are indicated (left panels). Representative data from three independent experiments. Graphs show mean ± SD, three mice per genotype (right panels). * p<0.05, ** p<0.01, ns = not significant. Using the same CD40L-Tg mice that we used in the present study, we previously demonstrated that excess CD40L inhibits apoptosis of anti-Sm/RNP B cells in MZ, and that excess CD40L induces autoantibody production 19 . This suggests that MZ deletion is distinct from clonal anergy, although both anergic B cells and B cells that are deleted in MZ appear in peripheral lymphoid tissue and are eliminated by apoptosis. Like CD40L, B cell activating factor (BAFF), another member of the TNF ligand family, induces B cell survival and is overexpressed in patients with SLE 26-28 and its mouse models 29,30 . Anergic B cells show increased dependency on BAFF for survival, and this increased dependency appears to be involved in rapid elimination of anergic B cells by competition with nonself-reactive B cells 31,32 . In the presence of non-self-reactive B cells, anergic B cells may not be able to interact with a sufficient level of BAFF required for their survival, due to competition for a limited amount of BAFF 32 . Nonetheless, excess BAFF fails to fully reverse anergy of self-reactive B cells. Lesley et al. 31 demonstrated that excess BAFF expands anergic B cells but fails to reverse follicular exclusion. Thien et al. 32 demonstrated that excess BAFF expands and reverses follicular exclusion in only anergic B cells with intermediate affinity but not those with high affinity. 
In contrast, cognate T cell help perturbs follicular exclusion and induces autoantibody production in anergic self-reactive B cells 33 . Thus, reversing clonal anergy requires strong T cell help, and excess BAFF or CD40L alone may be insufficient. In contrast, we previously demonstrated by crossing the CD40L-Tg mice with another anti-DNA H chain Tg mice 56R that excess CD40L perturbs MZ deletion of self-reactive B cells and induces autoantibody production, suggesting that MZ deletion is more sensitive to a tolerance break than clonal anergy 19 . As excess CD40L is found in patients with SLE and various SLE model mice, MZ deletion is likely to be defective in lupus, and its defect may be involved in development of lupus. Author contributions TT conceived the study. MA, YK and TT designed the experiments, and carried out the research. MA and TT wrote the manuscript. Competing interests No competing interests were disclosed. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The paper by Aslam, Kishi and Tsubata addresses an interesting question about the effects of bystander stimulation through CD40L on immunological self tolerance of B cells. Previous studies from this group showed that transgene-enforced CD40L expression on B cells promoted a lupus-like autoimmunity in mice with a polyclonal B cell population. A later study showed that autoreactive B cells in a transgenic mouse with a restricted antibody repertoire biased to DNA reactivity (mice carrying 3H9-56R DNA-reactive H-chain) were promoted to central tolerance in the presence of excess CD40L, whereas 3H9-H mice lacking that mutation, while still expressing a repertoire biased to autoreactivity, failed to show a clear break in tolerance. Grant information The present study is a rather minor addition to this body of work, providing additional data on the fate of B cells expressing a λ1 light chain partner, which have high anti-DNA affinity when expressed with 3H9H. The data show that 3H9H/λ1 B cells are not rescued in their development or altered in their anatomical localization (follicular exclusion) when developing in an environment with excess B cell-expressed CD40L. The data are consistent with the earlier, more extensive study, but do provide some new information. However, the data shown have no indication of reproducibility or provide more than a single mouse per group. There is no molecular characterization of the cells to verify that the receptor expression expected is indeed confirmed. Overall, this is an incremental contribution to the literature. I have read this submission. I believe that I have an appropriate level of expertise to state that I do not consider it to be of an acceptable scientific standard, for reasons outlined above. No competing interests were disclosed. The paper is well constructed and all sections are clear. However, I believe that the data could be presented in a manner more helpful to the reader and provide a firmer foundation for their conclusions. I think that these changes (especially those concerning figure 2) are necessary to substantiate their conclusions and abstract. Both figure legends should contain information on how many mice were used in each group. This information is essential for all publications involving animal work and is necessary to allow proper judgment of data. 
Whilst the methods do state that the mice are caged in groups of 3, this is within the details of animal husbandry and, as such, it is not clear whether this relates to the experimental group size.
In addition to the representative flow cytometry plots provided, Figure 1A and B would benefit from a graph that shows the percentage of splenic Ig λ1 L chain B cells for each mouse in the experiment. This would provide an opportunity to be made aware of the variation between the mice within each group as well as to compare between groups. If there are sufficient mouse numbers, a statistical analysis comparing groups would also be useful.
I am puzzled by the choice of images for Figure 2. Firstly, due to the shape of the follicle, it is difficult to compare the location of Ig λ L chain cells in 2C with those in 2A and B. As the follicular exclusion of λ L chain B cells within the CD40L/3H9 mouse is a key conclusion of this paper, it would be helpful for the reader if 2C were replaced with a representative white pulp area that provides a clearer demarcation between the follicle and the T zone. Secondly, I would suggest that Figure 2B makes it appear that the 3H9 mice have substantially fewer Ig λ L chain B cells than the CD40L/3H9 mice shown in Figure 2C. If it is representative of all 3H9 spleens, I do not think that this corroborates the conclusions drawn from Figure 1. As they stand, the images make my suggestions regarding an extra graph for Figure 1, and the mouse numbers used, more compelling. Without these changes I cannot truly say that the title and conclusions are appropriate. However, these are quick changes and I trust that it is just a bad image choice.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
No competing interests were disclosed.
Previous work from this group showed that a higher-affinity anti-DNA H chain transgene, 56R, led to B cell sequestration in the splenic marginal zone (MZ) and death by apoptosis. CD40L overexpression was able to inhibit apoptosis in the MZ and allow expression of certain H/L pairs, most notably those with anti-Sm specificity. Here, the authors use a closely related anti-DNA H chain, 3H9, which is of lower affinity for DNA compared to 56R. Dozens of labs have used the 3H9 and 56R H-chain transgenic mouse lines, which have become emblematic of the type of H chain that is immunodominant and, with different L chains, has the potential to encode a range of self-specificities, including anti-DNA, anti-phospholipid, anti-nucleosome and also anti-apoptotic cell antibodies. Earlier work by the Erikson lab had established that 3H9VH in combination with λ1 gives a B cell receptor that induces anergy and leads to a characteristic extrafollicular localization. Aslam et al. observe that the 3H9/λ1 combination produces the same number of splenic B cells whether or not it is crossed to CD40L, and that the splenic location of the single and double transgenic B cells is identical.
It will be informative to follow these studies with additional approaches to test whether B cell signalling pathways are perturbed by the presence of CD40L, whether increased CD40L expression on B or T cells is involved in suppressing apoptosis in the MZ, and whether CD40L overexpression could affect other types of 56R "incomplete editing" such as in the case of intracellular Golgi retention that is observed with 56R and Vk38c and λX L chains. The title is appropriate for the content of the article and the abstract represents a suitable summary of the work. The design, methods and analysis of the results from the study have been explained and they are appropriate for the topic being studied. The conclusions are sensible, balanced and justified on the basis of the results of the study and enough information has been provided to be able to replicate the experiment. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. No competing interests were disclosed. Competing Interests:
v3-fos-license
2021-11-18T16:26:25.615Z
2021-11-16T00:00:00.000
244301052
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2079-9292/10/22/2805/pdf", "pdf_hash": "cee2edb18c01b5f703587f3022feeab39e67114a", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41246", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "sha1": "1e1e8fef974359896b705f9ea6f9a2405af7245d", "year": 2021 }
pes2o/s2orc
Joint Vital Signs and Position Estimation of Multiple Persons Using SIMO Radar : Respiration rate monitoring using ultra-wideband (UWB) radar is preferred because it provides contactless measurement without restricting the person’s privacy. This study considers a novel non-contact-based solution using a single-input multiple-output (SIMO) UWB impulse radar. In the proposed system, the collected radar data are converted to several narrow-band signals using the generalized Goertzel algorithm (GGA), which are used as the input of the designed phased arrays for position estimation. In this context, we introduce the incoherent signal subspace methods (ISSM) for the direction of arrivals (DOAs) and distance evaluation. Meanwhile, a beam focusing approach is used to determine each individual and estimate their breathing rate automatically based on a linearly constrained minimum variance (LCMV) beamformer. The experimental results prove that the proposed algorithm can achieve high estimation accuracy in a variety of test environments, with an error of 2%, 5%, and 2% for DOA, distance, and respiration rate, respectively. Introduction Non-contact vital sign (VS) detection, such as respiration rate, built upon a radio frequency (RF)-based system has attracted a great deal of interest in recent years. It is used in a growing field of applications, such as healthcare and clinical assistance, sleep and baby monitoring, survivor rescue and research operations, through the wall human detection, and human activity classification [1][2][3][4][5][6][7][8][9]. Recently, the impulse response ultra wide-band (IR-UWB) radar has gained more attention thanks to its high-range resolution, good penetration capability, and low power consumption [10,11]. The use of RF signals for VS waveform extraction lies in the detection of the chest motion associated with the VS activities of a human subject. Specifically, it is based on the emission and reception of electromagnetic (EM) waves. Notably, the phase variation on the received signal is exploited for this purpose [12]. To do so, several radar types have been investigated, including unmodulated continuous wave (CW) [13], frequencymodulated continuous-wave (FMCW) [14], step frequency continuous-wave (SFCW) [15], and IR-UWB [4]. Meanwhile, many signal processing techniques have been proposed, covering clutter rejection, human localization, and VS estimation [16][17][18][19][20][21]. Despite the great advances in the field, most research has focused on a single target for VS measurement. To better deal with complex and critical scenarios, such as people buried under debris, multiple-person VS motioning plays an important role in addressing these scenarios [5,8]. To date, two approaches have been mainly used to achieve the VS measurement of multiple persons. The first solution requires the range information associated with the human targets to separate their VS signals. In this context, a single-input single-output (SISO) radar is used for this purpose, namely FMCW, SFCW, or IR-UWB platforms [22]. Nevertheless, it is still difficult to obtain reliable separation when the persons are located at the same distance from the radar. The second one separates the VS signals based on the incident angle information. To this end, spatially distributed antennas have been exploited in designing the radar system, such as single-input multiple-output (SIMO) [8] and multiple-input multiple-output (MIMO) radars [6]. 
In particular, digital beamforming (DBF) plays an important role in beam scanning and beam focusing tasks thanks to its low-complexity structure, good control flexibility, and high accuracy [8]. In this work, a SIMO IR-UWB radar prototype is presented. The proposed system is based on a single IR-UWB radar in which eight spatially distributed antennas are connected to its receiving channel through a UWB RF switch. To deal with the array signal processing-based model, we propose a proper pre-processing step to obtain a multivariant narrow-band signal via the generalized Goertzel algorithm (GGA) [23]. Further, we consider an incoherent signal subspace method (ISSM) for direction of arrival (DOA) estimation [24]. In particular, we use the incoherent multiple signal classification (IMUSIC) to estimate the incident angle of each person [25,26]. Then, we perform beam focusing at each estimated angle for VS separation using a linearly constrained minimum variance (LCMV) beamformer [27]. Finally, the respiration rate is estimated for each person by analyzing the spectrum of the separated signals. In summary, the main contributions of this work are as follows: 1. The proposed SIMO IR-UWB radar system allows us to simultaneously measure the VSs of spaced persons at the same distance from the radar. It offers a low-cost and good solution for the non-contact Vs measurement of multiple persons in sleep and baby monitoring scenarios; 2. The chest motions of multiple targets are accurately separated with the combination of the IMUSIC and LMCV algorithms. The respiration rate estimation is significantly enhanced by forming individual narrow beam focusing on the chest regions associated with each person; 3. The experimental results are presented to investigate the performance of the proposed system, showing that our proposed method outperforms the state of the art. The remainder of this paper is organized as follows. Section 1 presents the mathematical model of the IR-UWB SIMO radar and the problem setup. In Section 2, we tackle the estimation procedure of position and respiration rate for each person. Experimental results and numerical analyses are described in Section 3. Section 4 concludes this work. Mathematical Model UWB impulse radar has been widely used for VS detection thanks to its high-range resolution. In this context, it transmits very short pulses, usually in the order of picoseconds, to the human subject. As the electromagnetic wave returns from the chest's body, the VS motions, which introduce a small displacement (typically in the order of millimeters) on the chest, will produce a phase variation in the received signal [12]. Received Signal for SISO Radar Consider a finite number of persons L with incident angles θ θ θ = [θ 1 0 , θ 2 0 , ..., θ L 0 ] over a limited region, where the received signal, from L persons, can be simplified as: where t and τ denote the slow and fast time indexes, respectively. p(τ) represents the transmitted pulse. α l and τ l (t) are the attenuations and the time of arrival (TOA) associated with the l th person, respectively. These letters depend on the radar cross-section (RCS) of the human subject and its distance from the radar d l 0 . 
Assuming that the human chest remains stationary during the coherent processing interval (CPI), usually in the order of milliseconds, the TOA in Equation (1) is expressed as: where d l (t) depicts the chest displacement of the l th person, which can be expressed as [4]: where d l r and f l r represent the maximum displacement of the chest and the respiration frequency associated with the l th person, respectively. Received Signal for SIMO Radar In our scheme, we consider a uniform linear array (ULA) composed of M antenna receivers. Depending on the incident angle of each person, the received signal at the m th antenna can be expressed as: where w m (t, τ) represents the measurement noise. Using the first antenna as a reference, τ l m (t) in Equation (4) can be derived as: where d depicts the inter-distance of the antenna array. Figure 1 illustrates the receiving signal model of the SIMO radar. Position and VS Estimation In this section, we first establish an appropriate SIMO radar based on a single UWB impulse transceiver P440 for VS and position estimation. Then, we develop a method to transom the collected UWB data into multiple narrow-band signals. Next, we explain how to estimate the respiration rate of each person based on their estimated incident angles. Figure 2 illustrates the basic modules of the SIMO radar system adopted in this work. Specifically, the impulse UWB transceiver P440 is used to transmit and receive data. In particular, it transmits a Gaussian impulse RF signal with a carrier frequency of 4.3 GHz and a bandwidth of 2.2 GHz. It acts as a short-range coherent radar using the monostatic radar module (MRM). On the other hand, a broadband RF switch HMC321A (covering a band from DC to 8 GHz) is used to select one specific antenna receiver at each scan to build an array receiver. Specially, we use a Raspberry Pi 4 to control and automate this process. Finally, the collated data are transferred to the PC via a WiFi connection for signal processing. Algorithm Scheme The joint estimation approach exploits the spatial and spectrum, in the fast time domain, information of the received signals for estimating the human positions and separating their VS waveforms. Moreover, the temporal information, in the slow time domain, is used for respiration rate estimation. Figure 3 illustrates the process of the proposed method. Pre-Processing In this section, we introduce a pre-processing step to transform the IR-UWB signals to multiple narrow-band signals. Specifically, we use the GGA to evaluate the Fourier transform (FT) at specific frequency indexes. Then, a complex band pass filter is applied to remove the clutter response and negative frequencies and enhance the VS waveform [4]. Applying the FT to Equation (4), in the fast time domain, reads: where P(ω), C m (ω), and W m (t, ω) represent the FT of the transmitted pulse, the clutter response, and the measurement noise, receptively. Inserting Equation (5) into Equation (6) results in: in Equation (7) can be expressed, using the Bessel functions, as: where J n depicts the first kind of Bessel function of order n. By further taking into consideration that d l r is small, in the order of millimeters, and regarding the range of ω, in the order of GHz, the quantity β l r (ω) is close to zero. 
Hence, Equation (8) can be simplified, retaining only the Bessel functions up to the second order, as in Equation (9). Inserting Equation (9) into Equation (7) yields Equations (10) and (11). At this point, we apply the CBPF [28], presented in our previous work [4], to remove C_m(ω) and enhance the VS waveform. The result of such a filter can be expressed as in Equation (12), where S_l(t, ω) = α_l J_1(β_r^l(ω)) P(ω) e^{jβ_0^l(ω)} e^{j2π f_r^l t} represents the source signal associated with the l-th person, and N_m(t, ω) denotes the filtered version of the noise W_m(t, ω). Notice that, for each ω, we obtain an array signal processing-based model, which can be written in matrix form as in Equations (13) and (14). In view of this, the model presented in Equation (13) is established for several frequency indexes Ω = [ω_1, ω_2, ..., ω_k, ..., ω_K], which are chosen from the IR-UWB radar frequency band. To do so, we use the GGA instead of the FFT because only a few spectral components are required; in such cases, the algorithm is significantly faster.

DOA Estimation

Initially, we perform the IMUSIC method on the received signal after the pre-processing step to estimate the DOA of each person with high angular resolution. It is common to assume that the number of sources is known, or previously estimated using the MDL or AIC methods [29,30]. Firstly, the ideal covariance matrix is approximated by the sample covariance matrix for each frequency bin, as in Equation (15), where (·)^H represents the conjugate transpose. Then, the noise subspace can be estimated, based on the eigenvalue decomposition of R̂(ω_k), as in Equation (16), where Q_n(ω_k) is the estimated noise subspace obtained from the SVD of R̂(ω_k) for each frequency bin. Secondly, the estimate θ̂_0^l for the l-th person can be obtained by evaluating the peaks of the spatial spectrum function given in Equation (17) [25].

Distance Estimation

In order to estimate the distance of each person accurately, we use the well-known LCMV beamformer, which is widely used in acoustic array processing [31]. First, a weight vector is designed according to certain criteria. In particular, the LCMV beamformer keeps the gain in the desired direction θ̂_0^l associated with the l-th person constant while minimizing the total output power under certain constraint conditions, as formulated in Equations (18) and (19). Based on the work in [8], the weight vector admits a closed-form expression. Thus, the final output signal associated with the l-th person can be expressed as in Equation (20). Using Equation (12), Equation (20) can be rewritten as in Equation (21), where s_l(t) = A_l e^{j2π f_r^l t}, A_l = α_l J_1(β_r^l(ω_k)) P(ω_k), and e_l(t, ω_k) represents the estimation error. The matrix form of Equation (21) can be expressed as in Equations (22) and (23). Notice that Equation (23) represents an array signal processing-based model. Thus, subspace methods can be applied to estimate d_0^l. In our work, we use the MUSIC method with the number of sources fixed to one [32]. This procedure is repeated for each l. To this end, the VS signal of the l-th person can be extracted by applying a simple beam-focusing technique.

Respiration Rate Estimation

In this section, we focus on the estimation of the respiration rate from the extracted VS waveform ŝ_l(t). Based on Equation (23), ŝ_l(t) can be expressed as

ŝ_l(t) ≈ A_l e^{j2π f_r^l t}   (24)

Applying the FT to Equation (24) leads to Equation (25), where δ represents the Dirac delta function. Thus, f_r^l can be easily estimated by finding the peak of Ŝ_l(f), which is calculated using the FFT.
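As a complement to the estimation chain described above, the following sketch illustrates the two core ingredients in a hedged, simplified form: an incoherent MUSIC spatial spectrum obtained by averaging the narrow-band pseudospectra over the K frequency bins, and a single-constraint (MVDR-like) version of the LCMV weights. It is not the authors' implementation; the paper's exact constraint set is the one given in Equations (18) and (19), and the array layout, data shapes, and function names here are assumptions.

```python
import numpy as np

def steering(theta, m_idx, d, freq, c=3e8):
    """ULA steering vector a(theta) for one frequency bin (narrow-band model per bin)."""
    return np.exp(-2j * np.pi * freq * d * m_idx * np.sin(theta) / c)

def imusic_spectrum(Y, freqs, d, L, theta_grid, c=3e8):
    """Incoherent MUSIC: average the narrow-band MUSIC pseudospectra over the K bins.
    Y has shape (K, M, T): K frequency bins, M antennas, T slow-time snapshots."""
    K, M, _ = Y.shape
    m_idx = np.arange(M)
    P = np.zeros(len(theta_grid))
    for k in range(K):
        R = Y[k] @ Y[k].conj().T / Y[k].shape[1]   # sample covariance for bin k
        _, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
        Qn = vecs[:, : M - L]                      # noise subspace: M - L smallest eigenvectors
        for i, th in enumerate(theta_grid):
            a = steering(th, m_idx, d, freqs[k], c)
            P[i] += 1.0 / np.real(a.conj() @ Qn @ Qn.conj().T @ a)
    return P / K                                   # peaks give the DOA estimates

def lcmv_weights(R, a):
    """Single-constraint LCMV (MVDR-like) weights: unit gain toward a, minimum output power."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)
```

With L persons, the L largest peaks of the averaged spectrum give the angle estimates, and a set of LCMV weights is then formed toward each estimated angle to isolate the corresponding VS waveform, mirroring the separation procedure described in the text.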
Experimental Results

In this section, we describe the experiments that were carried out to assess the performance of the proposed method. Three human subjects participated in these experiments, in which an accelerometer was attached to their chests as a ground-truth device for breathing rate estimation. Several scenarios were considered to ensure the accuracy of the obtained results. A 30 min record was acquired for each scenario; each record was divided into several realizations of 180 s (22.5 s for each channel). The errors were calculated according to Equation (26), where N_r represents the number of realizations, equal to 10 in our experiments. The exact values of θ_0^l and d_0^l were set manually during the experiment. Figure 4 depicts an example of such scenarios. Once the radar data were successfully recorded, they were transferred to the PC via a WiFi connection for signal processing. Figure 5 depicts the collected data of the first realization associated with each channel after the pre-processing step.

Position and Respiration Rate Estimation of One Person

The first experiment concerned a position and breathing rate estimation scenario for one person. Several assays were performed. The person stood directly in front of the radar at different positions with a normal breathing rate. In this context, the incidence angles were set to −45°, 0°, and 45°, the nominal distance to 1 m from the radar, and the exact value of the breathing rate was obtained by the accelerometer attached to the person's chest. Based on the IMUSIC procedure, the angles were estimated with an error of 2%. As an example, Figure 6 shows the estimated angles for the first realization. After applying the LCMV beamformer, the distance was calculated using the MUSIC algorithm with an error of 5%. Figure 7 depicts an example result of such a process. Using the beam-focusing technique, the respiration rate was estimated with an error of 1.73%. Figure 8 shows the result of such a scheme for the first realization.

Position and Respiration Rate Estimation of Multiple Persons

In this section, we evaluate the separation procedure. Two sets of experiments were conducted. Firstly, two human subjects were asked to stand at different positions. Specifically, the angles were set to θ_0^1 = 0° and θ_0^2 = 45°, and the nominal distance to d_0^1 = d_0^2 = 1 m for both of them. Secondly, we increased the number of persons to three, in which case the angles were set to θ_0^1 = −30°, θ_0^2 = 0°, and θ_0^3 = 45°, and the nominal distances to d_0^1 = 1 m, d_0^2 = 2.5 m, and d_0^3 = 1.5 m. Figure 9 depicts the estimated angles using the IMUSIC algorithm for both experiments. Applying the LCMV allowed us to separate the VS signals. First, we estimated the distance of each person using the MUSIC algorithm [32]. The results are shown in Figure 10. Then, the respiration rate could be estimated with high accuracy and an error of approximately 2.63%, as depicted in Figures 11 and 12. As already mentioned, we used an accelerometer device to measure the respiration rate, which served as a reference signal for comparison with our method. This allowed us to quantify the performance of the proposed system for the non-contact VS monitoring of multiple persons. Up to now, we have used the error presented in Equation (26) as a criterion, which is suitable for the DOA and distance evaluation, since we manually set up the true values for both of them.
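The exact expression of Equation (26) is not reproduced in the extracted text; for illustration only, the sketch below assumes a mean relative error over the N_r = 10 realizations, which is consistent with the percentage figures reported in this section. Function and variable names are hypothetical.

```python
import numpy as np

def mean_relative_error(estimates, true_value):
    """Average of |estimate - true| / |true| over the realizations, expressed in percent."""
    estimates = np.asarray(estimates, dtype=float)
    return 100.0 * np.mean(np.abs(estimates - true_value) / abs(true_value))

# Example: ten angle estimates (degrees) against a manually set ground-truth angle of 45 degrees.
angle_estimates = [44.1, 45.8, 45.2, 44.6, 45.9, 44.8, 45.3, 44.5, 45.6, 45.1]
print(f"angle error: {mean_relative_error(angle_estimates, 45.0):.2f} %")
```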
However, this is not the case for the respiration rate, for which the true value is extracted from the accelerometer device. In order to assess the agreement between our method and the accelerometer device-based method, we used Bland and Altman (B&A) plot analysis, which is an effective method to describe the agreement between measurements. The B&A graph plots the difference between two paired measurements against the mean of the two measurements [33]. Figure 13 depicts the result of such a plot, with agreement limits spanning 20% (from −10% to +10%), which is an acceptable error for clinical applications. The differences between measurements on the same subject are not significant. This can be represented by the mean difference, which is 0.43% for one person and 0.41% for multiple persons.

Discussion

To further verify the discrimination performance of the proposed scheme, a human subject was asked to stand at several distances from the radar. Two scenarios were considered in our laboratory. In the first scenario, the person stood directly in front of the SIMO radar. At each assay, the distance was changed with a step size of 0.5 m. In the second scenario, the person stood behind a wall, and the same assays were conducted with a step size of 0.25 m. The wall was made of reinforced concrete, 20 cm thick. As can be seen from Figure 14, our system is capable of monitoring human subjects up to 3 m without obstacles and up to 0.5 m with an obstacle. Note that the maximum distance can change depending on the power of the transmitted signal, the nature of the material, and the size of the obstacle. Furthermore, our system was compared with state-of-the-art research based on spatially distributed array approaches in terms of angular resolution, maximum distance, and respiration rate error. As shown in Table 1, the proposed system has the minimum respiration rate error. It also has the finest angular resolution, owing to its eight receiving antennas.

Conclusions

In this work, we aimed to devise a non-contact solution for multiple-person monitoring that can ensure adequate measurement and preserve each person's privacy. To this end, by combining an IR-UWB radar sensor and subspace methods, this study proposes a SIMO scheme that can identify each individual together with the corresponding respiration rate. To interpret the collected UWB radar data using the subspace methods, we propose a pre-processing step, based on the GGA, that converts the UWB data into multiple narrow-band signals containing the spectral and spatiotemporal features of each person. An ISSM is established for the proposed phased-array system for DOA and distance estimation. Meanwhile, a separation procedure, based on the LCMV beamformer, is introduced for individual identification and respiration rate estimation. The experimental results prove that the system can automatically find the direction and the distance of multiple human subjects and effectively detect their respiratory rates.
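As a complement to the agreement analysis reported above, the following sketch shows how the Bland–Altman statistics (mean difference, or bias, and the 95% limits of agreement) can be computed for paired radar-based and accelerometer-based respiration rates. It is a generic illustration, not the authors' code, and the data values are hypothetical.

```python
import numpy as np

def bland_altman(radar_rates, reference_rates):
    """Return the per-pair means and differences, the bias, and the 95% limits of agreement."""
    radar = np.asarray(radar_rates, dtype=float)
    ref = np.asarray(reference_rates, dtype=float)
    diff = radar - ref                 # paired differences (y-axis of the B&A plot)
    mean = (radar + ref) / 2.0         # paired means (x-axis of the B&A plot)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean, diff, bias, (bias - half_width, bias + half_width)

# Example with breaths-per-minute values:
_, _, bias, limits = bland_altman([15.2, 14.8, 16.1, 15.5], [15.0, 15.1, 15.8, 15.4])
print(f"bias = {bias:.2f} bpm, 95% limits of agreement = ({limits[0]:.2f}, {limits[1]:.2f}) bpm")
```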
v3-fos-license
2018-12-30T14:11:11.651Z
2018-07-16T00:00:00.000
57127720
{ "extfieldsofstudy": [ "Psychology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mededpublish.org/MedEdPublish/PDF/1749-10053.pdf", "pdf_hash": "95730c4b6eec272ca29a96457df831dbe754da78", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41249", "s2fieldsofstudy": [ "Art", "Medicine", "Education" ], "sha1": "e352460dbd381b6ec6afcdb7ee26e2befc22bb62", "year": 2018 }
pes2o/s2orc
Using the visual arts to teach clinical excellence This article was migrated. The article was marked as recommended. Introduction: The authors conducted a review of the literature to identify curricula that incorporate the visual arts into undergraduate, graduate, and continuing medical education to facilitate the teaching of clinical excellence. Methods: The authors searched the PubMed and ERIC electronic databases in May 2017, using search terms such as “paintings,” “visual arts,” and “medical education,” along with terms corresponding to previously defined domains of clinical excellence. Search results were reviewed to select articles published in the highest impact general medicine and medical education journals describing the use of visual arts to teach clinical excellence to all levels of medical trainees and practicing physicians. Results: Fifteen articles met inclusion criteria. Each article addressed at least one of the following clinical excellence domains: communication and interpersonal skills, humanism and professionalism, diagnostic acumen, and knowledge. No articles described the use of the visual arts to teach the skillful negotiation of the health care system, a scholarly approach to clinical practice, or a passion for patient care. Conclusions: This review supports the use of visual arts in medical education to facilitate the teaching of clinical excellence. However, research designed specifically to evaluate the impact of the visual arts on clinical excellence outcomes is needed. Introduction Medical schools are increasingly recognizing the role of the arts and humanities in the professional formation of clinically excellent physicians (Rodenhauser, Strickland and Gambala, 2004).The arts and humanities allow trainees to explore the diversity of the human experience and to reflect on an individual patient's experience with illness or grief (Mullangi, 2013).Exposure to the humanities has also been correlated with reduced burnout among medical students (Mangione et al., 2018).A recent systematic review on the use of the creative arts in health profession education found these curricula promote learner engagement, foster the discovery and creation of meaning, and can lead to better medical practice (Haidet et al., 2016).Thus, medical schools are increasingly interested in ways to integrate the arts into their curricula, and some have already done so (Bramstedt, 2016).Although there has not been a recent systematic review of the literature specifically focused on the incorporation of visual arts into medical school curricula, a 2002 survey of arts-based programs at U.S. medical schools found that the visual arts had been incorporated into 18 required courses, 36 elective courses, and 29 extracurricular activities, among the 83 medical schools that responded to the survey (Rodenhauser, Strickland and Gambala, 2004). 
The integration of the visual arts into medical education serves both explicit and implicit functions (Bardes, Gillers and Herman, 2001).Explicitly, the visual arts assist in the development of clinical skills, including the observation, analysis and communication of visual information (Bardes, Gillers and Herman, 2001;Shankar, Piryani and Upadhyay-Dhungel, 2011).Implicitly, the visual arts add a subjective dimension to the objective study of the pathophysiological model of disease, helping students to recognize the individual patient's experience with illness instead of viewing them as "elaborate machines" (Bardes, Gillers and Herman, 2001).This subjectivity challenges students' discomfort with ambiguity, encourages them to confront their own emotions, disrupts assumptions, and fosters an awareness of multiple perspectives (Haidet et al., 2016;Kumagai, 2017).Representational art allows learners to focus on identifying recognizable forms and contextual information; while abstract art fosters the development of pattern recognition skills, fosters increased tolerance of ambiguity, and provides learners with the freedom to follow their own imagination and emotions (Jasani and Saks, 2013;Karkabi, Wald and Cohen Castel, 2014). Although medicine is considered to be in part a visual science, physicians often miss the importance of "seeing patients" (Boisaubin and Winkler, 2000).The visual narratives represented in paintings can assist in developing insight into the less obvious aspects of a patient's experience (Arnold, Lloyd and von Gunten, 2016), encouraging the exploration of the human dimensions of illness and suffering (Kumagai, 2012). Achieving clinical excellence involves mastery of the "art" of medicine (Christmas et al., 2008).Cognizant of the value of such clinical mastery, the Johns Hopkins School of Medicine established the Miller-Coulson Academy of Clinical Excellence (MCACE) to recognize and reward the work of clinically excellent physicians (Christmas et al., 2008;Wright et al., 2010).Based on a systematic review of the literature and qualitative research, the MCACE defined clinical excellenceas achieving a level of mastery in the following domains as they relate to patient care: (1) communication and interpersonal skills, (2) professionalism and humanism, (3) diagnostic acumen, (4) skillful negotiation of the health care system, (5) knowledge, (6) scholarly approach to clinical practice, (7) exhibiting a passion for patient care, (8) explicitly modeling this mastery to medical trainees, and (9) collaborating with investigators to advance science and discovery (Wright et al., 2010). In the present paper, the authors hypothesized that the incorporation of the visual arts into medical education curricula could be used to teach clinical excellence.To test this hypothesis, the authors conducted a review of the literature to identify examplesdrawn from some of the highest impact general medicine and medical education journalsof curricula that use the visual arts in undergraduate (UME), graduate (GME), and continuing medical education (CME) curricula to teach clinical excellence. 
Methods

One of the authors (CC-N), a medical informationist, designed and executed a search of the PubMed and ERIC electronic databases in May 2017 to identify a body of published articles relevant to the topic of interest. Controlled vocabulary and keyword terms including "paintings," "visual arts," and "medical education" were combined with terms corresponding to each domain of clinical excellence, as defined by the MCACE (Wright et al., 2010). The authors excluded domains 8 and 9, "explicitly modeling this mastery to medical trainees" and "collaborating with investigators to advance science and discovery," since these domains are limited to clinicians working in academic settings. Search results were refined by date (2000-present), publication type (journal article), and language (English only), and then limited to a predetermined subset of 40 of the highest-impact general medicine and medical education journals (Appendix 1) to achieve a snapshot of the key literature on this topic. Journal impact factors were determined by searching the 2016 edition of Journal Citation Reports® (Clarivate Analytics, 2016). Retrieved citations were exported to the RefWorks reference management system for organization, deduplication, and title and abstract scanning. One of the authors (EG) reviewed the articles and selected those that met the following inclusion criteria: describes required and/or elective medical education curricula incorporating the examination of paintings (representational or abstract) and/or the creation of original artwork for learners at the UME, GME, or CME level. Duplicate titles, those with clearly irrelevant subject matter, and those for which the full text was not available were excluded from full-text review. Each article advancing to full-text review was read and summarized by one of the authors (EG), with particular focus on descriptions of curricula and, when applicable, the outcome measures used to evaluate impact. In addition, EG assessed each curriculum as to the level of its outcomes (reaction, learning, behavior, and results) using Kirkpatrick's Model (Kirkpatrick and Kirkpatrick, 2006). Level 1 of the hierarchy (reaction) measures "the degree to which participants [found] the training favorable, engaging and relevant"; Level 2 (learning) measures the degree "to which participants change[d] attitudes, improve[d] knowledge, and/or increase[d] skills"; Level 3 (behavior) measures the degree to which participants "change[d] their behavior" once they were back on the job; and Level 4 (results) measures the degree to which targeted outcomes occurred (Kirkpatrick and Kirkpatrick, 2006).

Results

The search yielded 67 citations, 15 of which met the inclusion criteria. The results of the review are summarized in Table 1. What follows is a summary of key findings of the 15 articles identified by this review on the use of visual arts to teach clinical excellence, organized by the MCACE-defined clinical excellence domain.
Communication and interpersonal skills Communication and interpersonal skills are fundamental to clinical excellence (Christmas et al., 2008).Being able to form strong relationships with patients, be team players, remain flexible, and simplify medical concepts to ensure patient understanding play a major role in how the public perceives physicians (Doukas et al., 2015).In addition, reflective capacity, defined as "the critical analysis of knowledge and experience to achieve deeper understanding, guiding future behavior," is essential for effective communication, positive physician-patient relationships, and the accurate collection of clinical information (Karkabi and Cohen Castel, 2011;Karkabi, Wald and Cohen Castel, 2014).Reflection is seldom intuitive for learners or teachers, thus the determination of how best to teach reflection in medical education is critical (Karkabi and Cohen Castel, 2011;Karkabi, Wald and Cohen Castel, 2014).The integration of the visual arts into medical education supports the development of reflective capacity, by providing a space to contemplate difficult issues, such as what it means to be a doctor, death and dying, and ethical dilemmas (Rodenhauser, Strickland and Gambala, 2004;Karkabi, Wald and Cohen Castel, 2014).This review identified 5 articles that describe the use of visual arts to teach the communication and interpersonal skills domain of clinical excellence.Each of these articles described curricula whose goals are to increase reflective capacity of medical students in relation to the doctor-patient relationship (Boisaubin and Winkler, 2000;Karkabi and Cohen Castel, 2011;Kumagai, 2012;Karkabi, Wald and Cohen Castel, 2014;Arnold, Lloyd and von Gunten, 2016).The first describes a semester-long weekly UME curriculum in which students discussed the themes depicted in representational artwork as they relate to the practice of medicine (Boisaubin and Winkler, 2000).Students reflected on an individual's experience of mental illness through discussion of Gericault's portraits of the "insane," on body image and obesity through discussion of Ruben's nude painting The Fur ("Het Pelsken"), and on end-of-life care and physician-assisted suicide through discussion of Käthe Kollwitz's lithographs on the theme of death. The second article describes a three-hour UME workshop in which students viewed two paintings -Luke Fildes' The Doctor and Pablo Picasso's Science and Charityeach portraying scenes of a doctor-patient relationship (Karkabi and Cohen Castel, 2011).Students wrote a first-person account from the perspective of a character in one of the paintings, followed by a reflection on their own patient encounters. The third describes a longitudinal two-year UME curriculum that involved ongoing conversations between students and volunteer patients centered around what it is like to live with chronic illness and negotiate the health care system (Kumagai, 2012).Students produced original artwork based on their interactions with those patients, the creation of which served three main purposes, to explore: (1) art as an expression of identity, tangible expressions of the students' attempts to take the perspective of the patient, (2) art as critique of the status quo (e.g., power structures in health care, traditional doctor-patient relationships) and (3) art as interpretation of the conversations between students and patients. 
The fourth describes a multinational CME curriculum in which practicing physicians first viewed an abstract painting like Rothko's Red and Orange and Jackson Pollock's Full Fathom Five (Karkabi, Wald and Cohen Castel, 2014).This served as a visual prompt for a writing activity in which each learner was asked to reflect on a particularly challenging and/or meaningful clinical situation.Learners reported that the viewing of abstract paintings helped prepare them emotionally for reflective writing. The fifth article describes a required CME curriculum, for which physicians who had recently completed a one-year fellowship in palliative care were asked to reflect on their experiences dealing with the death of patients and to create original artwork (Arnold, Lloyd and von Gunten, 2016).Qualitative coding of 75 images revealed 2 categories of underlying visual metaphors (portraits and landscapes), representing the transient nature of life and death.The authors suggest that the visual narratives reflect a positive and hopeful viewpoint of death and dying (rather than ones associated with anxiety, pain, or suffering), which may be attributed to the development of personal and professional skills gained during the fellowship. Professionalism and humanism The MCACE defines professionalism and humanism as generosity with patients and with one's time, being honest, nonjudgmental and caring (Christmas et al., 2008).Historically, medical education has taught professionalism through lectures and clinical vignettes (Winter and Birnberg, 2006).However, learners who have participated in arts-based courses report that this method of learning helped them develop professionalism skills.In addition to professionalism, humanismthe recognition of each patient as a person of inherent valueis essential to clinical excellence.However, developing medical learners' compassion for their patients is an ongoing challenge in medical education (Karkabi and Cohen Castel, 2006).Empathy -the ability to understand the perspective of the patient and to communicate this understanding with the patient -is central to humanism (Yang and Yang, 2013).Paintings can serve as mirrors of the human condition and have been shown to deepen appreciation of human suffering and enhance empathy for patients among medical learners (Karkabi and Cohen Castel, 2006).This review identified 4 articles that describe the use of visual arts to teach the professionalism and humanism domain of clinical excellence (Karkabi and Cohen Castel, 2006;Winter and Birnberg, 2006;Shankar, Piryani and Upadhyay-Dhungel, 2011;Yang and Yang, 2013), three of which specifically describe curricula which incorporated the visual arts to better understand the nature of human suffering and to deepen compassion for sufferers (Karkabi and Cohen Castel, 2006;Shankar, Piryani and Upadhyay-Dhungel, 2011;Yang and Yang, 2013).The first article describes a 2-hour CME curriculum which involved the examination of 3 paintings (Rembrandt's The Return of the Prodigal Son, Edvard Munch's Death in the Sickroom, and Sir Luke Filde's The Doctor), writing a short story about the paintings, and presenting these stories to the group (Karkabi and Cohen Castel, 2006).The 18 participants who provided feedback at the end of the curriculum indicated a change in their attitudes towards compassion for the sufferer, and a positive sentiment towards the activity. 
The second article describes a GME seminar on professionalism during which Thomas Eakins' Agnew Clinic and Norman Rockwell's Norman Rockwell Visits a Family Doctor were used to guide discussions on the meaning of professionalism (Winter and Birnberg, 2006). Residents were asked open-ended questions about the professional behaviors portrayed in each of the paintings and to compare these behaviors to those described by the Association of American Medical Colleges and the National Board of Medical Examiners.Residents reported that this curriculum engaged them emotionally, fostering discussion of the professional ideals that first inspired them to pursue medicine as a career.They also reported that this creative approach to teaching was a more effective method than didactic lectures and isolated clinical vignettes in teaching professionalism. The third article describes a required bi-weekly 7-month UME curriculum in Nepal involving the analysis of paintings such as Alice Neel's City hospital and Vincent Van Gogh's Portrait of Dr. Gachet to improve empathy in learners for the sufferer (Shankar, Piryani and Upadhyay-Dhungel, 2011).Students reported that they had difficulty extrapolating the context depicted in Western paintings to Nepal and that they would have preferred a course based on paintings by Nepalese artists that better reflected their own experiences.29.5% of respondents (n=23/78) believed that the incorporation of the visual arts in medical education promotes empathy. The fourth article describes a required 4-hour GME curriculum involving the interpretation of paintings related to medicine, illness, and human suffering, and which used the Jefferson Scale for Physician Empathy (JSPE) to measure the quantitative effects on empathy (Yang and Yang, 2013).No significant differences between the pre-test and post-test JSPE scores were measured, which the authors suggest may be attributed to the small sample size (n=110) and the fact that the duration of the workshop was only four hours. Diagnostic acumen Physicians considered to be "skillful diagnosticians" are thorough, exercise outstanding judgment, and are often called to solve puzzling cases (Christmas et al., 2008).Being a skillful diagnostician requires mastery of the observation, description, and interpretation of visual information, skills often considered the "special province of the visual arts" (Bardes, Gillers and Herman, 2001).In the visual arts, the "art of looking" is made explicit through an emphasis on the intense and detailed observation and description of visual information (Bardes, Gillers and Herman, 2001;Doukas et al., 2012).A detailed examination of paintings can teach medical learners this skill of "slow looking," and assist them to distinguish between primary observable and confirmable visual information vs. 
secondary and derived inferences (Bardes, Gillers and Herman, 2001).In the medical field, visual literacy has been defined as "the capacity to identify and analyze facial features, emotions, and general bodily presentations, including contextual features such as clothing, hair and body art" (Bramstedt, 2016).Heightened visual literacy can assist physicians in reaching a diagnosis, making the invisible visible (Wellbery and McAteer, 2015;Bramstedt, 2016) and is particularly valuable in situations when patients are unable to communicate their symptoms (Bramstedt, 2016).This review identified 5 articles that describe the use of visual arts to teach the diagnostic acumen domain of clinical excellence (Bardes, Gillers and Herman, 2001;Dolev, Friedlaender and Braverman, 2001;Miller, 2010;Jasani and Saks, 2013;Bramstedt, 2016).The first article describes a UME curriculum facilitated by a collaboration between an art museum and a medical school (Bardes, Gillers and Herman, 2001).The program consisted of three sessions during which students focused on the evaluation of the human face as the pre-eminent expression, not only of health and disease, but also of emotion and character.Students examined painted portraits that were part of the museum's collection including Titian's Portrait of a Man in a Red Cap, Rembrandt's Nicolaes Ruts, and Ingres's Comtesse d'Haussonville. The second article describes a similar UME curriculum that aimed to teach observational and descriptive skills through the analysis of the museum's paintings (Dolev, Friedlaender and Braverman, 2001), such as Henry Wallis's The Death of Chatteron and J.M.W. Turner's Dort or Dordrecht: The Dort Packet-Boat from Rotterdam Bacalmed (Yale School of Medicine, 2015).Discussion was facilitated using the following guiding questions: "Is the figure sleeping?";"Where in the house is this scene located?"; "What is the time of day?"; "How old is the figure?";"What does his fisted left hand and arm position indicate?";and "What was his cause of death?" The third article describes a required UME curriculum that involved viewing paintings, such as Giovanni Bellini's St. Francis in the Desert, and discussing observations in a museum setting (Miller, 2010).The author reports on her own experience as a participant in the curriculum suggesting that it not only sharpened her observational skills, but that her role in the exercise was transformed from an observer to a "profoundly engaged participant in this work of art," which she felt was facilitated by the non-judgemental group dialogue. The fourth evaluated a three-hour long UME curriculum that was part of a required week-long course (Jasani and Saks, 2013).A student researcher employed a four-step method for teaching clinical observational skills to students through the analysis of eight paintings: (1) observation of visual findings using the three visual thinking strategies (VTS) questions, (2) interpretation of the works, (3) reflection on the validity of their evaluations, and (4) communication of their ideas.Visual thinking strategies (VTS) is a technique used to enhance visual literacy in medical learners, using three questions to focus observations during the examination of paintings: "What do you see?"; "What makes you say that?"; and "What else do you see?" (Jasani and Saks, 2013). 
The fifth article describes a UME curriculum consisting of an optional 50-minute workshop using the VTS questions to improve visual literacy, and a 7-week required mixed-media art project and reflective essay on a health-related topic (Bramstedt, 2016). Of the 66 individuals who completed the voluntary feedback survey, 54.6% supported the addition of arts education to the medical school curriculum. All 3 cohorts (2014, 2015, and 2016) of learners exposed to this arts-based curriculum reported reflective capacity as the skill which improved the most, followed by observational skills. However, 43.8% and 40.6% from cohort 1 (2014) and cohort 3 (2016), respectively, indicated that the curriculum had no impact on their skills. Bardes, Gillers and Herman (2001), Dolev, Friedlaender and Braverman (2001), and Jasani and Saks (2013) compared observational skills within and/or among learners before and after exposure to curricular interventions. Bardes, Gillers and Herman found that medical students were more precise in their descriptions and able to make a greater number of inferences after exposure to the curriculum (the authors did not report whether this difference was statistically significant) (Bardes, Gillers and Herman, 2001). In the pre-test, students described the features of a middle-aged woman portrayed in a photograph objectively, with reference to her jewelry, features, and make-up, while they made a greater number of inferences in the post-test describing the same photograph (e.g., the subject appeared "sad," "worried," "ill," etc.). Dolev, Friedlaender and Braverman found no significant differences in pre-intervention test scores among individual medical students exposed to either a museum visit, an anatomy lecture, or a clinical tutorial session in both the 1998-1999 cohort and 1999-2000 cohort, while post-intervention scores differed significantly between groups in both cohorts (Dolev, Friedlaender and Braverman, 2001). Dolev, Friedlaender and Braverman also found that the museum group as a whole had significantly higher post-test percentage improvement compared to the clinical and control groups in the 1998-1999 cohort, and a significantly higher post-test percentage improvement score compared to the clinical group in the 1999-2000 cohort (this cohort did not use a lecture group because preliminary data revealed no change in students' observational performance). Jasani and Saks found that the mean number of observations between pre- and post-tests was not significantly different (Jasani and Saks, 2013). Qualitative analysis revealed that, in comparison to the pre-test written responses, the post-test responses showed decreased use of subjective terminology by 65%, such as "normal" or "healthy"; increased scope of interpretations by 40%; increased speculative thinking by 62%; and increased use of visual analogies by 80% after exposure to the curriculum (whether these differences were significant was not reported).

Skillful negotiation of the health care system

This clinical excellence domain involves the health care systems in which physicians practice medicine, with excellent clinicians distinguished based on their ability to practice evidence-based medicine and use resources appropriately with consideration of economic factors and time constraints (Christmas et al., 2008). This review identified no article on the use of visual arts to teach clinical excellence in the skillful negotiation of the health care system domain of clinical excellence.
Knowledge

Outstanding knowledge and lifelong learning are central to clinical excellence (Christmas et al., 2008). This review identified one article describing the use of visual arts to teach the knowledge domain of clinical excellence (Finn, White and Abdelbagi, 2011). This article describes a one-hour UME curriculum that employed body painting as a creative method for teaching anatomy (Finn, White and Abdelbagi, 2011). Medical students painted each other's body surfaces to facilitate the learning of spatial relations of underlying anatomy and of clinical signs. The authors suggest that this method of teaching encourages active learning, appeals to all learning styles, and improves knowledge retention.

Scholarly approach to clinical practice

This clinical excellence domain describes physicians who apply evidence thoughtfully to patient care decisions, and who are committed to improving patient care systems and disseminating clinical knowledge (Christmas et al., 2008). This review identified no study describing the use of visual arts to teach the scholarly approach to clinical practice domain of clinical excellence.

Exhibiting a passion for patient care

Clinically excellent physicians must have a passion for, enthusiasm about, and enjoyment of clinical medicine (Christmas et al., 2008). Although this review did not identify a study that explicitly articulated the use of visual arts to teach the exhibiting a passion for patient care clinical excellence domain, all of the 15 identified studies imply that creative approaches to teaching medicine, including the incorporation of the visual arts, can help foster interest in and a passion for patient care, which may not be evoked by traditional ways of teaching.

Discussion

Each article identified by this review describes a curriculum that uses the visual arts to teach at least one of the following domains of clinical excellence: communication and interpersonal skills, humanism and professionalism, diagnostic acumen, and knowledge. No article describes a curriculum to teach the skillful negotiation of the health care system, scholarly approach to clinical practice, or exhibiting a passion for patient care domains of clinical excellence. The lack of articles within these 3 domains may reflect an absence of such articles in the literature, or a limitation of this review's search strategy, including search terms and limiters. This review did identify several barriers to the incorporation of the visual arts into medical education. The introduction of new medical humanities courses can be met with resistance by learners and faculty, especially those who believe that such courses lack scientific rigor in an environment in which biomedical sciences predominate and lack supporting theoretical frameworks (Kumagai, 2012; Mullangi, 2013; Kumagai, 2017). More rigorously designed studies that yield stronger evidence to support the value and long-term benefits of curricula that use the visual arts to enhance clinical skills are clearly needed to address this challenge (Kumagai, 2017).
Kumagai suggests that incorporating the arts and humanities into medical education may "threaten to reproduce dominant values and perspectives" (a cultural elitism), which may in turn exclude certain individuals (Kumagai, 2017).This review suggests that when the visual arts are incorporated into medical curricula, those learners with backgrounds in or with a particular interest in the arts tend to participate more in discussion (Boisaubin and Winkler, 2000).Since many of the articles identified by this review describe curricula offered only as electives, the learners who chose to participate may have been more likely to have a greater interest or background in the visual arts.This may have positively skewed the results in studies that measured outcomes. Finding skilled teachers to facilitate these curricula can also be a barrier to implementation, given that few medical schools have faculty with expertise in the medical humanities (Boisaubin and Winkler, 2000).Conflicting views exist in the literature regarding the degree to which expertise in the visual arts is required to effectively teach arts-based medical curricula, as well as regarding how much prior knowledge is required to fully benefit learners.Some suggest that the instructor and/or learners should either have a background or expertise in the visual arts in order to effectively teach or learn from these courses (Boisaubin and Winkler, 2000).Others suggest that a visual arts-based observation training program for medical students can be developed and implemented without specifically trained faculty (Jasani and Saks, 2013). A gap in the literature also exists around which institution-level barriers must be addressed in order to successfully incorporate the visual arts into medical education.For example, institutional commitment in terms of allocation of curricular time and funding may be required if visual arts-based medical curricula are to be successful (Kumagai, 2012).Future research should also examine the degree of faculty development necessary to effectively teach these curricula, and explore the possibility of partnerships between medical schools and art institutions, and among departments of medicine, art history, and fine arts.A stronger emphasis on tailoring arts-based content to match the experience of learners is also required and should be addressed on a situation-by-situation basis. While this review was able to identify a number of medical schools that have incorporated the visual arts into their curricula, few educators have evaluated the success of these curricula with objective learning measures.In many cases, outcomes were limited to self-report of learner reaction to the curriculum or of their learning of knowledge and skills, rather than more objective behavioral and targeted outcome measures.Most of the articles describe curricula of relatively short duration and report no long-term outcomes.Future research should focus on longitudinal studies that measure behavioral and targeted outcomes over a longer duration of learners' medical training/practice to determine any enduring impact of the use of visual arts on the teaching of clinical excellence (Kumagai, 2012;Jasani and Saks, 2013). 
Conclusion

This review supports the use of visual arts in medical education to teach clinical excellence in the domains of communication and interpersonal skills, professionalism and humanism, diagnostic acumen, and knowledge. Future studies specifically designed to assess the impact of the use of the visual arts to teach clinical excellence in these and other domains on physician behavior are needed.

Take Home Messages

Medical educators can incorporate the viewing of representational and abstract paintings, as well as the creation of original art, into their curricula. The visual arts can be used to teach: (1) communication and interpersonal skills; (2) professionalism and humanism; (3) diagnostic acumen; and (4) knowledge. It is feasible to combine the examination of paintings with guided group discussions using visual thinking strategies and/or reflective writing, although little is known about which of these methods are most effective to teach clinical excellence.

Jonathan McFarland, Sechenov University, Moscow
This review has been migrated. The reviewer awarded 5 stars out of 5.
I read this article with growing interest, for as Trevor Gibbs mentioned in another review, I completely agree that we need to go beyond the "feel-good factor" of teaching the humanities. This fascinating article starts us on the journey of explaining how to implement the Humanities (in this case the visual arts) into the medical curriculum, which is crucial now. For me, of particular interest was Table 1, which is a summary of the articles identified on the use of visual arts in the teaching of clinical excellence, as it is very interesting to observe which paintings have been used at different institutions, what tasks the students were asked to complete and for what reasons. For example, students at a workshop in Israel had to investigate the doctor-patient relationship using Luke Fildes' and Picasso's famous paintings, The Doctor and Science and Charity, both by reflecting on their own experience with patients but also by going into the paintings and personifying one of the characters. A brilliant and beautiful idea. This is just one example that this paper unearths. So, for many reasons, I would wholeheartedly recommend it for all those who are interested in exploring how to bring the Humanities back into the medical curriculum, and why. Nevertheless, as the authors themselves comment in their conclusion, further research designed to specifically evaluate the impact of the visual arts on clinical excellence outcomes is still needed. The journey has begun but there is still some way to go. Thank you.
Competing Interests: No conflicts of interest were disclosed.

Reviewer Report, 16 July 2018. https://doi.org/10.21956/mep.19610.r29144
programmes that include visual arts alongside other media such as poetry, literature, film etc. Should these have been included? If not, why not. For systematic reviews on niche subjects (where there are perhaps less than 20 papers in existence) I would have appreciated more than one approach to finding papers, for example including a snowballing approach (searching up and down reference lists from the papers they identified in the original search). Finally, I particularly liked the analysis across the themes for clinical excellence; it gave a clear picture of the strengths of the visual arts in general and each programme in particular.
Competing Interests: No conflicts of interest were disclosed.
P Ravi Shankar, American International Medical University
This review has been migrated. The reviewer awarded 5 stars out of 5.
I read with great interest this review article on the use of the visual arts to teach clinical excellence. The methodology for the selection of the studies included in this review has been comprehensively described and the authors provide the salient details of the various studies included. I and my colleagues have been using paintings for over ten years, first in Nepal and later in the Caribbean. Western paintings are more readily available compared to paintings from developing countries. Students in Nepal were able to relate to paintings better when compared to literature from a western context. Some of the situations depicted in paintings are universal while others are more culture specific. The Caribbean student body is very diverse with students from North America, Africa and Asia. Many North American students are of Asian or African descent. With increasing access to technology students have access to visual arts from a number of countries. However, in many countries admission to medical school is restricted to students with a science background and these students often lack knowledge of and an appreciation of the arts. One of the challenges with regard to the medical humanities is obtaining information about their long-term impact. The subject has been well established in developed nations and studies on long term impact are beginning to emerge. Methodological and other challenges, however, remain. Obtaining resources and curricular time for the medical humanities continues to be a challenge in developing nations. The literature review was carried out in May 2017 and was limited to the PubMed and the ERIC databases. Also, articles published in high impact journals were considered. As the authors mention, this may have resulted in studies and initiatives not being included in this review. Despite problems with quality and there being no specific criteria for inclusion of journals, the Google Scholar database provides access to a
[Figure 1. Flowchart for search strategy and review on use of visual arts to teach clinical excellence]
[Table 1. Summary of articles identified on use of visual arts to teach clinical excellence]
Eden Gelgoot, BSc, is a master's student in the Department of Psychiatry at McGill University. Her interdisciplinary educational background in the arts and sciences, as well as her interest in patient-centered health care delivery, motivated her to pursue this research.
Arnold, B. L., Lloyd, L. S. and von Gunten, C. F. (2016) Physicians' reflections on death and dying on completion of a palliative medicine fellowship. Journal of Pain and Symptom Management, 51(3), pp. 633-639.
This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
v3-fos-license
2024-02-09T16:07:29.123Z
2024-02-01T00:00:00.000
267560828
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2304-8158/13/4/512/pdf?version=1707268613", "pdf_hash": "3e91706b995abd82d053e17a33158e11462ff700", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41251", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "sha1": "389a243324ec68fa851350cf6e3ff6c687cf2b98", "year": 2024 }
pes2o/s2orc
Oil Frying Processes and Alternative Flour Coatings: Physicochemical, Nutritional, and Sensory Parameters of Meat Products

The changes caused by the frying process can be desirable or undesirable, involving physicochemical, nutritional, and sensory aspects, depending on the food and oil properties and on the frying process. In this context, alternative flours emerge as a strategy for adding value to the food since they are rich in fiber, vitamins, and minerals, contributing to the variability of ingredients and the full use of food, including residues such as seeds and husks. This narrative review aims to gather current scientific data addressing alternative flour coatings on breaded meat, mainly chicken, products to evaluate the effects on fried products' nutritional value, physicochemical parameters, and sensory attributes. The Scopus, Science Direct, Springer, and Web of Science databases were used. This review showed that alternative flours (from cereals, legumes, fruits, and vegetables) used as coatings increase water retention and reduce oil absorption during frying, and increase fiber and micronutrient content, which are not present in sufficient quantities in commonly used flours due to the refining process. These flours also reduce gluten consumption by sensitive individuals, in addition to favoring the development of desirable sensory characteristics to attract consumers. Therefore, frying processes in oil promote a reduction in moisture, an increase in oil absorption and energy content, and a decrease in vitamin content. In this context, coatings based on alternative flours can reduce these adverse effects of the frying process.

Introduction

Frying is a culinary technique that consists of preparing food in oil at high temperatures, in which the oil acts as a means of mass and heat transfer, causing changes that characterize the final product. It is one of the oldest cooking methods, known and used worldwide for its practicality [1,2]. It can occur by partial immersion (shallow frying), where the food is fried in an amount of oil that is not enough to cover the food, or total immersion (deep frying), where the amount of oil used must completely cover the food [3,4]. When the food absorbs oil, aspects related to the quality of the food itself change. In this sense, the reactions between the food and the oil during the frying process can cause changes in the final product [5,6]. Oil absorption directly impacts the increase in energy content, one of the most significant transformations related to the frying process. Several factors related to the process and/or food, such as water content, crust microstructure, product geometry, frying time, temperature, and oil quality, can influence oil absorption [7-9]. The changes during frying affect the physicochemical, sensory, and nutritional parameters of food quality [10]. Therefore, understanding the mechanisms of oil absorption during frying is essential for developing alternatives that promote a reduction in fat content while maintaining the favorable characteristics of these foods. Thus, there is a growing interest in new strategies to reduce oil absorption in fried foods [11]. Coating of fried foods is an effective strategy for reducing oil absorption [12]. Wheat flour is the most used flour for this purpose, but there are many alternative flour sources that have a dense nutritional composition, a range of health-promoting bioactive compounds, and dietary fibers with diverse structures [13].
Dietary fibers are considered functional because they offer several health benefits in addition to the nutritional value inherent to their chemical composition, and they may play a beneficial role in reducing the risk of chronic degenerative diseases. Therefore, demand for these foods is growing [14,15]. Thus, alternative flours of cereals, legumes, fruits, and vegetables added to culinary preparations can be included as ingredients in the diet, offering a way to encourage healthy eating habits, a strategy for the development of new products with beneficial effects on health, and an increase in the possibilities for partial replacement of refined wheat flour with alternative flours [16]. Since oil absorption is a surface phenomenon [17], several studies have been conducted to reduce this problem using different coating materials. The breading process essentially refers to the coating applied to the outer surface to provide the food's crispness, flavor, color, and appearance [18,19]. In addition, coatings can change the microstructure of the crust, forming a physical barrier against oil absorption [12]. Different coating formulations can be used in meat products to improve sensory attributes, provide visual and structural qualities, and modify the amount of fat absorbed during frying. Wheat flour is the main component of most coating batters, but alternative flours can also be used to provide different flavors or textures [20]. Considering the importance of discussing the use of other types of coatings based on alternative flours, a narrative review was carried out to gather the scientific data available about the use of flour obtained from cereals, legumes, fruits, and vegetables as coatings for meat products, in order to evaluate the effects on moisture and lipid content, oil absorption, and the nutritional and sensory quality of the final product. An electronic search was conducted from 2021 to 2023 in the Scopus, Science Direct, Springer, and Web of Science databases. The selection criteria were as follows: the manuscripts should be in English, available in full in the cited databases, and published in the last two decades. Descriptors and related terms were entered into the databases, and the papers involving vegetable flour coatings, fried meat products, and physicochemical, nutritional, and sensory parameters were selected.

Oil Frying Process

Frying is a typical process used for a long time in food manufacturing. Different peoples over the centuries have consumed various fried products. Knowledge about the frying process was common in the 14th century [10]; it is believed to have been first performed by the ancient Chinese. Still, some authors suggest that the current frying process arose and developed near the Mediterranean area under the influence of olive oil [20,21]. The technique of frying is simple, quick, and low-cost. Nowadays, it is employed in restaurants, the food industry, and households due to the characteristics acquired by fried foods [22,23]. In the frying process, food is cooked in oil at temperatures above the boiling point of water. Specific conditions such as temperature, pressure, utensils, food, and frying oil are identified as components of this process [20].
Several phenomena, such as heat, moisture, and oil transfer, coincide during the entire process. Interactions between oil and food promote chemical reactions capable of modifying the sensory and nutritional characteristics of the food during the frying process [10]. In addition, the oil used also undergoes constant changes. Therefore, to avoid the degradation of the oil, the frying temperature should not exceed 180 °C [24,25]. At the end of the frying process, fried preparations have a high oil content and an increased lipid and energy content. However, these products have high palatability, which makes them very popular. Thus, a wide variety of fried products has been developed [22,26].

Types of Oil Frying

Two types of frying are commonly used in food preparation: superficial (shallow) frying and deep-fat frying. The techniques differ in the amount of oil used and the heat transfer method [3,4,24]. Shallow frying is indicated as most suitable for foods with a large contact surface [3]. In this case, the food is placed in a frying pan containing a small amount of oil, which can vary according to the irregularities of the food's surface. This layer of oil is responsible for transferring heat to the surface of the food, which, in this case, happens by conduction [10,24]. On the other hand, in this type of frying, this transfer is not stable in all parts of the food, requiring it to be turned over for even cooking [24]. The high temperature in the surface layer is responsible for the food's water loss, causing temperature variations and the formation of the distinctive color characteristic of these products, resulting from the Maillard reaction [3,24]. Time and temperature must be carefully monitored to prevent food burning. Shallow frying is commonly used to prepare fish, meat, and various vegetables [20]. Deep frying, widely used domestically, commercially, and industrially, plays an important role in the market [27]. It is considered a rapid dehydration process in which water is removed from food by fast heating in oil. The process is characterized by total immersion in oil, depending on the product and the desired result. The food is subjected to a temperature varying from 150 to 190 °C [10,24]. In this case, heat and mass transfer occur continuously and simultaneously. Heat transfer can occur by convection between the oil and the food surface and by conduction within the food structure [10]. Deep frying aims to produce food for immediate consumption or further processing [21]. This process allows foods to be fried until desired properties, such as colors and flavors, are achieved [20]. It is worth highlighting that this narrative review focused on the conventional frying methods, shallow frying and deep frying, although it is known that there are other methods, such as air-fryer cooking. However, as it is a new method, there are few published studies related to meat products that compare physicochemical, nutritional, and sensory aspects with conventional methods. According to the literature, air-fried products are similar in nutritional and physical characteristics to products pan-fried without oil, owing to the longer cooking time, and appear healthier than products pan-fried with oil [28].
Mechanisms Related to Oil Absorption

One of the most significant changes resulting from the frying process is the amount of oil absorbed by the food. Over the years, several studies have analyzed the mechanisms of oil absorption in different foods [6,29]. Some theories that describe these mechanisms start from the idea that oil migration into the food occurs through empty pores or capillaries in the food's substrate and crust (Figure 1) [8,17,22]. Brannan et al. [17] described three mechanisms considered to be the main ones in oil absorption: (1) water replacement, (2) the cooling phase effect, and (3) the surfactant theory.

Water replacement (Figure 1A) occurs during the frying process when the high temperature promotes the evaporation of the water contained in the food, generating a positive pressure gradient that replaces the evaporated water with oil [30]. First, the water from inside the food migrates to the surface and then evaporates, allowing the crust to remain permeable. This process favors oil entry into the food to replace the water. However, the oil cannot penetrate the food easily during frying because of the constant vaporization and pressure buildup within the food [10].

The cooling phase effect (Figure 1B) occurs after the frying process is completed, when the food is removed from the frying medium and loses heat. At this point, the reduction in steam, with a consequent reduction in the internal pressure, results in a "vacuum effect", causing the oil trapped on the surface to penetrate through the pores while the food is cooling and vaporization is decreasing [17]. In addition, when the food is removed from the hot oil, the temperature difference causes an increase in capillarity, causing the oil to advance into the open pore spaces [8]. This phenomenon, known as the surface phenomenon, happens during the cooling of the food. It is also believed to be responsible for the balance between adhesion, which is the oil's ability to cling to the surface of the product, and oil drainage when the food is removed from the frying medium [8,31]. Also known as the solidification mechanism, the adhesion of oil to the product happens during cooling. As the temperature decreases, the oil viscosity increases, which leads to greater adhesion to the surface of the food. This solidification starts immediately after the product is removed from the hot oil [29].

The surfactant theory is based on hydrolytic reactions caused by the high temperature and water loss during the frying process. These reactions are responsible for forming surfactants and polar compounds that accelerate oil degradation and reduce the interfacial tension between oil and food. Thus, the contact between the two increases, causing excessive oil absorption [17,30]. Although the latter theory was discarded by Dana and Saguy [30], it is believed to be supported by evidence that oil absorption during frying largely depends on the contact angle and the interfacial tension between oil and water. Surfactants are also related to differences between the surface and the interior of the food caused by aged oil. As the contact time between the two increases, more heat is passed to the food, resulting in greater surface dehydration and water migration from the interior to the outside of the food. Thus, high concentrations of surfactants result in oil-soaked products with an over-fried exterior and an undercooked interior [32].

Despite the many theories, several factors are responsible for the transformations that occur during the frying process. Changes in the product and in the oil quality further complicate the phenomenon of oil absorption, which remains a complex mechanism that is not yet fully understood [33].
Food Properties Size, shape, contact surface, and composition of the food are conditions that can determine the amount of oil absorbed.In addition, the mass transfer phenomenon is responsible for replacing water with oil in food.Therefore, the amount of water strongly correlates with oil absorption [11,33].Water evaporation occurs on the surface resulting in structural changes, such as dehydration of the crust and formation of pores, leading to the increased entry of oil into the food [29].On the other hand, the contact surface and the product roughness are associated with surface permeability since these characteristics allow greater adherence of oil to the crust [11]. It is believed that the initial amount of water and solid content in the food are factors that can influence the absorption of oil [33].However, oil absorption appears to be more closely related to moisture loss than to initial moisture, with the amount of oil absorbed being directly proportional to the moisture lost [7].Furthermore, it has been suggested that oil absorption is greater when the contact surface increases and the product thickness decreases.This may be related to the greater amount of water in the food and, consequently, more significant oil transfer, given the need for a longer frying time [33]. Oil Properties During the frying process, the oil can change due to high temperature and the presence of oxygen, which can directly affect the physical properties of the oil, such as viscosity, interfacial tension, color, and density [21,22].In addition, thermal and oxidative degradation can occur, promoting the formation of volatile and nonvolatile products, which will compromise food quality due to contamination by toxic substances, sensory changes, and increased oil absorption [34]. The type of oil used also influences oil uptake by fried products; however, it may be more related to the quality of the oil [7,35].Studies claim that the amount of unsaturated fatty acids in the oil can influence its absorption by food and that the fatty acid composition of oils is highly related to their viscosity and surface tension, affecting the wettability of oil and food during the frying process, heat transfer and mass transfer rate, drainage of oil during the post-frying and cooling phases, and, therefore, the oil content absorbed by the final products [36]. Higher viscosity makes it harder for oil to drain from the product's surface.Viscosity increases with the formation of dimers and polymers in aged oils and a decrease in the contact angle due to the formation of polar compounds [7].The increase in viscosity can contribute to the increasing amount of oil on the surface of the food.In contrast, the decrease in the contact angle could increase the oil's wetting properties, resulting in a higher content of absorbed oil [21,37]. 
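The studies above describe oil uptake and moisture loss in terms of a simple mass balance. The short sketch below (Python) illustrates that bookkeeping for a single hypothetical frying experiment; the sample masses, moisture and fat fractions, and the resulting uptake-per-water-lost ratio are invented for illustration only and are not data from the cited studies.

```python
# Illustrative mass balance for one frying experiment.  All numbers below are
# hypothetical; the point is only to show how water loss and oil uptake are
# computed from before/after masses and compositions.

def frying_mass_balance(m_raw, moisture_raw, fat_raw, m_fried, moisture_fried, fat_fried):
    """Masses in grams; moisture and fat are mass fractions (0-1)."""
    water_lost = m_raw * moisture_raw - m_fried * moisture_fried   # g of water evaporated
    oil_uptake = m_fried * fat_fried - m_raw * fat_raw             # g of oil absorbed
    return water_lost, oil_uptake

# Hypothetical breaded portion: 100 g raw (75% moisture, 3% fat),
# 80 g after frying (55% moisture, 12% fat).
water_lost, oil_uptake = frying_mass_balance(100, 0.75, 0.03, 80, 0.55, 0.12)
print(f"water lost: {water_lost:.1f} g, oil uptake: {oil_uptake:.1f} g")
print(f"oil uptake per gram of water lost: {oil_uptake / water_lost:.2f}")
```

Expressing uptake per gram of water lost, rather than as an absolute fat content, is one simple way to compare coatings whose initial moisture differs, in line with the observation above that oil absorption tracks moisture loss more closely than initial moisture.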
Oil reuse is one of the factors that increase oil viscosity during frying. In addition to viscosity, other physical changes caused by repeated heating of the oil are the darkening of its color, which occurs due to the development of pigments during oxidation, the thermal decomposition of fatty acids that diffuse into the oil during frying, and foaming. Moreover, chemical changes, such as an increase in free fatty acids, result from the cleavage and oxidation of double bonds to form carbonyl compounds and low-molecular-weight fatty acids [38]. Studies have shown that both the oil and the food can deteriorate over repeated frying-oil reuse cycles, reducing healthy components such as omega-3 and omega-6 fatty acids, cis-fatty acids, and vitamin A [39].

Frying and Post-Frying Properties

The frying temperature is one of the most critical parameters. Ghaderi, Dehghannya, and Ghanbarzadeh [40] concluded that a higher temperature used in the frying process (up to 190 °C) reduces oil absorption. This increase in temperature also promotes physicochemical transformations that accelerate the degradation of the oil. Despite this, oil degradation increases viscosity, resulting in greater adhesion to the surface, consequently hindering the drainage process during the cooling time and preventing oil entry into the food [21]. Other aspects are related to post-frying conditions, specifically the cooling process. For example, oil absorption might decrease if the food is taken out of the oil while the temperature is still rising. Likewise, if the food is shaken vigorously immediately after it is removed from the oil, absorption can be limited due to the draining of the liquid oil on the product's surface [41].

Alteration of the Physicochemical, Sensory, and Nutritional Properties of Foods Related to the Frying Process

Many physicochemical transformations occur in foods during the frying process, promoting structural changes at the macro and micro levels and contributing to fried products' unique flavor and texture [10]. The chemical composition determines aspects of the product, including its sensory and nutritional characteristics [20]. The effect of frying on food involves the oil, which influences the quality of the food, and the direct impact of temperature on the fried product [3]. The heat transfer during frying is responsible for protein denaturation, gelatinization, and water vaporization in foods. The mass transfer is characterized by the loss of starch, soluble matter, and water, with the consequent diffusion of oil into the food [8]. The nature and rate of product decomposition depend, among other things, on the composition and time of oil use, the type of oil, the temperature and duration of the frying process, and the characteristics of the food to be fried [3]. The hydrolysis, oxidation, and polymerization processes that occur in the oil produce volatile and nonvolatile compounds that give fried products their typical flavor, making them more attractive [7]. The fat used in frying interacts with food ingredients to develop the characteristic texture of fried foods, as well as helping to transport, intensify, and release flavors [42]. As a naturally palatable agent, fat replaces water in fried foods, promoting a softening and wetting effect, enhancing the flavor, and contributing to the crispness of the product, making it more pleasant.
Browning, the crust formation, which results from dehydration of the food surface, and the formation of components that characterize the flavor of fried foods occur during the frying process and develop from combinations between the Maillard reaction and the absorption of oil components [3].The characteristic flavor results from the lipid degradation products originating from the frying oil.The attractive golden-brown color relates to products undergoing frying at the ideal temperature and time conditions.At the same time, those with a light golden color may have been fried at a lower temperature in a shorter time than ideal and may be undercooked inside [42].In addition to color, the Maillard reaction in these foods is also responsible for aroma, flavor, and texture [43]. The frying process can also change the nutritional composition of foods, leading to increased energy and loss of essential micronutrients.The protein content of foods is usually increased during frying due to dehydration [5].The increase in food energy is one of the main reasons for concern about fried food, and this happens due to the absorption of oil from frying when oil replaces the spaces left by the loss of moisture [44].In addition, the change in lipid composition may be a much more critical factor.The formation of trans-fatty acids is very common in producing these foods, and this lipid fraction is related to an increased risk of cardiovascular diseases.Likewise, potentially carcinogenic compounds have been identified [5]. Aladedunye and Przybylski [45] evaluated the effect of frying temperature on canola oil degradation.They observed a significant increase (p < 0.05) in the content of total polar compounds and trans-fatty acids as a function of temperature and frying time.The polyunsaturated fatty acids in the oil were reduced in direct proportion to temperature and frying time.The thermal and oxidative degradation rates were dramatically higher at the highest temperature tested (215 • C), forming greater amounts of potentially harmful components.The authors concluded that raising the frying temperature above 195 • C may cause the isomerization of polyunsaturated fatty acids.The number of trans isomers may increase in such a way as to nullify the claim that fried products are free of trans fats. On the other hand, when heated to high temperatures, oils can undergo reactions such as hydrolysis, oxidation, thermal-oxidative changes, and polymerization, which affect the quality of the oil and, consequently, the quality of fried foods [38,46].Qualitative changes in the frying oil, which may indicate these reactions, can be visually perceptible through the intensification of color, formation of smoke, and consequent formation of compounds harmful to health, such as acrolein, and an increase in viscosity [47]. 
About the micronutrients, the amount of minerals appears to be preserved and, in many cases, may increase due to the concentration of nutrients.On the other hand, vitamins are thermosensitive, and loss can occur through oxidation, with some vitamins and antioxidant compounds being eliminated in the process.Oil absorption can also affect the composition, texture, size, and shape of the food, resulting in a loss of nutrients, especially vitamins [11].Many vitamins are sensitive to oxidation and high temperatures.Still, this loss may depend more on the type of food and the internal temperature as higher temperatures reach the surface of the food.During frying, the oxidation of unsaturated fatty acids leads to the loss of vitamin E. Vitamin A can be lost to the oil during the process, which can vary with the frying time [48]. Piñeiro et al. [49] evaluated the effects of salting and frying on the content of watersoluble (B2, B3, B9, and B12) and fat-soluble (A, D, and E) vitamins in swordfish (Xhiphias gladius).The samples were fried in olive oil at 120 • C for 7 min.The authors observed that, due to the loss of water from the samples during salting, there was a significant decrease in the values of vitamin B2, while the content of the other vitamins remained constant.However, water-soluble and fat-soluble vitamins showed great thermolability in the frying process, with high retention percentages ranging from 50 to 100%.Vitamins B2 and B3 showed stability during frying, while vitamin B9 showed a loss of about 80%.Vitamin B12, on the other hand, showed a loss after salting and an increase after the frying process.Among the fat-soluble vitamins, vitamin A was significantly reduced after frying, while vitamins D and E remained stable.The stability of the vitamin E content after frying can be explained by olive oil, which is rich in this vitamin. Coatings Based on Flours from Plant Sources The coating or breading of foods has been studied as an alternative for reducing oil absorption in fried foods.Some researchers have already observed that using these coatings can positively reduce the oil absorbed in the frying process.These products give a crispy outer surface with an attractive color and act as a protective layer, preserving the food's moisture and flavor [21]. The food coating system can be divided into batter and breading [43].The batter is a liquid mixture composed of water, flour, starch, and seasonings, in which the food is immersed before the frying process to form a crispy crust, leaving the fried food with a unique flavor [50].It can be used as a single coating or combined with breading, acting as an adhesion coating between the food and the coating flour [43].Based on the literature, the thickness of the coating layer is crucial as it is directly related to sensory acceptance by consumers.More dense dough generally contains a higher percentage of flour and, thus, a greater coating pickup.Furthermore, batter consistency is an essential quality attribute of coatings, influencing their performance during frying [43,51]. 
Breading can be defined as the coating applied to a previously moistened or battercoated food.It is a type of flour, usually derived from cereals, which may or may not be seasoned [50].Traditionally, this type of coating is mainly composed of wheat flour or a product derived from cereal flour, such as breadcrumbs [43].These flours, however, are rich in gluten, which can retain gases and expand during frying, producing a cellular structure and providing a spongy, porous, and desirable coating, essential for a crispy texture, but which also facilitates moisture loss and absorption of oil [51].However, other types of flour derived from grains, such as rice and corn, can also be used as breading.Flours based on other plant sources, such as soybeans, potatoes, chestnuts, cassava, and almonds, are used to add value or have a lower carbohydrate content and a unique texture.Because they do not contain gluten in their composition, they are alternatives for individuals with celiac disease or non-celiac gluten sensitivity [43].In addition to people who have food intolerance or food allergies, there has been an increase in consumer awareness about the relationship between nutrition and health and, consequently, an increase in demand for health-promoting foods in recent years [52].Alternative flours are ingredients rich in functional compounds that are intended to produce a positive effect on consumer health [53].In this context, fruit and vegetable by-products, such as pomace, peel, pulp, and seeds, are good sources of phytochemicals, presenting antioxidant, anticancer, hypoglycemic, antimicrobial, cardioprotective, neuroprotective, anti-inflammatory, and immunomodulatory properties [52].In meat products, the addition of vegetable products can improve the functional properties and nutritional and sensory qualities of the final products [54,55]. Cereals Most breaded foods are made from processed cereals, and many studies have evaluated the effects of using flours produced from grains (Table 1).Gamonpilas et al. [56] investigated the impact of cross-linked tapioca starches on coating dough viscosity and oil absorption in breaded chicken strips.The chicken strips were breaded and fried in palm oil at a temperature of 180 • C for five minutes.The use of cross-linked tapioca starches in the dough significantly reduced the oil content on the surface of the samples.This reduction was attributed to the extent of cross-linking of the starch replaced by wheat flour.Cross-linking within the starch granules can make them more resistant to deformation during heating, promoting a barrier against oil.Thus, the usefulness and feasibility of using cross-linked tapioca starches in reducing oil absorption in fried products are highlighted. Park et al. [57] analyzed the influence of the addition of wheat fiber on the physicochemical properties and the sensory characteristics of pork loin chops.The formulations were produced with different concentrations of wheat fiber (0, 1, 2, 3, and 4%) and the breaded samples were fried in soybean oil at 170 • C. 
The fat content and, therefore, energy decreased significantly with increasing levels of wheat fiber.It is believed that the hydrating properties of dietary fiber promoted a more substantial binding power with moisture than with fat so that the moisture content was preserved and the absorbed fat content decreased.Trained panelists carried out the sensory evaluation for attributes such as color, flavor, softness, juiciness, and overall acceptance.Samples with added wheat fiber received higher evaluations for color, softness, juiciness, and general acceptance than the control group.In terms of color, samples with 1% and 2% dietary fiber received better evaluations, while tenderness, juiciness, and overall acceptability received excellent ratings as the amount of wheat dietary fiber added increased.The group treated with 3% wheat fiber received the highest score for overall preference.Dogan, Sahin, and Sumnu [58] compared the effects of adding different flours, including rice flour, on the quality of deep-fried chicken nuggets, using sunflower oil at 180 • C, for 3, 6, 9, and 12 min.Although it did not significantly affect moisture retention, the addition of rice flour in the formulation significantly reduced oil absorption in the samples when compared with the control (wheat and corn flour). Nasiri et al. [59] evaluated the effects of adding different flours to the coating batter, using soy and corn flour added to wheat flour, in proportions of 5 and 10%, to coat samples of shrimp nuggets (Penaeus spp.), which were pre-fried at 150 • C in sunflower oil for 30 s and frozen for a week.The samples were thawed at 4 • C for 24 h and then subjected to the frying process for 0, 60, 120, 180, 240, and 300 s at 150, 170, and 190 • C. Formulations containing corn flour showed higher oil absorption than those containing soy flour and control (wheat flour).In addition, among all the formulations tested, the formulation containing 5% of corn flour had the lowest moisture content.These results may be related to the viscosity of the dough containing corn flour, which did not differ from the control when 10% was added and decreased when 5% of corn flour was added.This way, the lower capacity of the dough to bind to water results in greater availability of water to be evaporated during frying and, consequently, greater oil absorption.Ketjarut and Pongsawatmanit [60] investigated the effects of the partial replacement of wheat flour (WF) by tapioca starch (TS) on the quality of breaded and fried chicken wings.Tapioca starch was added to the batter in 25 and 50% proportions.The samples were breaded and then pre-fried in refined palm oil for 4 min at an average temperature of 167 • C using a deep fryer.Part of the samples was taken to the oven at 195 • C for 8 min, and after cooling, they were packaged and frozen (−18 • C) for two weeks.The final frying was performed at an average temperature of 195 • C for 3 min.A decrease in oil content in the crusts (49.7-41.3%)was observed as the addition of TS increased.The crust of the dough containing only WF showed larger pores in the pre-fried product due to the thermosetting gluten network, facilitating oil entry into the crust layer.For sensory evaluation, the samples were randomly presented to fifty untrained panelists, one by one, immediately after frying.Appearance, color, crust thickness, adhesion between the crust layer and the chicken, crispness, and overall acceptance were evaluated using a nine-point hedonic scale.It was observed that the notes increased with 
increasing TS substitution in the WF/TS flour blends, and the sample containing 50% TS was the one that presented the best results for all the attributes evaluated.

Kilinççeker [61] evaluated the use of oat flour as a coating material in fried chicken meatballs, replacing wheat and corn flour with oat flour in proportions of 3:1, 1:1, and 1:3 (w/w) wheat:oats in the batter and 3:1, 1:1, and 1:3 (w/w) corn:oats in the breading. Formulations without added oats were used as a control, and all samples were fried for 5 min at 180 °C. The moisture content in the fried samples decreased with increasing oat flour in the dough, while the lipid content increased. The amount of gluten was reduced as the oat flour content in the dough increased. Thus, the water loss and increased oil absorption may be related to the gluten reduction, since the structure formed by gluten offered more resistance to mass transfer in the samples. Despite this, the use of oat flour positively affected sensory properties such as appearance, color, and flavor. Ten trained judges and a nine-point hedonic scale were used for sensory evaluations of appearance, color, odor, taste, and texture. The results showed that all the breading mixes with oat flour had higher appearance, color, taste, and overall acceptability scores than the control. The results showed that oat flour has functional and nutritional properties and can be used in coating formulations.

Legumes

Legumes are plants of the Fabaceae (or Leguminosae) family, or the fruit of these specific plants. They are characterized by seeds carried in pods, which are often edible and form part of the diet. Their nutrient content includes complex carbohydrates, low fat, proteins with a good amino acid profile, vitamins such as the B complex, folate, ascorbic acid, and vitamin E, and minerals, including calcium, copper, iron, magnesium, phosphorus, potassium, and zinc, as well as antioxidants, polyphenols, and several other phytochemicals with biological activities important to human health [68]. As a result, legume flours are an interesting strategy for breading meat products and have been evaluated in several studies (Table 1).

In a study that evaluated the use of chickpea flour in the coating of chicken nuggets, previously covered with batter and fried in hydrogenated palm olein margarine at 180 °C, Kilinççeker and Kurt [63] found that, in addition to improving sensory properties, coated samples that contained a greater amount of chickpea flour in the batter showed the highest moisture preservation and the lowest fat content. The authors concluded that this result could be attributed to the higher adhesion effects of chickpea flour in the batter, since its moisture content is lower and its protein content is higher, which strengthened the structure of the batter coating and prevented moisture migration from the nuggets. As for the sensory analysis, ten trained judges assessed the sensory properties using an eleven-point hedonic scale for appearance, color, odor, taste-flavor, and texture. Increasing chickpea flour in the mixtures increased the scores of the sensory parameters, and all sensory properties of the coated nuggets were at highly acceptable levels. The increasing sensory scores were related to the golden color and the pleasant fried and soft chickpea odor, taste, and texture.
Kilinççeker, Hepsağ, and Kurt [64] evaluated the potential use of lentil (L) and chickpea (C) flours as coating materials for fresh and frozen chicken meatballs fried in sunflower oil. The flours were mixed in different proportions (2:1, 1:1, and 1:2 L:C w/w), and corn flour was used for the control samples. The authors observed that, after the frying process, the flour mixtures in the proportions of 2:1 and 1:1 L:C (w/w), despite not showing a significant difference, promoted greater moisture retention than the control samples, thus being good options for the coating material. The sensory analysis also did not show a statistically significant difference (p > 0.05). Samples were served randomly to ten trained panelists, and the sensory properties were assessed using a nine-point hedonic scale for appearance, color, odor, flavor, and texture. The sensory scores of the deep-fat-fried meatballs coated with lentil and chickpea flour were acceptable.

Nasiri et al. [59] also evaluated the effects of using soy flour to coat shrimp nuggets (Penaeus spp.). The coating promoted greater moisture retention and less oil absorption during frying. The water retention was attributed to the high water-binding capacity of soybean flour. In contrast, the lower oil absorption was attributed to the higher protein content, higher water-binding capacity, and higher viscosity of the flour, which favored control of the moisture loss and, consequently, of the oil absorption during frying. Dogan, Sahin, and Sumnu [58] evaluated the effects of adding soy flour on the quality of deep-fried chicken nuggets, using sunflower oil at 180 °C for 3, 6, 9, and 12 min. After frying, soybean flour promoted a higher moisture content due to a more resistant crust that served as a barrier to water loss. The texture of the fried products was also significantly better when using soy flour (p < 0.05). The authors concluded that, due to its high water-binding capacity, the batter with added soy flour could control the loss of moisture and, therefore, the absorption of oil during frying. In addition, the higher viscosity of the batter with soy flour increased adhesion in the samples, which was also effective in controlling oil absorption.

Para et al. [65] investigated the influence of black bean flour as a coating material on the physicochemical properties of chicken nuggets, using batter formulations prepared with the flour at concentrations of 25 and 35%. The samples were covered with the batter and fried in a fryer with refined cottonseed oil for 3-5 min at 175 °C.
Characteristics such as coating thickness, cooking yield, crude protein, ethereal extract, ash, and crude fiber content of the nuggets increased significantly (p < 0.05) with the increase in flour.The pH and the moisture/protein ratio decreased significantly (p < 0.05).The ether extract of the formulation containing 35% (17.43%) of the flour was higher than the formulation containing 25% (16.49%), which may be due to the greater absorption of oil by the sample since the flour content is higher.Black bean flour provided higher scores for the sensory attributes of color and appearance, flavor, texture, and general acceptability, showing a significant difference (p < 0.05) for all but juiciness.Color and appearance scores, flavor, texture, and overall acceptability decreased significantly as the level of black bean flour increased in the batter mix.This may be due to the intensity of the dark pigmentation of black bean flour and the greater proportion and a thick coating of flour, masking the product's flavor.These results were better when the flour was used at a concentration of 25%.This proportion is considered optimal in the preparation of chicken nuggets. The use of sorghum (Sorghum bicolor), millet (Pennisetum glaucum), and soybean (Glycine max) flours in deep-fried chicken breast fillet coating were evaluated by Kwaw et al. [62].The fillets were breaded with the flours, used individually, and fried in sunflower oil at 175 • C.After frying, when comparing the samples coated with flour individually and the control sample breaded with wheat flour, the samples coated with legume flour had a lower fat content.This result may be due to an increased porosity of the crust of the samples coated only with cereal flours, which promoted increased oil absorption.The percentage of losses caused by frying was related to moisture loss and oil absorption, and samples breaded with soy flour (16.73%) had the lowest percentages after the analyses, which may be related to protein denaturation.The results indicated that legume-based flours developed more substantial barrier properties against moisture losses and, thus, low percentages of losses by frying.A sensory test was also conducted with 45 untrained panelists using a nine-point hedonic scale to evaluate color, flavor, moisture, texture, aroma, and overall acceptability.The sample breaded with soy obtained better ratings for moisture, compared with cereal flours, and for texture, compared with the uncoated sample.Crust color evaluation revealed that the sample breaded with soy was the most preferred among samples coated with flour individually.The assessment of flavor, aroma, and overall acceptability showed a similar trend with soy, with higher scores than samples breaded with cereals. Considering the mixture of different sources of flour, there are few studies.Kwaw et al. [62] investigated flour mixtures prepared in proportions of 1:1 sorghum:soybean and millet:soybean (w/w), and 1:1:1 sorghum:millet:soybean (w/w/w), and samples were fried in sunflower oil at 175 • C. 
The percentage of losses caused by frying in samples breaded with a mixture of sorghum and soy flours showed the lowest percentages (20.88%) after analysis, which, as discussed previously, may be related to protein denaturation.The authors reported that samples coated with mixtures containing soy flour showed a slight decrease in fat content, which is associated with the properties of soy in the formulations.As for the sensory test, the samples coated with the composite flours received better moisture ratings than the individual flours.Sorghum-soy was the preferred breading mixture, and evaluation of flavor, aroma, and overall acceptability showed a similar trend with soy composite flours (except millet).Chicken breast coated with an equal proportion of soy and sorghum flour was the most preferred, with an overall acceptability of 84.44%, showing that this formulation could improve the quality and acceptability of chicken breast compared with conventional wheat flour.The results showed that combining soy and sorghum produced a composite flour positively impacting the breaded chicken's characteristics. Fruits and Vegetables Fruits and vegetables are high in vitamins, minerals, and fiber.The total use of these foods can be an alternative, offering products of high nutritional value, developed from parts usually discarded, and contributing to reducing the negative impact on the environment [69].Based on this, one of the ways to use it would be as flour obtained from these vegetables, with the potential for use in the coating of meat products (Table 1). In a study that characterized the chemical composition of chicken nuggets breaded with pequi pulp flour in the breading layer, Braga-Souto et al. [66] used a control formulation composed of breadcrumbs and three formulations containing pequi pulp flour in proportions of 25, 50, and 100%.The samples were immersed in batter, pre-floured with the formulations containing pequi flour, immersed in batter again, breaded with cornmeal, and subjected to frying in an electric fryer with soybean oil at a temperature of 170 to 180 • C for 4 min.The pequi pulp flour added nutritional value to the breaded chicken, contributing to the lipid and protein content.Still, no significant differences (p > 0.05) were found between the moisture content and oil absorption of the formulations.Sensory analysis was conducted with sixty panelists who evaluated the samples based on appearance, texture, flavor, and overall acceptance characteristics using a nine-point hedonic scale.The results showed no statistically significant difference between treatments (p > 0.05), and the products were accepted.Pequi flour has the same potential for consumption and acceptance by the consumer as common breading, adding greater nutritional value. Freitas et al. [67] analyzed the use of potato flour (Solanum tuberosum L.) 
cv. Monalisa in a flour mix (corn, breadcrumbs, and potato) to coat breaded chicken and compared it with commercial formulations. The chicken breast fillets were breaded with formulations containing 40, 60, and 80% potato flour. Then, they were pre-fried and fried at 180 °C for 30 s and 3 min, respectively. The lipid content after the frying process was lower in the formulations with potato flour, decreasing as the amount of potato flour in the formulations increased. This result can be attributed to the smaller granulometry of the potato flour, since the samples coated with flours of larger granulometry absorbed more oil. A sensory analysis was carried out to determine the overall acceptance of the proposed formulations, using fifty untrained panelists and a structured nine-point hedonic scale. The results showed that adding different levels of potato flour did not change the overall acceptance of the products.

Thus, using vegetable flours, in addition to guaranteeing one of the main objectives of food coating, which is the improvement of sensory attributes, can improve the nutritional value and reduce oil absorption, both of which are directly related to the health of consumers and are reasons for concern regarding fried foods (Figure 2).

Conclusions

Coating foods with vegetable flours other than breadcrumbs and wheat can contribute to the reduction of oil absorption by meat foods during frying. The use of flours from different vegetable sources is a good alternative because they can add nutritional value, enhance fiber content, and promote the modification of the surface of meat foods, favoring the development of desirable sensory characteristics. The variety of flour sources presented in this study was also effective in reducing the undesirable changes resulting from the frying process. Furthermore, these flours contributed to moisture preservation in the foods, reducing the caloric value compared with breadcrumbs and wheat, and ensuring positive effects on the physicochemical, nutritional, and sensory properties.

Figure 1. Oil absorption mechanisms. (A) During frying: (1) heat transfer from oil to food; (2) temperature rise; (3) loss of water in the form of steam; (4) increased internal food pressure; (5) oil inlet, replacing water. (B) After frying (cooling): (6) temperature reduction; (7) reduction of the internal pressure of the food and formation of the vacuum effect; (8) absorption of the oil present on the surface of the food. Based on Brannan et al. [17].

Table 1. Application of coatings based on flours from vegetable sources in deep-fried meat products.
v3-fos-license
2020-12-08T02:00:43.658Z
2020-12-07T00:00:00.000
227336448
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.3.023138", "pdf_hash": "9fe5c1c67cbaa23bc84f024d55e3534b1cc36a1b", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41252", "s2fieldsofstudy": [ "Physics" ], "sha1": "39b2d186fbb32f26b8aa1f8cce633d864b50de7f", "year": 2020 }
pes2o/s2orc
Topological chiral spin liquids and competing states in triangular lattice SU($N$) Mott insulators

SU($N$) Mott insulators have been proposed and/or realized in solid-state materials and with ultracold atoms on optical lattices. We study two-dimensional SU($N$) antiferromagnets on the triangular lattice. Starting from an SU($N$) Heisenberg model with the fundamental representation on each site, we perform a self-consistent calculation in the large-$N$ limit and find a variety of ground states, including valence cluster states, stripe-ordered states with a doubled unit cell, and topological chiral spin liquids. The system favors a cluster or ordered ground state when the number of flavors $N$ is less than 6. It is shown that increasing the number of flavors enhances quantum fluctuations and eventually transforms the clusterized ground states into topological chiral spin liquids. This chiral spin liquid ground state has an equivalent in square-lattice SU($N$) magnets. We further identify the corresponding lowest competing states, which represent another distinct type of chiral spin liquid state. We conclude with a discussion of the relevant systems and the experimental probes.

I. INTRODUCTION

SU(N) Mott insulators are representative examples of quantum systems with a large local Hilbert space in which quantum fluctuations can be strongly enhanced and exotic quantum phases could be stabilized. All the SU(N) spin operators are present in the effective model for these Mott insulators and are able to shuffle the spin states rather actively within the local Hilbert space. The system can thus be quite delocalized within the local Hilbert space, which enhances quantum fluctuations and can induce exotic quantum phases. This aspect is fundamentally different from SU(2) Mott insulators with large-S local moments, which also have a large local Hilbert space. For the large-S SU(2) Mott insulators, the pairwise Heisenberg interaction is quite ineffective at delocalizing the spin states in the large-S Hilbert space, and thus quantum fluctuations are strongly suppressed. Hence it is the conventional wisdom not to search for exotic quantum phases among the large-S SU(2) Mott insulators, but among the spin-1/2 quantum magnets with strong frustration. In contrast, the emergence of SU(N) Mott insulators brings a new search direction for exotic quantum phases. SU(N) Mott insulators are not a theoretical fantasy, but exist in nature. It has been shown that ultracold alkaline-earth atoms (AEA) on optical lattices can simulate quantum many-body physics with an SU(N) symmetry without any fine-tuning [1]. The nuclear spin of fermionic AEA can be as large as I = 9/2 for $^{87}$Sr, while the outer-shell electrons give a total spin S = 0 and make the hyperfine coupling inactive. This observation effectively extends the realization of SU(N) magnets up to N = 2I + 1. Early efforts by Congjun Wu explored total spin-3/2 alkaline fermions on optical lattices, where an SU(4) symmetry can be achieved with fine tuning [2][3][4][5]. Quantum Monte Carlo simulations were introduced later to study the magnetic properties of the SU(2N) Hubbard model [6,7]. There is also some effort in searching for an emergent SU(N) symmetry in real materials, particularly for the SU(4) case. The two-orbital Kugel-Khomskii model can become SU(4) symmetric after some fine tuning [8,9].
Experimental and numerical evidence also suggests that Ba$_3$CuSb$_2$O$_9$ could be a prominent candidate on a decorated honeycomb lattice [10], though the Cu-Sb dumbbell is quenched rather than an active degree of freedom. The SU(4) Heisenberg model has further been proposed for the spin-orbit-coupled Mott insulator α-ZrCl$_3$, where the degree of freedom is the spin-orbit-entangled J = 3/2 local moment [11] on a honeycomb lattice [12]. More recently, it has been proposed that the Mott insulating and superconducting behaviors in twisted bilayer graphene can be captured by a two-orbital Hubbard model with an emergent SU(4) symmetry on a moiré triangular lattice [13]. Other work even suggested that double moiré layers, built from transition metal dichalcogenides or graphene and separated from one another by a thin insulating layer, would be a natural platform to realize Hubbard models on the triangular lattice with SU(4) or SU(8) symmetries [14,15]. Owing to the enhanced quantum fluctuations of SU(N) Mott insulators, the pioneering theoretical works [16,17] by Hermele et al. obtained topological chiral spin liquid (CSL) ground states with intrinsic topological order even for the unfrustrated square lattice when N ≥ 5 [16][17][18]. Depending on the atom occupation numbers, the system can support both Abelian and non-Abelian statistics for the anyonic excitations. Since unfrustrated lattices such as the square and honeycomb lattices already bring exotic and interesting physics [19][20][21], frustrated lattices could harbor further nontrivial quantum phases, for example the triangular lattice with SU(3) and SU(4) spins [22][23][24].

In this work, we focus on the SU(N) Heisenberg model on the triangular lattice, where each lattice site comprises the fundamental representation of the SU(N) group. For AEA, this corresponds to one atom per lattice site and is known to be most stable against three-body loss. Because the 1/N filling is kept throughout, this large-N limit differs fundamentally from the large-N extension of spin-1/2 SU(2) models, where the filling of 1/2 is kept and two sites can form an SU(2) spin singlet [25]. Here, as N sites are needed to form a singlet, the valence cluster solid (VCS) state is generically disfavored in the large-N limit. Instead, in our large-N calculation, two types of CSL states with background U(1) gauge flux φ = π − π/N and π − π/(2N) (see Fig. 1(a)) are identified as the ground and lowest competing states for 6 ≤ N ≤ 9, respectively. For smaller N, various symmetry-broken cluster/stripe states are obtained. We expect the large-N results to be more reliable when N is large.

The rest of this paper is organized as follows. The general Heisenberg model of SU(N) spins is introduced and simplified at the large-N saddle point in Sec. II. The self-consistent minimization algorithm is implemented to solve the reduced mean-field spinon Hamiltonian, and the technical details of this algorithm are described in Sec. III. It is emphasized that the optimized solutions strictly satisfy the local constraints. In Sec. IV, both the ground states and the lowest competing states for 2 ≤ N ≤ 9 are reported and analyzed, especially the two types of CSL states. Finally, in Sec. V, this work concludes with a discussion of the relevant systems and the experimental probes.

II. THE SU(N) HEISENBERG MODEL IN THE LARGE-N APPROXIMATION

We begin with an SU(N) Heisenberg model on the triangular lattice where each site comprises the fundamental representation of the SU(N) group.
This model can be obtained from the strong-coupling limit of an SU(N) Hubbard model at 1/N filling, i.e., one particle per site. The SU(N) Heisenberg model is given as
$$H = J \sum_{\langle r r' \rangle} S^{\alpha\beta}(r)\, S^{\beta\alpha}(r'),$$
where J is the antiferromagnetic exchange interaction and the sum is taken over the nearest-neighbor bonds and the spin flavors. The SU(N) spin operators can be simply expressed with the Abrikosov fermion representation $S^{\alpha\beta}(r) = f^{\dagger}_{r\alpha} f_{r\beta}$, with $\alpha, \beta = 1, \dots, N$. Hereafter, a summation over repeated Greek indices is implied unless otherwise specified. A local constraint on the fermion occupation, $f^{\dagger}_{r\alpha} f_{r\alpha} = 1$, is imposed to reduce the enlarged Hilbert space. In principle, an SU(N) singlet could be formed by N sites, but the SU(N) exchange rapidly transforms one N-site singlet into other sets of N-site singlets, so the conventional valence-bond picture fails. Instead, the spin Hamiltonian Eq. (1) is solvable in the limit N → ∞ via a large-N saddle-point approximation in the imaginary-time path-integral formulation. The corresponding partition function is expressed as a functional integral with an action S, in which $\mu_r$ is the Lagrange multiplier enforcing the Hilbert-space constraint, $\chi_{rr'}$ is the auxiliary field decoupling the fermion operators, and $\bar{J} \equiv NJ$. As the action S scales linearly with N, the large-N limit leads to a saddle-point approximation, with the saddle-point equations $\chi_{rr'} = -\bar{J}\, \langle f^{\dagger}_{r'\alpha} f_{r\alpha} \rangle / N$ and $\langle f^{\dagger}_{r\alpha} f_{r\alpha} \rangle = 1$, and a saddle-point (mean-field) Hamiltonian $H_{\rm MF}$ for the fermionic spinons that is quadratic in the spinon operators, with hopping amplitudes set by $\chi_{rr'}$ and on-site potentials set by $\mu_r$. In the following, we search numerically for the saddle point with the lowest mean-field energy $E_{\rm MF}$ and discuss the properties of the resulting ground states.

Before that, we first discuss a lower bound on $E_{\rm MF}$ and the conditions for its saturation, for reference. An exact lower bound on $E_{\rm MF}$ for generic lattices [26] was first obtained by Rokhsar at half filling, and was shown to be saturated by valence-bond states with various spin-singlet coverings. These states break the lattice translation but preserve the spin-rotation symmetry, and fluctuations beyond the mean field can break the high degeneracy among them [25]. The lower bound was generalized to 1/N filling in Ref. [17]; it is expressed in terms of the number of lattice sites $N_s$ and the maximal coupling $J_{\rm max}$, and we set $J_{\rm max} = J$ for each bond. The saturation of this bound, Eq. (5), is reached by an N-simplex VCS state composed of N-site simplices; that is, every site on the lattice is directly connected to the other N − 1 sites within the same cluster by a single bond with exchange coupling $J_{\rm max}$. On a d-dimensional lattice, the N-simplex VCS with N > d + 1 is prohibited without fine-tuning of the exchange. Thus, there are only two-simplex and three-simplex VCS's on the triangular lattice. For N ≥ 4, possible cluster states are general N-site ones satisfying a stricter energy bound [17] that involves the total number of isolated bonds in the lattice.

III. THE SCM ALGORITHM

To determine the saddle-point solutions, we closely follow the numerical self-consistent minimization (SCM) algorithm developed in Refs. 16 and 17. The technical details of the algorithm are described in the following. During one energy-optimization run for a given geometry with periodic boundary conditions, the algorithm starts by initializing the field $\chi_{rr'}$ on each bond randomly, $\chi_{rr'} = |\chi_{rr'}| e^{i\phi_{rr'}}$, with a uniform distribution of amplitudes $|\chi_{rr'}| \in [0.02, 0.20]$ and phases $\phi_{rr'} \in [0, 2\pi]$. The chemical potentials are initially set to the default value $\mu_r = 0$.
Obviously, the local constraints are in general violated at this stage, and the deviation of the local fermion density can be denoted as $\delta n_r = 1 - \langle f^{\dagger}_{r\alpha} f_{r\alpha} \rangle$, where the expectation value is taken in the ground state of $H_{\rm MF}$ with the unchanged $\mu_r$. To obtain the correct density, one must adjust the chemical potential $\mu_r$ at each site by $\delta\mu_r$. It is found that, to lowest order, the desired adjustment of the chemical potentials can be expressed as $\delta\mu_r = \sum_{r'} G^{-1}(r - r', 0)\, \delta n_{r'}$ (Eq. (6)), where $G^{-1}(r - r', 0)$ is nothing but the inverse of the density-density correlation at zero frequency. The elements of the correlation $G(r - r', 0)$ can in principle be calculated at the single-particle level from the mean-field Hamiltonian $H_{\rm MF}$, with the help of standard linear-response theory. However, the naive inversion of $G(r - r')$ diverges. Physically, this is because a uniform adjustment of the chemical potential at every site is trivial and leaves the fermion density intact. This obstacle is overcome by following the treatment suggested in Ref. 17, that is, diagonalizing $G(r - r', 0)$ and focusing only on its non-zero eigenvalues $g_i$. Note that the index i refers to the basis in which $G(r - r', 0)$ is diagonal. In this basis, the relationship Eq. (6) becomes $\delta\mu_i = \delta n_i / g_i$ (Eq. (7)); here the index i is not summed. For all indices with vanishing eigenvalues $g_i$, $\delta\mu_i$ is taken to be zero. The adjustment of the chemical potential $\delta\mu_r$ is then well defined. A simple replacement $\mu_r \to \mu_r + \delta\mu_r$ gives a new mean-field Hamiltonian $H_{\rm MF}$ and a new ground state, which in turn affects the local fermion density and results in a new deviation $\delta n_r$. These steps constitute a self-consistent procedure, and the problem of finding an appropriate set of chemical-potential shifts $\delta\mu_r$ is solved by iterating the procedure until the density is uniform. This is the core of the algorithm that preserves the local single-occupation constraints $n_r = \langle f^{\dagger}_{r\alpha} f_{r\alpha} \rangle = 1$.

With the modified chemical potentials calculated in the previous step, an updated set of auxiliary fields $\chi_{rr'}$ can be determined via $\chi_{rr'} = -\bar{J}\, \langle f^{\dagger}_{r'\alpha} f_{r\alpha} \rangle / N$, but the local constraints may then be violated once again. The amended auxiliary fields and chemical potentials are treated as a new and better starting point, and the procedure described above is iterated until the ground-state energy converges within a given numerical tolerance. It has been shown in Ref. 17 that the energy of the final state must be less than or equal to that of the initial state after the optimization process, so we eventually obtain a local energy minimum. In order to approach the global minimum as closely as possible, we start from at least 50 randomly initialized field configurations and collect the resulting local minima that satisfy the single-occupation constraints. The lowest two are accepted as our best estimates of the ground-state and lowest-competing-state energies. A schematic implementation of this optimization loop is sketched below.
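The following is a minimal, self-contained sketch (in Python) of the SCM loop just described, run on a small periodic triangular cluster. It is illustrative only: the sign convention for the on-site $\mu_r$ term, the damped proportional update of $\mu_r$ (used here in place of inverting the zero-frequency density-density correlation), the simple linear mixing of the bond fields, and the omission of the constant terms of the mean-field energy are all simplifying assumptions of this sketch rather than the implementation used in the text.

```python
import numpy as np

# Illustrative SCM-style iteration on a small periodic triangular cluster.
# Conventions (signs, normalizations, convergence thresholds) are assumptions
# of this sketch, not the implementation of Refs. 16 and 17.

rng = np.random.default_rng(0)

L1, L2 = 6, 6            # cluster size along the two triangular lattice vectors
N = 6                    # number of SU(N) flavors; filling is 1/N (one fermion per site)
Jbar = 1.0               # rescaled coupling, Jbar = N*J
Ns = L1 * L2
assert Ns % N == 0       # integer number of occupied orbitals per flavor

def site(x, y):
    """Periodic 2D coordinates -> site index."""
    return (x % L1) * L2 + (y % L2)

# Three directed nearest-neighbor bonds per site cover every bond exactly once.
bonds = [(site(x, y), site(x + dx, y + dy))
         for x in range(L1) for y in range(L2)
         for dx, dy in [(1, 0), (0, 1), (1, -1)]]

# Random initialization of the bond fields, as in the text.
chi = {b: rng.uniform(0.02, 0.20) * np.exp(1j * rng.uniform(0.0, 2 * np.pi))
       for b in bonds}
mu = np.zeros(Ns)        # Lagrange multipliers, initially zero
n_occ = Ns // N          # occupied single-particle orbitals per flavor

for it in range(500):
    # Flavor-diagonal single-particle Hamiltonian (one flavor block).
    H = np.zeros((Ns, Ns), dtype=complex)
    for (r, rp), c in chi.items():
        H[r, rp] += c
        H[rp, r] += np.conj(c)
    H += np.diag(mu)     # on-site term; the sign convention is an assumption here

    evals, evecs = np.linalg.eigh(H)
    occ = evecs[:, :n_occ]                        # fill the lowest n_occ orbitals
    dens = N * np.sum(np.abs(occ) ** 2, axis=1)   # total density from all N flavors

    # Enforce single occupancy.  The paper uses the inverse density-density
    # response restricted to its non-zero eigenvalues; a damped proportional
    # update is used here for brevity.
    mu += 0.5 * (dens - 1.0)

    # Saddle-point update chi_{rr'} = -Jbar * <f^dag_{r'a} f_{ra}> / N; with all
    # N flavors identical this reduces to -Jbar times the per-flavor correlator.
    new_chi = {(r, rp): -Jbar * np.sum(occ[r, :] * np.conj(occ[rp, :]))
               for (r, rp) in bonds}

    delta = max(abs(new_chi[b] - chi[b]) for b in bonds)
    chi = {b: 0.5 * chi[b] + 0.5 * new_chi[b] for b in bonds}   # linear mixing
    if delta < 1e-8 and np.max(np.abs(dens - 1.0)) < 1e-6:
        break

print("iterations:", it + 1)
print("max |n_r - 1| :", np.max(np.abs(dens - 1.0)))
print("mean |chi|    :", np.mean([abs(chi[b]) for b in bonds]))
```

For quantitative work, the $\mu_r$ update should use the projected inverse response described above, the mean-field energy (including its constant terms) should be tracked for convergence, and many random initializations should be compared, as in the text.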
IV. THE MEAN-FIELD RESULTS

We now describe the ground states and lowest competing states of the mean-field Hamiltonian Eq. (4) obtained from the SCM algorithm. Because different geometries can accommodate different candidate ground states, especially cluster states of unknown size, in the calculation we consider all parallelogram unit cells of dimensions $\ell_1 \times \ell_2$ (in units of the lattice vectors) with $\ell_{1,2} \leq 2N$ for 2 ≤ N ≤ 5 and $\ell_{1,2} \leq N$ for N ≥ 6, respectively. In some ordered cases for N = 4, 5, larger unit-cell sizes are also considered for confirmation. Each unit cell is repeated $L_{1,2}$ (≥ 20) times along the triangular Bravais lattice vectors to form a superlattice, and the superlattice itself has periodic boundary conditions. In practice, for each case we ran the SCM procedure at least 50 times with different random seeds to avoid missing ground states due to numerical problems. The results are listed in Table I and Table II.

For N = 2 and 3, the lowest energies we found saturate the bound Eq. (5), meaning that the ground states are dimer and three-site simplex VCS states, respectively. The obtained ground states are actually highly degenerate, because any state in which each site is covered by one dimer/cluster unit has the same energy in the large-N limit. An ordered dimer or three-site simplex VCS state, as illustrated in Figs. 2(a-b), is expected to be selected beyond the mean field. Note that the non-zero expectation values $|\chi_{rr'}|$ in Figs. 2(a) and 2(b) can only take the values 1/2 and 1/3, respectively. The true ground state for N = 2 is the well-known 120° Néel state and differs from the mean-field result here. The N = 3 case was also shown to have a three-sublattice magnetic order in previous numerics [27,28]. This reflects the deficiency of the large-N approximation for small N. In Table I, we further find that the lowest competing states are CSL states with φ = π − π/N.

The N = 4 case is more compelling due to the rapid growth of experimental proposals [14,15,29,30], and the large-N approximation could provide useful insight here. We find that the ground-state energy does not saturate the bound Eq. (5), as expected, but saturates the stricter bound discussed previously. The ground states are four-site VCS states depicted in Fig. 2(c), accompanied by a large degeneracy for the same reason as for N = 2 and 3. The non-zero expectation values are $|\chi_{rr'}| = 1/4$. A similar plaquette order was also reported in a recent work [15]. In Fig. 3, we depict the lowest competing state for N = 4. One can see that the lattice is covered by stripes with three different bond expectation values $|\chi_{rr'}| \approx 0.030$, 0.158, and 0.224. Along one of the lattice vectors there is a unit-cell doubling that breaks the lattice translation. The background fluxes through each plaquette are inhomogeneous as well and manifest the same unit-cell doubling pattern. Specifically, the average flux is the constant value $\phi_{\rm avg} = \pi - \pi/N$ with N = 4. The same ordered pattern has also been obtained in a DMRG study of the Kugel-Khomskii model [31], where the symmetry breaking was attributed to symmetry-allowed umklapp interactions in certain finite geometries. In our results, however, such a stripy state is not the ground state even in the two-dimensional limit.

From N = 5, neither the bound Eq. (5) nor the stricter one can be saturated; thus, the ground states are no longer VCS. Our numerical calculation finds that the ground state for N = 5 is very similar to the lowest competing state for N = 4 (see Fig. 2(d)), except that the bond expectation values can only take $|\chi_{rr'}| \approx 0.072$, 0.145, and 0.179, and the average background flux $\phi_{\rm avg} = \pi - \pi/N$ shifts accordingly. It also breaks the lattice translation symmetry along one of the lattice vectors and manifests itself as a stripe pattern with a doubled unit cell. The lowest competing state for N = 5 is a CSL with φ = 4π/5. With further increasing N, the frustration is enhanced. Eventually, the ground states become CSL states with φ = π − π/N for 6 ≤ N ≤ 9, and correspondingly the lowest competing states we found share the same form.
They are CSL states as well, except that the background magnetic fluxes shift to φ = π − π/(2N). With the two types of CSL states identified, we now discuss the properties of these topological liquid states. The CSL is characterized by a mean-field saddle point in which all bonds on the lattice have a uniform magnitude |χ| modulated by a U(1) gauge field a_{rr'}, so that the flux φ through each plaquette is a constant. The CSL breaks both parity and time-reversal symmetry. The bond phase field a_{rr'} is treated as a fluctuating U(1) gauge field coupled to the fractionalized spinons. By examining the mean-field Hamiltonian Eq. (4) at the CSL saddle points, we find that both types of CSL states have a fermion band structure with N bands of which only the lowest is filled (see Fig. 1(b)). The Fermi level lies in the gap between the lowest two bands, so the following discussion applies to both CSL states. Furthermore, the first type of CSL, with φ_{u,d} = π − π/N on the triangular lattice, can be mapped to its counterpart on a square lattice up to a time reversal. If we regard an adjacent pair of up and down triangles as a unit, as shown in Fig. 4, the phase of the shared bond does not contribute to the total flux, so the background U(1) flux φ_s through each square plaquette is simply the sum of the fluxes through the two triangles. In Ref. [16], Hermele et al. found that the CSL states are ground states for 5 ≤ N ≤ 10 on the square lattice in the mean-field calculation.
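The flux bookkeeping behind this mapping can be written out explicitly. The relation below is a reconstruction from the statement that the shared bond drops out of the combined loop, so the sign and modular conventions should be read as assumptions rather than as quoted from the original:

\[
\phi_s \;=\; \phi_u + \phi_d \;=\; 2\left(\pi - \frac{\pi}{N}\right) \;\equiv\; -\frac{2\pi}{N} \pmod{2\pi},
\]

which, after the stated time reversal (φ → −φ), is consistent with a square-lattice state carrying a flux of 2π/N per plaquette.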
Because the spinons are gapped out by the emergent U(1) gauge-flux pattern, a Chern-Simons term enters the theory of U(1) gauge fluctuations. After integrating out the gapped spinons, one obtains a topological quantum field theory with a Chern-Simons term corresponding to chiral Abelian topological order and anyonic statistics; the spinons are converted into anyons with statistical angle π ± π/N. Gapless chiral edge modes carrying the spin degree of freedom are also supported by the CSL, and the low-energy edge theory is described by the SU(N)_1 WZW model. V. DISCUSSION To summarize, we study Heisenberg antiferromagnets with SU(N) symmetry on the triangular lattice. In the large-N approximation, a variety of ground states and lowest competing states are identified for different values of N. At the mean-field level, the ground state for 2 ≤ N ≤ 4 is an N-site VCS state with a large degeneracy. Ordering patterns with a doubled unit cell and average background flux φ_avg = π − π/N are found for N = 4 and 5; these ordered states break the lattice translation symmetry along one of the lattice vectors and become the lowest competing state and the ground state for N = 4 and 5, respectively. The frustration from the SU(N) exchange interaction is enhanced for N > 5, resulting in two types of CSL states as the lowest two states for 6 ≤ N ≤ 9. Among them, the CSL states with φ = π − π/N have the lower energy and have a counterpart on the square lattice. Although for N < 4 the true ground states are not those we found, our calculation can provide useful insight for N ≥ 4, where the large-N approximation becomes more reliable. In a very recent DMRG study of an SU(4) spin model on the triangular lattice, phase diagrams for integer fillings were obtained and compared with conventional MFT ones [15]. The good agreement of the phase boundaries determined by the two methods suggests that N = 4 is perhaps already large enough for the mean-field analysis performed in this work. Thanks to the development of ultracold experimental techniques, SU(N) Mott insulators have been realized with alkaline-earth-like atoms (AEA) on various optical lattices using Pomeranchuk cooling [32], and the Mott crossover and SU(N) antiferromagnetic spin correlations were recently observed with 173Yb atoms [33,34]. Nontrivial physics of multicomponent fermions with broken SU(N) symmetry has also been proposed on this platform [35]. The emergence of SU(4) or even SU(8) symmetric interactions has recently been proposed in twisted bilayer graphene and double moiré layer systems [12,14], and such moiré lattice systems could provide a novel platform for detecting the phases discussed in this work. Ultracold-atom systems may face many restrictions in the detection of anyonic spinon excitations and edge states. Nevertheless, spin-dependent Bragg spectroscopy may be used to detect the spinon continuum [16-18], the singlet-triplet oscillation technique can detect the nearest-neighbor spin correlations of CSLs [34], and the lattice potential can be adjusted to localize and manipulate the anyonic quasiparticles [16-18]. For solid-state systems, more techniques are available, including but not restricted to quantized thermal Hall transport for the edge modes [36], scanning tunneling microscopy of anyons at defects [37], and angle-resolved photoemission measurements of the spinon signatures [38].
v3-fos-license
2024-06-01T05:07:53.785Z
2024-01-01T00:00:00.000
270146918
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://academic.oup.com/burnstrauma/article-pdf/doi/10.1093/burnst/tkae004/57981630/tkae004.pdf", "pdf_hash": "912e3f1e89593a695c21c6f5f213c1c18755d66b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41254", "s2fieldsofstudy": [ "Medicine" ], "sha1": "0359935031687a0f3bde1d66e08455b4b5df63ac", "year": 2024 }
pes2o/s2orc
Extracellular cold-inducible RNA-binding protein mediated neuroinflammation and neuronal apoptosis after traumatic brain injury Abstract Background Extracellular cold-inducible RNA-binding protein (eCIRP) plays a vital role in the inflammatory response during cerebral ischaemia. However, the potential role and regulatory mechanism of eCIRP in traumatic brain injury (TBI) remain unclear. Here, we explored the effect of eCIRP on the development of TBI using a neural-specific CIRP knockout (KO) mouse model to determine the contribution of eCIRP to TBI-induced neuronal injury and to discover novel therapeutic targets for TBI. Methods TBI animal models were generated in mice using the fluid percussion injury method. Microglia or neuron lines were subjected to different drug interventions. Histological and functional changes were observed by immunofluorescence and neurobehavioural testing. Apoptosis was examined by a TdT-mediated dUTP nick end labelling assay in vivo or by an annexin-V assay in vitro. Ultrastructural alterations in the cells were examined via electron microscopy. Tissue acetylation alterations were identified by non-labelled quantitative acetylation via proteomics. Protein or mRNA expression in cells and tissues was determined by western blot analysis or real-time quantitative polymerase chain reaction. The levels of inflammatory cytokines and mediators in the serum and supernatants were measured via enzyme-linked immunoassay. Results There were closely positive correlations between eCIRP and inflammatory mediators, and between eCIRP and TBI markers in human and mouse serum. Neural-specific eCIRP KO decreased hemispheric volume loss and neuronal apoptosis and alleviated glial cell activation and neurological function damage after TBI. In contrast, eCIRP treatment resulted in endoplasmic reticulum disruption and ER stress (ERS)-related death of neurons and enhanced inflammatory mediators by glial cells. Mechanistically, we noted that eCIRP-induced neural apoptosis was associated with the activation of the protein kinase RNA-like ER kinase-activating transcription factor 4 (ATF4)-C/EBP homologous protein signalling pathway, and that eCIRP-induced microglial inflammation was associated with histone H3 acetylation and the α7 nicotinic acetylcholine receptor. Conclusions These results suggest that TBI obviously enhances the secretion of eCIRP, thereby resulting in neural damage and inflammation in TBI. eCIRP may be a biomarker of TBI that can mediate the apoptosis of neuronal cells through the ERS apoptotic pathway and regulate the inflammatory response of microglia via histone modification. Background Approximately 69 million people suffer from traumatic brain injury (TBI) annually worldwide [1,2].TBI can progress through a process that includes primary injury and secondary craniocerebral injury.Secondary brain injury is a crucial factor for a worse prognosis in patients with TBI because it disrupts brain homeostasis and triggers a chronic neurodegenerative cascade [3,4]. 
Neuronal apoptosis and microglial polarization, the most common and vital cellular events following TBI, are the leading contributors to TBI-induced secondary injury [5,6].Uncontrolled cell apoptosis appears to be an important cause of neurological disability in TBI [7].Moreover, a persistent inflammatory response promotes neural cell death and exacerbates nerve function damage and posttraumatic disorders [8,9].Notably, as the major resident immune cells of the brain, microglia are the main contributors to the pathogenesis of neuroinflammation.The M1-like phenotype of microglia is involved mainly in uncontrolled neuroinflammation.In contrast, the M2-like phenotype is associated with the alleviation of inflammation.Many studies have shown that inhibition of microglial activation markedly reduces neuroinflammation and thus alleviates long-term cognitive impairment following TBI [10,11].Therefore, inhibiting neural apoptosis and modulating microglial M1/M2 polarization may be helpful for improving the functional outcomes of patients with TBI. Cold-inducible RNA-binding protein (CIRP) was first discovered as an RNA chaperone that regulates the cell cycle in hibernating animals [12].There are two different forms of CIRP: intracellular CIRP and eCIRP.Currently, increasing evidence has indicated that eCIRP is critically involved in the development of inflammatory diseases [13,14].It is likely that eCIRP is associated with alcohol-, haemorrhage-and cerebral ischaemia-induced brain inflammation [15][16][17][18].However, the role of CIRP in brain injury is still controversial.Wang et al. reported that CIRP inhibited apoptosis and exerted a neuroprotective effect during mild hypothermia in patients with TBI [19].In contrast, Liu et al. demonstrated that CIRP deficiency relieves neuronal damage by inhibiting microglial activation during deep hypothermic circulatory arrest (DHCA) [20].However, the potential role and mechanism of action of eCIRP in TBI have not been fully elucidated.We therefore aimed to explore the effects of eCIRP on neuronal apoptosis and microglial polarization during TBI and to define the underlying mechanisms of these phenomena. In these neural-specific CIRP KO mice, the CIRP gene was knocked out in central nervous system neurons and glial cell precursors, and CIRP was normally expressed in other tissues and cells. Mice were housed in a room at a constant temperature on a 12-h light-to-dark cycle.All animal experiments were approved and carried out according to the guidelines of the ethical committee of the General Hospital of the Chinese PLA, Beijing, China (No. SYXK2019-0021). Patients Venous blood was obtained from patients with a confirmed diagnosis of TBI 24 h after being in the hospital and from healthy donors with informed consent (Table 1).The details of the patients and the inclusion criteria are shown in Table 1.Briefly, 8 patients with a diagnosis of traumatic brain injury (TBI) in the Department of Neurosurgery, Chinese PLA General Hospital and 10 donors were enrolled in this retrospective study.The inclusion criteria for patients were as follows: (1) aged between 20 and 70 years, (2) diagnosed The protocol of the experiments In this study, animal experiments were carried out in mice with TBI.Cell experiments were performed using BV2 cells and neuro-2a cells.Details of the experiments are shown in supplementary Figure 1a, b, see online supplementary material. 
In vivo The TBI models were generated using the fluid percussion injury (FPI) method using WT and KO mice.The animals were then randomly divided into four groups: the WT sham group, the KO sham group, the WT TBI group and the KO TBI group.Details of the animal groups are shown in supplementary Figure 2a, see online supplementary material.Blood samples were collected from the WT sham and WT TBI groups on day post TBI (dpi) 1. Brain tissues were collected from different groups at 4 h and at 1, 7 and 28 dpi.Expression of CIRP, apoptosis-related proteins, inflammatory cytokines and glial cells markers were examined via western blotting (WB), real-time quantitative polymerase chain reaction (q-PCR) and immunofluorescence staining at various time-points after TBI.Apoptosis and brain damage were observed by haematoxylin and eosin (HE) staining and TUNEL assays at 4 h and at 1, 7 or 28 dpi.The levels of tumour necrosis factor-α (TNF-α), interleukin-1β (IL-1β), neuron-specific enolase (NSE) and astrocytic S100 calcium binding protein B (S100B) in the serum of the mice were measured via ELISA at 1 dpi. In vitro Neuro-2a cells or BV2 cells were cultured in RMPI-1640 medium supplemented with 10% fetal bovine serum.The cells were then treated as described below. In neuro-2a cells, the ultrastructure, cell apoptosis and expression of the endoplasmic reticulum stress (ERS) apoptotic pathway were observed by transmission electron microscopy, annexin V-FITC and WB assays after they were treated with different doses of eCIRP for 48 h.Next, Neuro-2a cells were treated with 1 μg/ml eCIRP for 48 h after they were pretreated with GSK2656157 (0.5-2 μM) for 1 h.Then, the apoptotic pathway and apoptotic ratio were detected by WB and annexin V-Fluorescein isothiocyanate (FITC) assays. In BV2 cells, the expression of inflammatory pathway components was detected by WB after they were treated with 1 μg/ml lipopolysaccharide (LPS) for 24 h after transfection with siCIRP or NC.In addition, BV2 cells were treated with eCIRP at different doses for 48 h or treated with 1 μg/ml eCIRP for 48 h after pretreatment with TAK-242 (1 μM) for 30 min or with MC1742 (1 μM) for 12 h.Then the release of inflammatory cytokines in BV2 cells was examined via ELISA. 
FPI In this study, an animal model was generated using the hydraulic shock method, as shown in supplementary Figure 2b, see online supplementary material. The head was fixed to the stereotactic frame while the mouse was in the prone position. The skin was disinfected with iodophor and alcohol. The scalp was cut along the midline to separate the periosteum and expose the skull. The structure of the brain was mapped by mouse stereotaxy using a mouse brain stereotaxic atlas. After sufficient drilling, a 3-mm-long window was created in the left parietal skull bone. A custom-made strike tube was tightly fixed to the nape around the bone window using reinforced dental zinc phosphate cement, and the strike tube was then filled with normal saline. After confirming that there was no leakage in the percussion tube or the percussion sleeve, the percussion tube was closely connected to the brain-trauma instrument and the pendulum height of the hydraulic percussion instrument was adjusted to ensure a stable percussion force. The mice were injured by hydraulic percussion. In the present study, the pendulum angle of the FPI device was varied between 9.8 and 10.8 degrees to produce a peak pressure of between 1.1 and 1.3 atm when triggered against capped intravenous tubing. A Tektronix digital oscilloscope (TDS460A, Tektronix, Inc., Beaverton, OR, USA) was connected to examine the duration and peak pressure of the fluid pulse. The mice in the sham group were connected to the FPI device but received no percussion. After injury, the mice were placed on their backs and their righting time was measured as an indicator of injury severity. After righting, the mice were re-anaesthetized and the tube was removed. Finally, the scalp was disinfected and sutured and the animals were placed in a heated cage until they recovered. To select the moderate-to-severe TBI model, FPI mice were included only if the righting reflex time was >5 min, according to the criteria of previous studies [21-23]. Details of the animal experiment are shown in supplementary Figure 2a, b. At different time points after FPI, the mice were euthanized and the brains were removed. The tissues were collected from the damaged side of the brain in various regions (hippocampus or parietal cortex). Brain tissues from the same position in sham mice were dissected and used as controls. The tissues were frozen at −70°C for RNA or protein extraction. Real-time PCR Briefly, total RNA was extracted with TRIzol reagent and reverse transcribed into cDNA with SuperScript III reverse transcriptase. Target gene expression was quantified with glyceraldehyde 3-phosphate dehydrogenase (GAPDH) as the internal control. The expression levels were calculated with the primer pairs used for the amplification of the target mRNAs, as shown in Table 2. The data were analysed using the comparative cycle threshold method.
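Because the comparative cycle threshold method is only named above, a brief sketch of the underlying arithmetic may be useful. GAPDH as the reference gene is taken from the text; the Ct values below are invented purely for illustration, and this is the standard 2^-ΔΔCt calculation rather than code from the study.

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Comparative cycle threshold (2^-ddCt) method.
    ct_target / ct_reference         : Ct of the gene of interest and of the
                                       internal control (e.g. GAPDH) in the sample.
    ct_target_cal / ct_reference_cal : the same two Ct values in the calibrator
                                       (e.g. the sham group)."""
    dd_ct = (ct_target - ct_reference) - (ct_target_cal - ct_reference_cal)
    return 2.0 ** (-dd_ct)

# Illustrative values only: a target gene vs GAPDH in a TBI sample and a sham calibrator.
print(round(relative_expression(24.1, 18.0, 26.3, 18.1), 2))   # ~4.29-fold relative to sham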
Western blotting Cells or brain tissues were washed with phosphate-buffered saline and treated with cell lysis buffer. The mixture was centrifuged at 12,000 × g at 4°C for 10 min. The supernatant was collected for protein extraction using the Qproteome Mammalian Protein Prep Kit. The protein concentration was determined using a bicinchoninic acid protein assay kit. Then, 30 μg of protein was separated by polyacrylamide gel electrophoresis and transferred onto a nitrocellulose membrane by iBlot. Bands were visualized by chemiluminescence using a FluorChem E system. Antibodies against CIRP, GRP78, ATF4, CD206, caspase-3, cleaved caspase-3, Bcl-2, Bax, PERK and p-PERK were used to determine the expression levels of the proteins. The immunoblot results were subsequently analysed using Image-Pro Plus 6.0 software to measure the area and grey value of each target band. The target protein and the internal reference were compared for semiquantitative analysis: protein content = area of the band × average grey value; semiquantitative target protein content = target protein content/glyceraldehyde 3-phosphate dehydrogenase protein content. All the data are presented as the mean ± SD. HE and immunofluorescence staining The brain tissues were isolated on different days after TBI. The tissues were immersed in 4% paraformaldehyde overnight, embedded in paraffin and cut into sections (4 μm). The slices were dewaxed with xylene and washed with ethanol. The sections were stained with HE to assess lesion volume. The area of the lesioned hemispheres was measured with ImageJ, and the percent loss in volume was calculated by comparing the damaged area to the uninjured hemisphere as previously reported [21]. For immunofluorescence staining, the brain sections were treated with citrate antigen retrieval solution for 15 min at 96°C, immersed in 0.5% Triton X-100 for 30 min and blocked in 0.5% bovine serum albumin for 1 h at room temperature. Primary antibodies against GFAP, Iba-1, CIRP, CD86, CD206 and NeuN were added to the slides, which were then incubated at 4°C overnight. The next day, the slides were washed, incubated with secondary antibodies conjugated to Cy3 or Cy5 for 1 h and then with 4′,6-diamidino-2-phenylindole for 3-5 min. Approximately 8-10 brain sections were obtained from the damaged region, and positive cells in 5 brain sections were counted. The number of positive cells in the field of interest or the mean fluorescence intensity was assessed using Image-Pro Plus 6.0 software. Briefly, the optical density was calibrated and the area of interest was determined; thereafter, the sum of the integrated optical density was measured to calculate the mean density as previously reported [24,25].
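The two densitometry formulas above are simple enough to restate programmatically; the numbers in the usage line are arbitrary illustrative values, not measurements from the study.

def band_content(area, mean_grey):
    """Densitometric band content as defined above: band area x average grey value."""
    return area * mean_grey

def normalized_expression(target_area, target_grey, gapdh_area, gapdh_grey):
    """Semiquantitative target protein level relative to the GAPDH loading control."""
    return band_content(target_area, target_grey) / band_content(gapdh_area, gapdh_grey)

print(round(normalized_expression(1520, 0.42, 1810, 0.55), 3))  # 0.641 in this made-up example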
Behavioural evaluation Mouse neurological severity score (mNSS) A 10-point NSS was used to estimate neurological injury after TBI, testing the motor, reflex and balance abilities of the animals after nerve injury. A score of 0 to 10 represented the injury grade from no damage to severe injury; one point or zero points were given for failed or successful tasks, respectively. Testing was performed by investigators who were blinded to the animal groups. In the present study, NSSs were determined at 4 and 24 h and on dpi 1, 3, 7, 21 and 28. Open field test Anxiety-like behaviour and locomotor activity after nerve injury were assessed in the open field test as previously described. Briefly, mice were placed in an empty arena (45 × 45 × 30 cm) on dpi 7, 14, 21 and 28 and allowed to explore freely for 5 min. Movement trials were recorded with an overhead camera. The time spent and distance travelled in the centre of the arena (29.5 × 29.5 cm) and the number of entries into the centre were calculated using EthoVision XT version 9 software (Noldus Information Technology, Leesburg, VA, USA). Y-maze test Exploratory activity and memory ability were examined using a Y-maze test (arm length 40 cm; upper and lower arm width 13 cm or 3 cm; wall height 15 cm; BrainScience Idea, Osaka, Japan). Mice were placed in the central area, and the time spent in and the number of entries into each arm were recorded within 10 min using the EthoVision XT video imaging system. ELISA The levels of eCIRP, TNF-α, IL-1β, NSE and S100B in the serum or cell supernatants were measured with ELISA kits according to the manufacturers' protocols. Non-labelled quantitative acetylation proteomics The tissues in the damaged region of the WT or KO mice were collected at 1 dpi and sent to Shanghai GeneChem Co. (Shanghai, China) for proteomic analysis. Briefly, the proteins were extracted from the samples and quantified, then separated and digested. The resulting peptides were collected and subjected to Kac enrichment with an acetyl-lysine motif (Ac-K) kit. The samples were analysed on a nanoElute coupled to a timsTOF Pro instrument (Bruker, Bremen, Germany) equipped with a CaptiveSpray source. The mass spectrometry data were analysed using MaxQuant software version 1.6.14.0. Lysine acetylation sites with a fold change > 2 or < 0.5 and a p value (Student's t test) < 0.05 were considered differential acetylation sites. Transmission electron microscopy Cells from the different groups were fixed in 2.5% glutaraldehyde and photographed with a JEOL JEM 1210 transmission electron microscope (JEOL, Peabody, MA, USA) at 80 or 60 kV on thin-film electron microscope film (ESTAR thick base; Kodak, Rochester, NY, USA).
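The differential-site criterion stated in the proteomics paragraph above maps directly onto a small filter; the site names, fold changes and p values below are made up solely to illustrate the rule.

def is_differential_site(fold_change, p_value):
    """Filter described above: at least two-fold regulation in either direction
    (fold change > 2 or < 0.5) and Student's t-test p value < 0.05."""
    return (fold_change > 2.0 or fold_change < 0.5) and p_value < 0.05

sites = [("H3K9ac", 2.6, 0.003), ("Tubb5_K58", 1.4, 0.020), ("H2b_K116", 0.4, 0.048)]
print([name for name, fc, p in sites if is_differential_site(fc, p)])
# -> ['H3K9ac', 'H2b_K116']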
Statistical analysis The data are shown as the mean ± standard deviation (SD). GraphPad Prism 9 was used for statistical analyses. The normality of the distribution was determined using the Shapiro-Wilk test. For experiments involving two groups, an unpaired t test was used to assess normally distributed data, and the Mann-Whitney U test was used to assess non-normally distributed data. One- or two-way analysis of variance was used to analyse significant differences among the groups, followed by Tukey's test for post hoc multiple comparisons if significant effects existed in the main interaction. Outlier data were identified by GraphPad Prism 9 and removed. Spearman correlation analysis was used to evaluate the correlation between non-normally distributed data, and Pearson correlation analysis was used for normally distributed data. P values < 0.05 were regarded as statistically significant. Establishment of the TBI model To investigate the potential role of CIRP in the pathogenesis of TBI, WT and neural-specific CIRP KO mice were subjected to lateral FPI. After FPI, the mortality rate at 24 h was ∼7.8% in WT TBI mice and 7.1% in KO TBI mice. Approximately 80% of all TBI mice exhibited seizure activity immediately after FPI. It is well known that a righting time >5 min indicates moderate-to-severe TBI, and the average righting time was 530 ± 36 s in this study (data not shown). CIRP expression is upregulated in TBI animals First, we measured the expression levels of CIRP in the damaged brain regions at different time points after TBI. The results of q-PCR analysis showed that the mRNA expression of CIRP increased at 4 h, peaked at 1 dpi, and gradually decreased from 7 to 28 dpi (3.1-, 4.1- and 2.4-fold increases in TBI mice vs. sham mice at 4 h, 1 dpi and 7 dpi, respectively; all p < 0.05; Figure 1a). Similarly, the results of the WB assay revealed that CIRP protein expression peaked at 1 dpi and remained high for at least 7 days after TBI (Figure 1b).
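To make the two-group decision rule above concrete, here is a brief Python sketch using SciPy. The data vectors are invented for illustration, and the 0.05 threshold for the Shapiro-Wilk check is an assumption, since the text states only that normality was assessed with that test.

from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Shapiro-Wilk normality check on each group, then an unpaired t test for
    normally distributed data or a Mann-Whitney U test otherwise."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return "unpaired t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

sham = [1.1, 0.9, 1.3, 1.0, 1.2]   # illustrative serum values, arbitrary units
tbi = [2.4, 2.9, 2.2, 3.1, 2.6]
print(compare_two_groups(sham, tbi))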
CIRP deficiency alleviates injury volume and neuronal apoptosis after TBI in vivo Next, cortical tissue loss was estimated using HE staining at 1, 7 and 28 dpi.As shown in Figure 2a, the hemispheric volume loss was ∼1.86, 6.25 and 6.59% in the KO TBI mice, and 3.11, 9.27 and 10.24% in the WT TBI mice at 1, 7 and 28 dpi, respectively.There were significant differences between the WT and KO TBI mice at various time points.The hemispheric volume loss in the WT TBI mice was ∼167% greater at 1 dpi, 148% greater at 7 dpi and 155% greater at 28 dpi than that in the KO TBI mice (interaction F [3,16] = 20.21; group effect F [1,16] = 114, all p < 0.001; Figure 2a).Then, cell apoptosis in the brain was examined by TUNEL assay at 1, 7 and 28 dpi (Figure 2b, and supplementary Figure 3a, see online supplementary material).As shown in Figure 2b, a large quantity of TUNEL + cells was observed in the damaged region of the TBI mice at 1 dpi.There were 148, 180 and 200% more TUNEL-positive cells in the WT TBI mice than in the KO TBI mice at 1, 7 and 28 dpi, respectively (interaction F [3,32] = 18.48; group effect F [1,32] = 160.6;all p < 0.001; Figure 2c). The apoptosis of neurons in the damaged region was examined by TUNEL assay and immunofluorescence staining for NeuN (a specific marker of neurons) at various time points (Figure 2d and supplementary Figure 3b).TUNEL + /NeuN + cells were 162, 202 and 261% more common in WT TBI mice than in KO TBI mice at 1, 7 and 28 dpi, respectively (interaction F [3,32] = 12.36; group effect F [1,32] = 98.34; all p < 0.001; Figure 2e).Furthermore, we examined the level of apoptotic proteins in the damaged region of the brain at 1 dpi using a WB assay.The results showed that the expression of Bax and cleaved caspase-3 was obviously downregulated.In contrast, the expression of Bcl-2 was upregulated in the KO TBI group compared to the WT TBI group (interaction F [3,16] = 79.77; group effect F [1,16] = 157.7;p < 0.001; Figure 2f).eCIRP induces neuronal cell apoptosis via the ERS-related apoptotic pathway Cells were cultured with different doses of eCIRP (0, 0.5, 1 and 2 μg/ml) for 48 h.As shown in Figure 3a, treatment with eCIRP in vitro resulted in cell apoptosis, especially at a dose of 1 μg/ml.However, there were no obvious differences in the apoptosis of neurons between the 1 and 2 μg/ml groups.Thus, we examined the ultrastructure and expression of ERS markers including p-PERK, GRP78, ATF4 and CHOP in neuron-2a cells after they were treated with 0, 0.5 or 1 μg/ml eCIRP for 48 h.The results confirmed that eCIRP treatment, especially at a dose of 1 μg/ml, could activate apoptosis pathways related to ERS in neuron-2a cells and upregulate the expression of p-PERK, GRP78, ATF4 and CHOP in neuron-2a cells (interaction F [6,24] = 0.41, p = 0.861; group F [2,24] = 51.35;p < 0.001, Figure 3b) and cause significant expansion of the ER (Figure 3c). Next, neuron-2a cells were pretreated with GSK2656157 (an inhibitor of PERK) at different doses (0.5-2 μmol/l) for 1 h, and then exposed to 1 μg/ml eCIRP for 48 h to further investigate whether eCIRP could induce neuronal cell death CIRP deficiency inhibits astroglial and microglial responses after TBI Herein, we assessed glial activation by immunofluorescence staining for GFAP or Iba-1 (specific makers of astroglia or microglia) in damaged brain regions of WT or KO TBI mice at different time points after TBI. 
GFAP staining revealed that astrogliosis occurred after TBI, as indicated by the increase in the percentage of GFAP-positive cells in the damaged region of the brain.Astrocytes became hypertrophic, with a significant overlap of astroglial protrusions in TBI mice from 1 to 7 dpi (supplementary Figure 4a, see online supplementary material).Notably, the fluorescence intensity of GFAP in the damaged cortex of WT TBI mice was much greater than that in the damaged cortex of KO TBI mice.Approximately 3.10-, 1.93-and 3.09-fold increases were observed in the WT TBI mice vs. the KO TBI mice at 1, 7 and 28 dpi, respectively (interaction F [3,32] = 95.74; group effect F [1,32] = 356.9;all p < 0.001; Figure 4a). Iba-1 staining revealed that the number of microglia in the cortex increased at 1 dpi, accumulated continuously at 7 dpi, and decreased at 28 dpi in both WT and KO TBI mice.With regard to cell morphology, the morphology of the microglia changed from a branching shape to a retractable, relatively large cellular body and macrophage-like shape from 1 to 7 dpi, and most of the microglia developed a branching shape at 28 dpi ( supplementary Figure 4b).In addition, the fluorescence intensities of Iba-1 in the damaged cortex of WT mice were ∼1.51, 2.12 and 1.76 times greater than those in KO TBI mice at 1, 7 and 28 dpi, respectively (interaction F [3,32] = 28.84; group effect F [1,32] = 211.6;p < 0.001; Figure 4b). Similarly, we examined the potential impact of eCIRP on microglial polarization in vitro.The results showed that eCIRP treatment promoted the activation of BV2 cells towards the proinflammatory M1-like phenotype, as evidenced by the apparent increase in TNF-α and IL-1β secretion 48 h after eCIRP treatment in a dose-dependent manner compared with that in the untreated group (all p < 0.001; Figure 5e). CIRP deficiency alleviates neurobehavioural dysfunction after TBI The neurological functions of the mice were evaluated using the 10-point mNSS test, the Y-maze test and the open field test at different time-points after TBI (Figure 6a).The results of the mNSS test revealed obvious neurological dysfunction in TBI mice at 4 h and at 1 and 3 dpi.Both KO and WT TBI mice tended to recover, with the median NSS improving to 1 and 2.1, respectively, at 7 dpi.However, at 4 h after TBI, the WT TBI mice exhibited greater dysfunction than did the KO 6c), although the TBI mice travelled less distance into the central region than the sham mice did.However, there were no significant differences in locomotor speed among the four groups (data not shown).In the Y-maze test, KO TBI mice entered the novel arm more frequently than WT TBI mice did at 7 dpi (interaction F[9,64] = 1.434, p = 0.1928; group effect F[3,64] = 6.809, p < 0.001; Figure 6d).In addition, KO TBI mice spent more time in the novel arm than did WT TBI mice at dpi 7 (interaction F[9,64] = 4.278; group effect F[3,64] = 3.766; all p < 0.05; Figure 6d), which implied that mice in the KO TBI group were better able to explore new objects.Thus, the results of different behavioural experiments indicated that TBI might markedly result in cognitive dysfunction, while CIRP knockout could alleviate TBI-induced cognitive deficits and neurological damage. 
CIRP deficiency regulates histone H3 acetylation in the brain after TBI We further determined the changes in protein acetylation levels in the cerebral cortex of KO and WT TBI mice via label-free quantitative acetylation proteomics at 1 dpi. The acetylation levels of 58 proteins were markedly different between the two groups. These proteins were mainly associated with behaviour, cell proliferation and the immune response (Figure 7a), and most of them were located in the nucleus and cytoplasm, as shown in Figure 7b. The protein domain analysis revealed that most of the alterations were concentrated in the histone and tubulin domain superfamilies (Figure 7c). As presented in Figure 7d, histone H3 was one of the proteins whose acetylation level was most significantly altered, and the probable acetylation site was at lysine 9. To verify the results of the proteomic experiments, we measured the levels of acetylated histone H3 at the lysine 9 site (H3K9ac) in the cerebral cortex of WT and KO TBI mice using a WB assay at 1 dpi. Histone H3 acetylation was decreased in the cerebral cortex of TBI mice compared with that in the sham group. In addition, the level of H3K9ac in the KO TBI group was significantly greater than that in the WT TBI group. CIRP mediates neuroinflammation by modulating histone H3 acetylation A previous report revealed that histone H3 acetylation is involved in the microglial inflammatory response after TBI [26]. Herein, we investigated whether CIRP deficiency could affect the activity of inflammatory pathways in microglial cells through acetylation of H3 in vitro using LPS-treated cell models. BV2 cells were transfected with CIRP-siRNA or NC and, 24 h later, treated with 1 μg/ml LPS for 24 h. The expression of H3K9ac in the different cells was examined by WB assay, with untreated BV2 cells transfected with NC serving as the control. The results showed that CIRP knockdown in BV2 cells alleviated the decrease in H3K9ac and α7nAChR expression, thus attenuating the enhanced expression of IL-1β and TNF-α induced by LPS stimulation (interaction F(8,30) = 21.32, p < 0.001; group effect F(2,30) = 3.335, p < 0.05; Fig. 8a). Discussion Intracellular CIRP is critically involved in cellular stress responses such as ultraviolet irradiation, hypoxia and hypothermia, and is able to protect cells from environmental changes. eCIRP is regarded as a damage-associated molecule in specific cells, including lymphocytes, epithelial cells, endothelial cells, neutrophils and microglia. Moreover, eCIRP is an essential mediator of the cerebral inflammatory response and can enhance the release of multiple inflammatory cytokines in ischaemia, trauma and alcoholism [29,30]. However, the potential functions and underlying mechanism of eCIRP in the pathogenesis of TBI remain to be further elucidated.
Herein, the relationship between CIRP expression and TBI-induced neuroinflammation was investigated through both in vivo animal models and in vitro cell experiments.We observed blood deposition and significant tissue damage (tissue colouration was different from that at normal sites) in the damaged regions of the TBI mice at 1 dpi, indicating that the TBI model was successfully established in the present study.First, we examined the location and quantity of CIRP in neural cells and blood.CIRP was found to be located mainly in the nucleus of normal neural cells; it translocated from the nucleus to the cytoplasm at 1 dpi (supplementary Figure 4c) and maintained a high level at least 7 days after TBI (Figure 1b).Moreover, the increased serum levels of CIRP were positively correlated with inflammatory mediator levels in both TBI animals and patients at 1 dpi.Previous reports have shown that CIRP is translocated during hypoxia and inflammation and that the administration of eCIRP to healthy rats could increase the serum inflammatory cytokines [31].Our results are in agreement with those of previous studies and confirm the close relationship between eCIRP and the inflammatory response. The development of TBI is a complex pathophysiological process.Therefore, identifying reliable biomarkers in biological fluids, including blood and cerebrospinal fluid, is highly valuable.S100B and NSE are known to be neural-specific biochemical markers, and serum levels of NSE and S100B can reflect the severity of brain damage with high specificity and sensitivity [32].In this study, we detected that there were positive correlations between CIRP and NSE or S100B, which suggested that CIRP might be used as a biomarker for assessing the severity and prognosis of TBI. Increasing evidence has indicated that CIRP deficiency can relieve neural injury induced by DHCA, cardiopulmonary bypass, and alcohol-induced or hypoxic-ischaemic brain injury [16][17][18]20].Therefore, we speculated that CIRP might also be involved in neuronal apoptosis following TBI.Currently, most studies investigating the role of CIRP in neurological inflammation involve systemic CIRP knockout mice.Loss of the peripheral CIRP gene may affect the progression of central inflammation.In contrast, the central conditional knockout model can overcome this shortcoming and better reflect the central regulatory effect of CIRP in mediating neuroinflammation.Thus, we established TBI animal models using neural CIRP genespecific knockout mice to accurately explore the potential role and mechanism of CIRP in TBI-induced brain damage.CIRP deficiency occurs in both neurons and glial cell precursors in our animal models.The results showed that CIRP deficiency in the central nervous system obviously ameliorated the lesion volume and neuronal apoptosis in TBI animals.Moreover, CIRP deficiency negatively regulated Bax expression and enhanced Bcl-2 expression, thus decreasing caspase-3 activation in brain tissues after TBI.It is well accepted that Bcl-2 and Bax are important components of the apoptotic pathway that can inhibit or induce the expression of caspase-3, the conductor of apoptosis.Taken together, these data suggest that CIRP is a vital contributor to neuronal apoptosis. 
Until recently, the specific mechanisms underlying eCIRPinduced neuronal cell apoptosis have not been fully understood.Many studies indicate that ERS is a pivotal factor in disrupting cellular stability and inducing apoptosis [33].In the setting of TBI, persistent ERS induced by secondary cerebral injuries is considered a key contributor to cell apoptosis as well as uncontrolled neuroinflammation.For example, Shimizu et al. [34] reported that CIRP promoted inflammation and apoptosis in the lungs resulting from sepsis by triggering ERS.Therefore, we hypothesized that eCIRP released by neural cells might lead to neuronal cell apoptosis via the induction of ERS following TBI.In the present study, we observed that stimulation with eCIRP for 48 h markedly augmented neuronal apoptosis, as indicated by the apparent alteration and expansion of the ER.These results reveal, for the first time, a link between CIRP and ERS in the development of TBI. When ERS is induced by continuous stimulation from a harmful environment, PERK dissociates from GRP78 and is autophosphorylated, thereby promoting the expression of ATF4 and CHOP.As a critical transcription factor of ERS-mediated cell death, CHOP is an important regulator of Bcl-2 family members, and persistent activation of CHOP can induce cell apoptosis [35,36].Recently, Yi et al. [37] reported that PERK-ATF4-CHOP was associated with glucocorticoid-induced neuronal apoptosis.In a previous study, we found that high mobility group box-1 protein, a downstream cytokine of CIRP in inflammation, induced ERS-related apoptosis via the PERK-ATF4-CHOP pathway [38].Herein, we revealed the activation of PERK as well as the elevation of GRP78, ATF4 and CHOP in neurons after eCIRP stimulation.Moreover, treatment with a PERK signalling inhibitor not only alleviated the activation of the PERK-ATF4-CHOP pathway but also inhibited the eCIRP-induced apoptosis and expansion of the ER.These results suggest that the excessive response to ERS evoked by a high concentration of eCIRP during TBI might contribute to neuronal apoptosis by activating the PERK-ATF4-CHOP signalling pathway. Although several studies have shown that simple M1/M2 polarization cannot fully represent the functional heterogeneity of microglia [39,40], classifying M1/M2 microglia is still helpful for understanding the functional status of microglia in the pathogenesis of TBI [41].It is well known that M1 microglia coexpress Iba-1 (a special microglial marker) and biomarkers such as CD86, CD16 and MHC-II, while the M2 microglia coexpress Iba-1 and phenotypic markers, including CD206, CD163 and Arg-1.The persistent M1-like polarization of microglia strongly affects functional outcomes during TBI; thus, regulating the balance of M1/M2 polarization in the microglia is crucial for the development of TBI [42][43][44].Previous studies reported that recombinant eCIRP could boost the secretion of proinflammatory cytokines from murine microglia in a time-and dose-dependent manner.In contrast, inhibition of CIRP diminishes microglial activation in DHCA-induced rat brain injury [18,20].In our animal models, CIRP deficiency occurred in both neurons and glial cell precursors; CIRP deficiency not only regulated neurons but also modulated microglia.We noted for the first time that knockout of CIRP in glial cell precursors inhibited the M1-like phenotype of microglia and therefore alleviated the functional deficiency of mice during TBI. 
Acetylation is a kind of epigenetic modification that regulates gene expression independently of the DNA sequence.A series of studies indicated that protein acetylation is involved in mediating neuroinflammation following TBI.For instance, Gao et al. [45] noted that the acetylation level of histone H3 was significantly decreased in the brain from hours to days after TBI, which was associated with secondary brain pathology after TBI.However, microglial activation and neuronal degeneration evoked by TBI can be alleviated by increasing histone acetylation in the brain [26].In the present study, we found that CIRP knockout markedly upregulated the acetylation level of histone H3 in the cerebral cortex after TBI at 1 dpi. As the most prominent component of the cholinergic anti-inflammatory pathway, the α7nAChR is closely related to many neurological disorders including neuroinflammation, Parkinson's disease and Alzheimer's disease [46][47][48].Recently, it was reported that TLR4 could inhibit the expression of α7nAChR through histone deacetylase activity in BV2 cells during neuroinflammation [48].Notably, TLR4 is a cell surface receptor involved in eCIRP signalling [31].Thus, it is reasonable for us to speculate that eCIRP may affect the α7nAChR by regulating the acetylation level of histones via TLR4.In the present study, the LPSinduced reductions in α7nAChR and H3K9ac levels were alleviated by CIRP knockdown.Consequently, the LPSinduced formation of proinflammatory mediators was inhibited by the downregulation of CIRP in BV2 cells.Similar results were obtained from in vivo experiments.Furthermore, we observed that pretreatment with an activator of acetylated histone H3 or with an inhibitor of TLR4 significantly inhibited the eCIRP-induced release of proinflammatory cytokines in BV2 cells.These findings suggest that CIRP may impact the microglial inflammatory response by regulating α7nAChR expression through histone modification via TLR4. In addition to microglia, astrocytes play important roles in neuroprotection and inflammation.It is well accepted that astrocyte activation can mediate neuroinflammation by enhancing the release of inflammatory mediators secondary to TBI.Astrogliosis and astrogleneration are the main pathological changes in astrocytes during brain injury, as are the upregulation of GFAP expression [47,49].Similarly, we noted hypertrophic astroglial cells with more robust expression of GFAP in the damaged regions of mice after TBI.However, CIRP deficiency significantly attenuated the activation of astrocytes following TBI. 
Taken together, the results of our study confirmed the important role and significance of eCIRP in TBI and led to a deeper understanding of the molecular signalling pathway involved in neuroinflammation, in turn revealing a potential novel biomarker and therapeutic target for improving outcomes after TBI.Nevertheless, the current work must be interpreted in the context of a number of limitations.First, we did not explore the underlying mechanism by which CIRP regulates astrocyte activation during TBI.Second, we examined the effect of CIRP on histone H3 acetylation but did not provide a precise regulatory pathway linking CIRP and histone H3 acetylation.Third, we only observed the survival rates of TBI mice within 24 h after TBI due to the limited number of neural-specific CIRP knockout mice, and long-term survival rates should be recorded in our further studies.Fourth, we examined the expression of CIRP in brain tissues only after TBI in vivo and explored the molecular mechanism of TBI using neurons and glial cell lines in vitro.It is more reliable to perform these experiments with primary neurons and glial cells during TBI.Finally, TBI appears to be a key contributor to neurodegenerative disease, and a recent study revealed that eCIRP is involved in the development of Alzheimer's disease [50], which suggested that eCIRP might be associated with TBI-induced neurodegenerative diseases.Therefore, additional clinical and basic studies are needed to further investigate the potential significance of CIRP in longterm brain dysfunction in the setting of TBI. Conclusions In summary, this study demonstrated that eCIRP is induced by brain injury and exerts a harmful impact on the development of TBI by augmenting neuronal apoptosis via the ERS pathway and regulating M1/M2 polarization of microglia via the TLR4/histone H3/α7nAChR pathway.Therefore, the downregulation of eCIRP expression can significantly inhibit neuronal cell apoptosis and the activation of microglia/astrocytes and might serve as a key target for the intervention of uncontrolled neuroinflammation and subsequent brain dysfunction resulting from severe TBI. Figure 1 . Figure 1.CIRP is abnormally expressed during the development of TBI.(a, b) CIRP expression in the damaged region of the brain of TBI mice was measured by q-PCR and WB at different time points, and GAPDH was used as the internal standard (n = 4).The data are expressed as the mean ± SD.Statistical significance: * p < 0.05; * * p < 0.01; * * * * p < 0.0001.CIRP cold-inducible RNA-binding protein, GAPDH glyceraldehyde 3-phosphate dehydrogenase, q-PCR real-time quantitative polymerase chain reaction, TBI traumatic brain injury, WB Western blot Figure 2 . Figure 2. 
CIRP knockout alleviates brain injury and cell apoptosis after TBI in vivo.(a) Tissue loss was recorded by HE-stained images and quantified by ImageJ software on different days after TBI.The degree of tissue loss was calculated as the area of the region of loss compared to the area of the undamaged hemisphere (n = 3, scale bars: 500 μm).(b) TUNEL-stained (TUNEL + : TUNEL positive) images showing cell apoptosis in the damaged region of the brain from WT and CIRP KO TBI mice at 1 dpi (scale bar: 50 μm).(c) Histograms showing cell apoptosis in the various groups at different time points after TBI (n = 5).(d) Images showing neuronal cell apoptosis (TUNEL + /NeuN + : TUNEL positive/ NeuN positive) in regions of the brain damaged by TBI in WT or KO mice at 1 dpi (scale bars: 50 or 20 μm).(e) Histograms showing the analysis of neuronal apoptosis at different time points after TBI (n = 5).(f) CIRP, Bcl-2, caspase-3, cleaved caspase-3 and Bax expression in damaged regions of the brain from WT or KO TBI mice was examined via WB analysis at 1 dpi (n = 3).The data are presented as the mean ± SD.Statistical significance: * p < 0.05; * * p < 0.01; * * * p < 0.001; * * * * p < 0.0001.BAX Bcl-2 associated X protein, Bcl-2 B cell lymphoma gene-2, CIRP cold-inducible RNA-binding protein, TBI traumatic brain injury, d day, dpi day post-traumatic brain injury, HE hematoxylin and eosin, KO neural-specific CIRP knockout, NeuN neuronal nuclei, TUNEL TdT mediated dUTP nick-end labelling, WB Western blot, WT wild type Figure 3 . Figure 3. eCIRP regulates neuronal cell apoptosis via the ERS pathway.Neuro-2a cells were cultured with eCIRP at different concentrations (0.5, 1 and 2 μg/ml) for 48 h.Cells cultured without eCIRP for 48 h were used as controls.(a) Apoptotic rates of neuro-2a cells were analysed via flow cytometry after treatment with different doses of eCIRP (n = 3).(b) Expression levels of GRP78, p-PERK, PERK, CHOP and ATF4 in neuro-2a cells from the different groups were determined via WB (n = 3).(c) Ultrastructural alterations in neuro-2a cells were observed via electron microscopy after treatment with different doses of eCIRP.The arrows indicate the ER (scale bars: 5 or 1 μm).(d) Cells were pretreated with various concentrations of GSK2656157 for 1 h and then stimulated with eCIRP (1 μg/ml) for 48 h.Untreated cells were cultured for 48 h as controls.Expression levels of GRP78, PERK, p-PERK, CHOP and ATF4 in neuro-2a cells in the different groups were examined via WB.(e) Cells were pretreated with GSK2656157 (1 μg/ml) for 1 h and stimulated with eCIRP (1 μg/ml) for 48 h.Untreated cells were cultured for 48 h as controls.The apoptotic rates of neuro-2a cells in different groups were analysed via flow cytometry.(f) Ultrastructural changes in neuro-2a cells in various groups were analysed via electron microscopy.The arrows indicate the ER (scale bars: 5 or 1 μm).The data are presented as the mean ± SD from three independent experiments (n = 3).Statistical significance: * p < 0.05; * * p < 0.01; * * * p < 0.001; * * * * p < 0.0001.ATF4 transcription factor 4, CHOP C/EBP homologous protein, eCIRP extracellular cold-inducible RNA-binding protein, ER endoplasmic reticulum, ERS endoplasmic reticulum stress, GRP78 glucose-regulated protein 78, GSK2656157 inhibitor of protein kinase RNA-like ER kinase, PERK protein kinase RNA-like ER kinase, PI Propidium Iodide, p-PERK cleaved protein kinase RNA-like ER kinase, WB Western blot Figure 4 . Figure 4. 
CIRP knockout inhibits the activation of microglia/astrocytes after TBI.(a) Images and histograms of astrocyte activation.Immunofluorescence staining and statistical analysis of GFAP expression in the damaged cortex of WT or KO TBI mice at different time points after TBI (scale bar : 50 μm).(b) Images and histograms of microglial activation.Immunofluorescence staining and statistical analysis of Iba-1 expression in damaged cortex from WT or KO TBI mice at different time-points after TBI (scale bar: 50 μm).The data are presented as the mean ± SD from five independent experiments; * * * * p < 0.0001.DAPI 4 ,6-Diamidino-2-phenylindole, CIRP cold-inducible RNA-binding protein, d day, GFAP glial fibrillary acidic protein, Iba-1 ionized calcium binding adapter molecule-1, KO neural-specific CIRP knock-out, MFI mean fluorescence intensity, TBI traumatic brain injury, WT wild-type Figure 7 . Figure 7. CIRP deficiency increased His acetylation at lysine 9 after TBI.Protein acetylation levels in the damaged cortex of WT or KO TBI mice were tested via label-free quantitative acetylation proteomics experiments at 1 dpi.(a-c) Histograms showing the functions, locations and domains of the significantly changed proteins at 1 dpi.(d) Table showing the top ten changed proteins and their acetylation positions.(e) Acetylation levels of histone H3 in the damaged cortex of WT or KO TBI mice were assessed via WB analysis at 1 dpi.The data are presented as the mean ± SD from three groups.* * * p < 0.001; * * * * p <0.0001.CIRP cold-inducible RNA-binding protein, d day, dpi day post-traumatic brain injury, H3 histone H3, H3K9ac histone H3 acetylation levels at the lysine 9 site, KO neural-specific CIRP knockout, TBI traumatic brain injury, WB Western blot, WT wild-type Table 1 . Characteristics of TBI patients and volunteers The primers and Small interfering RNA (si) CIRP or negative control (NC) of siCIRP were obtained from GenePharm Co., Suzhou, China.The BV2 and neuro-2a cell lines were purchased from Procell Life Science and Technology Co., Ltd, Wuhan, China.Triton X-100 was obtained from Sigma, St. Louis, MO, USA.Enzymelinked immunosorbent (ELISA) kits were obtained from Excell Inc., Shanghai, China. with moderate to severe TBI (Glasgow Coma Scale < 12), and (3) had a duration from injury to admission of <24 h.The exclusion criteria were as follows: (1) died within 24 h of admission, (2) had multiple traumatic conditions, including TBI, (3) had chronic intracranial haematoma, and (4) had a history of immune diseases.Blood samples were collected from patients and donors.Peripheral blood (2-5 ml) was collected in the morning, transferred to a blood procoagulation tube and gently inverted to mix fully.Blood samples were incubated at room temperature for 30 to 60 min and centrifuged at 13,000 rpm for 10 min at 4 • C. The supernatants were stored at −70 • C.This study was authorized by the ethics committee of Chinese PLA General Hospital, Beijing, China (No. S2021-539-01).RNA-like ER kinase (p-PERK) and the acetyl-lysine motif kit were obtained from Cell Signaling Technology, Danvers, MA, USA.An Qproteome Mammalian Protein Prer Kit was obtained from QIAGEN GmbH (Germany).TdT-mediated dUTP nick end labelling (TUNEL) kits were purchased from Promega, Wisconsin, CA, USA. Table 2 . Sequences of primers
v3-fos-license
2019-03-30T13:07:35.329Z
2014-10-26T00:00:00.000
39710177
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.5455/javar.2014.a30", "pdf_hash": "271a55002d5964d5073dca065fb0091bc808def4", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41255", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "sha1": "271a55002d5964d5073dca065fb0091bc808def4", "year": 2014 }
pes2o/s2orc
Molecular epidemiology and antibiotic resistance pattern of Enteropathogenic Escherichia coli isolated from bovines and their handlers in Jammu, India The study aimed to investigate the molecular epidemiology and antibiotic resistance pattern of Enteropathogenic Escherichia coli (EPEC) in bovines and their handlers in Jammu, India. A total of 173 samples, comprising 103 fecal samples from bovines (60 from cattle and 43 from buffaloes) and 28 stools and 42 fingertip rinses from bovine handlers, were collected from August 2011 to March 2012. The 126 E. coli strains isolated (86 from bovines and 40 from handlers) belonged to 25 different serogroups in addition to rough and untypeable strains. Using multiplex polymerase chain reaction, four EPEC strains were identified, two each from bovines and their handlers, of which two possessed the hemolysin (hlyA) gene. The prevalence of EPEC was recorded as 1.66% (n=1/60) in cattle, 2.32% (n=1/43) in buffaloes, and 2.85% (n=2/70) in their handlers. Antibiogram studies of the EPEC revealed the presence of multidrug-resistant E. coli. The isolates were mostly resistant to Amikacin, Amoxicillin, Cefixime and Streptomycin, and sensitive to Chloramphenicol. This study indicates that bovines as well as their handlers in the Jammu region harbor EPEC, many of which, being multidrug resistant and carrying the hemolysin gene, could be of high pathogenic potential for humans. Molecular epidemiology and antibiotic resistance pattern of Enteropathogenic Escherichia coli isolated from bovines and their handlers in Jammu, India Majueeb U Rehman 1,*, Mohd Rashid 1, Javeed Ahmad Sheikh 1 and Mohd Altaf Bhat 2 1 Division of Veterinary Public Health and Epidemiology, Sher-e-Kashmir University of Agricultural Sciences and Technology, RS Pura, Jammu 181102, India; 2 Division of Veterinary Microbiology and Immunology, Sher-e-Kashmir University of Agricultural Sciences and Technology, RS Pura, Jammu 181102, India. INTRODUCTION Enteropathogenic E. coli (EPEC) are an important cause of diarrhea worldwide, especially in developing countries (Chen and Frankel, 2005; Alikhani et al., 2007). These extracellular pathogens attach intimately to the epithelial cells of the intestine, producing a severe lesion of the epithelial layer called the attachment-effacement (A/E) lesion, which destroys the absorptive villi and results in malabsorption and diarrhea (Schmidt, 2010). The attachment is mediated by an outer membrane protein, intimin, encoded by the eaeA gene, which is part of a 35 kb pathogenicity island called the locus of enterocyte effacement (LEE). The latter encodes a type III secretion system (T3SS) that translocates multiple effector proteins into the host cells and disrupts the cytoskeleton to produce the A/E lesion (Clarke et al., 2003; Garrido et al., 2006).
Cattle are regarded as potential sources of EPEC that possess the virulence machinery to be pathogenic to humans (Monaghan et al., 2013; Bolton et al., 2014). Transmission to humans occurs via the food chain (Trabulsi et al., 2002), but direct contact with ruminant feces and their environment may represent an increased risk factor for human disease (Aidar-Ugrinovich et al., 2007). In India, several EPEC strains isolated from cattle belong to serogroups found associated with severe disease in humans (Wani et al., 2003). Taking this into consideration, and given the increasing concern about resistance of pathogenic bacteria to antibiotics among animals and humans, the present study was undertaken on the epidemiology and antibiotic resistance pattern of EPEC from bovines and their handlers in Jammu, India.

MATERIALS AND METHODS

Collection of samples: A total of 173 samples were collected from bovines and bovine handlers. Of these, 103 fecal samples were collected from bovines, comprising 60 from cattle and 43 from buffaloes, per rectum at organized farms (Cattle Farm Belicharana and Cattle Farm, RS Pura) as well as from household farms in the areas of Sidher, Khanachak, and Kotli, RS Pura, Jammu, during the period from August 2011 to March 2012. Samples from cattle and buffalo calves were obtained by rectal swabs (Hassan et al., 2014). Twenty-seven (27) stool samples and forty-three (43) fingertip rinses were obtained from the persons handling or rearing the animals at these farms. Samples were collected in plastic containers and transported on ice to the laboratory.

Selective plating: The samples were enriched in MacConkey broth followed by selective plating (Khan et al., 2002). Two to three lactose-fermenting colonies from each MacConkey agar plate were streaked on Eosin Methylene Blue agar to check for the characteristic metallic sheen. Presumptive E. coli isolates were subjected to further biochemical identification (Hitchins et al., 1992; Quinn et al., 1994; Roy et al., 2012), and the purified cultures were maintained on 0.75% nutrient agar slants in triplicate.

Serogrouping: The E. coli isolates were submitted to the National Salmonella and Escherichia Centre, Central Research Institute, Kasauli, HP, India for serogrouping on the basis of their "O" antigens.

Multiplex polymerase chain reaction (mPCR): DNA extraction was carried out by the heat lysis (snap chill) method. The E. coli isolates were first revived on MacConkey agar to obtain fresh isolates and re-suspended in 100 μL of nuclease-free water in separate microcentrifuge tubes, which were subjected to heat lysis in a boiling water bath for 10 min and quickly placed on ice. The bacterial lysates were centrifuged at 10,000 rpm for 10 min, and the supernatant was taken as template DNA for the mPCR. Previously reported primers (Paton and Paton, 1998) were used in this study, as listed in Table 1. Briefly, the mPCR was carried out in a final reaction volume of 25 μL using 2 mM MgCl2, 0.6 mM of each 2′-deoxynucleoside 5′-triphosphate (dNTP), 5 μL of 5x assay buffer, 0.5 μL each of forward and reverse primers, 2.0 μL of template DNA and 1.0 U of GoTaq DNA Polymerase (Promega Corporation, Madison, USA) in a thermocycler (Applied Biosystems GeneAmp PCR System 2400, USA). The amplified PCR products were analyzed by gel electrophoresis in 2% agarose containing ethidium bromide (0.5 μg/mL), visualized under UV illumination, and imaged with a gel documentation system.
Table 1: Oligonucleotide primers used in the mPCR. The oligonucleotide sequences were described by Paton and Paton (1998).

Antibiotic resistance testing: The EPEC isolates were examined for their antimicrobial drug susceptibility/resistance pattern against 15 antibiotics by the disc diffusion technique (Bauer et al., 1966; Roy et al., 2012). The antibiotic discs were obtained from HiMedia Laboratories Pvt. Ltd. (Mumbai, India) (Figure 1). Culture and sensitivity testing (CST) was done by inoculating 3-4 colonies of E. coli into 5 mL nutrient broth followed by incubation at 37 °C for 4 h until light to moderate turbidity developed. Plates of Mueller-Hinton agar (MHA) (HiMedia, Mumbai, India) were inoculated with 100 µL of the broth culture using sterile cotton swabs and allowed to dry; the antibiotic discs were then placed 2 cm apart on the MHA plates, which were incubated at 37 °C for 16-18 h. The diameter of the zones of inhibition was measured and interpreted with the HiMedia antibiotic zone scale.

RESULTS AND DISCUSSION

A total of 126 E. coli strains were isolated, 86 from bovines and 40 from their handlers. The isolates belonged to 25 different serogroups based on the "O" antigen, in addition to rough and untypeable strains (Table 2). Most of these serogroups had previously been described among bovine isolates by Wani et al. (2003) and Orden et al. (2002). Serogroups O3, O8, O17, O25 and O76 were reported by Vagh and Jani (2010) in cattle and buffalo calves, whereas O8, O17 and O25 were reported in cattle and buffalo calves by Joon and Kaura (1993) and Kaura et al. (1991). The presence of other serogroups could be due to their varied environmental distribution. Certain E. coli strains with serogroups O25, O106 and O60 were isolated from bovines as well as their handlers at the same farm, indicating their possible transmission (Table 2).

All four EPEC isolates were obtained from one farm, i.e., Cattle Farm Belicharana, and were untypeable. The prevalence of EPEC in cattle, buffaloes and bovine handlers was 1.66%, 2.32% and 2.85%, respectively (Table 3). Several other researchers have reported EPEC prevalences close to these values: 1.53% in diarrhoeic calves in Kashmir, India (Kawoosa et al., 2008), 2.7% in calves in Sao Paulo, Brazil (Aidar-Ugrinovich et al., 2007), 3.7% in bovine feces from Ireland (Monaghan et al., 2013), 5.8% in cattle feces from a cluster of twelve farms in the Netherlands (Bolton et al., 2014), and 1.8% of diarrheic stool samples in Kolkata, India (Dutta et al., 2013). However, the prevalence of EPEC at the rural household farms was nil in our study, which is in contrast to other findings from Kashmir, India, reported in calves at unorganized farms (Kawoosa et al., 2008). The isolation of EPEC strains of bovine and human origin from one farm where the handlers were in contact with a large number of animals suggests that the risk of transmission to handlers could be higher at larger intensive farms than at small and household farms.
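The prevalence figures quoted above and in Table 3 follow directly from the isolate counts (1 EPEC-positive of 60 cattle, 1 of 43 buffaloes, and 2 of 70 handlers). As an illustrative aside, not part of the original analysis, the short Python sketch below (the helper name is ours) reproduces that arithmetic; note that the paper truncates the percentages to two decimal places rather than rounding.

```python
# Illustrative check of the EPEC prevalence values quoted in the text (Table 3).
# Counts taken from the study: 1/60 cattle, 1/43 buffaloes, 2/70 handlers.

def prevalence_percent(positives: int, sampled: int) -> float:
    """Prevalence expressed as a percentage of the individuals sampled."""
    return 100.0 * positives / sampled

groups = {
    "cattle": (1, 60),     # reported as 1.66%
    "buffaloes": (1, 43),  # reported as 2.32%
    "handlers": (2, 70),   # reported as 2.85%
}

for name, (positives, sampled) in groups.items():
    print(f"{name}: {prevalence_percent(positives, sampled):.4f}% ({positives}/{sampled})")

# Prints 1.6667%, 2.3256% and 2.8571%; the paper reports these values
# truncated to two decimal places (1.66%, 2.32%, 2.85%).
```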
Antibiotic resistance pattern of the EPEC isolates: Resistance to three or more antibiotics was observed in one EPEC isolate of bovine origin and in both human isolates. The highest resistance (75%) was observed against Streptomycin, Amoxicillin, Amikacin and Cefixime (Figure 1). However, all of the EPEC isolates were sensitive to Chloramphenicol, followed by Norfloxacin (75%) and Co-trimoxazole (75%). Interestingly, an EPEC isolate obtained from the stool sample of a bovine handler at Cattle Farm Belicharana was resistant to twelve of the fifteen antibiotics used, but showed sensitivity to Chloramphenicol and intermediate resistance to Gentamicin and Cefotaxime. Multidrug resistance among EPEC from human patients has been reported from North India (Vaishnavi and Kaur, 2003). High resistance of EPEC to Streptomycin, Ampicillin, Tetracycline and Sulphonamides was also reported in children in Brazil, with 43% of typical EPEC isolates being resistant to three or more of the antibiotics tested (Scaletsky et al., 2010). Resistance of EPEC isolated from bovine feces at different farms to Nalidixic acid and Tetracycline was reported from Ireland (Bolton et al., 2014); those isolates showed intermediate resistance to Ciprofloxacin, unlike our study, in which the isolates in general showed sensitivity to Chloramphenicol, Norfloxacin, Co-trimoxazole and Ciprofloxacin. The high antibiotic resistance among EPEC of both human and animal origin isolated in this study could be a cause of concern for public health.

CONCLUSIONS

This study indicates that cattle and buffaloes, as well as their handlers, in the Jammu region harbor EPEC. The high prevalence of the hemolysin gene among the EPEC and other E. coli strains, together with multiple antibiotic resistance, points to their high pathogenic potential for humans. Further epidemiological studies involving larger populations, especially at the unorganized farms of this region, are suggested.

Table 2: Serogroups of E. coli isolated from organized and unorganized farms.

Table 3: Prevalence of Enteropathogenic E. coli in cattle, buffaloes and their handlers.
Parents’ and nurses’ ideal collaboration in treatment-centered and home-like care of hospitalized preschool children – a qualitative study Background The hospitalization of children requires collaboration between parents and nurses in partnerships. This study examines parents’ and nurses’ experiences of ideal collaboration in treatment-centered and home-like care of hospitalized preschool children. Methods This qualitative study is part of a larger study of 12 parents and 17 nurses who were responsible for 11 hospitalized children. Data collection took place at a Norwegian general paediatric unit, and the data were gathered from observations of and qualitative interviews with the parents and nurses. The analysis was conducted in six steps, in alignment with Braun and Clarke. Results Two essential themes emerged from the analysis. (1) Treatment-centered care focuses on the following tasks in building relationships – gaining trust, securing – gaining voluntariness, distracting and comforting, and securing and gaining voluntariness. The purpose of treatment-centered care is to perform diagnostic procedures and offer treatment. (2) Home-like care, the purpose of which is to manage a child’s everyday situations in an unfamiliar environment, focuses on the following tasks: making familiar meals, maintaining normal sleeping patterns, adjusting to washing and dressing in new situations, and normalizing the time in between. From this pattern, we chose two narratives that capture the essence of ideal collaboration between parents and nurses. Conclusion The ideal collaboration between nurses and parents is characterized by flexibility and reciprocity, and is based on verbal and action dialogues. In treatment-centered care, parent-nurse collaboration was successful in its flow and dynamic, securing the children’s best interests. Meanwhile, the achievement of the children’s best interest within home-like care varied according to the level of collaboration, which in turn was related to the complexity of the children’s everyday situations. Background This article focuses on the collaboration between parents and nurses when preschool children are hospitalized. Evidence supports that creating a partnership with parents or other family members improves the quality of care for children with long-term illnesses, as well as the quality of paediatric nursing practice [1]. The philosophy behind this is that the family is the constant in the child's life, and family-centered care offers a way of involving parents in the child's care through partnership. However, actively involving parents in partnerships appears to be challenging. Smith, Swallow & Coyne [2], in a concept synthesis based on thirty studies, indicates the importance of supporting parents in their role, valuing parents' knowledge and experiences, and incorporating parents' expertise in developing effective parent-professional relationships as collaborative processes. However, the synthesis also suggests that implementation of these concepts into practice remains problematic, because of poor information sharing, lack of understanding of the family context, and not valuing parents' knowledge and contribution [2][3][4][5][6]. Past studies on collaboration between nurses and parents have focused on collaboration between nurses and parents in relation to the performance of specific procedures and the treatment of hospitalized children. 
While the nurses/health personnel had the initiative and responsibility for organizing and performing the necessary tasks [7-13], the parents participated and assisted in procedural situations [10, 13], and their presence was considered important [7]. A number of studies have described parents' performance of daily basic care of hospitalized children. Parents mainly regard tasks such as bathing and dressing the child, administering food, mobilizing the child, and comforting the child as their responsibility [7, 9, 10, 13-19]. In order to take care of the children, the parents needed the support and facilitation skills of the nurses [20]. This corresponds to the finding that nurses, due to their other duties, were unable to provide basic care for the children [10]. Aein et al. [8] emphasized that nurses and parents experience that they each have their specific domain in caring for the children. The way in which nurses and parents share the responsibility of caring for the child is in continuous flux [15]. The overall aim of this study is to explore the experiences of parents and nurses and the concrete ways in which nurses and parents collaborate in partnership when caring for hospitalized preschool children. Based on observations of and responses from nurses and parents, the study explored what constitutes ideal collaboration to meet the need for effective partnership in family-centered care.

Methods

The study has a qualitative design based on a hermeneutical perspective, grounded in the understanding and interpretation of parents' and nurses' experiences in their everyday life. According to Gadamer, our understanding is influenced by our prejudices, and the present must be understood in the light of the past. Our understanding of an experience is therefore always a fusion of both present and past, based on a polarity of familiarity and strangeness [21]. To obtain a deeper understanding of the experiences of both nurses and parents, a field study with observations and interviews was performed [22, 23]. The analysis of the observations and interviews was based on a hermeneutical perspective, which guided the interpretation of the parents' and nurses' actions and experiences [21].

Participants

The study took place at a general medical paediatric unit in a Norwegian hospital. The criteria for selection were that the parents stayed with the child; the child was in the beginning stages of hospitalization, probably staying for 2 days or more; the child was neither critically nor terminally ill; the child was between one and six years old (preschool age in Norway); and the parents had Norwegian as their first language. We planned to include 10-15 preschool children with their respective parents and nurses, while leaving room to potentially expand or reduce the number of participants. The participating nurses had to be responsible for the selected children. To recruit participants, the head nurse asked parents directly to participate in the study and invited individual nurses to participate by email, with the nurses submitting their answers in a locked box. Twelve parents (three fathers and nine mothers) of 11 hospitalized children and 17 female nurses participated in the observation (all were registered nurses [RN]; one nurse was also a paediatric nurse; their experience in paediatric units ranged from 1 to 17 years).
By the time 11 children had been included, few new nuances and variations in the collaboration between parents and nurses were apparent in the sample. All parents and 13 nurses were interviewed during the observation period. For practical reasons, we did not interview all nurses included in the study, but rather gave priority to the nurses who were responsible for the children on the day the children arrived at and the day they left the unit. Six of the children's hospitalizations were planned in advance, and five children were admitted with acute medical conditions. They had various medical diagnoses; four children had chronic medical disorders from birth. The children, eight girls and three boys, were hospitalized from two to four days, and one child was readmitted for one day. The children's ages were between one and six as follows: two children were one year old, four were two, four were three, and one was six.

The study was approved under reference no. 16,697/JE. The head nurse of the children's unit, who also obtained informed written consent from the parents, contacted the informants. We obtained informed written consent from the nurses before the start of the study. The participants were informed of their rights to confidentiality and voluntariness, how to participate in the study, and their right to withdraw at any time. Children are vulnerable due to their immaturity, and parents become vulnerable when their child suffers in an unfamiliar environment. This was addressed by introducing the researcher (first author) to both the child and the parent(s) and by allowing time for the researcher and participants to become acquainted, starting with small talk, before the researcher took a more passive role. The researcher (first author) was conscious of her own preunderstanding and attempted to approach the participants with openness [23].

Data collection

The primary researcher (first author) performed the data collection for this study over a period of 4 months, observing one child per week in an unstructured manner [22]. The researcher's role was that of a partial participant observer. That is, at the beginning of each observation period, the researcher stayed close to the situation, including participating in small talk, and then stepped back in the room in order not to affect the participants' collaboration. The researcher sometimes participated, for example by handing equipment to a nurse. The researcher followed the nurse who was responsible for each child during every morning shift and some afternoon shifts until that child was discharged (27 morning shifts, 5 afternoon shifts; about 160 h). If procedures were planned for the afternoon, the observation continued. Descriptive and reflective field notes were written retrospectively, shortly after the observed situations and often in the afternoon. The descriptions focused on personal relationships and movements, conversation, play, and the performance of practical tasks, including procedural situations. To supplement the field notes, the first author conducted qualitative interviews with the parents after the observation and with the nurses at the time of the child's discharge. The interviews and observations focused on the collaborative situations of parents and nurses, more precisely on the participants' actions and experiences of collaboration related to medical procedures, the child's treatment, and topics such as the child's sleep and meals.
The observation and interview guide was thematically oriented around the following themes: washing and dressing, meals and eating, sleeping, relief (i.e., the parents' need to leave the child), play/activity, illness experiences (e.g., disease symptoms, discomfort and pain), and procedures and treatments. The aim of the interviews was to generate rich insights into the parents' and nurses' experiences of the observed collaborative situations. The interviews were performed in the hospital (except with one parent who was contacted via phone and one interviewed at home) and lasted from 30 to 90 min. The interviews were audio-taped and transcribed verbatim. Field notes were written after each observation. The children were also assigned fictitious names to preserve anonymity, both in the field notes and in the transcripts.

Data analysis

We organized the collected data thematically, based on actions and experiences. To do so, we applied the hermeneutic method, alternating between observation and interview details in a holistic approach [21]. The data from the observations provided important details for the analysis of the participants' actions and of events in collaboration situations, including the context. Meanwhile, the data from the interviews were important in analyzing the parents' and nurses' understanding of the situations, their reactions, and the reasons why they acted in specific ways. We conducted a thematic analysis [24], which is a 'bottom-up' and inductive way to identify themes and patterns in the data. The thematic analysis searches for patterns in the data, and the themes identified are strongly linked to the data itself [25]. During the analysis, we systematically examined the data to identify repeated patterns of meaning. This process was composed of six steps [24]. The first step, 'familiarizing yourself with the data', involved reading the entire transcribed material. We read the text several times to become familiar with it and, at the same time, noted ideas to encode as latent themes. The second step involved generating initial codes by identifying interesting aspects based on patterns, themes and notes in the text related to collaboration between parents and nurses. The preliminary themes developed were 'everyday situations' and 'procedural situations'. In the third phase, we sorted the different codes into potential themes based on the discovered patterns. In the fourth step, we collated all the relevant coded data extracts into broader themes, searching for relationships between themes and between different levels of themes and subthemes, thus creating an overview. In the fifth step, we named the themes; the names capture something important about the data in relation to the research question: 'treatment-centered care' and 'home-like care'. 'Treatment-centered care' included the following subthemes: 1) building relationships – gaining trust, 2) securing – gaining voluntariness, 3) distracting and comforting, and securing and gaining voluntariness. The second theme, 'home-like care', included the following subthemes: 1) making familiar meals, 2) maintaining normal sleeping patterns, 3) adjusting washing and getting dressed in new situations, 4) normalizing the time in between (cf. Table 1). Then, for each individual theme, we wrote a detailed analysis and identified narratives in relation to the research question [24, 26].
Results

The two essential themes with subthemes that emerged from the analysis, illustrating two dissimilar care situations with different purposes, make up the structure of the findings below.

Treatment-centered care

The aim of treatment-centered care was to perform diagnostic procedures and to carry out treatment of the child in his/her best interest; this type of care was related to the cause of the child's hospitalization. We found that parents and nurses collaborated by sharing responsibility and tasks in a dynamic way. The starting point for the nurse was building a relationship and achieving the child's trust before then enlisting the child's voluntariness. The nurse and parents further distracted and comforted the child to make him/her feel safe. To ensure the child's voluntariness and to carry out the procedure with as little resistance and protest as possible, collaboration was essential. In the worst-case scenario, the need to use force caused the child discomfort, and this could increase procedure time. In some situations, the nurses would sideline the parents and call for other nurses to support them. In treatment-centered care, the initiators were the nurses: they had the main responsibility for carrying out the procedures and treatments and for delegating tasks to the parents. When performing procedures, nurses and parents balanced their actions in a flowing, collaborative way to safeguard the child's wellbeing. The collaboration visualized below in the written narrative of Jo (a fictitious common unisex name) embodies the most common and ideal collaborative situations as observed and expressed by the participants. The story (quotations in italics, authors' comments in bold italics) is organized in accordance with the subthemes.

The narrative of Jo

Building relationships – gaining trust

Jo, a preschool child, is on the fifth scheduled hospitalization; doctors are searching for a diagnosis. The child is with the father. When the nurse meets the child and the father, she first greets Jo and then the father. Next, she involves herself in the child's play with a train. She does this before introducing the child to the procedural situation. The father said the following about the nurse's way of building a relationship with the child: Very good, she makes Jo trust her. The nurse confirms this: When I meet a child without knowing how the child reacts and how he/she is, I will build a relationship, play with and talk to the child. The starting point for the nurses involved building a relationship with the child by playing together in order to achieve the child's trust. This was done before they introduced the child to the procedural situation. The parents' role here was to support the nurses.

Table 1 Examples of themes and subthemes

Example 1
Meaning units from interview text and field notes: The nurse first greets the child and then the father, and involves herself in play with the child. When the nurse has played a short while with the child, she tells the father that they have to go to the reception room to do reception procedures (observation). Very good, she makes Jo trust her before she introduced the procedure situation (interview, father). When I meet a child without knowing how the child reacts and how the child is, I will build a relationship, and play and talk to the child before I start with [the] procedure (interview, nurse).
Name of meaning units/codes: Getting to know each other before introducing the procedure; getting the child to trust the nurse before the procedure starts; building a relationship with the child.
Theme: Treatment-centered care.
Subtheme: Building relationships – gaining trust.

Example 2
Meaning units from interview text and field notes: Kim wakes up and is about to have breakfast. The mother tells the nurse that Kim has a poor appetite and eats sparsely. The nurse asks the child directly what the child wants to eat. Afterwards, the nurse brings the child the food the child wants. The mother and the nurse help the child to sit upright in bed. When the nurse returns to the room, the food is almost untouched. The nurse asks what to order for dinner. The nurse has given the child fever-reducing medication before Kim's favorite dish is served. Kim takes only a few bites (observation). Kim got something else to eat than what was on the menu. The nurse said she would do her best to get something that the child likes. However, Kim would not eat. The child has no appetite. My responsibility is to be there as a mum; take care of my child, do things Kim wants, give Kim something to drink and such (interview, mother). The mother's task is to be there for the child and offer the child any drinks the child wants. My task is to monitor the child's physical condition (interview, nurse).
Name of meaning units/codes: The nurse provided desired food because of poor appetite; personalizing and facilitating the meal for the child in collaboration with the mother; providing desired food related to poor appetite; ensuring that the child receives food and drink; taking care of the child's physical condition.
Theme: Home-like care.
Subtheme: Making the meal familiar.

Securing – gaining voluntariness

After a while, Jo has to undergo different procedures. The nurse brings the equipment to measure oxygen and Jo's pulse. She shows how it is done on her own hand, and then she wants to do it on the child. However, the father tells her to perform the procedure on him first. Afterwards, he lets the child play with the finger equipment. The father puts it on the child's finger, and the nurse makes it exciting for the child by showing Jo the rhythm on the screen. The nurse then wants to measure the child's blood pressure, but the father asks if it would be better to wait until the end of this procedure. The nurse and the father collaborate when weighing the child and measuring the child's height. They explain gradually what will happen. The child is on the scale, and the nurse and father stand on either side of the scale in front of the child. The nurse says the child's weight aloud and boasts about how big the child is. Afterwards they measure the child's height together. In the interview, the nurse explains the situation: I show the procedure on myself. However, not everything can be shown; one must try to render it harmless and show what we will do next. The father says: I am trying to keep the disagreeable thing at the end. It was done in this way, and Jo was not forced into things that he/she did not like. In this way, the nurses together with the parents prepared the child for the required procedure. The father was active, engaged, and worked ahead of what was going to happen to the child. He often took the initiative. The nurse followed the father's advice. Therefore, they limited the need to force the child and instead enlisted the child's voluntariness. The parents' deep knowledge of the child allowed them to become involved in these situations with different but collaborative contributions.
Distracting and comforting, and securing and gaining voluntariness Upon seeing the blood pressure cuff in the nurse's hand, the child starts screaming. The father then asks the nurse if she can measure his blood pressure first. She wraps the cuff around the father's wrist and begins to measure. The nurse then addresses the child, who is in his father's lap, and together with the father, they try to place the cuff around the child's arm, but the child screams more intensely and turns away. The father says it does not hurt; it just feels a bit tight. The nurse decides it is best to wait, stops the procedure and says: Everything went well until we pulled out the cuff and the child screamed, wriggled and turned away. Then it was best to postpone it, because if you proceed, then the relationship you have tried to build deteriorates. The nurse emphasized the child's discomfort, and her choice to postpone the procedure. This way she maintained her relationship with the child; that was her overriding aim. The father had the overview of the situation. Based on his knowledge of his child, he supported the child, but the nurse made the decision to end the procedure. Later in the day, the nurse says to the father that she would like to take a blood pressure measurement before administering the narcosis. As soon as the nurse with the equipment enters the dining room where they sit, the child starts to cry. The nurse says aloud that one should not do unpleasant things in this room, but that this was appropriate right now. The father quickly moves the child from the couch to the computer. The nurse and the father stand behind the child and bend over the child. The nurse sits down next to the table and plays with some small animals with the child on the computer table. The nurse shields the child's view of the blood pressure measurement equipment, and the child forgets about it after a while and stops crying. The nurse says the animals' names or alternates between doing this and asking the child about their names. They place the animals together in rows. The nurse asks about the sounds the animals make, and they make the animals' sounds together. She maintains the relationship by playing with the child, and the father is participating in this play. The nurse then stands, brings out the blood pressure cuff, and together with the father wraps the cuff around the child's arm. The child protests and cries. They tell the child that it is not dangerous. At the same time, the father finds pictures of trains on the computer. He shows the trains to the child. The nurse, child and father look at the pictures together. The child's cry calms down, and the nurse measures the blood pressure. When the cuff is tightened around the arm, the child cries more. The father continues to show new pictures, but the child start crying again loudly when the cuff is tightened on the arm. The father comforts and calms the child. The nurse ends the situation by talking about and playing with the animals with the child, just as they did before the measurement. The nurse says the following about the father: He was good at distracting and smart to say that it is not dangerous. He stayed with the child, took care of the child, and comforted the child. The father says: It may be my task as a parent to distract, comfort, maybe inspire, in order to change the focus and to forget about what is disagreeable. The phases of distracting and comforting the child lasted a long time, and moved back and forth. 
In this way, the adults avoided or reduced the need to force the child into submitting to the procedure that worried the child. Nurses and parents collaborated in a dynamic way by distracting and comforting the child to safeguard the child and enlisted the child's voluntariness with the common aim of performing the procedures and taking care of the child's well-being. The father and nurse were able to establish a physical distance from the equipment by using the computer as a toy and by playing with the toy animals. Both the computer and the animals became tools for distracting and rendering the situation harmless to the child. The father's deep affiliation with and knowledge of his child enabled him to take the initiative in some of the situations. The nurse inspired the father and also allowed him space to act. In a dynamic way, based on reciprocity, they complemented each other. However, in some situations we observed that the responsible nurse might include other nurses for support, a move which sidelined the parents. This happened when the parents became evasive, which made the child feel insecure. In such situations, the focus was mostly on performing the procedure with less attention on the child. Home-like care The aim of home-like care is to safeguard the child's everyday situations in an unfamiliar and strange environment. Maintaining familiar routines were prioritised by the nurses and parents, with varying degrees of collaboration, with the aim of individualizing situations by making familiar meals, maintaining normal sleep patterns, adjusting washing and getting dressed in new situations and normalizing the time in between various situations. The parents were the initiators and were mainly responsible for asking for assistance. The nurses shared responsibility and tasks with the parents, but their involvement with the child was more indirect. Sometimes, the nurses might take over some tasks based on the assessment of needs, and at other times the nurses kept their distance because it was in the best interest of the child. The child's illness, the child's severity, the age of the child, the parents' previous hospital experience and their presence in the hospital were conditions that gave rise to variations in the degree of nursing involvement. The narrative of Kim (a fictitious common unisex name) embodies the findings of the most ideal and common collaboration from observations and interviews. The story (quotations in italics, authors' comments in bold italics) is organized in accordance with the subthemes. The narrative of Kim The preschool child Kim is bedridden with a high fever due to a urinary tract infection making the child very ill. The mother is with her child, and they share a room with another family. Maintaining normal sleeping patterns Kim has been sleeping restlessly during the night due to fever and discomfort. The nurse reports on this and looks into the room in the morning. The light in the room is dim, and the beds are close to each other. The mother lies half in her own bed, but she rests her head in Kim's bed; a screen surrounds them. The nurse whispers to the mother, and they agree to let Kim sleep. The nurse says that she should not disturb them, and she invites the other family to eat in the dining room. She encourages them to stay in the playroom after breakfast. In this way, nurses and parents collaborated to maintain the child's normal sleeping patterns and enable the rest and security of both the mother and child. 
The mother was responsible in this concrete situation: she stayed physically close to her child and made a home-like shelter for herself and her child.

Making the meal familiar

After a while, Kim wakes up and is about to have breakfast. Kim has a poor appetite and eats sparsely. The nurse asks the child what he/she wants to eat. The nurse brings the child the food he/she wants, and the mother and the nurse help the child to sit upright in bed. When the nurse returns to the room, the food is almost untouched. The nurse asks what she should order for dinner. The nurse has given the child fever-reducing medication before dinner, which is Kim's favourite dish, is served. Kim takes only a few bites. The mother says in the interview: Kim got something else to eat than what was on the menu. The nurse said she would do her best to get something that the child likes. However, Kim would not eat. The child has no appetite. My responsibility is to be there as a mum: take care of my child, do things Kim wants, give Kim something to drink and such. The nurse confirms the mother's role: The mother's task is to be there for the child and offer the child any drinks the child wants. My task is to monitor the child's physical condition. The next day the nurse also serves breakfast. After a while she returns to the room; the mother and the child are eating at the table. The nurse comments on how much better the child looks and how cosy they are. She leaves the room after a short time. Both situations show that the parent and nurse had the common goal of getting the child to eat. Making the meal familiar to the child was the parent's responsibility. The nurse made practical arrangements to make the meals attractive and offered support by supplying the necessary medication and being flexible with the meals.

Adjusting washing and getting dressed in a new situation

After breakfast on the first day, Kim is washed while lying in bed and has the shirt changed with the mother's help. The nurse supports them with the necessary equipment. The child is still affected by fever. The next day, the child has slept well during the night and feels better. Kim is dressed before the nurse enters the room. In the first situation, the mother needed the nurse to facilitate collecting equipment; however, the next day, she did it alone. In this way, nurses collaborated with parents and adjusted the process of washing and getting dressed to the child's new situation so that the parents could perform the activities.

Normalizing the time in between

The next day, the child is better, and the nurse informs the mother and child about an activity room where the child can play with a teacher. The child does not initially show any interest, but after a while, the mother follows the child in. Later in the day, the nurse asks if Kim has been in the activity room; the mother responds yes, and the child nods. The mother points to what Kim has made, a bird hanging on the bed, and the child shows it off proudly. The nurse praises the bird and says she will go with the child the next day and make a bird. The mother says: Doing activities with my child – I will say that is my responsibility. I promised to be with Kim for activities and sit there as long as Kim wanted. Kim feels better when he/she gets to do things and not just sit in the room. The nurse says: Taking the initiative in relation to the child is my job as well as collaborating with the mother.
Parents and nurses varied their degree of collaboration when maintaining routines well known by the child in everyday situations depending on the level of the child's illness and treatment. However, the parents had the main responsibility. In some situations, when the child's illness and treatment presented challenges to everyday situations, the nurses might take over and perform treatment-care to safeguard the child's wellbeing. Based on the complexities of the care situation, parents and nurses might change their roles and responsibilities. The degree of the nurses' involvement varied from weak to moderate to high in order to attend to the child's everyday needs. We observed that these areas of collaboration depended upon the nurse's sensitivity to the child's reactions, as well as input from the parents. In addition, the parents' input relied on good interactions between parent and child and was based on the parents' knowledge and affiliation with their child. In this way, the collaboration between nurses and parents was characterized by flexibility and reciprocity and was based on dialogues in action. Discussion The aim of this study was to explore the experiences of parents and nurses and the concrete ways in which nurses and parents collaborate in partnership when caring for hospitalized preschool children. The findings revealed two characteristics of ideal collaboration between nurses and parents in this context: flexibility and sharing of responsibility and tasks. The findings suggest that there are distinct areas of responsibility for nurses and parents and distinct purposes for the care work performed. The nurses and parents in this study took turns taking the initiative and supporting each other's goals and actions in partnership. While the parents were responsible for maintaining home-like care, the nurses assumed the primary responsibility for treatment-centered care. Both nurses and parents were dependent on each other's help to sustain this responsibility, but the relationship changed in accordance with the level of severity of the child's illness. Flexibility within both areas of collaboration depended on the nurse being sensitive to the child's needs and taking the parents' input into consideration. Furthermore, effective interaction between the parents and children, based on the parents' knowledge of and affiliation with the child, was also a precondition for ideal collaboration. These issues are discussed in more depth below. Common and dissimilar goals and roles In line with international research, the nurses took the initiative and had the responsibility of organizing and performing procedures and treatment [7][8][9][10][11][12][13]. In our study, we observed that the collaboration between nurses and parents was dynamic with the aim of ensuring the child's willingness to perform the procedures. The study further emphasises that the parents' presence and active input was a necessary factor in making the child feel secure and a precondition for ensuring the child's voluntariness. The nurse and parents shared the unspoken common goal of performing the treatment because it was necessary. The parents' input was based on the parents' knowledge of and strong attachment to the child. This made the parents secure in their role; therefore, they were able to participate in an active manner. This contributed to the child's willingness to be treated with the use of as little force as possible. 
Having the child's best interest as a common goal allowed a flow of mutual collaboration and dialogue between nurses and parents. In several situations, the nurse's initiative and her playing with the child were important both to establish trust between nurse and child and to distract and comfort the child. The nurse had the main responsibility for carrying out the treatment, which is in line with earlier studies. Our study provides supplementary findings: for example, parents took care of the child affected by the disease or treatment and maintained the child's rhythm of everyday situations from home in an unfamiliar environment in order to make them more familiar. To do this, nurses and parents varied their degrees of collaboration when individualizing the child's situations. In some situations, the nurses had to take over and perform treatment-centered care. We argue that the nurse's knowledge and contextual, experience-based sensitivity to the child's reactions and the parents' contributions enabled the flexibility in both areas of collaboration. This is in line with current theory, which includes parent-professional collaboration in family-centered care and partnership-in-care. The emphasis is on the importance of supporting parents in their role, valuing parents' knowledge and experience, and incorporating parents' expertise in developing effective parent-professional relationships as collaborative processes [2]. Parent-professional collaboration and partnership in care, as part of family-centered care, aligns with Norwegian legislation that regulates the rights of parents and hospitalized children: parents have the right to stay together with their child and not to lose their income when staying at the hospital [27]. This right is supported by the right to publicly funded health care [28]. In this triangular relationship between parents, nurse and child, the child was the primary receiver of care, and both parents and nurses were caregivers in a mutual, dynamic and dialogic collaboration. At the same time, nurses and parents switched roles as caregivers and care receivers in relation to each other; they needed each other's help. This may be described as a collaborative hierarchy, where the participants switched places and roles based on the care situations.

Contextual sensitivity and reciprocity make the care safe

The nurse's professional knowledge and procedural skills, knowledge of how to interpret the child's and the parents' reactions, as well as her ability to enter into play with the child, seem to be preconditions for correct decisions and actions in a complex collaborative situation. The division of responsibility, characterized by reciprocity based on dialogue in the collaboration, happened in accordance with what Tove Pettersen defines as mature care. Pettersen claims that in situations where it is necessary to change perspective and assess possibilities and limitations in order to find solutions, being able to interpret the care receiver's expression is a precondition. This requires that one possesses contextual sensitivity in the situation [29]. The collaboration was also in accordance with Pettersen's [30] argument that care receivers are not passive receivers but rather active in the relationships and thereby equal participants. In mature care, the care is administered in dialogue with the receiver, that is, in a partnership, and it is done in a dynamic way where empathy with the care receiver is of significance.
This is in accordance with the intention of the Convention on the Rights of the Child [31]. Parents represented the child based on knowledge and the child's attachment in accordance with the theory of attachment [32]. This contributed to balancing the use of force on the child in procedural situations and to furthering the child's willingness to submit to procedures. The limits of care When performing the ideal care, nurse and parents had an unspoken, common goal of carrying out the treatment because it was necessary. They worked in a dialogical relationship to achieve this. The child's emotional attachment to the parents challenged the parents to provide unambiguous input in the collaboration as a response to the child's need. This is, however, in contrast to situations where the parents or nurse become insecure in their role and evasive in the collaboration, and the child becomes insecure and less willing to submit to treatment. The procedure time may then increase, or it may become necessary to postpone the procedure. This presupposes a context that allows for time to develop a partnership and to establish a feeling of calm in procedural situations. When parents change the way they react to the child's needs, the circle of security is broken and the children become insecure. The child's circle of security is broken because the parent's role changes [33]. This is in line with John Bowlby's theory of emotional attachment between children and parents based on continuity in the parents' response to the child's needs. Where the attachment contact between parents and children were good i.e. physical contact such as the physical embrace of the child, the collaboration was successful. This may also happen if the nurse takes over treatment-centered care in home-like care situations. This may be in opposition to what Pettersen [29] points out: Mature care between caregivers and care receivers, according to the principle of reciprocity, maintains a balanced use of power, where dialogue is central. This is a challenge to the dialogue and the reciprocal relationship. We observed that where the circle of security is disturbed, the nurses needed to take on more responsibility and include the parents in the situation, as well as fulfil their responsibility of carrying out procedures and making the child feel safe. Assuming the responsibility for parents and child at the same time is demanding, and requires that the nurses have matured with regard to both professional knowledge and skills. Despite feeling empathic for the child and the child's lack of willingness to receive treatment, the nurses did not refrain from performing procedures and administering treatment. They may have sidelined the parents by calling in other nurses to assist if necessary. Parents were then bumped down in the collaborative hierarchy, so that the nurse was able to perform the necessary assessment and treatment of the child. Thus, the nurse's relationship with the parents and especially regarding treating and diagnosing the child experienced some challenges. According to Pettersen these are the situations where the limit of care for the nurse has been reached. A way of limiting the use of force was to limit time spent in situations that caused the child discomfort. In order to carry out procedures/treatment, the nurse included new nurses in the situation to help, but this was not done until other approaches had been tried out. 
The partnership role between parents and nurses is in line with the notion of parent-professional collaboration in family-centered care [2]. Nurses and parents assume distinct areas of responsibility, distinct purposes for the care work, and distinct roles, as affirmed by the international literature [8]. Despite these differences, an understanding of the partnership between nurses and parents can serve a bridge-building function, connecting the differences and creating a common goal in the best interests of the child.

Strengths and limitations of the study

The study was conducted in one small general medical paediatric unit of a Norwegian hospital with children with different medical diagnoses. These findings could be of value in similar contexts and cultures but not in relation to children admitted to intensive care units. The intention was to obtain in-depth knowledge of parents' and nurses' experiences. Possible weaknesses may be that the study was performed in one hospital in one geographical area, and that the findings are related to a specific culture. The study's strengths lie in the method used: a field study combining participant observations and qualitative interviews. The visual access to the process reinforced the possibility of following up on core aspects of the care situations through in-depth interviews, which provided deeper insights into the area of research.

Conclusion

The aim of the study was to explore parents' and nurses' concrete collaborative experiences. The findings describe two ways of collaborating in the best interest of the child. Collaboration in treatment-centered and home-like care has different purposes and is linked to different situations, even though the situations may interfere with each other. Moreover, collaboration is based on the parents and nurses having different responsibilities. In order to safeguard the child's best interest, collaboration between nurses and parents was characterized by flexibility and reciprocity and by dialogues in action. Areas of collaboration were characterized by the nurse's sensitivity to the child's reactions as well as by input from the parents. Parents depended on good interactions with their children, based on their knowledge of and affiliation with the child. The findings showed that the partnership between parents and nurses was central to describing the ideal collaboration between them.

Relevance to clinical practice

The findings may be of use to families with children admitted to general children's wards. In terms of clinical practice, the findings may present nurses with the possibility of collaborating flexibly and in partnership with parents. These perspectives, combining the differences and the partnership, are important for nurses' understanding of their roles and as a fundamental part of their practice. It is therefore necessary to include and emphasize these perspectives in the education of nurses. Students can learn about the nursing role in practice, through simulation of relevant cases, and through reflection in practice on the nursing role. Nursing leadership has to facilitate this fundamental perspective on the nurse's role as a basis for nursing practice.