Bars and boxy/peanut bulges in thin and thick discs III. Boxy/peanut bulge formation and evolution in presence of thick discs

Boxy/peanut (b/p) bulges, the vertically extended inner parts of bars, are ubiquitous in barred galaxies in the local Universe, including our own Milky Way. At the same time, a majority of external galaxies and the Milky Way also possess a thick disc. However, the dynamical effect of thick discs on b/p formation and evolution is not fully understood. Here, we investigate the effect of thick discs on the formation and evolution of b/ps by using a suite of N-body models of (kinematically cold) thin and (kinematically hot) thick discs. Within the suite of models, we systematically vary the mass fraction of the thick disc and the thin-to-thick disc scale length ratio. The b/ps form in almost all our models via a vertical buckling instability, even in the presence of a massive thick disc. The thin disc b/p is much stronger than the thick disc b/p. With increasing thick disc mass fraction, the final b/p structure gets progressively weaker in strength and larger in extent. Furthermore, the time interval between the bar formation and the onset of the buckling instability gets progressively shorter with increasing thick-disc mass fraction. The breaking and restoration of the vertical symmetry (during and after the b/p formation) show a spatial variation: the inner bar region restores vertical symmetry rather quickly (after the buckling), while in the outer bar region the vertical asymmetry persists long after the buckling happens. Our findings also predict that at higher redshifts, when discs are thought to be thicker, b/ps would have a more 'boxy-shaped' appearance than an 'X-shaped' one. This remains to be tested by future observations at higher redshifts.

Introduction

A number of observational studies find that nearly half of all edge-on disc galaxies in the local Universe exhibit a prominent boxy or peanut-shaped structure (hereafter b/p structure; e.g. see Bureau & Freeman 1999; Lütticke et al. 2000; Erwin & Debattista 2017). A wide variety of observational and theoretical evidence indicates that many bars are vertically thickened in their inner regions, appearing as "boxy" or "peanut-shaped" bulges when seen in an edge-on configuration (e.g. Combes & Sanders 1981; Raha et al. 1991; Erwin & Debattista 2016). The presence of a b/p structure has also been detected for galaxies in face-on configurations, for example in IC 5240 (Buta 1995), IC 290 (Buta & Crocker 1991), and several others (see examples in Quillen et al. 1997; McWilliam & Zoccali 2010; Laurikainen et al. 2011; Erwin & Debattista 2013). Several photometric and spectroscopic studies of the Milky Way bulge revealed that the Milky Way also has an inner b/p structure (e.g. see Nataf et al. 2010; Shen et al. 2010; Ness et al. 2012; Wegg & Gerhard 2013; Wegg et al. 2015). The occurrence of b/p bulges is observationally shown to depend strongly on the stellar mass of the galaxy, and a majority of barred galaxies above stellar mass log(M*/M⊙) ≥ 10.4 host b/p bulges (e.g. see Yoshino & Yamauchi 2015; Erwin & Debattista 2017; Marchuk et al. 2022). A similar (strong) stellar-mass dependence of the b/p bulge occurrence, at redshift z = 0, is shown to exist for the TNG50 suite of cosmological zoom-in simulations (Anderson et al. 2024).
Much of our current understanding of the b/p formation and its growth in barred galaxies is gleaned from numerical simulations. Studies using N-body simulations often find that, soon after the formation of a stellar bar, it undergoes a vertical buckling instability, which subsequently gives rise to a prominent b/p bulge (e.g. see Combes et al. 1990; Raha et al. 1991; Merritt & Sellwood 1994; Debattista et al. 2004; Martinez-Valpuesta et al. 2006; Martinez-Valpuesta & Athanassoula 2008; Saha et al. 2013). Indeed, Erwin & Debattista (2016) detected two such local barred-spiral galaxies that are undergoing such a buckling phase. If a barred N-body model is evolved for a long enough time, it might go through a second and prolonged buckling phase, thereby producing a prominent X-shape feature (e.g. Martinez-Valpuesta et al. 2006; Martinez-Valpuesta & Athanassoula 2008). Furthermore, Saha et al. (2013) showed that a bar buckling instability is closely linked with the maximum meridional tilt of the stellar velocity ellipsoid (denoting the meridional shear stress of stars). Alternatively, a b/p bulge can be formed via the trapping of disc stars at vertical resonances during the secular growth of the bar (e.g. see Combes & Sanders 1981; Combes et al. 1990; Quillen 2002; Debattista et al. 2006; Quillen et al. 2014; Li et al. 2023) or by gradually increasing the fraction of bar orbits trapped into this resonance (e.g. see Sellwood & Gerhard 2020). The main difference between these two scenarios of b/p formation is that when the bar undergoes the buckling instability phase, the symmetry about the mid-plane is no longer preserved for a period of time (see discussion in Cuomo et al. 2023).

Regardless of the formation scenario, the b/p bulges are shown to have a significant effect on the evolution of disc galaxies by reducing the bar-driven gas inflow (e.g. see Fragkoudi et al. 2015, 2016; Athanassoula 2016). The formation of b/p bulges can affect metallicity gradients in the inner galaxy (e.g. Di Matteo et al. 2014; Fragkoudi et al. 2017a) and can also lead to bursts in the star formation history (e.g. Pérez et al. 2017). In addition, Saha et al. (2018) showed that a 3D b/p structure (i.e. a b/p seen in both face-on and edge-on configurations) introduces a kinematic pinch in the velocity map along the bar minor axis. Furthermore, Vynatheya et al. (2021) demonstrated that for such a 3D b/p structure, the inner bar region rotates slower than the outer bar region.

On the other hand, a thick-disc component is now known to be ubiquitous in the majority of external galaxies as well as in the Milky Way (e.g. see Tsikoudi 1979; Burstein 1979; Yoachim & Dalcanton 2006; Comerón et al. 2011a,b, 2018). The existence of this thick-disc component covers the whole range of the Hubble classification scheme, from early-type S0 galaxies to late-type galaxies (Pohlen et al. 2004; Yoachim & Dalcanton 2006; Comerón et al. 2016, 2019; Kasparova et al. 2016; Pinna et al. 2019a,b; Martig et al. 2021; Scott et al. 2021). The thick-disc component is vertically more extended and kinematically hotter as compared to the thin-disc component. The dynamical role of a thick disc in the formation and growth of non-axisymmetric structures has been studied for bars (e.g., Klypin et al. 2009; Aumer & Binney 2017; Ghosh et al. 2023) and spirals (Ghosh & Jog 2018, 2022). Past studies demonstrated that the (cold) thin and (hot) thick discs are mapped differently in the bar and boxy/peanut bulge (e.g. Athanassoula et al. 2017; Fragkoudi et al. 2017b; Debattista et al. 2017; Buck et al. 2019).
Since the presence of a thick disc can significantly affect the formation, evolution, and properties of bars (Ghosh et al. 2023), we need to explore how it will affect the b/ps, since b/ps are essentially the vertically extended part of the bar.

Stellar bars are known to be present in high-redshift (z ∼ 1) galaxies (e.g. see Sheth et al. 2008; Elmegreen et al. 2004; Jogee et al. 2004; Guo et al. 2023; Le Conte et al. 2023). Furthermore, a recent study by Kruk et al. (2019) showed the existence of b/p structures in high-redshift (z ∼ 1) galaxies as well. At high redshift, discs are known to be thick, kinematically hot (and turbulent), and more gas rich. So, the question remains as to how efficiently the b/p structures can form in such thick discs at such high redshifts. Fragkoudi et al. (2017b) studied the effect of such a thick-disc component on the b/p formation using a fiducial two-component thin+thick disc model where the thick disc constitutes 30% of the total stellar mass. The formation and properties of b/p bulges in multi-component discs (i.e. with a number of disc populations greater than two) was also studied in Di Matteo (2016), Debattista et al. (2017), Fragkoudi et al. (2018a,b), and Di Matteo et al. (2019). However, a systematic study of b/p formation in discs with different thin- and thick-disc configurations, as well as composite thin and thick discs, is still missing. We aim to pursue this here.

In this work, we systematically investigated the dynamical role of the thick-disc component in b/p formation and growth using a suite of N-body models with (kinematically hot) thick and (kinematically cold) thin discs. We varied the thick-disc mass fraction and considered different geometric configurations (varying ratio of thin- and thick-disc scale lengths) within the suite of N-body models. We quantified the strength and growth of the b/p in both the thin- and thick-disc stars and studied the vertical asymmetry associated with the vertical buckling instability. In addition, we investigated the kinematic phenomena (i.e. change in the velocity dispersion, meridional tilt angle) associated with the b/p formation and its subsequent growth.

The rest of the paper is organised as follows. Section 2 provides a brief description of the simulation setup and the initial equilibrium models. Section 3 quantifies the properties of the b/p structure, their temporal evolution, and the vertical asymmetry in different models and the associated temporal evolution. Section 4 provides the details of kinematic phenomena related to the b/p formation and its growth, while Sect. 5 provides the details of the relative contribution of the thin disc in supporting the X-shape structure. Section 6 contains the discussion, while Sect. 7 summarises the main findings of this work.

Simulation setup and N-body models

For our study, we used a suite of N-body models consisting of a thin and a thick stellar disc, with the whole system embedded in a live dark-matter halo. One such model was already presented in Fragkoudi et al. (2017b). In addition, these models have been thoroughly studied in a recent work of Ghosh et al. (2023) in connection with a bar-formation scenario under varying thick-disc mass fractions. Here, we used the same suite of thin+thick models to investigate b/p formation and evolution with varying thick-disc mass fractions.

The details of the initial equilibrium models and how they are generated are already given in Fragkoudi et al. (2017b) and Ghosh et al. (2023).
Here, for the sake of completeness, we briefly mention the equilibrium models. Each of the thin and thick discs is modelled with a Miyamoto-Nagai profile (Miyamoto & Nagai 1975), with R_d, z_d, and M_d being the characteristic disc scale length, the scale height, and the total mass of the disc, respectively. The dark-matter halo is modelled with a Plummer sphere (Plummer 1911), with R_H and M_dm being the characteristic scale length and the total halo mass, respectively. The values of the key structural parameters for the thin and thick discs, the dark-matter halo, and the total number of particles used to model each of these structural components are mentioned in Table 1. For this work, we analysed a total of 19 N-body models (including one pure thin-disc-only and three pure thick-disc-only models) of such thin+thick discs.

The initial conditions of the discs are obtained using the iterative method algorithm (see Rodionov et al. 2009). For further details, the reader is referred to Fragkoudi et al. (2017b) and Ghosh et al. (2023). The simulations are run using a TreeSPH code by Semelin & Combes (2002). A hierarchical tree method (Barnes & Hut 1986) with opening angle θ = 0.7 is used to calculate the gravitational force, which includes terms up to the quadrupole order in the multipole expansion. A Plummer potential was employed for softening the gravitational forces, with a softening length of 150 pc. We evolved all the models for a total time of 9 Gyr.

Within the suite of thin+thick disc models, we considered three different scenarios for the scale lengths of the two disc (thin and thick) components. In rthickE models, the scale lengths of the thin and thick discs are kept the same (R_d,thick = R_d,thin). In rthickS models, the scale length of the thick-disc component is shorter than that of the thin-disc one (R_d,thick < R_d,thin), and in rthickG models, the scale length of the thick-disc component is greater than that of the thin-disc one (R_d,thick > R_d,thin).

Before we present the results, we mention that in our thin+thick models we can identify and separate, by construction, which stars are members of the thin-disc component at the initial time (t = 0) and which stars are members of the thick-disc component at t = 0, and we can track them as the system evolves self-consistently. Thus, throughout this paper, we refer to the b/p as seen exclusively in particles initially belonging to the thin-disc population as the "thin-disc b/p" and that seen exclusively in particles initially belonging to the thick-disc population as the "thick-disc b/p".

Boxy/peanut formation and evolution for different mass fractions of the thick-disc population

3.1. Quantifying the b/p properties

Figure 1 shows the distribution of all stars (thin+thick) in the edge-on projection, calculated at the end of the simulation run (t = 9 Gyr), for all the thin+thick models considered here. In each case, the bar is placed in the side-on configuration (i.e. along the x-axis). A prominent b/p structure is seen in most of these thin+thick models. We further checked the same edge-on stellar density distribution, calculated for the thin- and thick-disc stars separately. Both of them show a prominent b/p structure in most of the thin+thick models. For the sake of brevity, we do not show it here (however, see Fig. 2 in Fragkoudi et al. 2017b).

3.1.1. Quantifying the b/p strength and its temporal evolution

Here, we quantify the strength of the b/p structure and study its variation (if any) with the thick-disc mass fraction. Following Martinez-Valpuesta & Athanassoula (2008) and Fragkoudi et al. (2017b),
in a given radial bin of size ∆R (= 0.5 kpc) we calculate the median of the absolute value of the distribution of particles in the vertical (z) direction, |z| (normalised by the initial value, |z|_0), for a snapshot seen edge-on and with the bar placed side-on (along the x-axis). In Fig. 2, we show one example of the corresponding radial profiles of |z|/|z|_0i (i = thin, thick, thin+thick), computed separately for the thin- and thick-disc particles, as well as for the thin+thick disc particles, as a function of time. As seen in Fig. 2, a prominent peak in the radial profiles (at later times) of |z|/|z|_0i denotes the formation of a b/p structure in the thin+thick model. Here, we mention that as the vertical scale height, and hence the vertical extent, of the thick disc is larger (by a factor of three) than that of the thin disc (see Table 1), the normalisation by |z|_0 is necessary to unveil the intrinsic vertical growth due to the b/p formation. When only the absolute values of |z| are considered, the thick-disc stars always produce a larger value of |z| than the thin-disc stars. This happens due to the construction of the equilibrium thin+thick models, and not due to the b/p formation. The normalised peak for the thin disc is much larger than that for the thick disc, in concordance with previous results (Fragkoudi et al. 2017b). Furthermore, at later times, the peak in the |z|/|z|_0i profiles shifts towards a larger radial extent (more prominently for the thin disc b/p), indicating the growth of the b/p structure towards the outer radial extent. These trends are also seen to hold for other thin+thick models that form a b/p structure during their evolutionary pathway. To quantify the temporal evolution of the b/p strength, we define the b/p strength at time t, S_b/p(t), as the maximum of the peak value of the |z|/|z|_0i profile, i.e.,

S_b/p(t) = max_R [ |z|(R, t) / |z|_0i ].   (1)

In Martinez-Valpuesta et al. (2006) and Martinez-Valpuesta & Athanassoula (2008), a method based on the Fourier decomposition was formulated to quantify the strength of a b/p structure. In Appendix A, we compare this method with Eq. (1) for the thin+thick model rthickE0.5.

Figure 3 shows the corresponding temporal evolution of the b/p strength, calculated separately for the thin and thick discs and for the composite thin+thick disc particles for all thin+thick models considered here. The thin-disc b/p remains much stronger than the thick-disc b/p structure, and this trend holds true for all thin+thick models with three different configurations (i.e. rthickS, rthickE, and rthickG) that develop a prominent b/p structure during their course of temporal evolution.
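For readers who want to reproduce a measurement of this kind, the sketch below illustrates the profile-based strength measure of Eq. (1) with NumPy. It is not the authors' code: the array names, the choice of binning along the bar (x) axis in the side-on view, and the toy data are assumptions for illustration only.

```python
import numpy as np

def bp_strength(x, z, z_med0, dr=0.5, r_max=10.0):
    """Median |z| in radial bins along the bar (x) axis, normalised by the
    initial profile z_med0; the peak of the normalised profile is S_b/p.
    Assumes the snapshot is already rotated so the bar lies along x (side-on)."""
    edges = np.arange(0.0, r_max + dr, dr)
    prof = np.full(edges.size - 1, np.nan)
    r = np.abs(x)                              # distance along the bar major axis
    for i in range(edges.size - 1):
        sel = (r >= edges[i]) & (r < edges[i + 1])
        if sel.any():
            prof[i] = np.median(np.abs(z[sel]))
    norm = prof / z_med0                       # growth relative to the t = 0 profile
    return np.nanmax(norm), norm

# toy usage: the t = 0 snapshot provides the normalisation profile
rng = np.random.default_rng(1)
x0, z0 = rng.uniform(-10, 10, 50_000), rng.normal(0.0, 0.3, 50_000)
_, prof0 = bp_strength(x0, z0, z_med0=1.0)     # unnormalised t = 0 profile
x1, z1 = rng.uniform(-10, 10, 50_000), rng.normal(0.0, 0.6, 50_000)
S_bp, _ = bp_strength(x1, z1, z_med0=prof0)    # S_b/p of the later snapshot
```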
The temporal evolution of the b/p strength in the three thick-disc-only models merits some discussion. For the rthickS1.0 model, the values of S_b/p show a monotonic increase with time, denoting the formation of a prominent b/p structure (also see the bottom row of Fig. 1). However, the final b/p strength for the rthickS1.0 model remains the lowest when compared to the other rthickS models with different f_thick values. For the rthickE1.0 model, the temporal evolution of S_b/p shows a sudden jump around t = 6.5 Gyr and then remains constant. By the end of the simulation, this model forms a b/p structure which appears boxier than a peanut or X shape. For the rthickG1.0 model, the temporal evolution of S_b/p does not show much increment, and the model does not form a prominent b/p structure by the end of the simulation.

In addition, for a fixed value of f_thick, we calculated the gradient of S_b/p(t) with respect to time t (dS_b/p(t)/dt) for the three different geometric configurations considered here. One such example is shown in Fig. 4 for f_thick = 0.5. A prominent (positive) peak in the dS_b/p/dt profile denotes the onset of the b/p formation. As seen clearly, for a fixed f_thick value, the peak in the dS_b/p/dt profile of the rthickS0.5 model occurs at an earlier epoch when compared with the other two geometric configurations. This confirms that the b/p forms at an earlier time in the rthickS0.5 model compared to the rthickE0.5 and rthickG0.5 models. These trends are in tandem with the fact that the bars in rthickS models form at earlier times and grow faster compared to the other two disc configurations (for details, see Ghosh et al. 2023).

Lastly, in Fig. 5 (top panel), we show the final b/p strength (i.e. calculated at t = 9 Gyr using the thin+thick disc particles) for all thin+thick models considered here. We point out that, in some cases, the maximum of the |z|/|z|_0i profile for the thick disc is not always easy to locate; sometimes it displays a plateau rather than a clear maximum (see Fig. 2). This, in turn, might have an impact on the estimation of the b/p strength. In order to derive an estimate of the uncertainty on the b/p strength measurement, we constructed a total of 5000 realisations by resampling the entire population using a bootstrapping technique (Press et al. 1986), and for each realisation we computed the radial profiles of |z|/|z|_0i as well as its peak value (denoting the b/p strength). The resulting error estimates are shown in Fig. 5.
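A bootstrap of this kind takes only a few lines; the outline below is illustrative rather than the authors' implementation, and it reuses the bp_strength() helper sketched above together with the 5000 realisations quoted in the text.

```python
import numpy as np

def bootstrap_bp_strength(x, z, z_med0, n_boot=5000, seed=0):
    """Resample particles with replacement and recompute the b/p strength each
    time; the scatter of the realisations gives the quoted uncertainty.
    Reuses the bp_strength() helper sketched above."""
    rng = np.random.default_rng(seed)
    n = x.size
    strengths = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, n, n)            # bootstrap resample of the population
        strengths[k], _ = bp_strength(x[idx], z[idx], z_med0)
    return strengths.mean(), strengths.std()
```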
The final b/p strength shows a wide variation with the f_thick values as well as with the thin-thick-disc configuration. To illustrate, for the rthickS models, the final b/p strength decreases monotonically as the f_thick value increases. For the rthickE models, the final b/p strength increases from f_thick = 0.1−0.3, and then decreases monotonically as the f_thick value increases. A similar trend is also seen for the rthickG models. Nevertheless, the strength of the b/p shows an overall decreasing trend with increasing f_thick values, and this remains true for all three geometric configurations considered here.

The extent of the b/p structure, R_b/p, also evolves with time: for the model rthickE0.5 (see Fig. 6) it increases significantly (by a factor of ∼2) over the entire evolutionary phase. In addition, towards the end of the simulation run, the thick disc b/p is larger (by ∼10−15%) than the thin disc b/p. We found a similar trend in the temporal variation of the b/p extent for the other thin+thick models, and therefore they are not shown here.

We further checked how the extent of the b/p structure, by the end of the simulation run, varied with the thick-disc mass fraction (f_thick). In Fig. 5 (bottom panel), we show the corresponding extents of the b/p structure computed at t = 9 Gyr using the thin+thick stellar particles for all thin+thick models considered here. The extent of the b/p structure increases steadily as the f_thick value increases, and this trend holds true for all three different configurations considered here. Furthermore, at a fixed f_thick value, the rthickG models show a higher value of R_b/p when compared to the other two configurations, thereby denoting that rthickG models form a larger b/p structure, by the end of the simulation run, as compared to the rthickE and rthickS models. Lastly, in Appendix C, we show how the extent of the thin disc b/p and thick disc b/p, at the end of the simulation (t = 9 Gyr), vary across different f_thick values and different disc configurations.

Vertical asymmetry and buckling instability

To quantify the vertical asymmetry and the time at which it occurs, we first calculated the amplitude of the first coefficient in the Fourier decomposition (A_1z), which provides a measure of the asymmetry (e.g. see Martinez-Valpuesta et al. 2006; Martinez-Valpuesta & Athanassoula 2008; Saha et al. 2013). The first Fourier coefficient (A_1z) is defined as (Martinez-Valpuesta & Athanassoula 2008)

A_1z = (1/M_tot) | Σ_i m_i exp(i ϕ_i) |,   (2)

where m_i is the mass of the ith particle, and ϕ_i is the angle of the ith particle measured in the (x, z)-plane with the bar placed along the x-axis (side-on configuration). M_tot denotes the total mass of the particles considered in this summation. Following Martinez-Valpuesta & Athanassoula (2008), to make this coefficient more sensitive to a buckling, we only included in the summation (see Eq. (2)) the stars that are momentarily within the extent of the b/p (R_b/p). The corresponding temporal evolution of the buckling amplitude (A_1z), calculated separately for thin, thick, and thin+thick particles, for the model rthickE0.5 is shown in Fig. 7 (left panel).
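As a concrete illustration of how such an asymmetry amplitude can be evaluated from a snapshot, the sketch below computes a mass-weighted m = 1 Fourier amplitude in the (x, z)-plane for particles inside the b/p extent. The exact selection and normalisation used in the paper may differ, so treat the function name, the cylindrical-radius cut, and the complex-exponential form as assumptions.

```python
import numpy as np

def a1z(mass, x, y, z, r_bp):
    """Mass-weighted m = 1 Fourier amplitude in the (x, z)-plane for particles
    currently inside the b/p extent r_bp (cylindrical radius cut), with the bar
    along the x-axis.  phi is measured in the (x, z)-plane."""
    sel = np.hypot(x, y) <= r_bp
    phi = np.arctan2(z[sel], x[sel])
    m_sel = mass[sel]
    return np.abs(np.sum(m_sel * np.exp(1j * phi))) / m_sel.sum()
```

Evaluating this quantity for a sequence of snapshots, separately for the thin- and thick-disc particle sets, would produce curves of the kind shown in Fig. 7 (left panel).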
A prominent peak in the A_1z profile denotes the vertical buckling event. We further checked that for all the thin+thick models, a peak in A_1z is associated with a dip/decrease in the bar strength. This is expected, since it is well known that the bar strength decreases as it goes through the buckling phase. The A_1z amplitude is larger for the thin-disc stars when compared with that of the thick-disc stars. This is consistent with the scenario that the thin disc b/p is stronger than the thick disc b/p.

Another way of quantifying the buckling instability is by measuring the buckling amplitude, A_buck, which is defined as (for details, see Sellwood & Athanassoula 1986; Debattista et al. 2006, 2020)

A_buck = | Σ_j m_j z_j exp(2i φ_j) | / Σ_j m_j,   (3)

where m_j, z_j, and φ_j denote the mass, vertical position, and azimuthal angle of the jth particle, respectively, and the summation runs over all star particles (thin, thick, or thin+thick, whichever is applicable) within the b/p extent. The quantity A_buck denotes the m = 2 vertical bending amplitude (for further details, see Debattista et al. 2006, 2020). The corresponding temporal evolution of the buckling amplitude (A_buck), calculated separately for thin, thick, and thin+thick particles for the model rthickE0.5, is shown in Fig. 7 (right panel). A prominent peak in the A_buck profile denotes the onset of the vertical buckling instability. Furthermore, the peak value of A_buck is higher for the thin disc than that for the thick disc. This is again consistent with the thin disc b/p being stronger than the thick disc b/p. This trend holds true for all the thin+thick models considered here. Next, we define τ_buck as the epoch of the onset of the buckling event, when the peak in A_buck occurs. As seen from Fig. 7, the epoch at which the peak in A_1z occurs coincides with τ_buck. This is not surprising, as both quantities denote the same physical phenomenon of vertical buckling instability. In Sect. 3.3, we further investigate the variation of τ_buck with f_thick and its connection with the bar-formation epoch.

While Fig. 7 clearly demonstrates the temporal evolution of the vertical asymmetry associated with the b/p structure formation, it should be borne in mind that A_1z (quantifying vertical asymmetry) or A_buck (quantifying the m = 2 vertical bending amplitude) only informs us about the buckling instability in an average sense, and hence it lacks any information about the 2D distribution of the vertical asymmetry. To investigate that, we computed the 2D distribution of the mid-plane asymmetry. Following Cuomo et al. (2023), we define

A_Σ(x, z) = [ Σ(x, z) − Σ(x, −z) ] / [ Σ(x, z) + Σ(x, −z) ],   (4)

where Σ(x, z) denotes the projected surface number density of the particles at each position of the image of the edge-on view of the model. The resulting distributions of A_Σ(x, z), computed separately for the thin-disc, thick-disc, and thin+thick discs at six different times (before and after the buckling happens), are shown in Fig. 8 for the model rthickE0.5.
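The mid-plane asymmetry map can likewise be built in a few lines: bin particles into an edge-on (x, z) image and compare each pixel with its mirror below the plane. The sketch below is a generic construction assuming the normalised-difference form of Eq. (4); the grid extent and pixel counts are arbitrary placeholders rather than the values used for Fig. 8.

```python
import numpy as np

def midplane_asymmetry(x, z, half_width=8.0, half_height=4.0, bins=(160, 80)):
    """Edge-on surface number density Sigma(x, z) on a grid symmetric about
    z = 0, and its normalised difference with the mirrored map Sigma(x, -z)."""
    grid_range = [[-half_width, half_width], [-half_height, half_height]]
    sigma, x_edges, z_edges = np.histogram2d(x, z, bins=bins, range=grid_range)
    mirrored = sigma[:, ::-1]                  # z -> -z (grid is symmetric about 0)
    with np.errstate(invalid="ignore", divide="ignore"):
        a_sigma = (sigma - mirrored) / (sigma + mirrored)
    return a_sigma, x_edges, z_edges
```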
During the initial rapid bar growth phase (t ∼ 1 Gyr), the A_Σ(x, z) values remain close to zero, indicating no breaking of vertical symmetry in that evolutionary phase of the model. Around t ∼ 2.7 Gyr, the model undergoes a strong buckling event (see the peak in Fig. 7). As a result, the distribution of A_Σ(x, z) shows large positive and negative values at t = 2.75 Gyr, thereby demonstrating that the vertical symmetry is broken around the mid-plane. At a later time (t = 5 Gyr), the vertical symmetry is restored in the inner region (as indicated by A_Σ(x, z) ∼ 0). However, in the outer region (close to the ansae or handle of the bar), A_Σ(x, z) still displays non-zero values, thereby indicating that the vertical asymmetry still persists in the outer region. Around t ∼ 6.85 Gyr, the model undergoes a second buckling event (see the second peak in A_1z, albeit with smaller values, in Fig. 7). As a result, at a later time (t = 6.95 Gyr), the model still shows non-zero values of A_Σ(x, z) in the outer region. By the end of the simulation run (t = 9 Gyr), the values of A_Σ(x, z) become close to zero throughout the entire region, thereby demonstrating that the vertical symmetry is finally restored. As Fig. 8 clearly reveals, the thin-disc stars show a larger degree of vertical asymmetry (or equivalently, larger values of A_Σ(x, z)) when compared with the thick-disc stars. We further checked the distribution of A_Σ(x, z) in the (x, z)-plane at different times for the other thin+thick models that host a prominent b/p structure. We found an overall similar trend of spatio-temporal evolution of A_Σ(x, z) as seen for the model rthickE0.5.

Correlation between bar and b/p properties

Past theoretical studies of the b/p formation and its subsequent growth have revealed a strong correlation between the (maximum) bar strength and the resulting (maximum) b/p strength (e.g. see Martinez-Valpuesta & Athanassoula 2008). We tested this correlation for the suite of thin+thick models considered here. The maximum bar strengths for the models are obtained from Ghosh et al. (2023), where we studied, in detail, the bar properties for these models. The maximum b/p strengths for the models are obtained from Eq. (1). We mention that all stellar particles (thin+thick) are used to calculate the maximum bar and b/p strengths for all models. The resulting distribution of the thin+thick models in the maximum bar strength-maximum b/p strength plane is shown in Fig. 9. As is seen clearly from Fig. 9, a stronger bar in a model produces a stronger b/p structure. This correlation holds true for all three geometric configurations considered here. Therefore, we also find a strong correlation between the (maximum) bar strength and the resulting (maximum) b/p strength in our thin+thick models, in agreement with past findings.
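For completeness, the sketch below illustrates how a bar amplitude based on the m = 2 Fourier moment, and a bar length of the kind defined in the next paragraph (the radius where A2/A0 drops to 70% of its peak), could be measured from a face-on snapshot. The maximum bar strengths used here are taken from Ghosh et al. (2023), so this is only a generic outline with assumed binning, not the pipeline behind those values.

```python
import numpy as np

def bar_amplitude_profile(mass, x, y, dr=0.5, r_max=15.0):
    """Radial profile of the m = 2 Fourier amplitude A2/A0 in the disc plane."""
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    edges = np.arange(0.0, r_max + dr, dr)
    a2a0 = np.zeros(edges.size - 1)
    for i in range(edges.size - 1):
        sel = (r >= edges[i]) & (r < edges[i + 1])
        if sel.any():
            a2a0[i] = np.abs(np.sum(mass[sel] * np.exp(2j * theta[sel]))) / mass[sel].sum()
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, a2a0

def bar_length(centres, a2a0, frac=0.7):
    """Radius beyond the peak where A2/A0 first falls to `frac` of its maximum."""
    ipk = int(np.argmax(a2a0))
    for i in range(ipk, a2a0.size):
        if a2a0[i] <= frac * a2a0[ipk]:
            return centres[i]
    return centres[-1]
```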
In addition, we investigated the correlation (if any) between the lengths of the bar and the b/p in our thin+thick models. In Fig. 10 (top panel), we show the temporal evolution of the ratio of the b/p length (R_b/p) and the bar length (R_bar) for the model rthickE0.5. The bar length, R_bar, is defined as the radial location where the amplitude of the m = 2 Fourier moment (A_2/A_0) drops to 70% of its peak value (for a detailed discussion, see the recent work by Ghosh & Di Matteo 2024). As is clearly seen, the ratio increases shortly after the formation of the b/p, and it almost saturates by the end of the simulation run (9 Gyr). Furthermore, we calculated the b/p length (R_b/p) and the bar length (R_bar) at the end of the simulation (9 Gyr) for all thin+thick models considered here. This is shown in Fig. 10 (bottom panel). For the rthickE and rthickG models, the ratio increases progressively with increasing f_thick values. However, for the rthickS models, the ratio increases monotonically until f_thick = 0.7, and then it starts to decrease.

Lastly, we investigated the time delay between the bar formation and the onset of the buckling instability for all thin+thick models considered here, and we studied if and how it varies with the thick-disc mass fraction (f_thick). In Appendix D, we show how the bar-formation epoch, τ_bar, varies with different f_thick values and with different disc configurations. Similarly, in Sect. 3.2, we defined the epoch of the buckling instability as the time when the peak in A_buck occurs. The resulting variation of the time delay, τ_buck − τ_bar, with f_thick is shown in Fig. 11. For a fixed geometric configuration (rthickE, rthickS, or rthickG), the time interval between the bar formation and the onset of the buckling instability becomes progressively shorter with increasing f_thick values. This happens due to the fact that with increasing f_thick, the bar forms progressively at a later stage (see Appendix D). In addition, for a fixed f_thick value, the rthickS models almost always show a shorter time delay (τ_buck − τ_bar) when compared to the other two geometric configurations considered here (see Fig. 11).

Kinematic signatures of buckling and its connection with b/p formation

Understanding the temporal evolution of the diagonal and off-diagonal components of the stellar velocity tensor was shown to be instrumental for investigating the formation and growth of the b/p structure (see Saha et al. 2013, and references therein). Here, we systematically studied some key diagonal and off-diagonal components of the stellar velocity dispersion tensor and their associated temporal evolution for all the thin+thick models considered.

At a given radius R, the stellar velocity dispersion tensor is defined as (Binney & Tremaine 2008)

σ²_ij = ⟨v_i v_j⟩ − ⟨v_i⟩⟨v_j⟩,   (5)

where the quantities within the brackets denote averages taken over a group of stars, and i, j = R, φ, z. The corresponding stress tensor of the stellar fluid is built from the velocity dispersion tensor weighted by ρ(R), the local volume density of stars at the radial location R.
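Numerically, the dispersion tensor of Eq. (5) is simply the covariance of the velocity components over a chosen group of stars (for instance, all particles in one radial bin). The sketch below shows one way to compute it; the cylindrical ordering and array names are assumptions for illustration.

```python
import numpy as np

def dispersion_tensor(vR, vphi, vz):
    """sigma_ij^2 = <v_i v_j> - <v_i><v_j> for one group of stars,
    with i, j running over (R, phi, z).  Returns a 3x3 matrix."""
    v = np.vstack([vR, vphi, vz])              # shape (3, N)
    dv = v - v.mean(axis=1, keepdims=True)     # mean-subtracted velocities
    return dv @ dv.T / v.shape[1]
```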
τ_n and τ_s denote the normal stress (acting along the normal to a small differential imaginary surface dS) and the shear stress (acting perpendicular to the normal to dS), respectively (for further details, see Binney & Tremaine 2008; Saha et al. 2013). The components of τ_n are determined by the diagonal elements of the velocity dispersion tensor, while the shear stress is determined by the off-diagonal elements of the velocity dispersion tensor (for details, see Binney & Tremaine 2008). Furthermore, the diagonal elements of the velocity dispersion tensor determine the axial ratios of the stellar velocity ellipsoid with respect to the galactocentric axes (ê_R, ê_φ, ê_z), whereas the orientations of the velocity ellipsoid are determined by the off-diagonal elements of the velocity dispersion tensor (for details, see Binney & Tremaine 2008; Saha et al. 2013). One such quantity of interest is the meridional tilt angle, which is defined as

Θ_tilt = (1/2) arctan[ 2σ²_Rz / (σ²_RR − σ²_zz) ].   (7)

The tilt angle, Θ_tilt, denotes the orientation or the deformation of the stellar velocity ellipsoid in the meridional (R−z) plane.

In the past, it was shown for an N-body model that when the bar grows, it causes much radial heating (or equivalently, an increase in the radial velocity dispersion, σ_RR) without causing a similar degree of heating in the vertical direction (or equivalently, no appreciable increase in the vertical velocity dispersion, σ_zz). Consequently, the model goes through a vertical buckling instability causing the thickening of the inner part, which in turn also increases σ_zz (e.g. Debattista et al. 2004; Martinez-Valpuesta et al. 2006; Fragkoudi et al. 2017b; Di Matteo et al. 2019). Therefore, it is of great interest to investigate the temporal evolution of the vertical-to-radial velocity dispersion (σ_zz/σ_RR) in order to fully grasp the formation and growth of the b/p structure in our thin+thick models. In addition, using N-body simulations, Saha et al. (2013) demonstrated that during the onset of the buckling phase, the model shows a characteristic increase in the meridional tilt angle, Θ_tilt, which in turn could be used as an excellent diagnostic to identify an ongoing buckling phase in real observed galaxies. In this work, we studied the temporal evolution of these dynamical quantities in detail for all the thin+thick models considered.

In Appendix E, we show the radial profiles of the vertical-to-radial velocity dispersion as a function of time for the model rthickE0.5. In order to quantify the temporal evolution of σ_zz/σ_RR, we computed it using Eq. (5) within the b/p extent. The corresponding temporal evolution of the vertical-to-radial velocity dispersion (σ_zz/σ_RR), calculated separately for thin, thick, and thin+thick particles, is shown in Fig. 12 (left panel) for the model rthickE0.5. As seen from Fig. 12, the temporal evolution of σ_zz/σ_RR displays a characteristic U shape (of different amplitudes) during the course of the evolution, arising from the radial heating of the bar (increase in σ_RR) and the subsequent vertical thickening (increase in σ_zz) due to the buckling instability. In addition, the temporal profiles of σ_zz/σ_RR for the thin disc show a larger and more prominent U-shaped feature when compared with that for the thick-disc stars. This is consistent with the notion that the thin-disc b/p is, in general, stronger than the thick-disc b/p. The epoch corresponding to the maximum increase in the quantity σ_zz/σ_RR coincides with the peak in A_buck, denoting a strong vertical buckling instability (see the location of the vertical magenta line in Fig. 12). This further demonstrates that the b/p structures in our thin+thick models are indeed formed through the vertical buckling instability.
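Both diagnostics follow directly from the dispersion tensor sketched earlier; the short function below returns the vertical-to-radial dispersion ratio and the meridional tilt angle, assuming the standard tilt-angle form quoted in Eq. (7). It is an illustrative helper, not the authors' analysis code.

```python
import numpy as np

def buckling_diagnostics(sig2):
    """Given a 3x3 matrix of sigma_ij^2 in (R, phi, z) order, return the
    vertical-to-radial dispersion ratio sigma_zz/sigma_RR and the meridional
    tilt angle (degrees) of the velocity ellipsoid in the (R, z)-plane."""
    ratio = np.sqrt(sig2[2, 2] / sig2[0, 0])
    tilt = 0.5 * np.degrees(np.arctan2(2.0 * sig2[0, 2], sig2[0, 0] - sig2[2, 2]))
    return ratio, tilt
```

Tracking these two numbers over successive snapshots, for the particles within R_b/p, would reproduce curves of the kind shown in Fig. 12.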
In Appendix E, we show the temporal evolution of σ_zz/σ_RR for all thin+thick models considered here. We checked that the trends mentioned above hold true for all the models that formed a b/p structure during their evolutionary trajectory.

Lastly, we investigated the temporal evolution of the meridional tilt angle, Θ_tilt, for the model rthickE0.5. Figure 12 (right panel) shows the corresponding temporal evolution of the tilt angle, Θ_tilt, calculated separately for thin, thick, and thin+thick particles using Eq. (7) for the model rthickE0.5. The temporal evolution of Θ_tilt shows a characteristic increase during the course of evolution. The epoch of the maximum value of the tilt angle coincides with the epoch of the strong buckling instability (see the location of the vertical magenta line in Fig. 12). This is in agreement with the findings of Saha et al. (2013) and is consistent with a "b/p formed through buckling" scenario. Furthermore, the temporal profile of Θ_tilt for the thin disc shows a larger and more prominent peak when compared with that for the thick-disc stars. This is expected, as the thin-disc b/p is stronger than the thick-disc b/p. We checked that these trends hold true for all the models that formed a b/p structure during their evolutionary trajectory. For the sake of brevity, they are not shown here.

X-shape of the b/p and relative contribution of the thin disc

A visual inspection of Fig. 1 already revealed that, for a fixed geometric configuration, the appearance of the b/p structure changes from more X-shaped to more boxy as the thick-disc mass fraction steadily increases. This trend is more prominent for the rthickE and rthickG models (see middle and right panels of Fig. 1). Here, we investigate this in further detail. In addition, we also investigate how the thin- and thick-disc stars contribute to the formation of the b/p structure.

To carry out the detailed analysis, we first chose two models, namely rthickE0.1 and rthickE0.9. In the rthickE0.1 model, the thin-disc stars dominate the underlying mass distribution, whereas in the rthickE0.9 model, the thick-disc stars dominate the mass distribution, thereby providing an ideal scenario for the aforementioned investigation. In Fig. 13 (top left panels), we show the density contours of the edge-on stellar (thin+thick) distribution (with the bar placed along the x-axis) in the central region encompassing the b/p structure. This clearly brings out the stark differences in the morphology of the density contours. In the rthickE0.1 model, the contours have a more prominent X-shaped appearance, whereas in the rthickE0.9 model, the contours have a more prominent box-shaped appearance. Figure 13 (bottom left panels) shows the density profiles along the bar major axis, calculated at different heights (from the mid-plane) for these two thin+thick models. At larger heights, a bimodality in the density profiles along the bar major axis reconfirms the strong X-shaped feature of the rthickE0.1 model. For the rthickE0.9 model, no such bimodality in the density profiles along the bar major axis is seen, thereby confirming that the b/p structure is more box-shaped in the rthickE0.9 model. Furthermore, we calculated the vertical stellar density distribution at a radial location around the peak of the b/p structure (see the vertical blue lines in the top left panels of Fig. 13) for these two models.
A careful inspection reveals that the vertical stellar density distribution for the rthickE0.1 model is more centrally peaked (with well-defined tails), whereas the vertical stellar density distribution for the rthickE0.9 model is broader, especially at larger heights (see the right panel of Fig. 13).

To further investigate how the thin- and thick-disc stars contribute to the formation of the b/p structure, we calculated the thin-disc mass fraction, f_thin (= 1 − f_thick), at the end of the simulation run (t = 9 Gyr). Figure 14 shows the corresponding distribution of the thin-disc mass fraction in the edge-on projection (x−z plane) for all thin+thick disc models. In each case, the bar is placed in the side-on configuration (along the x-axis). As is seen clearly from Fig. 14, the thin-disc stars dominate in the central regions close to the mid-plane (z = 0) and are responsible for giving rise to a strong X-shape of the b/p structure, in agreement with the findings of past studies (see e.g. Di Matteo 2016; Athanassoula et al. 2017; Debattista et al. 2017; Fragkoudi et al. 2017b, 2020). As one moves farther away from the mid-plane, the thick-disc stars start to dominate progressively. Furthermore, the appearance of the b/p structure changes from more X-shaped to more box-shaped as the thick-disc mass fraction steadily increases. These trends remain generic for all three geometric configurations (different thin-to-thick disc scale length ratios) considered here.

Fig. 13. Variation of the b/p morphology with varying thick-disc mass fraction (f_thick). Top left panels: density contours of the edge-on stellar (thin+thick) distribution (with the bar placed along the x-axis) in the central region at t = 9 Gyr for the models rthickE0.1 and rthickE0.9. For the rthickE0.1 model, the contours display a more prominent X-shaped feature, whereas for the rthickE0.9 model, the contours display a more prominent box-shaped feature. Bottom left panels: density profiles (normalised by the peak density value, and in log-scale) along the bar major axis, calculated at different heights (from |z| = 0 to 6 kpc, with a step-size of 1 kpc) from the mid-plane, for the models rthickE0.1 and rthickE0.9. The density profiles have been artificially shifted along the y-axis to show the trends as height changes, and they do not overlap. Right panel: vertical stellar density distribution (normalised by the peak density value, and in log-scale), at a radial location around the peak of the b/p structure (marked by blue vertical lines in the top left panels) for the rthickE0.1 and rthickE0.9 models.

Fig. 14. Fraction of thin-disc stars, f_thin (= 1 − f_thick), in the edge-on projection (with the bar placed along the x-axis) compared to the total (thin+thick) disc, at the end of the simulation run (t = 9 Gyr) for all thin+thick disc models with varying f_thick values. Left panels show the distribution for the rthickS models, whereas middle panels and right panels show the distribution for the rthickE and rthickG models, respectively. The values of f_thick vary from 0.1 to 0.9 (top to bottom), as indicated in the left-most panel of each row. For each model, the fraction of thin-disc stars decreases with height from the mid-plane (z = 0). In addition, the appearance of the b/p structure changes from more X-shaped to more boxy-shaped as the thick-disc mass fraction steadily increases.

Discussion

In what follows, we discuss some of the implications and limitations of this work. First, our findings demonstrate that the b/p structure can form even in the presence of a massive thick-disc component. This provides a natural explanation for the presence of b/ps in high-redshift (z = 1) disc galaxies under the hypothesis that these high-z discs have a significant fraction of their mass in a thick disc (see e.g. Hamilton-Campos et al. 2023). A recent work by Kruk et al. (2019) estimated that at z ∼ 1, about 10% of barred galaxies would harbour a b/p. Therefore, our results are in agreement with the recent observational trends. In addition, bars forming in the presence of a massive thick disc (as shown in a recent work of Ghosh et al. 2023) and the present work showing that b/ps also form in the presence of a massive thick disc suggest that bar and b/p bulge formation may have appeared at earlier redshifts than what has been considered so far, as well as in galaxies dominated by a thick-disc component. The findings that the b/p morphology and length depend on the thick-disc fraction (the higher the thick-to-thin disc mass ratio, the more boxy the corresponding b/p and the smaller its extent) may be taken as a prediction that can be tested in current and future observations (with the JWST, for example).

Secondly, the occurrence of b/ps in disc galaxies is observationally found to be strongly dependent on the stellar mass of the galaxy in the local Universe (Yoshino & Yamauchi 2015; Erwin & Debattista 2017; Marchuk et al. 2022) as well as at higher redshifts (Kruk et al. 2019). This implies that a galaxy's stellar mass is likely to play an important role in forming b/ps via vertical instabilities. In our suite of thin+thick models, although we systematically varied the thick-disc mass fraction, the total stellar mass remained fixed (∼1×10^11 M⊙). Investigating the role of the stellar mass in b/p formation via the vertical buckling instability would be quite interesting; however, it is beyond the scope of the present work.

Furthermore, in our thin+thick models, the stars are separated into two well-defined and distinct populations, namely thin- and thick-disc stars. While this might be well-suited for external galaxies, this scheme is a simplification for the Milky Way. Bovy et al. (2012) showed that the disc properties vary continuously with the scale height in the Milky Way. Nevertheless, our adopted scheme of discretising stars with a varying fraction of thick-disc stars provides valuable insight into the trends, as it has been shown for the MW that a two-component disc can already capture the main trends found in more complex, multi-component discs (e.g. see Di Matteo 2016; Debattista et al. 2017; Fragkoudi et al. 2017b, 2018a,b, 2020).
Finally, if these b/p structures also formed at high redshift, two additional questions arise. These concern the role of the interstellar gas, which is particularly critical in high-z discs that are gas-rich, and the role of mergers and accretions, which high-z galaxies may have experienced at high rates, in maintaining/perturbing/destroying bars and b/p bulges. The role of the interstellar gas in the context of the generation/destruction of disc instabilities, such as bars (Bournaud et al. 2005) and spiral arms (Sellwood & Carlberg 1984; Ghosh & Jog 2015, 2016, 2022), has been investigated in past literature. In addition, the b/p bulges play a key role in the evolution of disc galaxies by regulating the bar-driven gas inflow (e.g. see Fragkoudi et al. 2015, 2016). Furthermore, bars can be weakened substantially (or even destroyed in some cases) as a result of minor mergers (Ghosh et al. 2021). Therefore, it would be worth investigating the b/p formation and evolution scenario in the presence of a thick disc and the interstellar gas, and how likely they are to be affected by merger events.

Summary

In summary, we investigated the dynamical role of a geometrically thick disc in b/p formation and the subsequent evolution scenario. We made use of a suite of N-body models of thin+thick discs and systematically varied the mass fraction of the thick disc and the thin-to-thick disc scale length ratio. Our main findings are listed below.

- B/ps form in almost all thin+thick disc models with varying thick-disc mass fractions and for all three geometric configurations with different thin-to-thick disc scale length ratios. The thick-disc b/p always remains weaker than the thin-disc b/p, and this remains valid for all three geometric configurations considered here.

- The final b/p strength shows an overall decreasing trend with increasing thick-disc mass fraction (f_thick). In addition, the b/ps in simulated galaxies with shorter thick-disc scale lengths form at earlier times and show a rapid initial growth phase when compared to the other two geometric configurations. Furthermore, we found a strong (positive) correlation between the maximum bar and b/p strengths in our thin+thick models.

- For a fixed geometric configuration, the time interval between the bar formation and the onset of the vertical buckling instability becomes progressively shorter with an increasing thick-disc mass fraction. In addition, for a fixed thick-disc mass fraction, models with a shorter thick-disc scale length display a shorter time delay between the bar formation and the onset of a buckling event when compared to the other two geometric configurations.

- The final b/p length shows an overall increasing trend with increasing thick-disc mass fraction (f_thick), and this remains valid for all three geometric configurations considered here.
In addition, for a fixed f_thick value, the models with larger thick-disc scale lengths form a larger b/p structure when compared to the other two geometric configurations. Furthermore, the weaker b/ps are more extended structures (i.e. they have larger R_b/p).

- The b/p structure changes appearance from being more X-shaped to being more box-shaped as the f_thick value increases monotonically. This trend holds true for all three geometric configurations. Furthermore, the thin-disc stars are predominantly responsible for giving rise to a strong X-shape of the b/p structure.

- Our thin+thick models go through a vertical buckling instability phase to form the b/p structure. The thin-disc stars display a higher degree of vertical asymmetry and buckling when compared to the thick-disc stars. Furthermore, the vertical asymmetry persists long after the buckling phase is over; the vertical symmetry in the inner region is restored relatively quickly, while the vertical symmetry in the outer region (close to the ansae or handle of the bar) is restored long after the buckling event is over.

- The thin+thick models demonstrate characteristic signatures in the temporal evolution of different diagonal (σ_zz/σ_RR ratio) and off-diagonal (meridional tilt angle, Θ_tilt) components of the stellar velocity dispersion tensor, as one would expect if the b/p structure is formed via the vertical buckling instability. These kinematic signatures are more prominent when computed using only the thin-disc stars as compared to using only the thick-disc stars.

To conclude, even in the presence of a massive (kinematically hot) thick-disc component, the models are able to harbour a prominent b/p structure formed via the vertical buckling instability. This clearly implies that b/ps can form in thick discs at high redshifts and is in agreement with the observational evidence of the presence of b/ps at high redshifts (Kruk et al. 2019). Our results presented here also predict that at higher redshifts, the b/p will have a more boxy appearance than an X-shaped one, which remains to be tested in future observations at higher redshifts (z = 1 and beyond).

Fig. 1. Edge-on density distribution of all disc particles (thin+thick) at the end of the simulation run (t = 9 Gyr) for all thin+thick disc models with varying f_thick values. Black dotted lines denote the contours of constant density. Left panels show the density distribution for the rthickS models, whereas middle panels and right panels show the density distribution for the rthickE and rthickG models, respectively. The thick disc fraction (f_thick) varies from 0.1 to 1 (top to bottom), as indicated in the left-most panel of each row. The bar is placed along the x-axis (side-on configuration) for each model. The vertical black dashed lines denote the extent of the b/p structure in each case (for details, see the text in Sect. 3.1.2).

Fig. 2. Radial profiles of the median of the absolute value of the distribution of particles in the vertical (z) direction, |z| (normalised by the initial value, |z|_0), for thin-disc (left panels), thick-disc (middle panels), and total (thin+thick) disc stars (right panels), as a function of time (shown in the colour bar) for the model rthickE0.5. Here, |z|_0i denotes the initial value (used for the normalisation; for details, see text), where i = thin, thick, thin+thick, respectively. The thin disc b/p remains much stronger as compared to the thick disc b/p.

Fig. 3. Temporal evolution of the b/p strength, S_b/p (Eq. (1)), for thin-disc (upper panels), thick-disc (middle panels), and total (thin+thick) disc stars (lower panels) for all thin+thick disc models with varying f_thick values (see the colour bar). Left panels show the b/p strength evolution for the rthickS models, whereas middle panels and right panels show the b/p strength evolution for the rthickE and rthickG models, respectively. The thick disc fraction (f_thick) varies from 0.1 to 0.9 (with a step-size of 0.2), as indicated in the colour bar. The blue solid lines in the middle and the bottom rows denote the three thick-disc-only models (f_thick = 1), whereas the red solid line in the top middle panel denotes the thin-disc-only model (f_thick = 0; for details, see text).
Fig. 5. Strength of the b/p, S_b/p (top panel), and extent of the b/p, R_b/p (bottom panel), calculated using thin+thick disc particles at t = 9 Gyr, shown as a function of the thick-to-total mass fraction (f_thick) and for different geometric configurations. With increasing f_thick value, the b/ps progressively become weaker and larger in extent, and this trend remains true for all three geometric configurations considered here. The errors are calculated by constructing a total of 5000 realisations by resampling the entire population via a bootstrapping technique (for details, see text).

Fig. 6. Temporal variation of the b/p extent, R_b/p, calculated using thin-, thick-disc, and thin+thick disc particles for the model rthickE0.5. The b/p extent increases by a factor of ∼2 over the total simulation runtime. At t = 9 Gyr, the thick-disc b/p remains a bit larger than the thin disc b/p. The errors on R_b/p are estimated by constructing a total of 5000 realisations by resampling the entire population via a bootstrapping technique (for details, see text).

Fig. 7. Left panel: temporal evolution of A_1z (Eq. (2)), denoting the vertical asymmetry in the bar region, calculated using thin-disc (blue lines), thick-disc (red lines), and thin+thick disc (black lines) particles for the model rthickE0.5. Right panel: temporal evolution of the buckling amplitude, A_buck (Eq. (3)), calculated using thin-disc (blue lines), thick-disc (red lines), and thin+thick disc (black lines) particles for the model rthickE0.5. The vertical magenta dotted line denotes the onset of buckling instability (τ_buck), calculated from the peak of the A_buck profile (for details, see text). Furthermore, for reference, we indicate the onset of bar formation (τ_bar, vertical maroon dotted line), calculated from the amplitude of the m = 2 Fourier moment.

Fig. 8. Distribution of the mid-plane asymmetry, A_Σ(x, z), in the edge-on projection (x−z plane), computed separately for the thin (left columns) and thick (middle columns) discs, as well as thin+thick (right columns) disc particles, using Eq. (4) at six different times (before and after the buckling event) for the model rthickE0.5. The bar is placed along the x-axis (side-on configuration) for each time-step. Black lines denote contours of constant density. A mid-plane asymmetry persists, even long after the model has gone through the buckling phase.

Fig. 9. Bar-b/p strength correlation: distribution of all thin+thick models in the maximum bar strength - maximum b/p strength plane. The maximum bar strengths are taken from Ghosh et al. (2023), whereas the maximum b/p strengths are determined from Eq. (1). The colour bar denotes the thick-disc mass fraction (f_thick). Different symbols represent models with different geometric configurations (see the legend). The maximum strength of the bar correlates overall with the maximum b/p strength, and this remains true for all three geometric configurations.

Fig. 10. Variation of the b/p extent with varying thick-disc mass fraction. Top panel: temporal evolution of the ratio of the b/p length (R_b/p) and the bar length (R_bar) for the model rthickE0.5. Bottom panel: variation of the ratio of the b/p and bar lengths, calculated at the end of the simulation run (t = 9 Gyr), with the thick-disc mass fraction (f_thick).

Fig. 11. Variation of the time delay between the bar formation and the onset of buckling instability, τ_buck − τ_bar, with the thick-disc mass fraction (f_thick), for all thin+thick models considered here. For a fixed geometric configuration, τ_buck − τ_bar becomes progressively shorter with increasing f_thick values (for details, see text).

Fig. C.1. B/p extent, at the end of the simulation run (t = 9 Gyr), computed for the thin and thick discs, as well as for the thin+thick case, for all the thin+thick models considered here. Left panels show the distribution for the rthickS models, whereas middle panels and right panels show the distribution for the rthickE and rthickG models, respectively. Thin disc b/ps always remain shorter than the thick disc b/ps. The errors on R_b/p are estimated by constructing a total of 5000 realisations by resampling the entire population via a bootstrapping technique (for details, see Sect. 3.1.2).

Table 1. Key structural parameters for the equilibrium models.
Fig. 12. Quantifying the kinematic signature of the vertical buckling instability and the subsequent b/p formation. Left panel: temporal evolution of the vertical-to-radial velocity dispersion (σ_zz/σ_RR), calculated within the b/p extent (R_b/p), (σ_zz/σ_RR)²(t; R ≤ R_b/p), for thin (in blue), thick (in red), and total (thin+thick) disc (in black) particles, for the model rthickE0.5. Right panel: temporal evolution of the meridional tilt angle (Θ_tilt), calculated within R_b/p using Eq. (7), for thin (in blue), thick (in red), and total (thin+thick) disc (in black) particles, for the model rthickE0.5. The vertical maroon dotted line denotes the onset of bar formation (τ_bar), while the vertical magenta dotted line denotes the onset of buckling instability (τ_buck; for details, see text).
GPS suggests low physical activity in urban Hispanic school children: a proof of concept study Background Urban environments can increase risk for development of obesity, insulin resistance (IR), and type 2 diabetes mellitus (T2DM) by limiting physical activity. This study examined, in a cohort of urban Hispanic youth, the relationship between daily physical activity (PA) measured by GPS, insulin resistance and cardiovascular fitness. Methods Hispanic middle school children (n = 141) were assessed for body mass index (BMI), IR (homeostasis model [HOMA-IR]), cardiovascular fitness (progressive aerobic cardiovascular endurance run [PACER]). PA was measured (GPS-PA) and energy expenditure estimated (GPS-EE) utilizing a global positioning mapping device worn for up to 7 days. Results Students (mean age 12.7 ± 1.2 years, 52% female) spent 98% of waking time in sedentary activities, 1.7% in moderate intensity PA, and 0.3% in vigorous intensity. GPS analysis revealed extremely low amounts of physical movement during waking hours. The degree of low PA confounded correlation analysis with PACER or HOMA-IR. Conclusions Levels of moderate and vigorous intensity PA, measured by GPS, were extremely low in these urban Hispanic youth, possibly contributing to high rates of obesity and IR. Physical movement patterns suggest barriers to PA in play options near home, transportation to school, and in school recess time. GPS technology can objectively and accurately evaluate initiatives designed to reduce obesity and its morbidities by increasing PA. Background Built environments can impede, or encourage active lifestyles for children [1,2]. Social environments influence how children choose, or are permitted, to interact with the environment. Reduced daily physical movement is one factor contributing to overweight/obesity, now the most common medical condition of childhood in the US. Certain ethnic minority populations, including Hispanic children, are at greater risk for obesity and its related morbidities [3][4][5]. Increasing numbers of children also fail to meet minimum recommendations for physical activity [6]. Both poor physical fitness and obesity are risk factors for T2DM and cardiovascular disease [7][8][9][10][11][12]. In fact, cardiovascular fitness (CVF) is a stronger predictor of mortality than obesity [13,14]. Increased physical activity and fitness in children is associated with reduced risk for diabetes and other improved health outcomes. Thus, identifying and altering modifiable barriers to physical activity and lifestyle behaviors during childhood is paramount [15,16,17]. Successful public health interventions often utilize the Social Ecological Model (SEM) to address interacting environments at the individual, home, school, community, and society levels [18,19]. GPS offers the technology to document where, when, and how much activity is taking place. One particularly attractive target for assessment and potential improvement of physical activity for children and adolescents is the school-day routine: travel to/from, recess movement, and after-school activities [16,17]. Since school attendance is an experience shared by the vast majority of children, the school environment and routine can potentially address low levels of physical activity (PA) and higher levels of sedentary behaviors, both of which are associated with IR. 
Most studies of childhood PA have relied on recall or physical activity logs, and very little objectively measured data is available regarding physical activity patterns of minority youth, including Hispanic children, who have a high rate of obesity, T2DM, and low fitness [20]. In this study, we utilize GPS to measure daily physical activity outside of classroom time in urban Hispanic youth and its relationship to BMI, cardiovascular fitness, and insulin resistance. Methods Children (n = 141) from the Bruce-Guadalupe middle school (grades 5-8) in Milwaukee, Wisconsin, were invited to participate. The Human Subjects Committee at the University of Wisconsin approved all procedures, and informed written consent was obtained from each student and parent before study enrollment. All consenting was done with both Spanish- and English-speaking investigators to allow families the best opportunity to ask questions. All students were Hispanic (78% Mexican and 21% Puerto Rican) children. Students had a mean age of 12.7 (±1.2) years, and 52% were female. Students underwent assessment of cardiovascular fitness measured by the progressive aerobic cardiovascular endurance run (PACER) and calculation of body mass index (BMI), and a subset (n = 55) underwent fasting blood work for insulin and glucose (HOMA-IR) after a 12-hour fast. Children wore Global Positioning System (GPS) receivers (QStarz 1300S, carried in a pocket or backpack or worn on a lanyard) to track and record daily physical and sedentary activity. GPS receivers with unique unit identifiers were assigned to students, who were asked to keep the GPS unit on their person and wear it at all times outside of school classroom time, with the exception of sleeping and showering, for a 7-day period. The intent was to capture activity before or after school, or on weekends, rather than "in classroom" time. Each receiver was cleared and set to record time and position at one-second intervals using WAAS-enabled satellite signal detection. Theoretical position resolution at this standard was less than three meters. School staff received training, software, and support for performing Fitnessgram® testing, including PACER and BMI determination, at the school. Staff were trained to download and store data and to recharge and clear GPS units. GPS download and viewer software was installed on the school's information technology system. PA was assessed using a geospatial model to equate GPS-recorded movement with energy expenditure. Using ArcGIS 9.X software, a community-level "map" of movement by type (e.g., walking, running, motorized transport) was created to predict children's energy expenditure (EE; in kilojoules) as they moved through the community. For each child, energy expenditure was predicted from position (e.g., GPS) and movement, producing highly accurate records of spatio-temporally placed EE. GPS recordings were considered evaluable for analysis if the GPS unit was active for at least 90 minutes on a given day. In order to determine the daily average over the 7-day recording period for each participant, the mean values for each GPS measure (energy expenditure, distribution, minutes and percent time spent on activity) across the recording period were calculated and analyzed. Levels of EE were defined by velocity as recorded by the GPS units. The units automatically stopped recording when motionless for extended periods; thus only time spent in activity was included in "GPS active time."
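The geospatial model described above starts from one-second GPS fixes and derives movement speeds before mapping them to activity type and energy expenditure. The sketch below illustrates only that first step (great-circle distance between consecutive fixes divided by the time step); the fix coordinates are invented for illustration, and the haversine helper is an assumption about how such distances could be computed, not the study's ArcGIS workflow.

# Sketch of deriving point-to-point speed from one-second GPS fixes, the raw
# quantity that a geospatial model can map onto activity type and energy
# expenditure. Coordinates below are invented for illustration.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (timestamp_s, lat, lon) fixes recorded at one-second intervals
fixes = [(0, 43.0166, -87.9330), (1, 43.0166, -87.9330), (2, 43.01662, -87.93298)]

speeds = []
for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
    dt = t1 - t0
    speeds.append(haversine_m(la0, lo0, la1, lo1) / dt if dt > 0 else 0.0)
print([round(v, 2) for v in speeds])  # m/s for each one-second segment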
GPS data were processed into motion tracks in ArcGIS 9.x, using a standard spline function over three-second intervals to smooth data spikes. Mode of travel was distinguished by acceleration signature coupled with peak and/or sustained velocity. Time segments were manually interpreted from tracks and entered into a database for subsequent analysis. Sedentary time was defined as either a lack of significant motion, or motion at rates and in track patterns indicating travel in a motorized vehicle. Lack of motion was defined as less than 0.45 m/s (about 1 mile per hour). Moderate intensity activity was between 0.45 and 1.35 m/s, and vigorous activity was greater than 1.35 m/s in track patterns that did not correspond to vehicular travel. These distinctions are based on accepted definitions used in NHANES datasets. Additionally, these GPS units included accelerometer triggers that helped us distinguish between the unit being at rest and unused, as opposed to being worn but with no positional change. Cardiovascular fitness (CVF) was assessed using the PACER test, in which subjects run back and forth along a 20-meter shuttle run, and each minute the pace required to run the 20 meters quickens. The pace is set from a prerecorded audio file or CD. The initial running speed is 8.5 km/hour, and the speed increases by 0.5 km/hour every minute. The test is finished when the subject fails to complete the 20-meter run in the allotted time twice. The PACER is expressed as the number of laps completed [21]. PACER Z-scores were calculated based upon Wisconsin references [22]. A single teacher performed all PACER testing after undergoing certified training in PACER testing procedures. A 5 ml blood sample was obtained for insulin and glucose levels from a single specimen drawn after a 12-hour fast. Glucose was determined by the hexokinase method and insulin by chemiluminescent immunoassay (University of Wisconsin Hospitals and Clinics Laboratory, Madison, WI). HOMA-IR was calculated from glucose and insulin values (fasting glucose (mg/dL) × fasting insulin (μU/ml)/405). All analyses were performed using SAS software version 9.2 (SAS Institute, NC). Demographic variables were summarized in terms of means ± standard deviations or as percentages. The distribution of GPS measures was highly skewed, so medians and ranges were used to summarize these measures. In order to account for the daily variability in the GPS measures across the 7-day recording period, the analyses of GPS measures were weighted using the inverse of the standard errors of the estimated mean GPS values as weights. Nonparametric, partial Spearman's rank correlation coefficients were calculated to evaluate the associations between each GPS measure and BMI, fitness and insulin resistance measures. Since there was a large variation in the time for which the GPS device was active, the correlation analysis was adjusted by the median daily time the GPS device was active. Fisher's z-transformation was used to construct 95% confidence intervals for the correlation coefficients. A two-sided 5% significance level was used for all statistical tests. Results There was a total of 519 days with evaluable (at least 90 minutes of active recording) GPS measurements across the 7-day period. The mean number of days with evaluable GPS measurements per student was 3.6 (±1.6). The GPS device was activated daily with a median duration of 310 minutes (range 97 - 1086).
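The two explicit formulas in the methods above, the velocity cut-offs used to label GPS segments and the HOMA-IR calculation, can be captured in a few lines; the sketch below does so with invented input values. The function names, and the use of a simple speed threshold as a stand-in for vehicle detection, are illustrative only.

# Sketch of the two explicit formulas in the methods: the speed cut-offs used
# to label GPS segments and the HOMA-IR calculation. Input values at the
# bottom are invented for illustration.
def classify_speed(speed_ms, motorized=False):
    """Label a GPS segment using the cut-offs given in the methods."""
    if motorized or speed_ms < 0.45:      # <0.45 m/s (~1 mph) or vehicle travel
        return "sedentary"
    if speed_ms <= 1.35:                  # 0.45-1.35 m/s
        return "moderate"
    return "vigorous"                     # >1.35 m/s, non-vehicular

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR = fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

speeds = [0.1, 0.9, 1.6, 0.3, 25.0]
# crude stand-in for vehicle detection: implausibly high sustained speed
labels = [classify_speed(v, motorized=(v > 10)) for v in speeds]
print(labels)                       # ['sedentary', 'moderate', 'vigorous', 'sedentary', 'sedentary']
print(round(homa_ir(90, 12), 2))    # 2.67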
Students spent a median time of 6 minutes (range 1 - 318 minutes) on moderate intensity PA, 0 minutes (range 0 - 65 minutes) on vigorous intensity PA and 294 minutes (range 94 - 780 minutes) on sedentary activities (Table 2 and Figure 1). The median daily time spent in a motorized state was 18 minutes (range 0 - 316 minutes). Seventy-seven percent of participants spent ≤10% of waking time on moderate intensity activities, 90% spent <5% of the time on vigorous intensity activities, and 77% spent no time in vigorous intensity activity (Figure 1). Forty-five percent of the participants spent a daily average of at least 20 minutes in a motorized vehicle. Discussion Daily physical activity, objectively measured by GPS, was extremely low in this school-based cohort of urban Hispanic youth. Most subjects engaged in almost no vigorous activity during the study period; only 22% spent >10% of waking time (~90 minutes) in moderate activity, and the median percentage of time spent in a sedentary state was 98%. Median energy expenditure attributed to moderate and vigorous PA was 18 and 0 calories/day, respectively. GPS analysis revealed extremely low amounts of physical movement. The degree of low PA confounded correlation analysis with PACER or HOMA-IR. High levels of time were spent in motorized transport, and low amounts of PA occurred in both the school and home environments. Since current national guidelines recommend a daily total of 60 minutes of moderate-to-vigorous activity (MVPA) for children [23], these data indicate that for many urban youth, there is a large gap between such recommendations and the realities of their daily life. To evaluate the impact of public health interventions designed to increase physical activity, reliable measurements of movement are essential. This study demonstrates that GPS can provide reliable and objective measures of duration, distance, and intensity of physical movement. In addition, specific GPS-generated time-activity patterns can suggest barriers to movement and targeted interventions to address them. Students generally viewed using GPS very positively, and use of GPS enabled analysis of their specific physical activity behaviors and transportation choices and routes. This study provides proof-of-concept data that GPS tracking can be an effective research tool to accurately document the duration, intensity, and specific location of physical activity [23]. For future intervention studies, linking GPS data to metabolic measurements in children provides an opportunity to objectively and accurately evaluate the impact of physical activity changes on metabolic health. Factors influencing children's lifestyle options and choices can generate differences in physical activity and energy expenditure which, over time, lead to health-altering decreases in movement and fitness, and increases in adiposity. While there is agreement that socioeconomic and built environment conditions can promote or inhibit physical activity, there are inconsistencies in the association of built environments and physical activity [24]. These may be due in part to challenges of accurately measuring and recording physical activity [25]. The data from this study suggest (but do not show) that children in urban settings confront physical, cultural, and attitudinal barriers that severely limit physical activity. The urban built environment near the school in this study (i.e.
a high crime area abutting a major highway) could markedly impede children's unstructured activity (play). The extremely low levels of PA observed in the study group were particularly concerning in light of ongoing efforts at the school, involving parents and older-generation family members, to promote healthy lifestyle changes. Instead, GPS data demonstrated that children were driven for almost all trips to and from school, that they moved little during the school day, and that they spent very little time moving in outdoor recreational facilities such as public parks after school. The large proportion of students demonstrating little to no physical activity raises questions about adherence to use of the GPS device. Most of the assessment occurred with the help of the school staff, and while parents were included during consenting procedures, it is possible that parents were not supportive of students wearing GPS devices. Thus, if children did not wear their GPS during times of physical activity, that activity could conceivably have gone uncaptured. This resulted in a large variability in the GPS measures and, consequently, in low statistical power when correlating the GPS measures with weight, fitness and insulin resistance measures. We set a minimum "threshold" of ninety minutes per day of GPS usage to be included in the activity analysis. While this threshold may be considered somewhat arbitrary, this was done to eliminate data from GPS devices that were unused, or idle, so as not to falsely lower the amount of activity "measured". Additionally, these data were collected in a single school, with its own distinctive built and cultural environment, and the findings may not be generalizable to other Hispanic communities or other urban communities. Promoting physical activity during childhood and developing active patterns of moving through one's daily environment are positive steps toward reducing health consequences associated with obesity and poor fitness [26]. Hispanic youth show unique risks for obesity-related illness [3,27] and, as suggested by the findings of this study of an urban environment, often display extremely low levels of PA which fall far short of federally recommended amounts of physical activity per day. These data provide evidence that reduced movement-associated energy expenditure is one factor contributing to susceptibility to obesity and T2DM risk in this group of children. If environmental interventions designed to increase physical activity can be envisioned and implemented, follow-up studies utilizing GPS will enable accurate assessment of their effect on children's movement, levels of physical activity, and energy expenditure. Conclusion Utilization of GPS to measure physical activity and its associated energy expenditure revealed that physical activity and EE were extremely low in a group of urban Hispanic children, far below recommendations for health during childhood. Analysis of movement at home, between home and school, and after school showed low levels of physical activity in all settings, indicating that school-based measures thus far have not increased PA, and suggesting limited outdoor play options and/or choices near home. This study strengthens the notion that there is need and opportunity for public health interventions that focus on environmental changes in the home, school, and community settings to facilitate daily physical activity for children living in an urban setting.
Tools for accurately tracking physical activity are greatly needed. To provide objective documentation of changes in movement associated with and correctly attributable to such interventions, use of GPS technology may be helpful.
v3-fos-license
2021-05-21T16:56:57.315Z
2021-04-15T00:00:00.000
234813602
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://chemrxiv.org/engage/api-gateway/chemrxiv/assets/orp/resource/item/60c7577a9abda2637bf8e73d/original/diversification-of-4-methylated-nucleosides-by-nucleoside-phosphorylases.pdf", "pdf_hash": "2f5ffdaf25c4c62b3daf5d78e62ae110bc223b16", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43553", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "ef82c049b22095164e4e7c3e47c609295a021789", "year": 2021 }
pes2o/s2orc
Diversification of 4ʹ-Methylated Nucleosides by Nucleoside Phosphorylases The growing demand for 4ʹ-modified nucleoside analogs in medicinal and biological chemistry is contrasted by the challenging synthetic access to these molecules and the lack of efficient diversification strategies. Herein, we report the development of a biocatalytic diversification approach based on nucleoside phosphorylases, which allows the straightforward installation of a variety of pyrimidine and purine nucleobases on a 4ʹ-alkylated sugar scaffold. Following the identification of a suitable biocatalyst as well as its characterization with kinetic experiments and docking studies, we systematically explored the equilibrium thermodynamics of this reaction system to enable rational yield prediction in transglycosylation reactions via principles of thermodynamic control. Nucleosides are central biomolecules that play key roles in a variety of cellular processes by serving as enzymatic cofactors, building blocks of DNA and RNA and energy transport systems. As such, modified nucleosides mimicking their natural counterparts have a long history in medicinal and biological chemistry. [1][2][3][4] Today, modified nucleosides are indispensable pharmaceuticals for the treatment of various types of cancer and viral infections and further represent important tools in chemical biology for a spectrum of imaging applications. [5,6] Despite the great demand for these molecules, the synthesis of nucleosides is still regarded as challenging and inefficient. [7] While nucleosides with ribosyl or 2ʹ-desoxyribosyl moieties can be accessed from naturally occurring nucleosides or carbohydrates, [7][8][9][10] the preparation of sugar-modified nucleosides typically suffers from lengthy reaction sequences and low total yields.
[11][12][13][14][15][16][17][18] Furthermore, a heavy reliance on protecting groups entails low overall efficiencies [7] and several sugar modifications at the 2ʹ or 4ʹ positions are known to limit diastereoselectivity in glycosylation approaches, [19,20] severely complicating the synthetic access to many target compounds. More importantly, established routes typically exhibit a lack of divergence as they tend to be specific to one nucleoside. As such, the introduction of desired substitutions at the nucleobase often requires complete or partial re-synthesis of the target molecule since a general strategy for the efficient diversification of modified nucleosides has not been reported to date (Scheme 1, top). With the advent of scalable routes for the de novo synthesis of selected 4ʹ-modified nucleoside analogs, as reported recently by Britton, [21] such a diversification strategy would readily provide access to a variety of sought-after nucleosides. We envisioned that nucleoside phosphorylases could provide a biocatalytic platform for late-stage diversification of 4ʹ-modified nucleosides. These enzymes catalyze the reversible phosphorolysis of nucleosides to the corresponding nucleobases and pentose-1-phosphates via an SN2-like mechanism. [22,23] The reaction sequence involving phosphorolysis of one nucleoside and in situ reverse phosphorolysis to the target nucleoside is generally known as a transglycosylation, and effectively transfers the sugar moiety from one nucleobase to another. [24] While this reactivity is well-established for ribosyl and 2ʹ-desoxyribosyl nucleosides [9] and a few 2ʹ-modified nucleosides (Scheme 1, center), there are no examples in the literature of the enzymatic synthesis of 4ʹ-modified nucleosides, except for Merck's recent report of a 5-step enzymatic cascade for the synthesis of the 4ʹ-alkynylated nucleoside drug Islatravir. [25] Therefore, the feasibility of transglycosylation reactions with 4ʹ-modified nucleosides as well as the thermodynamics of such a cascade process are notably underexplored. Herein, we address this gap by reporting on the phosphorolysis and transglycosylation of the simplest 4ʹ-alkylated pyrimidine nucleoside, 4ʹ-methyluridine (1a). Following the identification of a suitable biocatalyst, and a characterization of its reactivity with kinetic experiments and docking studies, we explored the thermodynamics of the phosphorolysis of 1a and leveraged this information in transglycosylation experiments to access a range of 4ʹ-methylated pyrimidine and purine nucleosides. In the absence of obvious pyrimidine nucleoside phosphorylase (PyNP) candidates for the phosphorolysis of 1a, we began our investigation by screening a small panel of PyNPs with known broad substrate spectra. To our surprise, only the PyNP from Thermus thermophilus (TtPyNP) [26,27] showed measurable conversion of 1a under screening conditions (Figure S1). Other broad-spectrum PyNPs, such as those from Geobacillus thermoglucosidasius (GtPyNP) [28] or Bacillus subtilis, [23] displayed no activity with 1a (Figure 1A). To substantiate the observed conversion of 1a by TtPyNP, we performed a series of control experiments. Reactions either without phosphate, without enzyme or with denatured enzyme gave no conversion. Similarly, no conversion was observed under reaction conditions outside of the working space of TtPyNP (pH 3 or pH 12, Figure 1A).

Scheme 1. Synthesis and biocatalytic diversification of nucleosides with modified sugars. NB = Nucleobase, NP = nucleoside phosphorylase.
[26] NMR analysis of a reaction mixture with TtPyNP and 1a corroborated the proposed reactivity and creation of the pentose-1-phosphate 3, as evident from the rise of an additional 1H NMR signal at 5.57 ppm showing a strong H,P-HMQC signal (Figure 1B). Consistent with the native reactivity of PyNPs, inversion at the anomeric position was evident by this signal lacking NOE contacts to the 4ʹ-methyl group of 3, while the corresponding anomeric proton in 1a showed clear correlation to the methyl substituent. Having established the activity of TtPyNP with 1a, we conducted kinetic experiments to provide further insights into this enzymatic transformation. Although TtPyNP is inhibited by pyrimidine nucleobases such as uracil (2a), [26] we could observe Michaelis-Menten behavior of the enzyme with 1a (Figure 1C). Interestingly, the apparent Michaelis-Menten constant KMʹ of the phosphorolysis of 1a (KMʹ = 3.37 mM) indicated that TtPyNP has a much lower affinity for 1a compared to natural nucleosides like uridine or thymidine (KM < 1 mM), [27] suggesting that productive binding of the modified substrate 1a might present a challenge due to the increased steric bulk. In addition to a lower affinity for 1a, TtPyNP also displayed a lower rate constant compared to uridine (0.59 vs 5.05 s−1 for 1 mM substrate at 60 °C and pH 9) [26] which showed a similar temperature-dependence, as indicated by phosphorolysis experiments at different temperatures monitored by UV spectroscopy (Figure 1D). [29,30] Collectively, these results demonstrate that, unlike other nucleoside phosphorylases, TtPyNP selectively converts the 4ʹ-methylated nucleoside 1a to the corresponding sugar phosphate 3, albeit with a lower rate constant and substrate affinity compared to the native substrates. Next, we performed preliminary in silico docking studies to rationalize why 1a is only converted by TtPyNP and not by other closely related and highly promiscuous enzymes such as GtPyNP. We hypothesized that conversion of this substrate would primarily be limited by steric hindrance during substrate binding, since i) uridine and 1a only differ by a single methyl group distant from the anomeric position and ii) TtPyNP displays significantly lower affinity for 1a than for uridine. PyNPs generally exhibit marked flexibility during their catalytic cycle with a transition from an open conformation to a closed state requiring a domain movement of approximately 8 Å. [31] Since all first-sphere residues in the closed state are highly conserved and identical between the tested PyNPs, we anticipated that initial binding in the open conformation would be a limiting factor, as TtPyNP offers slightly more space than GtPyNP due to a threonine-serine substitution at the back of the active site, as evident from sequence alignments. [28] To examine this hypothesis, we obtained an X-ray crystal structure of GtPyNP at 1.9 Å resolution (see Supporting Information for details; PDB ID 7m7k) and used AutoDock Vina implemented in YASARA to dock uridine and 1a into the open conformations of this structure and the known X-ray crystal structure of TtPyNP (PDB ID 2dsj). [32] Docking of uridine and 1a into TtPyNP yielded structures in good agreement with the native mode of substrate binding via H-bonding to the nucleobase and positioning of the anomeric carbon near the phosphate binding pocket (Figures 2A and 2B).
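The apparent KMʹ quoted above comes from fitting initial rates to the Michaelis-Menten equation; a minimal sketch of such a non-linear least-squares fit is given below. The rate data are synthetic, generated from the reported KMʹ of 3.37 mM purely to illustrate the procedure, and do not reproduce the measurements in the paper.

# Sketch of extracting an apparent Michaelis-Menten constant from initial-rate
# data by non-linear least squares. The rates are synthetic, generated from the
# KM' reported in the text (3.37 mM) purely to illustrate the fit.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # substrate, mM
v = michaelis_menten(s, vmax=1.0, km=3.37)                    # synthetic rates
v_noisy = v * (1 + 0.03 * np.random.default_rng(1).normal(size=s.size))

popt, pcov = curve_fit(michaelis_menten, s, v_noisy, p0=[1.0, 1.0])
vmax_fit, km_fit = popt
print(f"Vmax' = {vmax_fit:.2f}, KM' = {km_fit:.2f} mM")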
Likewise, uridine could be docked into GtPyNP in a similar position to the cocrystallized substrate (Figures 2C and S8), where the 4ʹ-position of uridine is located in proximity to Thr84 (Ser83 in TtPyNP). However, we were unable to obtain sensible docking results for 1a with GtPyNP as the increased steric bulk at the 4ʹ-position consistently led to a rotation of the sugar scaffold into an unproductive pose (Figure 2D). This suggested that the subtle space-creating mutation to a serine in TtPyNP might be a key factor for conversion of 1a. Consistent with this conclusion, the slightly more sterically congested TtPyNP-S83T mutant significantly lost activity compared to the parent enzyme (kobs = 0.25 s−1 vs kobs = 0.59 s−1, Table S2), while the reverse substitution in GtPyNP installed a low but measurable level of activity in this enzyme (kobs = 0.02 s−1 for GtPyNP-T84S). Moreover, all other enzymes we screened initially, and which were inactive with 1a, also possess a threonine at this position, which likely impedes their ability to bind this substrate productively. Although such subtle but crucial space-creating mutations are rare, there is precedent from other enzymes in the literature. [34] Together, these results indicate that sufficient space in the open conformation of PyNPs is a prerequisite for conversion of sterically more demanding substrates such as 1a. Clearly, there are other factors influencing the rate constant of this transformation, as evident from the order of magnitude difference between the rate constants of the active enzymes, but these must arise from mutations far from the active site, as all other residues in possible contact with the substrate are identical between the tested enzymes.

Figure 1. Phosphorolysis of 4ʹ-methyluridine (1a). The data for uridine in D were taken from ref. 26. Please see the Supporting Information for details and the externally hosted supplementary information for raw data. [33]

Since the phosphorolysis of ribosyl and 2ʹ-desoxyribosyl nucleosides is under tight thermodynamic control, [23] we were then interested in the thermodynamics and reversibility of the phosphorolysis of 1a to enable a diversification of the scaffold via transglycosylation. Time-course experiments with 1a and varying excesses of phosphate revealed incomplete conversion of the substrate, with the equilibrium positions being consistent with an equilibrium constant K of 0.16 (at 60 °C and pH 9, Figure 1D). Further experiments to monitor the equilibrium at 75 °C and 90 °C revealed that the phosphorolysis of 1a has an apparent reaction enthalpy ΔrHʹ of 8.9 kJ mol−1 and an apparent reaction entropy ΔrSʹ of 11.7 J mol−1 K−1 (Figure S2). Interestingly, these values closely resemble the equilibrium constants and thermodynamic parameters of the phosphorolysis of uridine, [23] indicating that substitutions distant from the anomeric center have little influence on the equilibrium thermodynamics of nucleoside phosphorolysis. These results also pointed to the reversibility of this transformation, opening the door for transglycosylation reactions from the sugar phosphate 3 to yield other nucleosides. With a solid understanding of the thermodynamics and kinetics of the phosphorolysis of 1a by TtPyNP, we proceeded to diversify this scaffold by subjecting the sugar phosphate 3 to subsequent enzymatic catalysis with different nucleobases in situ.
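The apparent reaction enthalpy and entropy quoted above follow from a van't Hoff treatment of the equilibrium constants measured at 60, 75 and 90 °C; a minimal sketch of that fit is shown below. Only the 60 °C value (K = 0.16) appears in the text; the other two values are placeholders chosen to be roughly consistent with the reported parameters, not measured data.

# Sketch of a van't Hoff analysis: fit ln K against 1/T to recover an apparent
# reaction enthalpy (slope = -dH/R) and entropy (intercept = dS/R).
import numpy as np

R = 8.314                                       # J mol-1 K-1
T = np.array([60.0, 75.0, 90.0]) + 273.15       # K
K = np.array([0.16, 0.19, 0.21])                # apparent equilibrium constants (placeholders)

slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH = -slope * R / 1000.0      # kJ mol-1
dS = intercept * R            # J mol-1 K-1
print(f"apparent dH' = {dH:.1f} kJ/mol, apparent dS' = {dS:.1f} J/(mol K)")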
Using this transglycosylation approach (Figure 3A and Scheme 1, center), we aimed to access a variety of 4ʹ-methylated nucleosides from 1a in a one-pot manner. After confirming the stability of 3 through equilibrium shift experiments (Figure S6), [35] we subjected 1a to phosphorolysis using only minimal phosphate in the presence of different pyrimidine nucleobases 2b−2e belonging to a panel of 5-substituted uracil analogs (Figures 3A and 3B). Analysis of the reaction mixtures by HPLC revealed consumption of 1a and the respective uracil analog with concurrent formation of new products (Figure 3B), which HRMS analysis identified as the nucleoside products arising from glycosylation of 2b−2e with 3. Equilibrium state thermodynamic calculations [24] based on transglycosylation experiments with different sugar donor concentrations revealed apparent equilibrium constants of phosphorolysis of 0.12−0.73 for these products 1b−1e (Figure 3B and S3). The trifluoromethylated pyrimidine 2f could also be converted, although the instability of the starting material and product in aqueous solution [36] precluded us from obtaining equilibrium data (Figure S4). [a] 2f is converted, but 1f and 2f hydrolyse to the corresponding carboxylates under the reaction conditions. [b] Reaction mixtures with purines additionally contained the purine nucleoside phosphorylase from Geobacillus thermoglucosidasius (PNP). Please see the externally hosted supplementary information for raw data and calculations. [33] A similar elaboration of in situ generated 3 with purine nucleobases proceeded smoothly using the promiscuous purine nucleoside phosphorylase from Geobacillus thermoglucosidasius. [37] Notably, the adenosine analogs 1g−1i were generated in much higher conversions, corresponding to equilibrium constants of phosphorolysis of 0.01−0.02, reflecting the more favorable thermodynamics typically observed for 6-aminopurines. [35,[38][39][40] The guanosine and inosine analogs 1j and 1k could also be accessed, although with lower conversions indicative of higher equilibrium constants (Figure 3C). These experiments not only confirmed that nucleoside transglycosylations with the methylated precursor 1a can deliver a range of modified nucleosides in a one-pot manner, but also that the equilibrium thermodynamics of this system largely resemble those of the well-described ribosyl nucleosides. These findings further indicated that these transglycosylations would offer themselves to rational reaction engineering using established principles of thermodynamic reaction control to predict and maximize conversions in these reactions. [24] Indeed, thermodynamic calculations based on the obtained equilibrium constants suggested that 1b could, for instance, be obtained in 84% conversion from 1a using 4 equivalents of nucleobase, which we confirmed experimentally (Figures 3D and S5). Similarly, 1i could be obtained in quantitative conversion with 4 equivalents of 2i, in agreement with our predictions. As a proof of synthetic utility, we subjected 1a to transglycosylation with 5 equivalents of 2e and obtained the iodinated 1e in 68% conversion (61% predicted) and ca. 40% isolated yield. In conclusion, we identified and characterized TtPyNP as a biocatalyst for the diversification of 4ʹ-methylated nucleosides. Reversible phosphorolysis of the methylated precursor 1a yields the stable pentose-1-phosphate 3, which can be employed as a sugar synthon to access a range of modified nucleosides in one pot.
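The yield predictions discussed above follow from thermodynamic control of two coupled phosphorolysis equilibria that share the phosphate and pentose-1-phosphate pools. The sketch below solves such a coupled system numerically rather than with the closed-form expression of ref. 24; the equilibrium constants and phosphate loading are placeholders of plausible magnitude, not the exact numbers behind the 84% prediction.

# Numerical sketch of thermodynamic yield prediction in a transglycosylation:
# two mass-action conditions (donor phosphorolysis and product phosphorolysis)
# coupled through shared phosphate and pentose-1-phosphate pools, solved with
# mass balances. K values and phosphate loading are placeholders.
from scipy.optimize import fsolve

K_donor = 0.16    # phosphorolysis constant of the donor nucleoside (1a-like)
K_product = 0.16  # phosphorolysis constant of the product nucleoside (placeholder)
D0, B0, P0 = 1.0, 4.0, 0.2   # mM: donor nucleoside, acceptor base, inorganic phosphate

def residuals(x):
    x1, x2 = x                      # extent of donor phosphorolysis, of product formation
    sugar_p = x1 - x2               # pentose-1-phosphate pool
    phosphate = P0 - x1 + x2        # inorganic phosphate pool
    r1 = x1 * sugar_p - K_donor * (D0 - x1) * phosphate
    r2 = (B0 - x2) * sugar_p - K_product * x2 * phosphate
    return [r1, r2]

# start near "most of the donor converted", the expected regime with excess base
x1, x2 = fsolve(residuals, x0=[0.9 * D0, 0.8 * D0])
print(f"predicted conversion of donor into product: {100 * x2 / D0:.0f}%")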
Our investigations revealed that sufficient space near the active site in the open conformation of PyNPs appears crucial for binding and conversion of 1a. Furthermore, the equilibrium thermodynamics of the phosphorolysis of 4ʹ-methylated nucleosides largely resemble those of ribosyl nucleosides, indicating that substitutions distant from the anomeric position have only minor effects on the conversions in these systems. Leveraging principles of thermodynamic reaction control enabled us to access a spectrum of 4ʹ-methylated nucleosides bearing different pyrimidine and purine bases in transglycosylation reactions. Lastly, we expect that other 4ʹ-modified nucleoside analogs can be obtained with such biocatalytic systems in a similar fashion (probably with comparable equilibrium thermodynamics), although bulkier 4ʹ-substitutions will likely require some extent of protein engineering to improve activity. Author contributions (with definitions as recommended by Brand et al. [1]). Data availability All data depicted visually in the items in the main text (Figures 1−3) as well as in the Supplementary Information (Figures S1−S16, see below) are available as tabulated data from the text below and from the externally hosted Supplementary Information at zenodo.org. [2] The data and model of GtPyNP with bound uridine were deposited to the Protein Data Bank (PDB) under accession code 7m7k. General remarks All chemicals used in this study were of analytical grade or higher and purchased from Sigma Aldrich (Steinheim, Germany), Carbosynth (Berkshire, UK), Carl Roth (Karlsruhe, Germany), TCI Deutschland (Eschborn, Germany) or VWR (Darmstadt, Germany) and used without prior purification. 4′-Methyluridine (1a) was synthesized as described recently. [3] Water deionized to 18.2 MΩ•cm with a Werner water purification system was used for the preparation of all enzymatic reactions as well as purification and storage buffers. For the preparation of NaOH solutions for quenching, deionized water was used. Analytical HPLC analyses were carried out with an Agilent 1200 series system. Data evaluation was performed in LibreOffice, spectral unmixing with data_toolbox, [5,6] modelling and docking in YASARA, and protein viewing in ChimeraX. [7] Crystallographic software is described below. 4 Experimental details and supplementary items Wild-type enzymes were cloned as described in previous reports [5,8] and the available glycerol stocks of the enzymes from previous projects [5,9,10] were used directly for this work. BsPyNP was obtained as a freeze-dried enzyme from Sigma Aldrich and dissolved to 1 g L−1 in 2 mM phosphate buffer (pH 7). Cloning of the mutant enzymes was carried out via BamHI/HindIII sites in the plasmid pGW3 (gift by Matthias Gimpel, unpublished). Codon-optimized genes were obtained (GeneArt Invitrogen/Thermo Fisher Scientific, Massachusetts, USA) and cloned into pGW3 using the recipient strain Escherichia coli DH5α. The correct sequence was confirmed with Sanger sequencing (LGC Genomics, Berlin, Germany). pGW3 is a 2nd-generation derivative of pCTUT7, which was optimized with respect to tightness of the LacO in comparison to the 1st-generation derivative used in a previous work. [8] Protein expression and purification was performed as described recently [5,8] in Escherichia coli BL21 using the EnPresso protocol for 50 mL (Enpresso, Berlin, Germany). Briefly, all enzymes were heterologously expressed in E. coli as His6-tagged proteins through IPTG-induced overexpression.
Purification was achieved through cell disruption and heat treatment of the crude extract (80 °C). Samples were withdrawn at timely intervals after reaction initiation, as detailed in the metadata files freely available online. [2] Reaction monitoring of phosphorolysis reactions was achieved via spectral unmixing. From live reactions, samples were withdrawn and quenched in 100 mM aqueous NaOH as described previously. [5,6] The sample dilution factor was adjusted to reach final concentrations of 100−200 µM UV-active reaction components (please note that the exact concentration is not relevant here since spectral unmixing only takes spectral shape and not absolute intensity into account). For instance, from reactions with 1 mM 1a, 50 µL of the reaction mixture were pipetted into 250 µL 100 mM NaOH for quenching and dilution. Of the diluted alkaline sample, 200 µL were transferred to UV/Vis-transparent 96-well plates (UV star, GreinerBioOne, Kremsmünster, Austria) for analysis. UV absorption spectra were recorded from 250−350 nm with a BioTek PowerWave HT plate reader and subjected to spectral unmixing using analogously obtained reference spectra of 1a and 2a. [2] Reference spectra used in this study are freely available from the externally hosted Supplementary Information. [2] The degree of conversion was determined directly from the spectral fit, which considers the UV-active substrate and product in relation to one another. [5] For activity determination, only sampling points showing 3−10% conversion of the nucleoside substrate were considered. The lower bound was set due to the inherent inaccuracy of the UV-based method employed (roughly ±0.3 percentage points, due to the inherent error in spectral acquisition, as described in the original publication) [5] and the upper bound was applied as recommended by Cornish-Bowden [11] for equilibrium reactions. Datapoints outside this window were not included in the calculation of activity and are marked accordingly in the datasets available in the Supplementary Information. [2] Datapoints that displayed baseline shifts or other spectral anomalies were also excluded from consideration. Background correction was performed as described recently. [6] Experimental spectra were fitted either across the entire spectrum or over one of the information-rich shoulder regions of pyrimidine nucleosides/nucleobases, as appropriate for the analysis. All background corrections and the corresponding datafiles are detailed in the metadata files in the externally hosted supplementary information. [2] Enzymatic activity was determined by linear approximation of the conversion over time with a forced intercept at the origin. All raw data and the datapoints considered for calculation are freely available online with outliers and excluded datapoints clearly marked. [2] The observed rate constant was obtained by considering the degree of conversion (mol per second) per mol enzyme applied, using the molar extinction coefficient of TtPyNP of 26,930 cm−1 M−1 as predicted by Protparam [12] (i.e. the stock solution of 1 g L−1 had a concentration of 37.1 µM). Enzyme screening (Figure 1A) was performed using reaction mixtures of 1 mM 1a, 20 mM potassium phosphate and 30 µg mL−1 enzyme (TtPyNP, GtPyNP, BsPyNP or EcTP) at 50 °C in 50 mM MOPS buffer pH 7 in a final volume of 50 µL. These reactions were carried out at a neutral pH in this buffer system to accommodate the working space of the enzymes used.
Later reactions were performed at pH 9 since TtPyNP retains excellent activity and stability under alkaline conditions [9] and pentose-1-phosphates (such as 3) are much more resistant to hydrolysis under alkaline conditions. [13,14] The reactions were quenched by addition of 250 µL 100 mM NaOH to the reaction mixtures after 30 min. The resulting samples were analyzed by UV spectroscopy as described above. For each protein, control reactions with uridine were performed under identical conditions, all of which gave conversion of the nucleoside to or near equilibrium (Figure S1). For both uridine and 1a, control reactions without protein were carried out, which resulted in no conversion of the starting material.

Figure S1 (caption excerpt): only submaximal activity at 50 °C. [9] Only TtPyNP displayed appreciable activity with 1a (B). The raw data are available online. [2] For illustrative purposes, all UV spectra shown here were background corrected and normalized to the isosbestic point of base cleavage (271 nm for this substrate). [6]

The control reactions for TtPyNP (Figure 1A) [9] prior to addition of 1a. The reaction at pH 3 was carried out in a buffer mix consisting of 5 mM citrate, 10 mM MOPS and 20 mM glycine (all final concentrations) adjusted to pH 3. The reaction at pH 12 was carried out with 25 mM NaOH instead of MOPS buffer. All reactions were quenched through addition of 200 µL 100 mM NaOH and analyzed as described above. The NMR spectra of the sugar phosphate 3 (Figure 1B) were recorded directly from a reaction mixture. Table S1. Raw data and calculations for this experiment are freely available online. [2] The temperature-dependence of the activity of TtPyNP with 1a (Figure 1D) was determined using reaction mixtures of 1 mM 1a and 50 mM potassium phosphate in 50 mM glycine/NaOH buffer at pH 9 and the indicated temperature in a total volume of 150 µL. Depending on the temperature (and, therefore, on the rate of phosphorolysis), 6−24 µg mL−1 TtPyNP were used (6 µg mL−1 for 70 °C, 12 µg mL−1 for 60 °C and 24 µg mL−1 for 50 °C), to permit sampling of all reactions within the same time domain. The thermodynamic control of the phosphorolysis of 1a (Figure 1E) was probed with reaction mixtures of 1 mM 1a and 100 µg mL−1 TtPyNP (3.71 µM, 0.37 mol%) in 50 mM glycine/NaOH buffer at pH 9 and 60 °C in a total volume of 160 µL with either 2, 5, 10 or 20 mM potassium phosphate (equivalent to 2, 5, 10 or 20 equivalents of phosphate over the nucleoside 1a). The reaction mixtures were incubated in a PCR cycler with lid heating (70 °C). Samples of 25 µL were quenched in 225 µL 100 mM NaOH after 2, 8, 30, 60, 111 and 165 min and analyzed by spectral unmixing as described above and in the metadata files available online. [2] Likewise, the raw data for this experiment are freely available in the externally hosted supplementary information. [2] The obtained data were fit to equation (S4), which was derived as detailed below. Derivation of equation (S4) for the determination of phosphorolysis equilibrium constants from phosphorolysis equilibria with different phosphate excesses: Nucleoside phosphorolysis is a thermodynamically controlled reaction which closely adheres to the law of mass action; expressing the equilibrium concentrations of nucleobase, pentose-1-phosphate, nucleoside and phosphate through the initial nucleoside and phosphate concentrations and the fractional conversion leads to a quadratic equation in the conversion. This quadratic equation can be rewritten to equation (S14) and solved via equations (S15) and (S16). Figure S2. Temperature-dependence of the apparent equilibrium constant. Data for 60, 75 and 90 °C were used for calculation. The activity of the mutant PyNPs (Table S2) was determined from reaction samples (the last withdrawn after 60 min) quenched in 300 µL 100 mM NaOH and analyzed by spectral unmixing.
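Equation (S4) relates the equilibrium conversion to the phosphate excess through the law of mass action; the sketch below encodes the same relation numerically, picking the physical root of the quadratic and fitting an apparent K to conversions obtained at several phosphate excesses. The "measured" conversions are synthetic, and the analytic rearrangement of equations (S14)-(S16) is replaced here by a numerical root search.

# Sketch of the mass-action relation behind equation (S4): with initial
# nucleoside concentration N0, phosphate P0 and fractional conversion x at
# equilibrium, K = (x*N0)^2 / ((1-x)*N0*(P0 - x*N0)), which is quadratic in x:
# (K-1)*N0*x^2 - K*(N0+P0)*x + K*P0 = 0. The physical root lies in [0, 1].
import numpy as np
from scipy.optimize import curve_fit

def equilibrium_conversion(P0, K, N0=1.0):
    """Fractional conversion at equilibrium for given phosphate concentration(s)."""
    P0 = np.atleast_1d(P0).astype(float)
    out = np.empty_like(P0)
    for i, p in enumerate(P0):
        roots = np.roots([(K - 1.0) * N0, -K * (N0 + p), K * p])
        out[i] = next(r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0)
    return out

phosphate = np.array([2.0, 5.0, 10.0, 20.0])                  # mM, i.e. 2-20 equivalents
x_obs = equilibrium_conversion(phosphate, K=0.16) + 0.01      # synthetic "measured" conversions
popt, _ = curve_fit(lambda p, K: equilibrium_conversion(p, K),
                    phosphate, x_obs, p0=[0.1], bounds=(0, np.inf))
print(f"fitted apparent K = {popt[0]:.2f}")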
Kinetic constants were calculated as described above, using the extinction coefficient of 21,890 cm−1 M−1 of GtPyNP. The raw data and calculations for this experiment are available online. [2] Samples were centrifuged (13,000 rpm, 10 min) and analyzed by HPLC. Conversion was calculated according to equation (S18), which assumes that the molar extinction coefficients of the nucleobase 2 and the corresponding nucleoside 1 are equal: X = A1 / (A1 + A2), where X is the conversion in the transglycosylation reaction (i.e. conversion of the nucleobase to the target nucleoside), A1 is the peak area of the target nucleoside (1) and A2 is the peak area of the nucleobase (2). Typical retention times of the compounds used herein are given in Table S3. The identity of all target compounds was confirmed by high-resolution mass spectrometry (HRMS) as detailed below in Table S4. The raw data for all HPLC runs used for calculation of equilibrium constants are freely available online. [2] Since opening and processing of the files requires Agilent software, all chromatograms are depicted below (Figure S3) and integration results are listed in Table S3. The equilibrium constants of phosphorolysis of the target nucleoside were determined by fitting the conversion of the nucleobase to the corresponding nucleoside as a function of the excess of 1a according to equation (S19), which is derived below. Derivation of equation (S19) for the determination of phosphorolysis equilibrium constants from transglycosylation equilibria with different excesses of the donor nucleoside: The basis for this equation is given by an expression reported and derived in our previous report (equation (4) in the original paper). [16] Since this deviation is well within the inherent error of HPLC, equation (S19) provides an accurate output for realistic experimental data. Figure S3 (caption excerpt): labile to the alkaline conditions needed for stability of 3. Therefore, neither remaining starting material (as expected for a thermodynamically controlled reaction), nor product can be observed at pH 9 after 4 h (B). At pH 7 and with 4 equivalents of 2f, clear product formation is visible, which was confirmed by HRMS. However, significant hydrolysis is also apparent under these conditions, as is obvious from the large peak at the solvent front, corresponding to the hydrolysis product 2f*. Formation of 2f* as well as 1f* was also confirmed by HRMS analysis (Table S4, see below). Table S3. The stability of the sugar phosphate 3 was assessed through an equilibrium shift experiment, which provided stability information for this compound without having to isolate it or detect it directly (for details on the approach and equations, please see our method paper). [14] The raw data for this experiment are freely available online [2] and the fit results are shown in Figure S6. Considering the data reported by Bunton [13] and us, [14] NMR experiments with a reaction mixture provided further insights into the stability of 3 under relevant conditions. In phosphate buffer at pH 7, we observed no loss of 3 from a reaction mixture incubated for 2 months at room temperature, indicating that the sugar phosphate is quite stable under moderate conditions. However, 3 is, like other sugar phosphates, labile to acidic conditions.
At pH ≈1 (achieved via addition of HCl to a reaction mixture in equilibrium), full hydrolysis of 3 was apparent from the disappearance of the signal corresponding to the anomeric proton after 1 month of incubation at room temperature (Figure S16). 4′-Methyl-5-iodouridine (1e) was prepared by TtPyNP-catalyzed transglycosylation. To this end, 5-iodouracil (2e, 13.9 mg, 0.059 mmol, 5 equivalents) and 4'-methyluridine (1a, 3 mg, 0.012 mmol, 1 equivalent) were dissolved in 40 mL 10 mM glycine buffer (pH 9) with 0.09 mM potassium phosphate (0.3 equivalents) and 4 µg mL−1 TtPyNP (0.15 µM, 0.05 mol%). The reaction mixture was intentionally kept very diluted since TtPyNP is inhibited by nucleobases such as 2e and more concentrated mixtures severely compromise the productivity of the enzyme. The reaction mixture was heated to 60 °C in a water bath. After 3 d, HPLC analysis revealed 68% conversion of 1a to the iodinated analogue 1e (please see Figure S7 and the externally hosted Supplementary Material for the HPLC trace). [2] The mixture was then concentrated to ≈7 mL in vacuo, filtered to remove precipitated protein and injected into preparative HPLC. An HPLC method consisting of 10 min isocratic elution with 1% MeCN in water, followed by a linear gradient to 10% MeCN over 40 min, cleanly afforded 1e after a 35 min retention time. The fraction containing 1e was concentrated in vacuo. Quantification of recovered 1e proved surprisingly difficult and inaccurate since the compound is quasi-intractable and practically insoluble in all solvents we tried. HRMS data were collected directly from the dilute eluate from the preparative HPLC and 1H-NMR analysis was performed with a saturated solution of 1e in D2O (ca. 0.5 mM, Figure S14). We estimate the isolated yield to be around 1.5−2 mg, corresponding to around 40% from 1a. Please note that 1a/2a and 1e/2e have significantly different extinction coefficients at 260 nm. [5,6] Docking of uridine and 1a was performed by using the crystal structure of TtPyNP (PDB ID 2dsj) as a receptor structure. Dockings were performed in AutoDock VINA [17] implemented in YASARA (Yet Another Scientific Artificial Reality Application). All water molecules were removed from the structure prior to the docking calculation. The receptor was treated as a rigid structure and the substrate was treated as a flexible molecule. Point charges on 2dsj were initially assigned according to the AMBER99 [18] force field and point charges on the nucleosides were generated with AM1-BCC. [19] Docking results obtained for each ligand with the receptor were analyzed based on docking energy (kcal mol−1). Figure S8. Superposition of the proteins with docked (orange sticks) and cocrystallized uridine (white sticks) in the GtPyNP active site. Only the original crystal structure is shown. Residues interacting with the nucleobase (R168, S183, K187) and the relevant threonine (T84) are shown as grey sticks. Crystallographic methods The crystal structure of GtPyNP bound to uridine at a resolution of 1.9 Å was determined to enable a comparison of TtPyNP and GtPyNP via YASARA docking of the uridine and 1a ligands (Table S5). The structure of GtPyNP revealed the typical two-domain architecture of an NP-II family PyNP enzyme, [20] composed of an α-helical (α) domain and a mixed α-helical and β-sheet (α/β) domain (Figure S9A).
Inspection of the active site after molecular replacement revealed positive density in the unrefined Fo-Fc density map, suggesting the presence of the uridine ligand within the catalytic pocket (Figure S9B). Modelling of the uridine revealed that the substrate is recruited to the active site by a set of specific interactions, mediated by polar and aliphatic side chains extending from the cleft in between the α- and α/β-domains (Figure S9C). Notably, the positive electron density in the unrefined Fo-Fc density map is more pronounced for the uracil moiety, as compared to the ribose residue (Figure S9B), suggesting flexibility of the ribose, or degradation of the substrate. It is possible that sulfate, which was present in the crystallization solution, might have allowed for partial turnover of the substrate in crystallo. This phenomenon has been observed for the structurally only distantly related uridine phosphorylases, [21] but is, to the best of our knowledge, unprecedented for pyrimidine nucleoside phosphorylases. Thus, we ascribe the lower electron density observed for the sugar moiety to the flexibility of this moiety in the open conformation. Inspection of crystal packing contacts further revealed that GtPyNP homodimerizes via the α-domains, as suggested from the interaction with a symmetry mate (Figure S9D) and as expected for NP-II family PyNP enzymes. [20] The cloning, expression, and purification of GtPyNP were adjusted from the methods described above since the original construct did not crystallize. The gene encoding GtPyNP was amplified by PCR using oligonucleotides A&B (see below) and cloned into pET-24d (Novagen) using the restriction enzymes. Diffraction data were collected at Beamline ID29 of the European Synchrotron Radiation Facility (Grenoble, France). [22] Data were processed with the XDS program package for data reduction, [23] and merging and scaling were performed using the AIMLESS program as implemented in the CCP4 package. [24] The structure was solved by molecular replacement using the crystal structure of Bacillus stearothermophilus PyNP (PDB ID 1brw, chain A) [25] via the CCP4-implemented program Phaser. [26] Coot [27] in combination with phenix.refine, as implemented in the Phenix software suite, [28] was used for iterative model building and refinement. NMR spectra (the full raw data are freely available online [2] and tabulated data are listed above) Figure S10. 1
v3-fos-license
2019-08-20T06:21:33.094Z
2016-04-10T00:00:00.000
54654812
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijll.20160403.17.pdf", "pdf_hash": "f6361cef828811d5894145fb9d9ae40196de0fc4", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43554", "s2fieldsofstudy": [ "Education" ], "sha1": "7f1c82a2b76925a45798eaa3ce525a007116c849", "year": 2016 }
pes2o/s2orc
Impacts of the COERR Writing Project on Cambodian Students' Attitudes and Writing Performance This study aims to assess the impacts of the Catholic Office for Emergency, Relief and Refugees (COERR) Writing Project (COERRWP) on students' attitudes and their writing performance in correlation with age, gender and language proficiency. In the first stage, a study was conducted to assess 45 students' actual writing performance based on score analysis of their writing tests from 2011 to 2014. In the second stage, a quantitative survey questionnaire was administered to a further 80 students from the initial cohort of intermediate, upper and advanced levels. The study found that the COERRWP was effective in improving macro performance, such as paragraph or essay structures, rather than micro performance, or the accurate use of lexico-grammar. Introduction In January 2012, the Writing Project (WP) was introduced by the COERR Language Skills Center of the Catholic Office for Emergency, Relief and Refugees (COERR). The WP framework was grounded in the literature on the writing process [18,25,27,28]. This literature suggests that good writing must go through several writing steps. Hence, the COERRWP focused on helping students gain confidence in writing by institutionalizing the following six steps: selecting a topic, pre-writing, outlining, drafting, revising, and editing and publishing. As argued by other researchers, "Good writers are produced by good writing classes [26]", because the product of writing requires students to go through several stepping stones [16,25]. Further, a teacher needs to give clear writing direction [18] and to create an engaging learning environment [3]; otherwise teachers and students may feel frustrated by language errors and a lack of improvement in students' writing [10]. By the end of 2012, the COERRWP evaluation indicated that students achieved various outcomes and formed differing opinions on the effects of the COERRWP. Some students said the COERRWP was useful in enhancing their writing performance and achieving a better grade, while other students indicated that the COERRWP was not very effective and even boring, since it required many steps before students could complete the required writing activities. The different results achieved in the writing tests by students at the Centre may be indicative of various impact factors, for instance poor planning and inconsistent teaching activities, a lack of writing resources, and insufficient writing time. This difference hence needs to be examined further in order to ascertain the most significant factors influencing the disparate outcomes achieved by the Cambodian students. For this purpose, the present study was conducted to identify the impacts of the COERRWP on students' writing attitudes and performance. The study also attempts to assess whether there are any significant impacts of the COERRWP that correlate with the age, gender and/or language proficiency of Cambodian EFL students at the Centre. The COERRWP The COERRWP was designed as an extra writing lesson alongside the regular writing classes. A list of guided questions was designed by COERR teachers so that students could use them for revising and improving their writing in order to achieve positive results. Students were assigned to write a cause-effect, an argumentative, a comparative, and a problem-solution paragraph about various topics.
The topics included, for example, the use of pesticides, the advantages and disadvantages of computers for children, and what constitutes a successful restaurant. Grades were assigned provided that the students completed all steps in the writing process. The timeframe was flexible so that teachers could adjust the dates to complement their teaching activities for reading, correcting, and giving feedback on the students' writing [5]. The COERRWP's writing steps are as follows.
Writing Process
Selecting a topic guides students to choose an appropriate topic for writing. They can work in pairs or groups to generate ideas.
Prewriting gets students to begin their writing through listing (by brainstorming), freewriting, clustering, or journalists' questions (by asking questions). Students can exchange their prewriting and make comments following the guided questions.
Outlining gets students to arrange the topic, topic sentence, and details/examples into a logical or chronological order (from less important to more important, from general to specific, in chronological order, from negative to positive, from causes to effects, from problems to solutions, etc.). The students exchange their outlines and use the guided questions to improve them.
Drafting gets students to start writing without worrying about spelling, punctuation, capitalization, and grammar. Students submit their first (rough) draft along with their prewriting and outline, and the teacher makes comments.
Revising gets students to rework their ideas and make their writing clearer, better, and more interesting. They can work in pairs or groups to improve their writing.
Editing & Publishing gets students to correct errors in grammar, spelling, punctuation, and capitalization. The students exchange their revised copies (second draft) and do peer-editing.
English Writing Instruction in Cambodia
English writing instruction is one of the core courses in Cambodia's higher education institutions, and it is offered in particular to students in English education degree programs. For example, the Institute of Foreign Languages (IFL) at the Royal University of Phnom Penh (RUPP) offers Writing Skills I, II, III, IV and V to develop students' skills and competence in writing with regard to paragraph and essay structures, article and book reviews, and citation and referencing [14]. Similar course syllabuses are used by other universities in Cambodia, including in Battambang. By and large, however, writing courses have not been intensively designed and taught in general English programs, including at the COERR Language Skills Center. Consequently, most students tend to have poor writing skills, since no dedicated writing course is offered to students in the general English program.
English Writing Research in Cambodia
Studies of writing in English in Cambodia have mostly been conducted in Phnom Penh, especially with participants selected from the Royal University of Phnom Penh (RUPP). For example, one researcher applied action research to investigate social awareness through reading and writing. Another employed a qualitative approach to examine the writing self-efficacy (the ability to write in English), writing goal orientation (the expectation to write better English paragraphs), and writing achievement of 244 Cambodian university students [1]. A further study focused on the trends and patterns of learning styles among 215 Cambodian students in the English Faculty at RUPP [13].
The literature provides a rich account of writing in English among Cambodian EFL students; however, students' attitudes towards and motivation for writing in English remain a relatively new area of inquiry. In addition, English language education faces several challenges, for example the lack of an improved English curriculum, the lack of learning facilities, low teacher pay, and the absence of decisive government action to solve these problems [31]. These problems occur not only in Phnom Penh but may also challenge Cambodian EFL students at the provincial level, such as in Battambang. It is therefore considered worthwhile to carry out a study of impact factors such as motivation, engagement and learning environment, which are relatively new considerations for additional language researchers in Cambodia. Accordingly, this study investigates the correlative impacts of students' behaviour on the development of second language learning in Battambang province.
Theories of Attitude and Performance in Writing
Theories and models of attitudes began to enter L2 writing research around the 1980s [26], while motivational theories of language learning can be traced through several works such as [20,9,29]. Other studies also provide important findings on L2 writing and learners' attitudes, for example [7,4]. The aforementioned authors have asserted that there is a correlation between ESL students' attitudes and their writing performance. In addition, the literature reports a positive correlation between L2 attitudes and writing skills [12], while others assert that increased student engagement and motivation in writing helps students improve their writing skills [19]. Further, applying peer feedback in the writing process helps students develop a positive attitude and improved writing skills [17]. These findings are similar to the study [15] which pointed out that students' writing proficiency correlates with their attitudes. Overall, the literature indicates a significant benefit for research that evaluates ESL/EFL students' attitudes in relation to their writing performance; however, this area of research is relatively new to Cambodia, especially in Battambang province, even though English language education has been booming there for decades.
Participants
The study was conducted in two stages. In the first stage, the study assessed 45 students selected from the intermediate, upper and advanced levels. The participants were taking the English course, which used the Hemisphere series [8]. Coupled with the Writing Project, students were taught to write different essays, for instance a problem-solution essay, a narrative experience essay, a process essay, a comparison-contrast essay, a definition essay, and a summary essay. The initial selection criteria were based on achievement grades, for instance above average, average, and below average. In the second stage, a quantitative survey was administered to 80 randomly selected students (35 students were selected and added to the initial cohort). More participants were randomly invited to participate in the study in order to strengthen the reliability and validity of the results.
Instruments and Data Collection
The initial in-depth study assessed 45 students, selected according to their achievement grades on the final exams in 2011, 2012, 2013 and 2014.
The assessment of the written texts was based on the following criteria: pre-writing, topic sentence, supporting details (content), conclusion, and lexico-grammar (structure and vocabulary). Writing scores from 2011 were taken as the baseline, while scores in the following years were treated as progress scores. A control group was not included, since the study was conducted periodically on the basis of the researchers' experience in teaching and applying the Writing Project; the sampling technique is therefore considered a limitation of the study. Survey questionnaires were distributed to 80 students randomly selected from the intermediate, upper and advanced levels. The questionnaire consists of two parts. Part A collects the respondent's background information, i.e. sex, age, years of study, study level and occupation. Part B contains questions assessing the impact of the Writing Project on the students' attitudes and their writing performance. Students were asked to rate the statements on a five-point Likert scale ranging from "strongly agree" to "strongly disagree."
Data Analysis
Data from the writing score analyses and survey questionnaires were entered into MS Excel and imported into SPSS version 20. Descriptive statistics were used to obtain the mean scores and standard deviations of the writing tests for each year in relation to age, gender and English level, and to analyze the survey questionnaire data in terms of frequencies, percentages and crosstabs (Chi-square tests), in order to understand the overall pattern of students' responses and the relationships between variables. The impact factors were interpreted and discussed on the basis of the correlation between the writing scores and students' attitudes and performance as influenced by the application of the project. Changes in the means and standard deviations of the writing scores were taken to represent a positive or negative impact of the project.
Results of the Study
The results are presented in three parts: respondents' profiles (age, gender, language proficiency, and educational level); the impact of the Writing Project on students' attitudes and writing performance; and the impact of the COERRWP in correlation with age, gender, and language proficiency.
Respondents' Profiles
The results from the survey questionnaire showed that 52.6% of the respondents were below 25 years old; a further 25.6% were under 20; 9.0% were below 18; and 12.8% were over 25. This reflects that the majority of students enrolled in the COERR language programs are adults; most (71.8%) were studying at university, in accordance with the Centre's admittance policy. Moreover, more female students (61.5%) than male students (38.5%) participated in the research study and had taken English classes at the Center. Further, 82.1% of the respondents had studied English for over 4 years, and no student had studied English for less than one year. Owing to the sampling method used, equal numbers of respondents from each of the three levels participated in the study.
Research Question 1
Are there any impacts of the COERRWP on the attitudes and writing performance of ESL/EFL learners? Results from the analysis of writing scores in Table 2 indicate that the COERRWP has impacted students' actual writing performance.
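As a concrete illustration of the analysis pipeline described in the Data Analysis subsection above (yearly means and standard deviations of the writing scores, and Chi-square tests on crosstabulated Likert responses), the following minimal sketch shows how such figures could be reproduced with open-source tools. It is illustrative only: the original analysis was run in SPSS 20, and the file names and column names below are assumptions rather than the Centre's actual data.

# Illustrative sketch only; not the study's actual SPSS analysis.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical writing-test scores: one row per student per year.
scores = pd.read_csv("writing_scores.csv")       # assumed columns: student_id, level, year, score
# Mean and standard deviation per level and year (the kind of figures reported in Table 2).
summary = scores.groupby(["level", "year"])["score"].agg(["mean", "std"]).round(2)
print(summary)

# Hypothetical questionnaire data: one row per respondent, Likert items coded 1-5.
survey = pd.read_csv("survey_responses.csv")     # assumed columns: age_group, gender, level, item_01, ...
# Crosstab of one item against age group, with a Chi-square test of independence.
table = pd.crosstab(survey["age_group"], survey["item_01"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p < 0.05 would indicate an association with age

In this sketch, a p-value below 0.05 for a given item would correspond to the kind of statistically significant association with age or language level reported for Research Question 2 below.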
Comparing the progress scores in 2012, 2013 and 2014 with the baseline scores in 2011, the advanced students improved their mean writing score from [7.95] in the baseline year 2011 to [10.10] in 2014, and the upper-level students' mean score increased from [7.63] in 2011 to [10.78] in 2014. In contrast, the intermediate-level students' mean score in 2014 was below the baseline mean of 2011, decreasing from [8.10] to [7.41]. The improvement in the mean scores of higher-level students indicates that students with higher language proficiency tend to hold more positive attitudes to writing, even though the mean scores of students at all three levels in 2012 were lower than the baseline means of 2011. Additionally, the results in Table 3 indicate that the COERRWP has been very effective in creating a positive attitude in students toward writing in English. The majority of students hold positive perceptions; for example, 73% responded that the COERRWP motivates them to do more writing in class or elsewhere; 59% thought that, through the project, they gained a better understanding of the importance of writing skills; 77% like having their writing edited; 83% of responses reflect that the editing guide helps improve grammar; and 66% agreed that the writing steps motivate their writing. These results demonstrate that the COERRWP has strongly encouraged students to write in English. In particular, the findings highlight that the guided questions appear to have provided students with the tools to write better, since the combination of "strongly agree" and "agree" responses amounted to 83%. Further, students' writing performance seems to have been fostered through the application of the COERRWP. This is evidenced by the analysis showing that 73% of respondents agreed that the COERRWP helps them reduce Khmer-English sentence styles; 88% agreed that it enables them to use a variety of English sentence structures; 81% thought they are able to summarize and paraphrase; more than 94% of responses indicated that students are able to write better paragraphs; and 76% thought they have improved in essay writing. The students also thought the COERRWP helps them develop ideas for writing (70%) and achieve good grades (83%) (Table 4). However, the overall responses suggest that the improvement in writing is at the macro performance level (paragraph and essay) rather than the micro performance level. More specifically, the analyses reveal that respondents have improved their writing habits and skills, especially with regard to sentence structures, paragraphs and essays.
Research Question 2
Are there significantly different impacts of the COERRWP on students in correlation with age, gender and level of language proficiency? The analysis of the writing scores of the 45 students reveals that students have positively improved their attitudes towards writing in English, yet the study also suggests a slight difference in the mean scores of female students. There could be various reasons for gender differences in writing performance; for example, female students may be more participatory, or the difference may reflect teachers' anecdotal accounts of performance when teaching writing classes with mixed genders.
As illustrated by Table 5, in the baseline year the male students' mean score was higher than that of the female students, [8.43] vs [7.50]; in contrast, the mean scores of female students gradually increased from [8.24] in 2013 to [9.63] in 2014. This reflects a change in the attitudes and performance of female students with regard to English language skills, especially writing. The issue of gender differences suggests questions for further study, in particular as regards writing in English in the Cambodian context. The writing performance of students was also assessed by mean score according to the following criteria: (a) pre-writing (2 marks), (b) topic sentence (2 marks), (c) introduction with a thesis statement (2 marks), (d) paragraph content (6 marks), (e) concluding sentence/paragraph (1 mark), and (f) layout (1 mark). Comparing the mean scores for these criteria, they can be seen to be above average ([1.32] for pre-writing, [1.30] for topic sentence, [3.50] ...). The results of assessing scores by these criteria revealed that students' grades are above average except for the mean scores for lexico-grammar and referencing. For example, the mean score for cause-and-effect connectors is below average at [0.13], and the mean score for using metaphors is also below average at [0.40]. These figures indicate that knowledge and/or accurate use of sentence structure is still limited among advanced-level students. In short, there is no overall pattern of positive results in writing performance on each test item, but the remaining mean scores suggest that students tend to write better with regard to paragraph or essay structure than in the detailed application of linguistic accuracy in their actual writing performance. In correlation with age, the Chi-square test results (Table 6) indicate that nearly all p-values are greater than 0.05, apart from those for the following statements, whose p-values are less than 0.05 and which are therefore statistically significant: "I feel that the activities set in each writing step motivate me" (p = 0.024 < 0.05); "The writing project made me understand the importance of the writing process" (p = 0.001 < 0.05); "The writing project helps me to get better grades from the writing tests" (p = 0.050). These results imply that the effect of the COERRWP correlates significantly with the age of students in terms of encouraging them to write, guiding them to better understand the importance of writing, and helping them achieve better scores. However, the Chi-square results indicate no statistically significant association between the Writing Project and gender, since the p-values of all related statements are greater than 0.05. In contrast, the Chi-square test reports statistically significant associations between the COERRWP and language level, as represented by the following statements: "The project taught me to reduce Khmer sentence styles" (p = 0.044 < 0.05); "The writing project made me realize that writing is an essential skill to master for both academic and career purposes" (p = 0.031 < 0.05); "The project has guided me to write better essays" (p = 0.016 < 0.05). These results reveal that students who study at a higher level are able to see the correlative impacts of the COERRWP on their academic studies and careers; hence, they feel that the project is really useful for them. Most importantly, the results imply that students at higher language levels have clear purposes with regard to academia and employment.
Therefore, the project has had a positive effect on the development of language proficiency, in that students feel it has scaffolded them through the necessary steps that will enable them to write better essays in English. However, significant effects could only be ascertained for items correlated with age and language level, not gender.
Discussion
This study reveals that the COERRWP has been effective in fostering positive attitudes among students towards writing in English because it was structurally designed as a series of mini-lessons to help students gain confidence in applying the writing steps to their writing. As noted by scholars, "writing skill has correlated with the development or maintenance of positive attitudes of students" [12], through "an integrated learning context, motivation and language achievement" [11], and with "a course-specific design relevant to the learner's interest, expectancy and satisfaction" [10]. However, students' actual performance in accuracy, lexico-grammar and citation remained below average, as indicated in Table 4. This is perhaps because the COERRWP design is best suited to, and aimed at, improving overall (macro) writing performance rather than micro performance or detailed accuracy. The low performance on these items might also be partly accounted for by the strong influence of Khmer language structures. As highlighted in the literature, Chinese students' writing can decline in language accuracy due to direct or inappropriate translation from Chinese to English and a lack of confidence among students in peer-editing [19,22]. On the other hand, the impact of the COERRWP appears to correlate significantly with age and language level, but not gender, although the mean scores of female students tend to improve more in writing (Table 6). As indicated by previous studies, boys have far less interest in writing than girls [23]; hence, boys have lower performance indicators in writing as well as reading [6]. Moreover, an unexpected result of this study is that students' attitudes and performance relate to "the educational context", a term proposed by Gardner [11], or "an external connection", suggested by Schmidt [29]; both terms are used to define variables that link learners' needs with the components of the education system, including the quality of the program, the professionalism and skills of the teacher, the teaching and learning materials, the curriculum, the learning environment, and so on. This claim is supported by Tweed and Som [31], who reassert that English has become a core subject for Cambodian students since English knowledge is regarded as one of the key employability skills in the ASEAN region. Therefore, the application of the COERRWP seems to have coincided with the current situation, enhancing students' interest in English writing in light of its strong connections with local and regional contexts. However, the poor performance in lexico-grammar among Cambodian students may reflect weaknesses in the COERRWP components and/or the feedback from teachers, even though the guided questions, particularly those for the revising and editing steps, were designed to improve students' language accuracy. The lack of opportunities for Cambodian students to express ideas through writing in English in social life may also be one of the factors affecting their English skills. As supported by Chan [2], writing motivation may increase when students choose to write on their own topics about real-life problems.
This indicates that English education in Battambang province lacks "sociability", a term associated with Schmidt [29], or "a target language community", in the sense of Norton and Gao [24], which has been considered a key factor in arousing students' motivation and one possible way to enhance students' motivation in writing [30].
Conclusions
In conclusion, the COERRWP has had positive impacts on students' attitudes and on improvement in macro or overall writing performance. First, the COERRWP has helped students gradually improve their writing performance and increase their writing scores over the four years from 2011 to 2014. Second, the COERRWP has motivated students to write better paragraph and essay structures and to improve other macro writing performance indicators, such as being able to do pre-writing, write topic sentences, provide supporting details, organize their writing, and construct a thesis statement. Third, the COERRWP has significantly impacted students' attitudes and performance in correlation with age and language level, since senior students at higher levels tend to receive better scores. The project has therefore sparked students' interest in studying writing, using peer-editing, and developing language proficiency, so that they feel confident writing in the target language (English) and upgrading their writing skills.
Recommendations
Although the COERRWP has positively impacted students' attitudes and their writing performance, the COERRWP components and teaching process should be modified to increase individual students' participation in writing. First, teachers should invest more time, especially with poor writers; by doing so, teachers can increase students' motivation and performance in language accuracy. Individual feedback may also help students feel less anxious about writing. The COERRWP could extend the writing activities beyond classroom settings, for example by including writing competitions or writing clubs, so that students become more socially engaged with writing in English. Further professional development, such as weekly or monthly writing courses, should also be undertaken to strengthen the quality of teachers and their teaching, so that they are better able to employ the COERRWP effectively in class, thereby enabling students to construct better English sentence structures and improve their lexico-grammar in writing.
Limitations
The reliability of the writing scores obtained from the Center and the sampling method can be considered limitations of this study. A larger sample size should be employed to generate more reliable, valid, and statistically significant findings; however, the study focuses on the case of the COERR Language Skills Center. Later studies should include control groups, and the assessment of writing should be designed with checklists or essay assessment scales to ensure the reliability and consistency of the results. Further studies using other methodologies, including mixed methods and/or qualitative or action research, might also generate different findings or corroborate these findings. Future studies should also include larger populations, such as teachers, students and school administrators. This inclusion may generate more reliable and valid results.
Therefore, the relationship between students' attitudes and writing skills needs to be further investigated by other researchers, in similar contexts, using different methods to ensure reliability and make it possible to generalize these findings.
v3-fos-license
2019-03-16T13:07:39.002Z
2016-06-21T00:00:00.000
55363534
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.omicsonline.org/open-access/occurrence-of-multidrug-resistance-among-e-coli-o157h7-isolated-fromstool-samples-obtained-from-hospitalized-children-2329-8901-1000150.pdf", "pdf_hash": "39bd24f2a18ba74f2508cc30e10dbb16f1460fad", "pdf_src": "Unpaywall", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43556", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "39bd24f2a18ba74f2508cc30e10dbb16f1460fad", "year": 2016 }
pes2o/s2orc
Occurrence of Multidrug Resistance among E. coli O157:H7 Isolated from Stool Samples Obtained from Hospitalized Children
A survey of the antimicrobial resistance patterns of Escherichia coli O157:H7 strains obtained from stool samples collected from children with diarrhea attending General Hospital Warri, General Hospital Agbor, Eku General Hospital and the University of Benin Teaching Hospital was carried out. All isolates were obtained using standard microbiological and biochemical procedures. Serological analysis to detect E. coli O157 strains was carried out using the dry spot E. coli O157 test kit. Antimicrobial susceptibility testing was carried out using the disc diffusion method. A total of 46 Escherichia coli isolates were obtained from the 60 stool samples. All Escherichia coli isolated were 100% resistant to cefixime. The lowest level of resistance was observed for nitrofurantoin (15%). Serotype O157 isolates exhibited 100% resistance to ceftazidime, cefuroxime and cefixime. The ability of E. coli O157 strains to transfer antimicrobial resistance traits by conjugation was assessed using Pseudomonas aeruginosa as the recipient, and a high level of resistance transfer was observed. The ease of transfer exhibited by E. coli O157 strains from children in this study is an issue of concern. As such, early identification and an understanding of the epidemiology of this resistance will enable the development of preventive strategies that can curtail this emerging resistance, thereby facilitating a timely and appropriate public health response.
Introduction
The continued spread of multidrug-resistant pathogens remains a huge public health problem worldwide. Chemotherapeutic agents employed in the treatment of serious infections have experienced steadily diminishing efficacy due to this scourge. This is even more pronounced in developing countries, where misuse of drugs, poor regulation of over-the-counter drug sales, inadequately equipped diagnostic laboratories and relatively inadequate healthcare provision, among other factors that promote antibiotic resistance, are widespread [1]. Diarrhoea is one of the leading causes of death in children in developing countries [2]. UNICEF/WHO [3] describe diarrhoea as the passage of loose or watery stools at least three times per day, or more frequently than normal for an individual. E. coli O157:H7 infections in children under the age of 5 have been associated with risk factors such as domestic use of contaminated water, premature weaning, bottle-feeding, and malnutrition [3][4][5]. In its 2008 report, the World Health Organization stated that the highest number of diarrheagenic E. coli isolated in their study belonged to the O157:H7 serogroup [6]. Several studies have documented a decline in the mortality rates of diarrhoeal infections in children worldwide and have suggested that this might be due to improved exclusive breastfeeding practices as well as increased awareness of the efficacy of oral rehydration treatments in reducing mortality from these infections [7,8]. Despite this decline, however, the burden of diarrhoea and its mortality in children persists in developing countries; it is reported that Africa and Asia account for 80% of child deaths due to diarrhoea, with Nigeria ranking second with an estimated annual total of 151,700 child deaths due to diarrhoea [3]. Limited reports on the occurrence of E.
coli O157:H7 infections in children in Nigeria, and more specifically in Delta State, together with the increasing documentation of multidrug-resistant pathogens within this serotype, have necessitated this study. This study was designed to detect the occurrence of multidrug-resistant E. coli O157:H7 serotypes in children (0-5 yrs) with diarrhoea at Central Hospital Warri, Delta State; Central Hospital, Agbor, Delta State; Eku General Hospital, Delta State; and the University of Benin Teaching Hospital, Edo State, as well as to determine the transmissibility of the plasmid-borne resistance genes through conjugation experiments.
Sample collection
Stool samples were collected from children (0-5 yrs) with diarrhea using sterile, labelled universal containers. A total of sixty samples were collected in sterile leak-proof universal containers within 4 months. These were transported immediately in ice packs to the Delta State University Microbiology laboratory for analysis.
Isolation and identification of E. coli O157:H7
The samples were inoculated into 5 ml of sterile MacConkey broth in sterile test tubes and incubated for 24 hours. A loopful was taken from each test tube, inoculated onto freshly prepared Eosin methylene blue (EMB) agar plates, and incubated at 37°C for 24 hours. Observed colonies were subcultured onto freshly prepared nutrient agar plates and incubated at 37°C for 24 hours. Presumptive E. coli colonies were subjected to confirmatory Gram staining and biochemical tests as described by Cheesebrough [9]. Confirmation of E. coli O157 was done by testing for agglutination with E. coli O157 antisera (Oxoid).
Curing of isolates
Multidrug-resistant isolates that were resistant to gentamicin and sensitive to nitrofurantoin were selected for plasmid curing [11]. Plasmid curing was carried out using sodium dodecyl sulphate (SDS). An SDS solution was added to Luria-Bertani (LB) broth: normal-strength LB was prepared, and an inoculum of 100 µL of the isolate was used to seed the SDS-containing LB. The cultures were incubated overnight with shaking at 45°C and subcultured for 6 days. A dilution from the culture was plated on EMB agar. To confirm loss of antimicrobial resistance, antimicrobial susceptibility testing was carried out as previously described.
Conjugation experiment
Conjugation was carried out using the methods described by Thompson [11,12] and Kreuzer and Massey [13]. A Pseudomonas aeruginosa strain that is gentamicin-sensitive and nitrofurantoin-resistant, obtained from the Department of Microbiology, Delta State University research laboratory, was used as the recipient. Donor and recipient strains were incubated separately in nutrient broth at 37°C for 24 hrs. Fifty microliters (50 µl) each of the donor and recipient broth cultures were transferred to the same spots on Mueller-Hinton agar plates supplemented with nitrofurantoin (30 µg/ml) and gentamicin (30 µg/ml) for selection of transconjugants. Incubation followed at 37°C for 24 hours. The transconjugants were then screened for antibiotic resistance as previously described.
Results and Discussion
Diarrhea remains a global public health problem, with a significantly higher burden in developing countries, as evidenced by the higher incidence of childhood morbidity and mortality due to diarrhea there [8,14,15]. In several reports of diarrhea incidence in children, Escherichia coli have been implicated as important etiologic agents [15][16][17].
In this study, a total of 46 non-repetitive Escherichia coli isolates were obtained from the 60 samples cultured, corroborating the aforementioned findings. A more alarming trend, however, is the increasingly high prevalence of multidrug-resistant diarrheagenic E. coli, especially in developing countries. The rate of multidrug resistance in this study is in concordance with reports that have documented high resistance in E. coli causing diarrheal infections in children [18][19][20]. A total of 45 of the 46 isolates obtained in this study were resistant to at least 3 of the antibiotics used, with the highest resistance rates observed against the cephalosporins, viz. cefixime (100%), cefuroxime (98%), and ceftazidime (91%) (Tables 1 and 2). Much of the reason for these high rates of resistance relates to reports showing that antibiotics, despite not being required for the treatment of acute diarrhea, are widely prescribed for these infections [21]. In children, this is made worse because cheap drugs are available over the counter and the wide majority of parents, unaware that antibiotics rarely alter the course of diarrhoeal infections, administer one form of antibiotic or another to their wards whenever a diarrheal infection is suspected. The continued use and abuse of these drugs thus allows for the selection of resistant strains, which are easily disseminated. Accordingly, education of the public on the management of diarrheal infections, as well as the implementation of more stringent policies governing the availability of antibiotics, is advised. Twelve isolates (an incidence rate of 20%) from the sixty stool samples collected were identified as belonging to serogroup O157 using the dry spot E. coli O157 test kit (Oxoid). All 12 E. coli O157 isolates harbored plasmids, as indicated by the plasmid curing results. Our result is similar to other reports from India and Iraq [20][21][22] but suggests a rise in the prevalence of diarrhea due to E. coli O157:H7 in children in Nigeria [23][24][25][26]. In addition to contamination of water sources, animals consumed for food, such as cattle and goats, have been implicated as asymptomatic carriers of E. coli O157:H7 strains [27,28]. Although the exact reasons for this increase in occurrence compared with other prevalence reports were not investigated in this study, Isibor and Ekundayo [26] suggested that the previously low reported occurrence of E. coli O157:H7 in Nigeria could be attributed to the inability of many medical laboratories in the country to detect its presence. Furthermore, in Nigeria, most clinicians do not readily request the specific culture of these strains, much less in infected children. Multidrug resistance has also been reported in E. coli O157 strains. The antibiotic resistance patterns of the E. coli O157 isolates are shown in Tables 3 and 4. All Escherichia coli O157 isolates were 100% resistant to ceftazidime, cefuroxime and cefixime, 67% resistant to augmentin and ciprofloxacin, 58% resistant to gentamicin, 42% resistant to ofloxacin and 17% resistant to nitrofurantoin. In addition to the abuse of antibiotics by parents in the control of diarrheal infections in children, as stated earlier, the indiscriminate use of antibiotics by livestock farmers could also have contributed to these high resistance rates. These high multidrug resistance rates of E.
coli O157:H7 isolates, while alarming, are consistent with other findings in Nigeria and in other developing countries [28][29][30]. This represents a major problem not just for the chemotherapeutic control of diarrheal infections but also for other infections caused by several genera within the family Enterobacteriaceae. This is largely due to the ease of transmissibility of these resistance genes from E. coli cells to other members of the family Enterobacteriaceae and to pseudomonads, which could greatly confound the treatment of other non-diarrheal E. coli infections as well as infections by Enterobacteriaceae and pseudomonads. To assess the transmissibility of these resistance genes, a conjugation experiment was carried out. Table 4 also shows the antimicrobial resistance patterns of the transconjugant Pseudomonas aeruginosa strain used as the recipient in the experiment. Conjugation in this experiment was 100% efficient for the transfer of ceftazidime, cefuroxime, cefixime, augmentin, and gentamicin resistance traits, while 75% of the ofloxacin-resistant isolates transferred their resistance markers. Transfer of resistance traits by conjugation was less efficient for ciprofloxacin, as only 2 out of 8 (25%) ciprofloxacin-resistant E. coli isolates transferred ciprofloxacin resistance markers. Nitrofurantoin resistance was not accounted for because the recipient strain harboured nitrofurantoin resistance. There are several reports of high-efficiency transfer of antimicrobial resistance traits by conjugation in E. coli O157 strains [21,31]. The potentially grave implications of this relate to the fact that several factors within developing countries, including urban migration, overcrowding and improper sewage disposal, allow the easy exchange of antibiotic-resistant bacteria between individuals as well as the exchange of resistance genes among bacteria [1]. The inability of the E. coli O157 strains to transfer nitrofurantoin resistance could be due to the presence of the resistance markers on the chromosome, although it is essential to point out that transferrable plasmid-mediated resistance remains an important mechanism of nitrofurantoin resistance in Escherichia coli [32]. The emergence of multidrug resistance in the already notorious pathogen E. coli O157 in Nigeria has grave public health consequences. Moreover, this study has highlighted the ease with which resistance markers to commonly used antibiotics can be transferred, even inter-generically. This calls for urgent measures to control this scourge. Efficient management of the spread of this resistant serotype requires the involvement of many stakeholders. There is a need for effective government policies to strictly control the availability of certain drugs to the general population; in Nigeria, even prescription drugs can be readily obtained over the counter without a prescription from a clinician. The government is also saddled with the responsibility of providing better healthcare and diagnostic facilities to improve the detection of serotypes such as E. coli O157, which are not detectable using the existing diagnostic protocols in the country. Also, the administration of antibiotics by healthcare providers needs to be strictly based on laboratory sensitivity test results, whilst reducing empirical administration to the barest minimum.
Finally, extensive public education programs on the hazards of self-medication, as well as on the non-antibiotic management of acute diarrhea in children, need to be instituted.
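For readers who wish to see how headline figures of the kind reported above are tabulated, the following minimal sketch computes per-antibiotic resistance percentages and counts isolates resistant to at least three agents (the multidrug-resistance criterion used in this study). The data layout and values are hypothetical and purely illustrative; the study's actual results are those in Tables 1-4.

# Illustrative only: hypothetical isolate-by-antibiotic susceptibility matrix
# ("R" = resistant, "S" = susceptible), not the study's dataset.
import pandas as pd

susceptibility = pd.DataFrame(
    {
        "cefixime":       ["R", "R", "R", "R"],
        "cefuroxime":     ["R", "R", "R", "S"],
        "ceftazidime":    ["R", "R", "S", "R"],
        "gentamicin":     ["R", "S", "S", "S"],
        "nitrofurantoin": ["S", "S", "R", "S"],
    },
    index=["iso1", "iso2", "iso3", "iso4"],
)

resistant = susceptibility.eq("R")

# Percentage of isolates resistant to each antibiotic.
resistance_rate = resistant.mean().mul(100).round(1)
print(resistance_rate)

# Multidrug resistance: resistant to at least three of the antibiotics tested.
mdr_count = (resistant.sum(axis=1) >= 3).sum()
print(f"MDR isolates: {mdr_count} of {len(susceptibility)}")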
v3-fos-license
2019-05-20T13:04:56.692Z
2018-04-29T00:00:00.000
158586508
{ "extfieldsofstudy": [ "Political Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://antitraffickingreview.org/index.php/atrjournal/article/download/323/265", "pdf_hash": "6aa24df495f50073ad4c5c8f2aad6d1c699fa51d", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43559", "s2fieldsofstudy": [ "Political Science" ], "sha1": "cff32440209d42e4100b62079eea21f08afa8941", "year": 2018 }
pes2o/s2orc
From Passive Victims to Partners in Their Own Reintegration: Civil society's role in empowering returned Thai fishermen
Despite the significant international attention to human trafficking in the fishing industry in Southeast Asia, victims continue to experience poor outcomes after their return to Thailand. The Labour Rights Promotion Network (LPN) has assisted many returned fishermen in the difficult journey that begins after their rescue and repatriation. In this paper, we argue that the poor outcomes are the product of systemic failures in the aftercare processes, which are not sufficiently victim-centred and discourage trafficked fishermen's participation in prosecutions. This is the case in the criminal justice system, where flaws in victim identification and evidence collection can undermine trafficked persons' rights and make it extremely difficult for them to obtain compensation—a significant factor in their recovery and reintegration. This same cycle of disenfranchisement is pervasive in reintegration services at large in Thailand, many of which are overly paternalistic and neglect survivors' individual needs and interests. Civil society organisations can remediate these problems by supporting the government in its efforts to strengthen prosecutions and make the criminal justice system more victim-friendly. More broadly, civil society can contribute to a victim-centred approach that places aftercare in a larger perspective—one that extends beyond the purview of the criminal justice system. This paper will examine two emerging models in post-trafficking service provision: Unconditional Cash Transfers (UCTs) and volunteer social networks, which recognise victim empowerment not just as a means towards better law enforcement, but as an end in itself.
Introduction
The year 2015 marked a turning point in the fight against human trafficking in the Thai fishing industry. A rescue operation of stranded Burmese, Thai, Cambodian, and Lao fishermen in Indonesian waters brought international attention to human trafficking in Southeast Asia. The rescue operations were the culmination of a series of exposés published by four Associated Press (AP) reporters,1 which chronicled how the Thai fishing industry was exploiting workers in slave-like conditions to supply seafood to American supermarkets and restaurants. The series documented how thousands of impoverished labourers were lured into captivity, locked in cages, beaten, subjected to sleep deprivation, and forced to perform dangerous work to catch and process seafood. The impact of this report cannot be overstated. Due to the efforts of the International Organization for Migration and the Indonesian government, more than 2,000 captives were released from a 'slave island' in Indonesia, a scale not seen before in human trafficking cases. It led to the arrests of a dozen people, the seizure of ships worth millions of dollars, the introduction of legislation in the US Congress to create greater transparency for food suppliers, as well as a threat from the European Union (EU) to completely ban Thai fish imports.2
1 The Associated Press, 'Seafood from Slaves. An AP investigation helps free slaves in the 21st century', Associated Press, retrieved 11 June 2017, https://www.ap.org/explore/seafood-from-slaves/.
2 A Nelsen, 'EU Threatens Thailand with Trade Ban over Illegal Fishing', The Guardian, 21 April 2015, retrieved 11 November 2017, https://www.theguardian.com/environment/2015/apr/21/eu-threatens-thailand-with-trade-ban-over-illegal-fishing.
While the renewed international pressure and attention forced the Royal Thai Government (RTG) to enact important reforms to address human trafficking in the seafood industry, this was not the end of the story for the almost 1,500 Thai fishermen who returned home from Indonesia after years, sometimes even decades, of abuse. This paper examines the enormous challenges trafficked fishermen face after their rescue, drawing on the frontline anti-trafficking work conducted by the Labour Rights Promotion Network Foundation (LPN), a Thai labour rights NGO based in the port city of Samut Sakhon. LPN played an integral part in the 2015 rescue operations and provided direct assistance (food, shelter, legal services, healthcare) to the approximately 300 trafficked Thai fishermen it helped repatriate from Indonesia. The paper builds on this case study, using data collected through semi-structured interviews with LPN staff and service beneficiaries, as well as trafficking case statistics compiled by LPN and the RTG between 2014 and 2016. Expanding on this data and secondary research, this paper argues that Thailand's post-trafficking aftercare system undermines trafficked fishermen's reintegration prospects, primarily through its failure to provide victims with access to financial compensation for the losses and damages they suffered during their trafficking ordeal. The article is divided into three sections. The first examines how obstacles to providing legal redress to victims through successful prosecutions are exacerbated by victim assistance programmes that discourage trafficked persons' participation in the judicial process. The second section explores how the RTG and civil society can address these challenges by developing criminal justice interventions that marry the desired goals of prosecution and conviction with the needs and rights of victims. The third and final section considers the limitations of these interventions by arguing that the criminal justice system was designed to prosecute and punish criminals, not to protect victims. The paper contends that civil society is better placed to develop innovative integration models that place victims' needs and interests at the very centre of the aftercare system. Unconditional Cash Transfer (UCT) programmes and volunteer social networks will be showcased as two effective grassroots approaches that empower survivors from the bottom up.
Trafficking in Persons Prosecutions in Thailand
Human trafficking can be a complex transnational crime that overlaps with other criminal activities, involves many different actors, and poses inherent challenges to mounting a successful prosecution. These challenges are compounded when applied to less developed criminal justice systems whose legal frameworks and mechanisms do not properly protect victims' rights and do not adequately address the specific hurdles that victims face in building their case. Successful prosecutions for human trafficking remain particularly challenging in Thailand. Only fifty-seven of the 1,476 Thai fishermen rescued from Indonesia in 2015 3 pursued a trafficking case against their exploiters, and of these, not one obtained a successful conviction. While government reforms have addressed many flaws in the legal system that posed obstacles to a successful prosecution, for instance, by improving identification of victims and streamlining the evidence collection process, the poor application of procedures continues to disadvantage victims.
In its current form, the prosecution system is not victim-friendly and often 'leads to poor quality, unfair and unsafe prosecutions that do not respect basic criminal justice standards'.4 Our first-hand experience working with the Thai fishermen rescued in Indonesia has allowed us to identify critical areas where the criminal justice system continues to produce poor prosecution rates and discourages victims' participation, robbing them of the justice they so desperately need and rightfully deserve.
Obstacles to Effective Prosecution
The failures of trafficking in persons (TIP) prosecutions seem to occur downstream in the lead-up to prosecution, beginning with victim identification. Despite the implementation of important reforms in the past years, only forty-three TIP cases involving workers in the fishing sector were under investigation in 2016 (The Royal Thai Government, Thailand's Country Report on Anti-Human Trafficking Response, 2017, p. 41). This number is extremely low compared to the estimated scale of the problem, especially given that there are approximately 145,000 workers in the Thai seafood industry.6 In our view, the misidentification of trafficked fishermen can be attributed in part to the inherent difficulties of recognising the act, means, and purpose of human trafficking. The definition put forward in the United Nations Trafficking Protocol, which serves as the basis for the definition of human trafficking in Thailand's Anti-Trafficking Act B.E. 2551 (2008), describes trafficking as the recruitment, transportation, harbouring, or receipt of persons by means of threat, force, or other forms of coercion, with the purpose of exploitation.7 While the development of an international legal definition was a 'genuine breakthrough'8 in that it helped establish a binding normative framework for trafficking cases, key elements of the Trafficking Protocol's definition have been criticised for being relatively broad and open-ended. Essential terms and concepts such as the 'abuse of a position of vulnerability', 'consent', or 'exploitation' are vague and undefined, resulting in fluid parameters that leave room for interpretations of human trafficking that can either be too expansive or too narrow. These definitional ambiguities 'cause significant problems at the national level where criminal justice agencies in particular struggle to draw an appropriate line between the crime of "trafficking" and other forms of exploitation'9 such as prostitution or forced begging. These inherent challenges are exacerbated when law enforcement officials or first responders are not properly trained, or identification procedures are not standardised or consistently applied. The 2017 US State Department Trafficking in Persons (TIP) Report on Thailand describes how officials continue to fail to recognise non-physical indicators of trafficking such as debt bondage or deception.10 One NGO worker quoted in a recent study explained, 'We (NGOs) don't have a clear idea about how the police decide who is a victim and who is not…. It is not a transparent process and the police do not always explain why cases are accepted as victims of human trafficking or not.'11 The complex nature of the activities associated with human trafficking also makes trafficking cases inherently difficult to prove. The people involved in human trafficking conduct a sophisticated and complex web of operations involving multiple levels of intermediaries (e.g.
labour brokers, middlemen, employment agencies, or recruiters) who may operate in relative legality, making links between the accused and the victim extremely hard to follow and even harder to substantiate. What is more, trafficking in the fishing sector may occur under the jurisdiction of several countries and fall under the purview of a myriad of different national agencies, such as the Navy, police, Department of Fisheries, and Ministry of Labour. In Thailand, the close partnerships required to build evidence for a successful case are hindered by weak interagency coordination and poor cooperation between the prosecution and law enforcement. The ability of most governments to gather evidence is also seriously compromised by overreliance on trafficked persons' testimonies. Survivors may be unable to recall specific facts or events due to trauma or the sheer long-term nature of their ordeal. They may also be unwilling to cooperate due to intimidation from their traffickers, a problem that corruption and poor witness protection may accentuate.
Case Study
Somchai (not his real name), now twenty-one years old, is a living example of the failures of the victim identification process. Trafficked on a fishing boat at the age of fourteen, he was made to work eighteen-hour days in difficult and often dangerous conditions, continuously fixing nets, pulling in and sorting fish, and moving them below deck. He remembers working without sleep for three days at a time and being caged like an animal. He reports watching as crew members were savagely beaten until dead or unconscious, and their bodies thrown into the sea. As he got older in this brutal culture, he was forced to fight to survive. Somchai eventually escaped from his boat during a port inspection on Ambon Island (Indonesia) and was found by LPN during one of its initial rescue operations in 2014. After being repatriated in a Royal Thai Army plane, Somchai immediately went through the government's trafficking victim identification process. After it came to light that he had initially joined the boat willingly, and seeing that he had no obvious signs of abuse, the multidisciplinary team tasked with victim identification ruled that he was not a victim of trafficking. As a result, LPN could not help Somchai mount a trafficking case against his employer or labour broker. Instead, it was forced to make a complaint for unpaid wages to the Ministry of Labour. At the labour court mediation, the government mediator, along with the employer, barred LPN from accompanying Somchai during the proceedings. Somchai was then convinced to settle for compensation of THB 50,000 (around USD 1,450) for three years of exploitation.12
Paternalistic Victim Assistance Programmes
In LPN's experience, the Thai criminal justice system's deficiencies are further exacerbated by low rates of victim participation in the judicial process. Government victim assistance programmes often fail to properly consider victims' individual needs and interests, undermining their ability and willingness to effectively cooperate in prosecutions. The disregard for victims is first apparent during initial identification, when victims may be pressured into acting as witnesses without due consideration of their physical or mental state.
Law enforcement officials tasked with identification often disregard factors such as gender, immigration status, fear of reprisals, trauma, language barriers, and cultural background, which may all constitute significant barriers to victims' cooperation. Moreover, in the name of witness protection, government-run shelters restrict a trafficked person's freedom, mobility, and employment opportunities. Shelters can be overly paternalistic and may dissuade victims from cooperating with law enforcement if they believe long stays will cause them to forego livelihood opportunities.13 Assistance programmes that are not well adapted to victims' needs or interests undermine the criminal justice system's ability to deliver redress for victims. Trafficked persons who are not properly supported and protected are less likely to report the crime and contribute to investigations by identifying and testifying against the offenders. As a consequence, 'criminal justice systems lose important evidence and are unable to enforce criminal law against traffickers'.14 This leads to a self-perpetuating cycle whereby victims' lack of participation in the judicial process renders TIP prosecutions even less effective, providing even greater disincentives for trafficked persons to come forward and cooperate.
Why Compensation Matters
Because of the inherent challenges in mounting a successful trafficking case, the legal system has been unable to provide rescued fishermen with the compensation they deserve. In our experience, returned fishermen's inability to obtain compensation poses a significant obstacle to their reintegration. 'For victims of trafficking, access to financial compensation is crucial. It helps them to rebuild their lives and prevent falling back into the hands of the traffickers. It can also go some way to making up for the pain and financial losses they have suffered.'16 One recent study on the reintegration of trafficked persons in the Greater Mekong Subregion found that 'economic empowerment' was often the primary need identified by trafficked persons because of the debt they incur during migration and the difficulties they face finding work after returning home.17 Besides providing survivors with the financial means to support themselves and their families without having to pursue risky job opportunities, compensation also 'counters the contributing vulnerability factors of poverty and deprivation in human trafficking'.18 Unfortunately, LPN's own experience working with the group of around 300 Thai fishermen rescued from Indonesia illustrates the difficulties victims face in obtaining adequate compensation. Just thirty-nine19 of these men were officially recognised as victims of human trafficking in the period from August 2014 to August 2015. Not one has obtained a conviction or received subsequent compensation under human trafficking laws so far. Identified victims are entitled to financial assistance through the Anti-Trafficking in Persons Fund, which was established by the RTG in 2008 and covers expenses such as medical costs, repatriation, legal fees, a living allowance, etc. However, compensation under criminal laws is only awarded following a successful conviction.20 In the absence of such a conviction, compensation claims can only be made through the Court of First Instance in civil prosecution. This option presents a major disadvantage since victims have to pay a court fee equal to 2.5% of the claim (but not exceeding THB 200,000).21
21 Under these circumstances, initiating a complaint for unpaid wages through the labour court remains the most effective means for trafficked fishermen to obtain any type of financial redress. Each one of the 300 fishermen assisted by LPN originally approached the organisation to help them claim unpaid wages. A total of 217 pursued a wage complaint case with the Department of Labour Protection and Welfare between 2014 and 2016, 22 while the rest settled with their employer out of court with LPN and the Ministry's help. However, only about half of these 217 returned fishermen received their unpaid wages from the labour court. The rest are still in process, years after the fact. For those who did receive their back wages, it was usually just a small fraction of the amount they were owed. Most never signed contracts and were not aware of the terms of their work agreement, making it easy for their employers to cheat them out of years of salary. While successful criminal and civil prosecutions would have had the potential to award these victims larger sums of money, it should be noted that compensation in the Thai justice system is typically limited to actual damages (e.g. lost and unpaid wages and medical expenses) and may be difficult to obtain in practice. It is interesting to note that for a comparable number of claimants, the sum awarded to victims through the wage complaint system in 2016 was more than twice as high as the compensation that was disbursed through section 35 of the Anti-Trafficking in Persons Act. 23 Strengthening Prosecutions and Incentivising Victim Participation In order to improve access to justice and compensation for trafficked fishermen and facilitate their long-term reintegration, the RTG and civil society must work together to strengthen the criminal justice process and make it more victim-centred. The Human Trafficking Criminal Procedure Act, B.E. 2559 (2016), which introduces an inquisitorial system in TIP cases to make the court 'actively involved in proof taking by investigating the facts of the case', 24 has been lauded as an important step in this direction. However, significant gaps remain between government reforms and their implementation. Corruption, official complicity, or poor application of laws and procedures can limit and even undermine the effectiveness of new measures, particularly with regard to victim identification and evidence collection. Effective action is also hindered by the compartmentalisation that exists between prosecutors, police, and social service agencies. Brian Brislin, the Regional Legal Expert on Human Trafficking of the United Nations Office on Drugs and Crime, went so far as to describe the 'inability of all parties in the anti-trafficking community to come together and create a comprehensive, truly multi-sector strategy' 25 as the number one barrier to an effective anti-trafficking response in Thailand. The Organization for Security and Co-operation in Europe (OSCE) has developed a comprehensive, multistakeholder strategy to combat trafficking, dubbed the 'National Referral Mechanism' (NRM), which addresses the problem of interagency cooperation. NRMs are designed to formalise cooperation among government agencies and non-governmental organisations dealing with trafficked persons 'to ensure that the human rights of trafficked persons are respected and to provide an effective way to refer victims of trafficking to services'.
26 The OSCE offers an innovative approach to interagency cooperation that should be adopted by all anti-trafficking stakeholders in Thailand. A national multi-stakeholder approach is sorely needed to outline the respective roles and responsibilities of both state and non-state actors and clarify the nature and format of their collaboration. As it stands, civil society organisations involved in anti-trafficking can be fractious and disorganised, with conflicting styles and priorities that can impede effective collaboration with the government. As per the OSCE's recommendation, an initial country assessment should be conducted to 'determine which agencies and civil society organizations are the key stakeholders in anti-trafficking activities, which of them should participate in an NRM, what structure might be most effective…and what issues require most attention'. 27 Only when all agencies and stakeholders that deal with human trafficking are coordinated in their efforts can some of the most serious obstacles to interagency cooperation be addressed. Incentivising Survivors' Collaboration State and civil society stakeholders can also help strengthen the criminal justice system by placing greater emphasis on trafficked persons' individual needs and interests throughout the aftercare process. Research shows that countries with the most comprehensive measures for assisting victims (e.g. Belgium, Italy, the Netherlands, United States) fare better in prosecuting and convicting traffickers for various crimes. 28 One model developed by the Council of Europe Convention on Action against Trafficking in Human Beings serves as a good example of how government protection and assistance measures can respect victims' needs while encouraging their participation in criminal justice proceedings. Article 13 of the Convention recommends that countries 'introduce a recovery and reflection period of at least thirty days' to 'give the individual a chance to recover and to escape the influence of traffickers and/or to make an informed decision on co-operating with the authorities'. 29 A key stipulation attached to the 'recovery and reflection' period is that assistance not be made conditional on victims' willingness to act as witnesses. This human rights-centred approach has been shown to be effective in the countries where it has been implemented. In Belgium and the Netherlands, victims who were granted the reflection period were more likely to press charges against their traffickers. 30 The OSCE further builds on the Convention's model by recommending that assistance be extended to 'presumed' victims that may not have been formally identified as soon as 'the competent authorities have the slightest indication that she or he has been subject to the crime of trafficking'. 31 Introducing the concept of 'presumed victims' to the aftercare system is essential to making prosecutions more effective and victim-friendly. Not only does this concept provide better protection of probable victims who may be reluctant to be identified, it allows the criminal justice system to retain potential witnesses that would have otherwise been unable to cooperate in prosecutions. While interventions that make the criminal justice system more effective, efficient, and victim-friendly provide an important way forward, the government and civil society must also work together to address the economic disincentives that discourage victims from cooperating in prosecutions.
One way to encourage trafficked persons' participation in the legal process is through financial assistance and compensation. A recent amendment to the Anti-Money Laundering Act allows traffickers' seized assets to be used to compensate victims. The amendment addresses what was previously a major flaw in the victim compensation scheme: offenders' inability to pay or unwillingness to comply with the court order effectively denied victims their compensation. More recently, the Human Trafficking Criminal Procedure Act, B.E. 2559 (2016), has authorised Thai courts 'to increase restitution for victims as appropriate in a form of punitive damages' in 'cases of wrongdoings that involve cruelty, detention, imprisonment, physical abuse, or persecution that are deemed inhumane and serious'. 32 The RTG has also taken steps to improve employment and earning opportunities for victims staying in government shelters. According to the RTG's report on its anti-trafficking response for the year 2016, employment opportunities were provided to 196 out of 561 victims both inside and outside shelters, a 350.1 per cent increase compared to 2015. 33 However, it should be noted that significant gaps remain between the positive measures described above and their implementation. Traffickers can hide away their assets or transfer them to friends or relatives before seizure, limiting the effectiveness of the recent amendment to the Anti-Money Laundering Act. And despite positive changes, LPN has seen how the government's economic assistance and empowerment programmes remain overly paternalistic and continue to undermine victims' rights. Towards More Empowering Forms of Assistance Despite the implementation of victim-centred criminal justice reforms 'that marry the desired goals of policing and punishment of traffickers with the needs and rights of trafficking victims', 34 the judicial system is limited in its ability to provide victims with interventions centred in their needs. The fact remains that the government privileges a criminal justice approach to human trafficking that places more emphasis on prosecuting perpetrators and securing convictions than on supporting victims' rights. The RTG has been under considerable pressure to satisfy the United States TIP Report's appetite for prosecution numbers and avoid the political embarrassment and potential economic sanctions associated with a downgrade in its ranking. As a result, of the '3Ps' (prevention, protection, prosecution), prosecutions have tended to receive the most attention. We have seen how this approach not only diverts attention away from victims' rights but may also violate their rights in the process and discourage them from even participating in prosecutions. More fundamentally, however, the disregard for crime victims has its origins in the criminal justice system itself, 'since it was established in order to control crime, but not necessarily to support crime victims'. 35 While the judicial system has the potential to further victims' interests by convicting their abusers and awarding them compensation, this has proved elusive in practice. It can therefore be said that the disregard for victims is inevitable in the criminal justice system. Because civil society organisations are non-state actors that are not driven by the imperative to prosecute, they are better placed to provide grassroots interventions that empower survivors and facilitate their long-term reintegration.
Civil society can use its close interactions with the individuals and communities affected by human trafficking to develop innovative reintegration models that place victim empowerment at the core of the aftercare system. Unconditional Cash Transfers One way the government or civil society actors can support trafficked persons is by empowering them financially immediately after their rescue. Unconditional Cash Transfers (UCTs) offer financial support to victims and allow them to meet their individual needs. The premise is fairly straightforward: provide recipients with a series of cash transfers and leave the management of those funds entirely up to them. Until recently, mainstream development and aid organisations were sceptical about this approach, expressing concerns that recipients might waste their transfers on non-essential items like alcohol. However, recent studies conducted around the world have shown that these concerns are largely unfounded. Recipients of cash grants tend to invest their money wisely or spend it on such basic items as food and better shelter. 36 The Issara Institute, a Bangkok-based migrant rights NGO, provided UCTs to 174 victims of human trafficking in a pilot project from 2015 to 2016. Fifty-four of the participants were former fishermen who had been rescued from Indonesia. The evaluation of the pilot found no negative effects at the individual, household, or community level and confirmed the hypothesis that trafficked persons could manage cash grants responsibly. The findings of the study also indicated that UCTs could help address some of the inherent challenges associated with administering economic assistance programmes. Providing individualised support is costly and complex, as different individuals may have different needs at different stages of their recovery. UCTs resolve this problem by making beneficiaries responsible for meeting their own needs. They are therefore an attractive reintegration model in that they empower victims from the bottom up while enhancing the effectiveness and efficiency of service provision. 37 Volunteer Network Groups While we believe UCTs offer a promising model for economic empowerment, it is important for service providers to develop programmes that empower trafficked persons beyond the economic sphere. Trafficking survivors are 'forced physically and mentally to do things against their will and have to stand the use of force, coercion, abuse, or even torture'. As a result, many feel 'degraded in their identity'. 38 Victim assistance programmes must therefore address the psychological factors of agency and self-worth. While recent government improvements in shelter conditions have addressed some of these needs by developing empowering activities for victims, these programmes are often imposed in a top-down manner. In our view, trafficked persons need 'to become independent and self-sufficient and be actively involved in their recovery and reintegration'. 39 Over the past few years, LPN has developed volunteer networks of rescued fishermen, many of whom were trafficked and experienced abuse. One example, the Thai and Migrants Fishers Union Group (TMFG), operates under a rather straightforward premise. While the network's organisational structure has been laid out by LPN, the TMFG is entirely autonomous.
Members field calls involving labour rights complaints in the fishing sector, which can range from issues such as wage violations to cases of human trafficking. When a potential case has been identified, the group informs the authorities and helps the victims file a civil or criminal complaint to the relevant government agencies. The TMFG then accompanies victims throughout the process, gathering evidence to support their case and assisting them with vocational training and reintegration. Volunteer networks can be a particularly useful tool for reintegration because they empower survivors by turning them from passive victims into partners in their own reintegration, engaging in activities that they consider important and valuable. As Somsak, who works both as LPN's cook and as a TMFG member, explained, 'I like the work that I do. I can help other former fishermen during their prosecutions and that makes me feel proud.' 40 The volunteer network model also contributes to a two-way exchange of information that can provide a better understanding of the needs of survivors, while helping to inform best practices. The TMFG is made up of former trafficking victims who share similar socio-economic backgrounds with those they assist. They have a holistic understanding of the factors that expose people to exploitative working conditions, the ordeal they experience, and the specific challenges they face in reintegrating. The TMFG engages in direct communication with the communities it supports through in-person workshops and training activities as well as through social media. One TMFG member, Surichai, has as many as 400,000 followers on Facebook. He posts regular videos on Facebook Live with useful information for migrant workers: a single post can generate up to half a million views. This grassroots understanding of the issues and challenges victims face serves as an excellent tool for informing policy. As Sompong, the Executive Director of LPN, emphasised, 'The ultimate objective is for the group to become visible to the public and speak for itself. These fishermen can bring about change from the bottom up by using their knowledge to improve justice for abused fishermen, promote more just operating practices in the fishing sector, and help shape fishing-related policies at the government level.' 41 One major advantage of LPN's volunteer network model is that it is cost-effective, easy to implement, and can be easily replicated. Provided the question of funding is addressed, such networks can sprout up organically wherever a civil society organisation is providing assistance to a population of returned fishermen. And since they are almost entirely self-sufficient, they place little stress on an organisation's operations. One criticism that can be levelled at this model is that the high turnover associated with volunteerism might undermine the group's ability to deliver a consistent and coherent approach to service provision. However, we have not found this to be the case. While volunteers may come and go, senior TMFG staff receive a salary and ensure continuity in operations and strategic direction. LPN has already helped develop twenty such groups of volunteer migrant networks across Thailand and the number is on the rise. Conclusion Trafficked fishermen in Thailand continue to experience significant challenges in their long road to recovery and reintegration. Despite important government reforms, successful prosecutions under human trafficking laws remain extremely rare.
The vast majority of trafficked persons are never properly identified, and those that are face serious obstacles to building enough evidence to mount a case. What is more, we have seen how the process of prosecutions can actually bring further harm. Too often, survivors escape exploitation at the hands of traffickers only to be disenfranchised by the very criminal justice system and aftercare programmes that are meant to protect them. Civil society must therefore work together with the government to develop victimcentred approaches that balance the human rights of victims with the interests of effective prosecution. Several good practices in place in Europe such as the recovery and reflection period and the National Referral Mechanism offer effective models that could be implemented in Thailand. Such initiatives have been shown to strengthen TIP prosecutions by encouraging survivors' participation in the judicial process. However, it should be noted that criminal justice approaches to human trafficking are inherently limited in their ability to deliver positive outcomes for victims. The criminal justice system was created to punish and convict, not to provide victims with services centred in their needs. The persistence of woefully inadequate compensation schemes and overly paternalistic assistance programmes in the Thai judicial system attests to this reality. If the reintegration of trafficked persons is to be successful, then the needs of survivors should be placed in a broader perspective that extends beyond the criminal justice system. Because the primary goal of civil society organisations is to protect victims rather than punish perpetrators, they are best positioned to develop innovative bottom-up models that empower trafficked persons. UCTs and volunteer social networks present effective approaches that can be used by both the Thai government and civil society to make victim assistance programmes more efficient, effective, and victim-friendly. David Rousseau, a dual French-American citizen, received a Bachelor's degree in political science from McGill University in Montreal and a Master's from the Sorbonne University in Paris. He started his professional career in New York in the energy sector before moving to Thailand to pursue his interests in human rights and international development. David has lived and worked at the Labour Rights Promotion Network for the past nine months, providing assistance in several areas including research, donor communications, and corporate partnerships. Email: davidjasonrousseau@gmail.com.
The efficacy of intrathecal methyl-prednisolone for acute spinal cord injury: A pilot study Study design Randomized clinical trial. Objectives To evaluate the safety and effectiveness of intrathecal methyl-prednisolone compared to intravenous methyl-prednisolone in acute spinal cord injuries. Setting Imam Reza Hospital, Tabriz University of Medical Sciences. Methods Patients meeting our inclusion and exclusion criteria were enrolled in the study and divided randomly into two treatment arms: intrathecal and intravenous. Standard spinal cord injury care (including surgery) was given to each patient based on our institutional policy. Patients were then assessed for neurological status (based on ASIA scores and Frankel scores) and complications for six months and compared to baseline status after injury. To better understand the biological basis of methyl-prednisolone action in spinal cord injuries, we measured two biomarkers for oxidative stress (serum malondialdehyde and total antioxidant capacity) in these patients at arrival and on day three after injury. Results The present study showed no significant difference between the treatment arms in neurological status (sensory scores or motor scores) or complications. However, the within-group analysis showed improvement in neurological status in each treatment arm within six months. Serum malondialdehyde and total antioxidant capacity were analyzed, and no significant difference between the groups was seen. Conclusion This is the first known clinical trial investigating the effect of intrathecal MP in acute SCI patients. Our findings did not show any significant differences in complication rates and neurological outcomes between the two study arms. Further studies should be conducted to define the positive and negative effects of this somewhat novel technique in different populations as well. Introduction Acute spinal cord injury (SCI) resulting from traumatic events is one of the leading causes of disability [1]. It mostly affects younger people because its most common cause is vehicle collision [2]. Hence, it carries a tremendous socioeconomic and individual burden worldwide [3][4][5]. The injury comprises two components: primary and secondary. Primary injury results from the direct force and physical impact of the trauma itself, for example hemorrhage, axonal damage, and vascular shearing. Secondary injuries occur just after the primary damage and result from a cascade of signaling and downstream events that ultimately lead to free radical formation, apoptosis induction, and inflammation. The recognition of secondary damage has led to various medical and surgical treatments [6][7][8][9][10]. Intravenous corticosteroids, mainly methyl-prednisolone (MP), have been among the most debated and studied treatments since the 1960s. Several human clinical trials have been conducted over these decades to investigate the benefits and adverse systemic effects of MP in SCI; the three-staged NASCIS trials are the most famous. Despite these trials, and the proposed benefits of MPSS as a neuroprotective agent, the role of MP in SCI is still controversial, and experts suggest its use should be considered individually for each patient [11][12][13][14][15]. Outside this field of study, another form of corticosteroid administration has been suggested for the treatment of some conditions.
Intrathecal administration of prednisolone has been utilized for postherpetic neuralgia and chronic complex regional pain syndrome [16][17][18][19][20]. Intrathecal administration of drugs is an important route for drug delivery. It bypasses the blood-brain barrier and acts directly on the central nervous system, thereby decreasing the total dose and reducing the systemic effects and further complications of the drug. We aim to study the safety and efficacy of intrathecal MP in acute SCI patients. Patients and methods This study was designed as a randomized, double-blinded, prospective clinical trial. The study was single-centre and took place at Imam Reza Hospital, Tabriz, Iran, from 2014 to 2016. Acute trauma patients diagnosed with acute SCI who met our inclusion and exclusion criteria were enrolled in the trial and then randomly assigned to two interventional arms: intrathecal and intravenous MP groups. Exclusion criteria were: (1) non-thoracolumbar cause of SCI; (2) penetrating cord injuries; (3) any condition which would not allow intervention in the first 8 h; (4) previous spinal deformity; (5) previously disabled patients; (6) hemodynamic instability; (7) unconsciousness; (8) any absolute or relative contraindication for lumbar puncture; (9) use of high-dose steroids within the last month before injury; (10) serious medical condition in which drug administration safety is unclear; (11) diabetes mellitus; (12) pregnancy or breastfeeding; (13) any systemic immunodeficiency or known infection; (14) any relative or absolute contraindication for high-dose intravenous steroid; (15) any condition which would interfere with consenting; (16) normal sensory and motor function. Design: Each patient was assessed and treated based on our institutional protocol for spinal cord injury. Initial spinal and brain computed tomography (CT) was obtained for each patient based on indication. Based on our institutional protocol, we obtained thoracolumbar MRI for every patient for further evaluation. Based on clinical examination, American Spinal Injury Association Impairment scores (ASIA) for motor and sensory deficits and modified Frankel scores were measured and documented for each patient by at least two neurosurgery residents in the early hours of admission. The decision for surgery was based on institutional protocol and individual senior attending decisions. Intervention: two intervention groups were designed as mentioned earlier: intrathecal (IT) and intravenous (IV) arms. For the intravenous group, based on previous studies, a bolus dose of 30 mg per kg of MP was given intravenously over 15 min, followed by an infusion of 4-5 mg per kg per hour, with the infusion duration determined by the interval between injury and injection. Patients treated within the first 3 h after injury received the infusion for 23 h, and patients treated 3-8 h after injury received it for 47 h. In the IT treatment arm, patients were log-rolled to the lateral position and lumbar puncture was performed, ideally in the L4/5 interspinous space. After removing 1 mL of CSF, 1 mg per kg of MP was injected slowly within the first 8 h after injury. This protocol was repeated 24 and 48 h later. To better understand the biological basis of MP action in SCI, we measured two biomarkers for oxidative stress in these patients as well. Serum malondialdehyde (MDA) and total antioxidant capacity (TAC) were measured in these patients at arrival and on day 3 after injury. Outcome: the efficacy of treatment was measured using changes in ASIA sensory and motor scores from baseline.
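For illustration only, the dosing logic described under the Intervention paragraph above can be summarised programmatically. The following is a minimal, hypothetical Python sketch of that schedule, not part of the study's materials; all identifiers are invented, and the per-hour infusion rate is carried as the 4-5 mg/kg range stated in the text rather than a single value.

```python
def mp_regimen(weight_kg: float, hours_since_injury: float, route: str) -> dict:
    """Illustrative sketch of the study's MP dosing schedule (not clinical advice).

    route: "IV" (intravenous arm) or "IT" (intrathecal arm).
    Per the protocol, treatment must begin within 8 h of injury.
    """
    if hours_since_injury > 8:
        raise ValueError("protocol requires treatment within 8 h of injury")
    if route == "IV":
        # 30 mg/kg bolus over 15 min, then a maintenance infusion:
        # 23 h if treated within 3 h of injury, otherwise 47 h.
        return {
            "bolus_mg": 30 * weight_kg,
            "infusion_mg_per_kg_per_h": (4, 5),  # range stated in the Methods
            "infusion_hours": 23 if hours_since_injury <= 3 else 47,
        }
    if route == "IT":
        # 1 mg/kg injected slowly after removing 1 mL of CSF,
        # repeated at 24 h and 48 h.
        return {"dose_mg": 1 * weight_kg, "injection_times_h": [0, 24, 48]}
    raise ValueError("route must be 'IV' or 'IT'")
```

For example, mp_regimen(70, 2, "IV") would return a 2100 mg bolus followed by a 23 h infusion, while mp_regimen(70, 2, "IT") would return three 70 mg intrathecal doses at 0, 24, and 48 h.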
Three checkpoints were chosen for assessment: baseline at admission, just before discharge, and 6 months after injury. Any adverse effects, complications, or deaths were evaluated throughout follow-up. Randomization, blinding, and analysis: statistical significance was set at p < 0.05, and the power was set to 80% (α = 0.05 and β = 0.2). Based on similar previous studies [21,22], the total sample size was estimated at 53, which for simplicity was rounded up to 60 patients overall. Using the R program, we applied a blocked randomization technique (for equal treatment arm sizes), and a double-blinding strategy was used. All analyses were done using SPSS ver. 16. For within-group comparisons, we used repeated-measures ANOVA, and for between-group comparisons at each time point we used the t-test/Wilcoxon test. Results In total, 73 patients were diagnosed with acute SCI at Imam Reza Hospital, and 13 of them were excluded. The remaining 60 patients were randomized into the two treatment arms accordingly. Three patients (one from the intrathecal and two from the intravenous group) were lost to follow-up. Therefore, the data of 57 patients were ultimately gathered and analyzed (Fig. 1). Baseline data The mean age of the IT and IV groups was 26.10 ± 10.62 and 27.21 ± 11.05 years, respectively. Further analysis showed no significant difference between the ages (p = 0.79). Overall, 8 patients were female (4 in each group) and 49 were male (25 in the IT group and 24 in the IV group), and there was no statistically significant difference between the two groups (p = 0.64). Pinprick score The total pinprick score at admission, on discharge, and after the 6-month follow-up was 92.65 ± 10.87, 96.82 ± 10.7, and 100.03 ± 10.18, respectively (Table 1). A repeated-measures ANOVA was conducted to compare the pinprick ASIA score of patients in the two treatment arms. There was a significant effect of time on pinprick score regardless of group (p < 0.001). The analysis showed that there was no interaction of time and group on pinprick ASIA score, and at every given time no significant difference was observed (p > 0.6) (Fig. 2). Light touch score The total light touch score at admission, on discharge, and after the 6-month follow-up was 91.6 ± 11.4, 95.42 ± 11.13, and 99.67 ± 10.61, respectively (Table 2). A repeated-measures ANOVA was conducted to compare the light touch ASIA score of patients in the two treatment arms. There was a significant effect of time on light touch ASIA score regardless of group (p < 0.001). The analysis showed that there was no interaction of time and group on light touch ASIA score, and at every given time no significant difference was observed (p > 0.5) (Fig. 3). A repeated-measures ANOVA was conducted to compare the ASIA motor score of patients in the two treatment arms. There was a significant effect of time on motor scores regardless of group (p < 0.005). The analysis showed that there was no interaction of time and group on motor ASIA score, and at every given time no significant difference was observed (p > 0.5) (Fig. 4). Frankel score Overall, at admission, 75% in the IV group and 65.5% in the IT group were either Frankel A or B (Table 4). For the within-group analysis of the intravenous group, a Friedman test showed a significant difference among Frankel scores measured at admission, at discharge, and after the 6-month follow-up, χ²(2) = 26.755, p < 0.001.
A post hoc Wilcoxon signed-rank test with a Bonferroni-adjusted alpha level of 0.017 (0.05/3) showed that the Frankel score improvement was significant between all time points (p < 0.017). In the intrathecal group, the within-group analysis showed a significant difference among Frankel scores measured at admission, at discharge, and after the 6-month follow-up, χ²(2) = 28.50, p < 0.001. A post hoc Wilcoxon signed-rank test with a Bonferroni-adjusted alpha level of 0.017 (0.05/3) showed that the Frankel score improvement was significant between admission and each later time point (p < 0.001) but not between discharge and the 6-month follow-up (p = 0.10). Using multiple independent t-tests, a between-group analysis was done. The tests showed there were no significant differences in Frankel scores between the two treatment arms at any time point (p = 0.78, 0.6, and 0.94 at admission, at discharge, and after 6 months of follow-up, respectively). Serum MDA and TAC measurement These two serum biomarkers were measured on admission and on day three after injury. The day-zero serum MDA level was 2.12 ± 0.56 and 2.36 ± 0.52 for the IV and IT groups, respectively. Serum MDA levels on day 3 after injury were 2.08 ± 0.54 and 2.24 ± 0.64 for the IV and IT groups, respectively. Further analysis did not show any statistical difference between MDA levels in the IV and IT groups either at admission (p = 0.1) or after three days (p = 0.32). Complications As shown in Table 5, five major complications were seen. The most common was urinary tract infection (UTI), which occurred in 15 patients (9 in the IV group and 5 in the IT group). Other complications included pneumonia, deep vein thrombosis (DVT), pulmonary embolism (PE), and gastrointestinal bleeding (GI bleeding). Further analysis showed no significant difference in these complications between the two groups (Table 5). Discussion Intrathecal MP was first described in 1960 and was mainly used for treating multiple sclerosis and sciatica [16]. Throughout the literature, the intrathecal administration of steroids, particularly MP, has been used for different diseases. Kotani et al. reported a possible benefit of intrathecal injection of MP in postherpetic neuralgia (PHN) patients in a randomized trial [17]. But other studies following this study showed either no benefit or adverse effects, and therefore this technique never became part of the standard treatment for PHN and remained somewhat controversial [17][18][19][23]. Chronic complex regional pain syndrome (CRPS) was another condition in which intrathecal MP was investigated. For instance, Munts et al. concluded that single bolus administration of intrathecal MP is not efficacious in chronic CRPS patients [20]. An important rationale for using the intrathecal route for MP in spinal cord injury is that studies have shown systemic injections do not lead to a measurable amount of MP in CSF [24,25]. The reason hypothesized for this is the effect of the blood-brain barrier (BBB), particularly a protein called P-glycoprotein. This protein is an efflux transporter that acts on MP as a substrate and reduces the bioavailability of MP in CSF [23,24]. Our present understanding of MP in CSF is complex and incomplete. Studies suggest that MP is hydrolyzed by cholinesterase once injected intrathecally [23,26]. Free MP has three main routes: entering cells, reaching the systemic circulation, and getting metabolized [23].
Both animal and human studies have investigated the peak plasma concentration of MP after intrathecal injections. These studies showed a peak in plasma concentration after 24 h and measurable amounts after 21 days following 80 mg intrathecal injection (8,130). This could be a possible explanation for the systemic effects and complications of intrathecal treatments. The present study was designed to evaluate the effect of using intrathecal compared to intravenous steroids in acute spinal cord injuries. Accordingly, we observed favorable results in both sensory and motor scores after 6 months in each treatment arm. Our results failed to demonstrate any superiority of either treatment arm over the other. Moreover, our results did not suggest any difference in complication rates between the groups. We did not include patients with prior spinal deformity due to potential side effects of lumbar puncture. The existence of concomitant traumatic brain injury (TBI), which may result in elevated intracranial pressure (ICP), is another significant clinical complication that should be taken into account in trauma patients. Patients with TBI were not included in this study if imaging or clinical examinations revealed any indications of elevated ICP. This study has several limitations. First, surgical treatment strategies and indications for surgery were not standardized; despite this being a single-center study, that variability could substantially influence our conclusions. Second, we did not have a control group to compare the baseline effect of MP. Given the previous controversy over the effectiveness of steroids, a control group would have allowed a better interpretation of the results. Third, our population was relatively young, and as we know from previous studies, the recovery rate is higher in younger patients [27][28][29][30]. Due to our sample size, we could not analyze older patients separately without affecting the validity of our results. Conclusion This is the first known clinical trial investigating the effect of intrathecal MP in acute SCI patients. Our findings did not show any significant differences in complication rates and neurological outcomes between the two study arms. Further studies should be conducted to define the positive and negative effects of this somewhat novel technique in different populations as well. Ethics The study was approved by the Research Ethics Committee of Tabriz University of Medical Sciences (IR.TBZMED.REC.1396.971). Author contribution statement Ali Meshkini: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper. Mohammad Kazem Sarpolaki: Analyzed and interpreted the data; Wrote the paper. Ali Vafaei: Contributed reagents, materials, analysis tools or data; Wrote the paper. Farhad Mirzaei: Conceived and designed the experiments; Performed the experiments. Abolfazl Badripour; Ebrahim Rafiei; Mohammad Reza Fattahi: Performed the experiments; Contributed reagents, materials, analysis tools or data. Morteza Khalilzadeh: Conceived and designed the experiments; Performed the experiments; Contributed reagents, materials, analysis tools or data. Arad Iranmehr: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Data availability statement Data will be made available on request. Clinical trial registration Iranian Registry of Clinical Trials approval was obtained before initiating the study (IRCT number: IRCT20190116042374N1).
Integrated multiomic analysis and high-throughput screening reveal potential gene targets and synergetic drug combinations for osteosarcoma therapy Abstract Although great advances have been made over the past decades, therapeutics for osteosarcoma are quite limited. We performed long-read RNA sequencing and tandem mass tag (TMT)-based quantitative proteomics on osteosarcoma and the adjacent normal tissues, next-generation sequencing (NGS) on paired osteosarcoma samples before and after neoadjuvant chemotherapy (NACT), and a high-throughput drug combination screen on osteosarcoma cell lines. Single-cell RNA sequencing data were analyzed to reveal the heterogeneity of potential therapeutic target genes. Additionally, we clarified the synergistic mechanisms of doxorubicin (DOX) and HDAC inhibitors for osteosarcoma treatment. Consequently, we identified 2535 osteosarcoma-specific genes and several alternative splicing (AS) events with osteosarcoma specificity and/or patient heterogeneity. Hundreds of potential therapeutic targets were identified among them, which showed core regulatory roles in osteosarcoma. We also identified 215 inhibitory drugs and 236 synergistic drug combinations for osteosarcoma treatment. More interestingly, the multiomic analysis pointed out the pivotal role of HDAC1 and TOP2A in osteosarcoma. HDAC inhibitors synergized with DOX to suppress osteosarcoma both in vitro and in vivo. Mechanistically, HDAC inhibitors synergized with DOX by downregulating SP1 to transcriptionally modulate TOP2A expression. This study provided a comprehensive view of molecular features, therapeutic targets, and synergistic drug combinations for osteosarcoma. KEYWORDS: drug combination, high-throughput screen, multiomic analysis, osteosarcoma, therapeutic target INTRODUCTION Osteosarcoma is the most common primary bone cancer in children, adolescents, and young adults. It is a rare cancer type, with an annual incidence of 1-3 per million person-years. 1 It mostly occurs in the metaphysis of long bones near growth plates and less often in the skull, jaw, or pelvis. 2 The 5-year event-free survival for patients with localized osteosarcoma is approximately 70%, while for patients with metastatic or recurrent disease, it is less than 20%. 3 The rarity of osteosarcoma greatly limits the development of new therapies targeting osteosarcoma. The current standard neoadjuvant chemotherapy (NACT) for osteosarcoma treatment is mainly based on a three-drug combination of methotrexate (MTX), doxorubicin (DOX; adriamycin), and cisplatin (DDP). 4 However, the therapeutic effects of these drugs vary greatly among patients and are accompanied by multiple side effects. 5 Sorafenib, a multikinase inhibitor, has been added to the second-line drugs for osteosarcoma. 6 Clinical trials reported that 45% of patients with unresectable high-grade osteosarcoma were progression free at 6 months when treated with the combination of sorafenib and everolimus. 7 Further, drugs targeting core regulators or signaling pathways in osteosarcoma show therapeutic potential. For instance, the inhibitors of PI3K, mTOR, WEE1, and ATR yielded suppression of osteosarcoma cells.
1,8,9 Immunotherapies, such as drugs targeting macrophages or improving immune infiltration, have also been investigated for osteosarcoma treatment. Monoclonal antibodies against the tumor membrane proteins RANKL and IGF-1R have shown therapeutic potential in preclinical studies. 10 Mifamurtide could activate innate immunity via the pattern-recognition receptor NOD2, further providing therapeutic benefit for patients with recurrent and/or metastatic disease. 11 However, immune checkpoint inhibitors, such as antibodies targeting PD-1, showed poor responses in advanced osteosarcoma. 12 Moreover, adoptive cell therapies (ACTs), such as chimeric antigen receptor T-Cell (CAR-T) therapy, have been introduced to osteosarcoma treatment. CAR-T therapy targeting cell surface antigen disialoganglioside-2 (GD2) could specifically recognize and kill osteosarcoma cells and is investigated in ongoing clinical trials. 13 Other surface proteins such as CD166 and B7-H3 have been reported as potential targets for CAR-T therapy in preclinical studies. 14,15 Nevertheless, effective strategies for osteosarcoma therapy remain quite limited. A better understanding of the molecular characteristics of osteosarcoma is urgently required. High-throughput analyses provide an integral understanding of the molecular basis of tumors at multiomic levels in addition to phenotypic drug screening. For instance, whole exome sequencing demonstrated crucial genetic alterations in osteosarcoma. 16 Clinical genomic sequencing of osteosarcoma using Integrated Mutation Profiling of Actionable Cancer Targets revealed distinct molecular subsets with potentially targetable alterations. 17 A study showed highly heterogeneous somatic copy number alterations (SCNA) and structural rearrangements across osteosarcoma cases, suggesting the requirement of systematic genome information for SCNA-based targeted therapy 18 More recently, the single-cell landscape of osteosarcoma has revealed intratumoral heterogeneity and an immunosuppressive microenvironment. 19,20 Previous proteomics studies in osteosarcoma mainly focused on cell lines or identified a quite limited number of proteins in clinical samples due to the limitation of proteomic approaches in the past. 21,22 High-throughput drug screening previously identified several potential drugs for osteosarcoma. 23 However, more studies of integral molecular and cellular analyses and drug synergy are urgently needed. In this study, by utilizing multiomic approaches including long-read RNA sequencing, next-generation RNA sequencing, single-cell RNA sequencing (scRNA-Seq), and tandem mass tag (TMT)-based quantitative proteomics in combination with high-throughput drug screening, we provided an integral molecular landscape of osteosarcoma, with potential therapeutic drug targets and hundreds of effective synergistic drug combinations. Moreover, we revealed the molecular mechanism that HDAC inhibitors synergize with DOX by downregulating the transcription factor SP1, which further modulates TOP2A expression. These findings deepen our understanding of osteosarcoma biology and potential therapeutic targets, which could translate into clinical practice. Transcriptomic profiling of osteosarcoma To comprehensively reveal the transcriptomic characteristics of osteosarcoma, we performed Oxford Nanopore Technologies (ONTs) long-read RNA-Seq of tumor and adjacent normal tissues from 23 patients with osteosarcoma. 
In total, 4699 differentially expressed genes (DEGs) were identified (criteria: |fold change| ≥ 2, adjusted p value < 0.05) ( Figure 1A and Table S1). The Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis of DEGs showed that the PI3K-AKT signaling pathway, involved in osteosarcoma tumorigenesis, 24 was significantly enriched in osteosarcoma compared with adjacent normal tissues ( Figure S1A and B). Among the 4699 DEGs, 2535 were upregulated in osteosarcoma and further defined as tumor-specific genes (TSGs) ( Figure 1A). The gene ontology (GO) analysis of the TSGs showed an enrichment of genes related to ATPase activity, collagen-containing extracellular matrix, skeletal system development, and nuclear division ( Figure 1B and S1C). Gene set variation analysis (GSVA) showed that these TSGs were more enriched in DNA repair and glycolysis ( Figure 1C), consistent with previously reported tumor characteristics. 25 In addition, by excluding genes that are upregulated in other cancer types compared with their corresponding normal tissues in the TCGA Pan-Cancer Atlas dataset, we identified 609 osteosarcoma-specifically upregulated genes (OSUGs) in the 2535 TSGs ( Figure S1D and Table S2). KEGG pathway analysis showed that OSUGs were significantly enriched in pathways including herpes simplex virus 1 infection, protein processing in endoplasmic reticulum, and axon guidance ( Figure S1D). Alternative splicing (AS), a critical type of posttranscriptional regulation, could be optimally investigated by longread sequencing with full-length transcript recovery. 26 In total, 58,800 AS events in 3938 genes were detected in the 23 osteosarcoma samples and 13 adjacent normal tissues ( Figure S2A and Table S3). The average number of AS events per sample was significantly higher in osteosarcoma compared with normal tissues ( Figure S2B), suggesting that AS-related variations were involved in the tumor development. These AS events were further grouped into five types, and average numbers of different AS types per sample, including exon skipping (ES), alternative 3′ splice sites (A3SS), alternative 5′ splice sites (A5SS), and intron retention (IR) were significantly higher in osteosarcoma compared with normal tissues ( Figure S2C). No significant difference in average numbers was found in mutually exclusive exons (MEE) between groups ( Figure S2C). Consistent with previously reported studies of AS in other cancers, 27,28 ES was the most frequent AS event, accounting for over 60% of events in both osteosarcoma and adjacent normal tissues ( Figure S2D). Furthermore, the differentially expressed AS events between the 13 matched pairs of osteosarcoma and adjacent normal tissues (criteria: |ΔPSI (percent spliced in) | > 10%, adjusted p value < 0.01) were ranked in a descending order ( Figure S2E and Table S4). These differential splicing events, such as ES in the LMO7 and SLC37A4 loci, were further defined as osteosarcoma-specific AS events (Figures S2F and G and Table S4). More interestingly, expression of several genes with differential splicing events such as LMO7, PTS, and RPS24 were significantly correlated with prognosis ( Figures S2H and I). Of note, 0.28% of all splicing events, such as ES in NDEL1 and ACIN1, exhibited osteosarcoma specificity but also patient heterogeneity (Figures S2F and S2J and K). These AS events might be potentially associated with the tumorigenesis of specific osteosarcoma patients. 
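As a rough illustration of the thresholds used in this section, the sketch below applies the stated criteria for differentially expressed genes (|fold change| ≥ 2, adjusted p value < 0.05) and for differential AS events (|ΔPSI| > 10%, adjusted p value < 0.01). It is a schematic only, with hypothetical column names, and does not reproduce the study's actual pipeline.

```python
import numpy as np
import pandas as pd


def call_degs(expr: pd.DataFrame, fold_change: float = 2.0, alpha: float = 0.05) -> pd.DataFrame:
    """Flag differentially expressed genes using the criteria quoted above.

    expr is assumed to contain 'log2_fc' and 'padj' columns (hypothetical names).
    """
    sig = (expr["log2_fc"].abs() >= np.log2(fold_change)) & (expr["padj"] < alpha)
    # Upregulated DEGs correspond to the tumor-specific genes (TSGs) in the text.
    return expr.assign(is_deg=sig, is_tumor_specific=sig & (expr["log2_fc"] > 0))


def call_diff_splicing(events: pd.DataFrame, min_dpsi: float = 0.10, alpha: float = 0.01) -> pd.DataFrame:
    """Keep AS events with |ΔPSI| > 10% and adjusted p < 0.01 (columns hypothetical)."""
    keep = (events["delta_psi"].abs() > min_dpsi) & (events["padj"] < alpha)
    return events.loc[keep]
```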
2.2 Transcriptomic analysis revealing potential therapeutic targets of osteosarcoma Next, to identify potential therapeutic gene targets in osteosarcoma, we compared TSGs with a series of genes according to their biological functions and cellular locations, including genes encoding kinases, epigenetic factors, transcription factors, metabolism-related proteins, and cell surface proteins [29][30][31][32][33] (Figure 1D and Table S5). A total of 91 tumor-specific kinases (TS-K), 83 epigenetic factors (TS-EF), 192 transcription factors (TS-TF), and 401 metabolism-related genes (TS-M) were identified ( Figure 1D). We further compared these sets of TSGs with target genes of approved drugs in the DrugBank 34 ( Figure 1E and Table S6). Among them, 58 TS-K, 20 TS-EF, 9 TS-TF, and 114 TS-M were target genes of approved drugs in the DrugBank ( Figure 1E). For instance, AKT1, TOP2A, DHFR, and RARG were target genes of capivasertib, DOX, MTX, and ethacridine lactate, respectively ( Figure 1E). The expression level of overlapped target genes could be instructive in the future drug development ( Figure S3A). Furthermore, a total of 566 TSGs (33 TS-K, 63 TS-EF, 183 TS-TF, and 287 TS-M) were identified beyond the reported approved-drug target genes in the DrugBank ( Figure 1E). More interestingly, the expression of several of these genes, including CHST13, FKBP11, SGMS2, TRPS1, PRKX, SP7, and DNAJC1, were highly associated with osteosarcoma prognosis ( Figure S3B). These data provided a list of potential new target genes for novel drug development for osteosarcoma treatment. F I G U R E 1 Transcriptomic profiling and potential therapeutic targets of osteosarcoma. (A) Volcano plot indicating significantly differentially expressed genes (DEGs) between osteosarcoma (n = 23) and the adjacent normal tissues (n = 13) (criteria: |fold change| ≥ 2, adjusted p value < 0.05). (B) Gene ontology (GO) analysis of the 2,535 DEGs upregulated in osteosarcoma. (C) Differences in hallmark pathways scored by gene set variation analysis (GSVA) between osteosarcoma (n = 23) and normal tissues (n = 13). (D) Number of tumor-specific genes (TSGs) and a series of genes according to their biological functions and cellular locations. (E) Venn diagrams of overlapping genes between tumor-specific annotated gene sets and the target genes of the approved drugs in the DrugBank. Examples of corresponding drugs were listed in the grey box. (F) Heatmap of mRNA expression of cell-surface targets for adoptive cell therapy (ACT) between osteosarcoma (n = 23) and the normal adjacent tissues (n = 13). Black, commonly used targets for solid tumors. Blue, identified cell surface targets already investigated in the ACT of osteosarcoma. Red, novel cell surface genes specifically and highly expressed in osteosarcoma. (G) mRNA expression of novel cell surface genes in osteosarcoma and normal human organs based on the TCGA and GTEx databases. ACT is increasingly promising for solid tumor therapy. 35 We thus looked into genes encoding cell-surface proteins in osteosarcoma from our sequencing data. Most gene targets commonly investigated in ACT of other solid tumors except osteosarcoma, 1,36 such as CEACAM1, PSCA, MSLN, IL13RA2, and GPC3, showed low or nonspecific expression in osteosarcoma compared with adjacent normal tissues ( Figure 1F). 
Further, some gene targets already investigated in the ACT of osteosarcoma, including B4GALNT1 (encoding beta-1,4-N-acetyl-galactosaminyltransferase 1 involved in the biosynthesis of GD2), ERBB2, and EGFR, also showed low expression in osteosarcoma compared with adjacent normal tissues ( Figure 1F). In our data, we identified nine novel cell surface genes specifically and highly expressed in osteosarcoma (criteria: |fold change| ≥ 4 and average counts per million ≥ 90 in TSGs) ( Figure 1F). More interestingly, ALPL, UNC5B, and CADM1 were also highly and specifically expressed in osteosarcoma compared with most normal human organs based on the TCGA and GTEx databases (Figures 1F and G and S3C). Besides, expression of UNC5B, CAM1, PTH1R, and FCGR3A was significantly associated with survival of patients with osteosarcoma ( Figure S3D). The above data suggested that these genes might be used as new targets of ACT for osteosarcoma therapy. Identification of hub-genes in osteosarcoma Hub-genes are defined as highly connected genes in genetic interaction networks and considered to play essential roles in gene regulation and biological processes. 37 To obtain the hub-genes and their potential use as drug targets for osteosarcoma, a protein-protein interaction (PPI) network was constructed based on the 2535 TSGs in our transcriptome data. In total, 54 hub-genes were identified in the shared gene list from 10 independent computational methods specific for hub-gene identification 38 ( Figure S3E and Table S7). At the pan-cancer level, the majority of the 54 hub-genes were highly expressed in multiple cancer types compared with the corresponding normal tissues (criteria: |fold change| ≥ 2 and adjusted p value < 0.05) ( Figure 2A). SCNAs are somatic changes to chromo-some structure prevalent in multiple types of cancer and are the major drivers of many cellular malfunctions. 39 Among the 54 hub-genes, ACTB, ASPM, AURKA, EXO1, MYBL2, and SRC showed gain of copy numbers, while TTK and ZWINT showed a proclivity of copy number loss in multiple cancers ( Figure 2B). The expression of most hub-genes was negatively associated with gene signatures of several immune cell types in osteosarcoma, such as CD8 + T cells, cytotoxic cells, and natural killer cells ( Figure 2C), suggesting that these hub-genes might suppress immune infiltration in osteosarcoma. There were significantly increased scores of naïve CD8 + T cells in the group of CD8 + T cells with high expression of specific hub-genes based on scRNA-Seq data ( Figure S4A). The T helper 2 (Th2) enrichment in tumor had been shown to predict worse prognosis in multiple malignancies. 40,41 Intriguingly, we observed a significant positive correlation between expression of several hub-genes and the signatures of Th2 cells ( Figure 2C), raising the possibility that high expression of these hub-genes associated with increases of Th2 cells may contribute to worse prognosis in osteosarcoma. Proteomic profiling of osteosarcoma To gain a broad view of protein expression in osteosarcoma, quantitative TMT-based mass spectrometry (MS) was performed with six pairs of osteosarcoma specimens and adjacent normal tissue. A total of 4974 proteins were identified ( Figure 2D). Among them, 314 (6.31%) were significantly upregulated in osteosarcoma, while 461 (9.27%) were more expressed in the adjacent normal tissues (criteria: |fold change| ≥ 2 and adjusted p value < 0.05) ( Figure 2D and Table S8). 
KEGG pathway analysis of the 461 proteins showed a major enrichment of oxidative phosphorylation and lipid and steroid metabolism in adjacent normal tissues (Figure 2E). In contrast, the 314 osteosarcoma-upregulated proteins were mainly enriched in ribosome, spliceosome, phagosome, and the PI3K-AKT signaling pathway (Figure 2E). By comparing the transcriptome and the proteome data, we identified 139 upregulated genes and 356 downregulated genes at both mRNA and protein levels (Figure 2F). Proteins encoded by 15 of the 54 hub-genes were identified in the proteome, and 13 of the 15 proteins showed an upregulation in osteosarcoma compared with the adjacent normal tissues (Figures 2G and S4B). In addition, 49 out of 767 previously identified potential target genes were also identified in the proteome and significantly upregulated in osteosarcoma compared with the normal adjacent tissues (Figures 1E and 2H). Among them, MAN2B1, P4HA2, ANPEP, BCAT1, CTSS, and DAPK1 were significantly correlated with the prognosis of patients with osteosarcoma (Figure 2I). Further, three previously identified potential gene targets of ACT for osteosarcoma, including CADM1, CDH11, and MMP14, also showed high protein expression in osteosarcoma compared with normal adjacent tissues (Figures 1F and S4C). Our proteomic data further supported the clinical significance of the potential target genes identified by our transcriptome data. Molecular regulation of NACT in osteosarcoma NACT has been one of the cornerstones for osteosarcoma treatment. 4 To gain insight into the molecular responses of osteosarcoma to NACT, we performed next-generation sequencing (NGS) on five pairs of pre-NACT and post-NACT tumor samples from five patients with osteosarcoma. We identified 858 DEGs (334 downregulated and 524 upregulated) in osteosarcoma tissues before and after NACT (criteria: |fold change| ≥ 2 and adjusted p value < 0.05) (Figure 3A and Table S9). GO analysis of the 524 upregulated genes showed an enrichment of immune-related processes including humoral immune response, phagocytosis, and complement activation in post-NACT osteosarcoma (Figure S5A). In contrast, the 334 downregulated genes were mainly enriched in skeletal development processes including skeletal system morphogenesis and ossification (Figure S5A). Besides, metabolism-related pathways including hypoxia targets of VHL, CYP2E1 reactions, and amino acid deprivation were also enriched after NACT (Figure 3B). Several amino acid transporters, such as SLC44A5, SLC9B2, and SLC37A2, were significantly decreased after NACT (Figure S5B), which might contribute to the amino acid deprivation in osteosarcoma after NACT. 42 Of note, the citrate cycle pathway was less enriched in osteosarcoma after NACT (Figure 3B), suggesting decreased activity of oxidative phosphorylation in the tumor microenvironment (TME) after NACT. 43 The GSVA showed an increase of IFN-α response, IFN-γ response, and TNF-α signaling via NF-κB in post-NACT osteosarcoma (Figure S5C), suggesting an activation of antitumor immune responses after NACT. To better understand the effects of NACT on the TME of osteosarcoma, we estimated the immune and stromal cell infiltration in osteosarcoma based on the ESTIMATE algorithm 44 (Figures 3C and S5D and E). The stromal scores were not significantly changed after NACT (Figures 3C and S5D). However, the signature scores of overall immune cells and of the innate and adaptive immune cells were increased after NACT (Figures 3C and S5E), suggesting an increase of immune cell infiltration in the TME.
Among the previously identified 2535 TSGs, we found that NACT upregulated the expression of 63 TSGs and downregulated 164 TSGs ( Figure 3D). GO analysis of the 63 upregulated TSGs showed a similar enrichment of certain gene sets, such as humoral immune response and phagocytosis ( Figures S5A and 3D). Notably, NACT did not alter the expression of most of the cell-surface protein-coding genes identified previously as potential ACT targets ( Figures 1F and 3E), suggesting the potential value of using these genes as ACT targets in combination with NACT in osteosarcoma. Eight out of the 54 hub-genes (ASPM, CENPF, KIF14, KIF20A, RCC1, SMC2, SRC, and TOP2A) were significantly decreased after NACT (criteria: |fold change| ≥ 2 and adjusted p value < 0.05) (Figures 3F and G), consistent with the central role of these hub-genes in osteosarcoma. Fifty-seven out of the 767 potential therapeutic gene targets identified previously were significantly changed after NACT (criteria: |fold change| ≥ 2, adjusted p value < 0.05) (Figures 3H and S5F and Table S10). Collectively, these results revealed the molecular effects of NACT on the TME and on potential therapeutic targets of osteosarcoma.

2.6 High-throughput drug screen identification of potential therapeutic candidates and combination strategies for osteosarcoma

To identify potentially effective drugs in osteosarcoma, we first performed a high-throughput screen with 1971 United States Food and Drug Administration (US FDA)-approved drugs in four human osteosarcoma cell lines (MG-63, 143B, HOS, U2OS) ( Figure 4A and Table S11). At a drug concentration of 10 μM, our screen identified a total of 215 drugs with significant inhibitory effects (>60%) in at least one cell line (80 in MG-63, 146 in 143B, 148 in HOS, and 106 in U2OS cells) ( Figure 4A and B and Table S12). The dose-response relationships (DRRs) of several drugs randomly selected from the 215 hits were further quantified and confirmed the reliability of our primary drug screen ( Figure 4C). Based on the DrugBank and PubChem databases, 34,45 we found that 179 out of 215 drugs (83.3%) had reported drug targets, corresponding to a total of 282 human genes (Table S12). KEGG pathway analysis of these genes further revealed an enrichment of the PI3K-AKT and chemical carcinogenesis-receptor activation pathways, among others ( Figure 4D), which have been reported to be involved in tumorigenesis. 46,47 Of note, 46 out of the 282 target genes showed significantly higher expression in tumor samples compared with adjacent normal tissues ( Figures S6A and B), suggesting that they may be promising targets in clinical osteosarcoma treatment. Next, we performed a combination drug screen to investigate potential synergistic effects of the identified effective drugs. By choosing only one representative drug for each set of shared human target genes and validating its effectiveness by DRR in all four osteosarcoma cell lines ( Figure S6C), a total of 50 effective drugs were subjected to the combination screen ( Figure 4E and Table S13). In total, 1225 pairwise drug combinations were evaluated based on the Bliss Independence (BI) model 48 ( Figure 4E). As a result, a total of 236 (19.3%) drug combinations showed obvious synergistic inhibition (BI > 1.3) ( Figure 4E and Table S13).
Among them, notable synergistic effects were observed between the VEGFR (encoded by KDR) inhibitor cediranib and 30 other drugs, such as the pan-Aurora kinase (encoded by AURKs) inhibitor VX-680, ALK tyrosine kinase receptor (encoded by ALK) inhibitor LDK378 and cyclin-dependent kinase 4/6 (encoded by CDK4/6) inhibitor LY2835219 ( Figure 4E and Table S13). Besides, the multikinases inhibitor Dasatinib and ALK inhibitor LDK378 also synergistically enhanced efficacy of multiple drugs ( Figure 4E and Table S13). Moreover, we noticed that DOX (targeting topoisomerase II), one of the first-line chemotherapeutic drugs, synergistically suppressed osteosarcoma growth with pan-HDAC (encoded by HDACs) inhibitor PXD101, ALK inhibitor LDK378, PRKCA (encoded by PRKCA) inhibitor Midostaurin, and NTRK1 (encoded by NTRK1) inhibitor Entrectinib ( Figure 4E and Table S13). Considering the cell-type heterogeneity within solid tumor, we investigated expression patterns of target genes of the identified effective drugs in osteosarcoma scRNA-Seq data from 13 patients reported previously. 19,20 ( Figures S6D-G). We identified 10 major cell clusters based on reported cell markers ( Figures 4F and S6H). As illustrated by feature plots, 111 of the 282 target genes of effective drugs showed significant expression in the scRNA-Seq data (criteria: percent expression of one gene >25% in at least one cell type) (Figures 4G and S6I). Notably, 49 genes (e.g., AKT1, TUBB, HDAC1, and HIF1A) were widely expressed in most cell types (criteria: significantly expressed in at least five cell types), while nine genes (TOP2A, AURKA, AURKB, CDK1, PLK1, TYMS, DHFR, GMNN, and RRM1) were more specifically detected in osteosarcoma cells, especially in proliferating osteoblastic osteosarcoma cells ( Figures 4G and S6I-O). The distinct expression patterns of target genes provided valuable information to guide the usage of potential drugs for osteosarcoma. HDAC inhibitors synergized with DOX to suppress osteosarcoma Interestingly, only TOP2A and HDAC1 were in the shared gene list of the previously identified 139 TSGs by transcriptomic and proteomic analysis, the 54 hub-genes and the 282 target genes of effective drugs in osteosarcoma ( Figure 5A). TOP2A is a well-studied and widely used gene target for osteosarcoma treatment. 49 Our result suggested the biological and therapeutic significance of TOP2A and HDAC1 in osteosarcoma. High expression of HDAC1 was confirmed at transcriptional and protein levels in osteosarcoma compared with adjacent normal tissues ( Figures 5B and S7A). In addition, the expression levels of several other HDAC family members, such as HDAC2, HDAC3, HDAC5, and HDAC8 were also upregulated ( Figure S7A), suggesting a potentially redundant function of HDACs in osteosarcoma. 50 Further, four tested HDAC inhibitors, including PXD101, Pracinostat, PCI-24781, and Romidepsin, significantly inhibited the growth of all four human osteosarcoma cell lines ( Figure 5C). These results indicated the potential therapeutic value of HDAC inhibitors for osteosarcoma. We next investigated the synergistic effects of DOX with HDAC inhibitors for osteosarcoma treatment. Three HDAC inhibitors (PXD101, PCI-24781, and Mocetinostat) were tested in combination with DOX in vitro. As indicated, DOX with each of the three tested HDAC inhibitors synergistically inhibited growth of 143B and HOS cells (Figures 5D and E). Synergistic effects of HDAC inhibitors were also explored in combination with two commonly used chemo drugs, DDP and MTX. 
Of these, only DDP demonstrated a significant synergy with HDAC inhibitors in the treatment of osteosarcoma ( Figure S7B). Furthermore, cotreatment of DOX with either PXD101 or PCI-24781 strikingly increased apoptosis and reduced proliferation in 143B and HOS cells (Figures 5F-I). We further assessed the synergistic effects of DOX with HDAC inhibitors in a human 143B xenograft mouse model in vivo. As shown, DOX in combination with PXD101 (chosen as a representative HDAC inhibitor here) resulted in significantly better suppression of tumor growth compared with the other treatment groups ( Figures 5J and K). Increased tumor apoptosis and decreased proliferation were also observed in the DOX and PXD101 combination group, as indicated by TUNEL assay and Ki-67 staining, respectively ( Figures 5L and M). Mice tolerated the cotreatment of DOX and PXD101 well, as there was no significant decrease in average body weight and no significant increase in solid-organ toxicity in the cotreated group compared with the other groups (Figures S7C and D). Collectively, our results showed that DOX and HDAC inhibitors synergistically inhibited osteosarcoma growth.

HDAC inhibitors synergize with DOX by downregulating SP1 to transcriptionally modulate TOP2A expression

To investigate the mechanisms underlying the synergy of DOX with HDAC inhibitors, we first performed transcriptomic analysis of 143B cells treated with different HDAC inhibitors, including PXD101 and PCI-24781. A total of 1810 shared DEGs were identified in the PXD101- and PCI-24781-treated groups compared with controls (criteria: |fold change| ≥ 2 and adjusted p value < 0.05) ( Figure 6A). GSVA suggested that the DOX resistance pathway was significantly downregulated in both the PXD101- and PCI-24781-treated groups ( Figure 6B), indicating that DOX resistance could potentially be alleviated by HDAC inhibitors. In addition, the DNA repair pathways, reported to play a critical role in DOX resistance, 51 were remarkably upregulated after DOX treatment but downregulated after HDAC inhibitor treatment ( Figures S8A and 6B). Furthermore, DOX in combination with HDAC inhibitors (PXD101 or PCI-24781) markedly suppressed the DNA repair pathways compared with DOX treatment alone ( Figures 6C and S8B). DOX, but not PXD101, treatment alone significantly increased DNA damage in osteosarcoma cells, as indicated by significantly increased expression of γH2AX, a biomarker of DNA damage 52 ( Figures 6D and E). In contrast, DOX cotreated with PXD101 significantly increased DNA damage in osteosarcoma cells compared with DOX or PXD101 treatment alone ( Figures 6D and E), suggesting that PXD101 could enhance the DNA damage induced by DOX. Of note, transcription of TOP2A was significantly decreased after PXD101 or PCI-24781 treatment, alone or combined with DOX, compared with controls 53 ( Figure 7A). We further confirmed that single HDAC inhibitors (PXD101 or PCI-24781) or combined HDAC inhibitor/DOX treatment, but not DOX treatment alone, significantly reduced the expression of TOP2A, but not TOP1, at both the transcriptional and protein levels ( Figures 7B-D). A previous study reported that HDAC inhibitors could directly suppress the expression of SP1, a zinc finger family transcription factor. 54 We found that SP1 was significantly downregulated at both the mRNA and protein levels in osteosarcoma cells treated with HDAC inhibitors (Figures 7E-G).
Among the reported SP1 target genes from the Cistrome database, 55 117 (6.6%) of 1786 genes were significantly differentially expressed in both the PXD101- and PCI-24781-treated groups compared with controls ( Figure 7H). These genes were less enriched in DNA repair pathways in the PXD101- and PCI-24781-treated groups compared with control groups ( Figure 7I). Interestingly, TOP2A was also among the 117 SP1 target genes ( Figure 7I), suggesting that SP1 may directly regulate TOP2A expression. As expected, inhibition of SP1 by mithramycin A (a selective SP1 inhibitor) significantly inhibited the expression of TOP2A in 143B cells ( Figure 7J). In addition, inhibition of SP1 by mithramycin A also resulted in a dose-dependent suppression of 143B and HOS cell growth ( Figure 7K). These results suggested that HDAC inhibitors synergize with DOX by downregulating SP1 to transcriptionally modulate TOP2A expression. Indeed, chromatin immunoprecipitation (ChIP) assays showed that SP1 directly bound to the promoter regions of TOP2A, and that treatment with the HDAC inhibitor PXD101 significantly decreased SP1 binding to the promoter regions of TOP2A ( Figure 7L). Together, our results suggested that the synergy between DOX and HDAC inhibitors arises because inhibition of HDACs suppresses TOP2A transcription through downregulation of SP1.

DISCUSSION

It remains challenging to prolong survival or provide a potential cure for patients with osteosarcoma because of its complex and heterogeneous nature. The latest major advances in therapy against osteosarcoma were made over 30 years ago by combining DOX, DDP, MTX, and/or ifosfamide (IFO) in NACT. 4 Novel effective therapeutic approaches for osteosarcoma are therefore urgently needed. Several studies have investigated new therapeutics such as targeted therapy, immunotherapy, and molecularly informed precision medicine. 4 Recently, multiomic analysis has become a promising approach for revealing tumor biology and potential therapeutic targets, especially in tumors with low incidence. 56 However, there is a paucity of comprehensive multiomic studies investigating potential therapeutic targets for osteosarcoma. In the present study, we integrated the bulk and single-cell transcriptome, the proteome, and a high-throughput drug screen to identify potential target genes and synergistic drug combinations, which improves our understanding of osteosarcoma biology and may help to further optimize treatment. AS is a critical mechanism for increasing gene complexity and plays a key role in numerous biological processes. 57 In the present study, we identified differential AS events between osteosarcoma and adjacent normal tissues. Many of them exhibited heterogeneity between different patients. More interestingly, the expression of several genes with these AS events was significantly associated with the prognosis of osteosarcoma patients. These osteosarcoma-specific or patient-heterogeneous splicing isoforms might be potential targets for osteosarcoma therapy. As our main focus was on DEGs at the expression level, the comprehensive AS information will serve as a resource for researchers. Future investigations are required to determine the biological roles of these AS isoforms in osteosarcoma. Additionally, we found that several genes encoding cell surface proteins were more highly and specifically expressed in osteosarcoma compared with most normal human organs at the transcriptional level. These genes could potentially be used as gene targets of ACT for osteosarcoma.
However, the protein expression of these genes in osteosarcoma and in different human tissues, as well as the actual effectiveness and safety of using these genes as targets of ACT, need to be carefully determined. Notably, we found that the expression of most of the identified hub-genes in osteosarcoma was negatively associated with gene signatures of several immune cell types while positively associated with the gene signature of Th2 cells. As osteosarcoma is characterized by poor immune infiltration and an immunosuppressive TME, 19 these hub-genes may be potential predictive markers for the immune status of the TME of osteosarcoma. Our results suggested that targeting the hub-genes may also improve immune infiltration in osteosarcoma. Indeed, NACT, which includes DOX targeting TOP2A (one of the 54 hub-genes), significantly increased the signature scores of overall immune cells and of innate and adaptive immune cells. In addition, as high Th2 cell infiltration has been linked to progression and metastasis in multiple tumors through the induction of cytokine release and T cell anergy, 58 our results suggested that targeting Th2 cell responses might also benefit osteosarcoma treatment; this requires further investigation. Combinatorial drug therapy is a core strategy for osteosarcoma treatment. 4 However, combination chemotherapy protocols have remained unchanged over the past 30 years. Recent advances in phenotypic screening could provide effective compounds for diseases without prior knowledge of treatable targets. 59 For instance, Gu et al. 60 conducted a high-throughput drug screen of cells derived from 56 patients with head and neck squamous cell carcinoma (HNSCC) using 2248 compounds. Multiple drugs were identified and could be repurposed for different HNSCC subtypes. Thus, we performed a high-throughput drug screen using 1971 US FDA-approved compounds in four osteosarcoma cell lines and identified hundreds of potentially effective drugs for osteosarcoma. Some of these drugs are in clinical trials for osteosarcoma treatment, such as docetaxel (NCT03598595) and topotecan (NCT04661852). However, most drugs are still in preclinical phases for osteosarcoma. The integrative approach based on multiomics allowed further understanding of the target genes and corresponding signaling pathways of effective drugs. Distinct expression patterns of target genes reflected the different mechanisms of drug action. Notably, most effective drugs exerted their effects by targeting the PI3K-AKT and RAS signaling pathways, which could shed light on drug application strategies. 24 Synergistic drug combinations are promising in cancer treatment, as they can overcome compensatory mechanisms and/or increase individual drug effectiveness. 61 In a study by Jaaks et al., 62 2025 clinically relevant two-drug combinations were evaluated in 125 human cell lines using high-throughput methods, and a landscape of drug combinations was established for tumor therapy. Similarly, we conducted a combinatorial drug screen for osteosarcoma using targeted drugs identified from our single-agent screen. A total of 236 synergistic combinations with significant inhibitory effects were identified, and these findings could provide potential strategies for future clinical trials. Furthermore, by integrating the expression patterns of target genes of synergistic drug pairs with the osteosarcoma single-cell transcriptome, we found that different cell types showed differential expression of the synergistic targets.
For example, AURKA was specifically expressed in proliferating osteoblastic osteosarcoma cells, while CDK4 was widely expressed in all subtypes of osteosarcoma cells. AURKA and CDK4 have been reported as important molecular targets mediating the cell cycle. 1 The distinct expression patterns of target genes reflected the differences in cell types targeted by drugs. Thus, our data provide an integral understanding of combination strategies considering the effectiveness of drugs and the expression patterns of target genes. Our multiomic data further indicated that TOP2A and HDAC1 played critical roles in osteosarcoma through different analysis. The topoisomerase IIα, encoded by TOP2A, is responsible for topological structure transformation during gene expression and could also regulate the DNA repair. 53 The topoisomerase IIα is the main target of DOX. HDACs could modulate the function of other proteins by deacetylating its ε-amino lysines. 63 HDACs are reported as promising targets for cancer therapy, and their inhibitors have been evaluated in clinical trials. 64 Here, we further demonstrated that DOX and HDAC inhibitors synergistically suppressed osteosarcoma growth. The combination of TOP2A and HDAC inhibitors has been assessed in clinical trials for soft-tissue sarcoma and T-cell lymphoma, but not in osteosarcoma (NCT00878800, NCT01902225), and showed good efficacy and tolerance. 65,66 The efficacy of this combination in osteosarcoma was highlighted in our study and could provide a significant indication for clinical applications. Moreover, the underlying mechanisms of these synergistic effects were illustrated, as the HDAC inhibitors decreased the mRNA expression of TOP2A by the transcriptional regulation of SP1. Our study complements and extends previous studies on the combination of HDAC inhibitors and DOX in leukemia and Ewing sarcoma by providing a more comprehensive multiomics analysis specific to osteosarcoma. 67,68 Thus, in addition to direct effects targeting HDACs, indirect effects of HDAC inhibitors targeting DNA topoisomerase IIα resulted in an increase of DOX sensitivity. Whether coinhibition of one pathway or even the same protein (e.g., DNA topoisomerase IIα) will contribute to supraadditive efficacy remains unclear, 69 but DOX and HDAC inhibitors showed supra-additive effects in osteosarcoma. Moreover, SP1 is a promising target for cancer treatment. 70 Although we showed an HDAC-SP1-TOP2A regulation axis in osteosarcoma, other HDAC and/or SP1 downstream pathways/targets beyond TOP2A may also contribute to the supra-additive effect of HDAC inhibitors and DOX in osteosarcoma. There are still several limitations in the current study. First, the long-read RNA-seq, TMT-based quantitative proteome, and NGS RNA-seq were conducted on different samples, which may have limited the integrative analysis. In order to avoid the impact of NACT on molecular characteristics of osteosarcoma, only tumor samples before NACT were collected. The needle biopsy can only provide a limited amount of tissue, which was not enough for multiple types of analyses. However, we analyze the data in a systematic, objective, and rigorous manner as previously reported, 71 which could also generate valuable information and insights into osteosarcoma. Second, sample size of osteosarcoma before and after NACT is relatively small, which could limit the statistical power. There would be an interval of more than 2 months for patients with osteosarcoma to receive NACT, since the first biopsy performed. 
The long interval between biopsy and final surgery also increased the difficulty of collecting paired tumor samples. Only five pairs of eligible samples were collected from our cancer center at the end of this study. However, the relatively small sample size could also produce reliable results as reported. 72,73 In this study, we elucidated the molecular landscape of osteosarcoma before and after NACT based on these precious samples, which would improve understanding of the influence of NACT on osteosarcoma characteristics. We are actively working to collect more samples and enrich our data in future studies. Third, the analysis of AS was not well integrated into the overall analysis for identification of potential targets. But we recognized that the comprehensive information of AS in osteosarcoma will serve as a valuable resource for researchers who are interested in AS. The AS events as promising therapeutic targets represent important goals for future studies. Last, it is important to note that our findings are based on the cohort of Chinese patients, the generalizability of our results to other populations and ethnicities requires further investigation. Overall, our work provided an integral understanding of osteosarcoma at multiomic levels and identified synergis-tic drug combinations with potential clinical implications. The data presented in this study could also serve as a valuable resource and greatly augment the knowledge of therapeutics for osteosarcoma. Future studies with more osteosarcoma models and mechanisms are warranted to extend the possibility for translation to clinical trials. Study population and sample collection The study was approved by the Institutional Review Board of Second Xiangya Hospital, and all participants provided written informed consent. Ethical approval for this study was granted under the Second Xiangya Hospital Ethical Committee. Tumor and adjacent normal tissue samples were obtained during surgery or biopsy from patients with pathologically confirmed osteosarcoma. Twenty-three osteosarcoma samples and 13 paired normal adjacent tissue samples were analyzed using ONT long-read RNA sequencing (Table S14). An additional six patients with paired osteosarcoma and adjacent normal tissue samples were used for MS analysis. Matched osteosarcoma samples before and after NACT from another five patients were sequenced using NGS. Pre-NACT specimens were collected by biopsy from patients without receiving any antitumor therapy prior to the biopsy. These patients were treated with the traditional first-line NACT composed of a cocktail of four drugs including MTX, DOX, DDP, and IFO. All samples were assessed by two independent experienced pathologists. Clinical characteristics including age, gender and primary tumor site were retrieved from medical records. 4.2 Long-read transcriptome sequencing RNA samples were extracted from specimens, and cDNA libraries were constructed following the standard ONT long-read RNA sequencing protocol for SQK-PCS109. 74 cDNA PCR was performed using LongAmp Taq Master Mix (New England Biolabs, Ipswich, MA, USA). The adapters needed for sequencing the DNA fragments were ligated using T4 DNA ligase (New England Biolabs). Amplified libraries were purified on Agencourt AMPure XP beads. Final library sequencing was performed using FLO-MIN109 flowcells based on the PromethION platform at Biomarker Technology Company (Beijing, China). Sequencing reads were mapped to the reference genome using minimap2. 
75 Alignments with coverage <85% and identity <90% were filtered out. DESeq2 was used for the differential expression analysis between groups. 76

Next-generation sequencing

Total RNA from cells and from pre- and post-NACT osteosarcoma specimens was isolated using the RNeasy Mini Kit (Qiagen, Hilden, Germany). Library preparation was conducted as previously described. 77 Briefly, library preparation was performed using the TruSeq RNA sample preparation kit (Illumina, San Diego, CA, USA).

TMT quantitative proteomics

Protein extracts from tissue samples were prepared and disrupted by sonication. Proteins were digested with trypsin, and TMT labeling was performed following the manufacturer's instructions. The resulting peptides were subjected to liquid chromatography-MS using an EASY-nLC 1200 (Thermo Fisher Scientific, Waltham, MA, USA). Raw data were processed with Proteome Discoverer 2.4 (Thermo Fisher Scientific) and compared against the UniProt database.

Public bulk and scRNA-seq data preprocessing

Publicly available bulk RNA-Seq data of osteosarcoma were obtained from the Therapeutically Applicable Research to Generate Effective Treatments (TARGET) database (https://ocg.cancer.gov/programs/target/data-matrix). Clinical and outcome data corresponding to the patients were also retrieved from this database. Additionally, the expression data of normal human tissues were derived from the GTEx database. All of the above public RNA-Seq data were downloaded via the UCSC Xena browser (https://xenabrowser.net/datapages/). To maximize compatibility and minimize batch effects between databases, RNA-Seq data were processed as previously described. 79 The mRNA expression matrix, SCNA data, and somatic mutation data for pan-cancer analyses were obtained from the pan-cancer TCGA dataset via the UCSC Xena browser. 80 Raw scRNA-Seq data were downloaded from GSE152048 and GSE162454 in the Gene Expression Omnibus (GEO) database. The GSE152048 dataset included seven patients with primary osteosarcoma treated with NACT; six pre-NACT primary osteosarcoma specimens were obtained from GSE162454. The two datasets were processed into a Seurat object and filtered by removing cells expressing <300 or >6000 genes and those with high mitochondrial content (>10%). Further downstream analysis comprised SCTransform, dimensionality reduction, uniform manifold approximation and projection (UMAP), and clustering analysis. Major cell types were annotated based on known marker genes. 19,20 scRNA-Seq data were visualized using the DimPlot and FeaturePlot functions.

AS analysis

AS events were detected using ASTALAVISTA (v4.0) 81 and were analyzed individually and classified into five types: ES, A3SS, A5SS, IR, and MEE. The inclusion ratios of alternative exons or introns were calculated based on PSI-Sigma. 82 Differential AS events between the 13 pairs of osteosarcoma and matched adjacent normal tissues were identified with over 10% PSI change and adjusted p value < 0.01. Sashimi plots were generated using rmats2sashimiplot with grouping files. To identify differential AS events between tumor and nontumor samples (osteosarcoma-specific), we used criteria as previously reported. 83 The threshold of the standard deviation (SD) of PSI in osteosarcoma was set at 0.15 to further screen AS events that differed between individual patients.

Identification of hub-genes

The Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database (https://stringdb.org/) was used to analyze the PPIs of TSGs at the protein level.
Hub-genes were defined as the genes exhibiting a significantly higher degree of connectivity with other genes, which play key roles in the PPI network. 37 The CytoHubba plugin of Cytoscape was applied to screen hub-genes based on 10 independent ranking methods, including Maximal Clique Centrality, Maximum Neighborhood Component, Edge Percolated Component, Betweenness, BottleNeck, Closeness, Degree, EcCentricity, Radiality, and Stress. 38 Top 10% of hub-genes within each ranking method were identified, and we further identified the intersection of them to obtain the final hub-genes list. Gene enrichment analysis Overrepresentation enrichment analysis was performed using KEGG and GO. Single-sample gene set enrichment analysis was performed using the H (hallmark gene sets) and C2 (curated gene sets) downloaded from Molecular Signatures Database (MSigDB) (v7.5.1; https://www. gsea-msigdb.org/gsea/msigdb). Proteomaps were plotted to illustrate the composition and abundance as previously described. 84 Annotations were based on the KEGG database, and the size of polygons correlated with abundance. Immune cell signatures and related scoring criteria were obtained as previously described. 44,85 The immune and stromal cell infiltration in osteosarcoma was assessed based on the ESTIMATE algorithm. 44 Statistical analysis Overall survival (OS) was defined as the length of time from enrollment to death of any cause. Kaplan-Meier survival curves were analyzed using log-rank test. High-throughput drug screening and CellTiter assay The high-throughput drug screenings were performed on the CHIWEN automation high-throughput platform (MegaRobo Technologies, Suzhou, China). Basically, cells were seeded in 384-well plates at 2000 cells/well for highthroughput drug screening. The drug library containing 1971 single agents as well as DMSO controls were prepared in a specific 384-well plate. The above compounds were added to wells 24 h after cell seeding. After 48 h, cells were lysed in the plate with an equal volume of CellTiter (Beyotime Biotechnology, Shanghai, China) and shaken vigorously for 2 min. Luminescence was read on a Spark plate reader (Tecan, Maennedorf, Switzerland) after 10 min incubation. Relative cell viability was calculated as the ratio of the luminescence signals between drug-loading and DMSO-loading wells. IC 50 , the drug concentration causing 50% cell viability relative to controls, was calculated using GraphPad Prism (GraphPad, San Diego, CA, USA). 4.12 Drug combination screening and drug-drug interactions calculating Cells were seeded in a 384-well plate for drug combination screening. Concentrations of each drug were determined by their DDRs to achieve an inhibition of less than 50%. Drug combinations were administered as described above. After 48-h coincubation, cell viability was measured using CellTiter (Beyotime Biotechnology). Drug interactions in the combination screen were calculated using the BI model. 48 Drug synergy was defined as the total response of a combination that exceeded the presumable sum response of two single drugs. 48 If two single agents work independently, the theoretical value of combined response of two drugs can be calculated by the sum of two fractional responses minus their product (Pt = P a + P b − P a P b ). BI values represent the ratio of an actual measured combination response to the presumable summed responses (BI = Po/Pt). A BI < 1 indicated antagonistic effects, BI > 1 indicated synergistic effects, and BI = 1 indicated additive effects. 
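The Bliss Independence calculation described above reduces to a few lines of arithmetic. The sketch below is a minimal illustration of the stated formulas (Pt = Pa + Pb − PaPb and BI = Po/Pt), assuming fractional inhibition values between 0 and 1; it is not the screening platform's own implementation.

```python
def bliss_independence(p_a: float, p_b: float, p_observed: float) -> float:
    """Bliss Independence (BI) score for a two-drug combination.

    p_a, p_b   -- fractional inhibition of each single agent (0-1)
    p_observed -- fractional inhibition measured for the combination (0-1)
    BI > 1 suggests synergy, BI < 1 antagonism, BI = 1 additivity.
    """
    p_expected = p_a + p_b - p_a * p_b  # expected response if the drugs act independently
    return p_observed / p_expected

# Example: two drugs giving 30% and 40% inhibition alone and 80% in combination
bi = bliss_independence(0.30, 0.40, 0.80)
print(f"BI = {bi:.2f}")  # 0.80 / 0.58 = 1.38, above the BI > 1.3 synergy threshold used in the screen
```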
Meanwhile, the Chou-Talalay combinatorial index (CI) was also used to evaluate drug synergies, calculated using the following equation: CI = DA/da + DB/db, where da and db are the IC 50 doses of the single drugs, and DA and DB refer to the concentrations of each drug in the combination that reached the IC 50 effect when used in combination. 87

Flow cytometry

Cells were seeded in six-well plates at 100,000 cells/well on the day before drug administration. After treatment for 24 h, attached tumor cells were digested and washed in PBS two to three times. Cells were then fixed and permeabilized using the eBioscience™ Foxp3/Transcription Factor Staining Buffer Set (Invitrogen, Waltham, MA, USA) overnight and stained with Ki-67 antibodies (Biolegend, San Diego, CA, USA) for 30 min. For the apoptosis assay, cells were incubated with Annexin V-PE and 7-AAD after digestion and washing, and incubated in the dark for 10 min. Finally, the stained cells were analyzed using a Cytek NL-CLC. Results were analyzed using FlowJo software (FlowJo, Ashland, OR, USA).

Immunofluorescence

Cells were seeded in 12-well plates and incubated overnight. At 24 h after drug treatment, cells were fixed with 4% paraformaldehyde in PBS, washed, incubated with anti-γH2AX primary antibodies (ab81299, 1:100 dilution; Abcam) overnight at 4 °C, and then incubated with secondary antibodies for 1 h at room temperature. Nuclei were stained with DAPI (5 min at room temperature). Fluorescence images were recorded using a fluorescence microscope.

Immunohistochemistry

Immunohistochemistry was performed as previously described. 88 In brief, tissues were fixed in 4% formalin for 24 h and embedded in paraffin. For clinical samples, tissue sections were incubated at 4 °C overnight with primary antibodies against HDAC1 (ab109411, 1:100; Abcam). After incubation, slides were washed with PBS three times and treated with the secondary antibody. Labeled cells were visualized using DAB+ as a chromogen. For tumor samples from animal experiments, primary antibodies against Ki-67 (ab16667, 1:100 dilution; Abcam) were used. Relative expression levels were calculated using Image-Pro Plus software (Media Cybernetics, Inc., Rockville, MD, USA) as previously described. 89 In brief, five fields at 400× magnification were randomly selected from each section. The integrated optical density (IOD) or positive cell percentage of each field was measured, and the average IOD or positive cell percentage of the five fields was used as the expression level of the section.

ChIP

The ChIP assay was performed according to protocols previously described. 90 Briefly, cells were formaldehyde cross-linked and quenched with 125 mM glycine. Then, small chromatin fragments were generated by sonication (sizes from 100 to 500 bp). Fixed DNA-protein complexes were used for immunoprecipitation assays with anti-Sp1 antibodies (5 μg for 25 μg of chromatin; ab231778; Abcam) or normal rabbit IgG antibodies. The precipitated DNA was used for PCR. The primer sequences for the promoter of SP1 are listed in Table S16.

Mouse xenografts

All animal experiments were approved by the Institutional Review Board of the Second Xiangya Hospital (Serial number: 2022303). Male BALB/c nude mice (6 weeks old) were purchased from Vital River Laboratory (Vital River Laboratories, Beijing, China). They were housed under specific pathogen-free conditions and fed a standard diet and water ad libitum. The mice were injected with 5 × 10⁶ 143B cells subcutaneously at the right posterior flank.
Tumor volume was measured with calipers and calculated as: tumor volume = π/6 × length × width × height. After tumor establishment (the average tumor size reached about 100 mm³), mice were randomly divided into four treatment groups (i.p. injection every other day): vehicle, DOX (1 mg/kg/d), PXD101 (40 mg/kg/d), or their combination. Experiments ended when tumor volumes in the vehicle-treated group reached 1000 mm³ to ensure minimal animal suffering.

AUTHOR CONTRIBUTION

W. Z. and L. Q. contributed to conceptualization, data curation, formal analysis, validation, investigation, visualization, methodology, and writing-original draft and editing. ZY. L. contributed to investigation, methodology, and resources. C. W., Y. W., L. H., and ZX. L. contributed to investigation and methodology. Z. F. contributed to conceptualization, data curation, supervision, investigation, and writing-review and editing. C. T. contributed to conceptualization, resources, data curation, supervision, funding acquisition, project administration, and writing-review and editing. ZH. L. contributed to conceptualization, resources, data curation, formal analysis, supervision, funding acquisition, investigation, visualization, project administration, and writing-review and editing. All authors have read and approved the final manuscript.

ACKNOWLEDGMENTS

We would like to thank Dr. Xiaolei Ren and Mei Yang for their kind assistance in sample collection, and other members from TMIM Lab and MegaRobo Technologies for their help and suggestions. The graphical abstract was created using BioRender (BioRender, Toronto, Canada; BioRender.com).

CONFLICT OF INTEREST STATEMENT

Authors Cheng-zhi Wang, Ying Wu, Lianbin Han, Zhenxin Liu, and Zheng Fu are employees of MegaRobo Technologies Co., Ltd, but they have no potential relevant financial or nonfinancial interests to disclose. The other authors have no conflicts of interest to declare.

ETHICS STATEMENT

The study was approved by the Institutional Review Board of the Second Xiangya Hospital, and all participants provided written informed consent. Ethical approval for this study was granted under the Second Xiangya Hospital Ethical Committee (Serial number: 2022303).

DATA AVAILABILITY STATEMENT

The long-read transcriptome and NGS data have been deposited in GEO, and the accession number is GSE218035 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?&acc=GSE218035). The MS proteomics data were deposited in the ProteomeXchange Consortium, with the accession number PXD038452. All other data are available from the corresponding author upon reasonable request.
v3-fos-license
2015-09-18T23:22:04.000Z
2014-10-28T00:00:00.000
1657749
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2218-1989/4/4/946/pdf", "pdf_hash": "8236926396051f0c5f58641ecfdb07489b55157a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43563", "s2fieldsofstudy": [ "Biology" ], "sha1": "f00ff379d7dd5e9324a35e8b3de96ee2b5faceda", "year": 2014 }
pes2o/s2orc
An Efficient High Throughput Metabotyping Platform for Screening of Biomass Willows Future improvement of woody biomass crops such as willow and poplar relies on our ability to select for metabolic traits that sequester more atmospheric carbon into biomass, or into useful products to replace petrochemical streams. We describe the development of metabotyping screens for willow, using combined 1D 1H-NMR-MS. A protocol was developed to overcome 1D 1H-NMR spectral alignment problems caused by variable pH and peak broadening arising from high organic acid levels and metal cations. The outcome was a robust method to allow direct statistical comparison of profiles arising from source (leaf) and sink (stem) tissues allowing data to be normalised to a constant weight of the soluble metabolome. We also describe the analysis of two willow biomass varieties, demonstrating how fingerprints from 1D 1H-NMR-MS vary from the top to the bottom of the plant. Automated extraction of quantitative data of 56 primary and secondary metabolites from 1D 1H-NMR spectra was realised by the construction and application of a Salix metabolite spectral library using the Chenomx software suite. The optimised metabotyping screen in conjunction with automated quantitation will enable high-throughput screening of genetic collections. It also provides genotype and tissue specific data for future modelling of carbon flow in metabolic networks. Introduction Short rotation coppice (SRC) willow (Salix spp.) is an established biomass crop that is currently used as a feedstock for heat and power generation, and has potential for future production of biofuels and other industrial products. Genetic improvement of SRC-willow has been carried out by conventional plant breeding techniques and this has led to new commercial varieties, selected for increased pest resistance and biomass yield [1]. To develop further the potential of this crop, a molecular genetic approach to identifying key genes is being used to accelerate the improvement process via marker assisted breeding [2], as demonstrated by a recent report on quantitative trait mapping of loci (QTL mapping) for pathogen resistance [3]. To underpin this endeavour Rothamsted Research maintains an extensive Salix germplasm bank, including some 1500 accessions in the National Willow Collection gathered from around the globe, and a significant number of mapping populations, some which contain almost 1000 progeny. A high resolution willow genetic map, aligned with that of the related poplar (for which a full genome sequence is available), has been established [4], as have extensive agronomic trials in a variety of nutrient and water supply situations. Many of the quality traits that are targets for willow improvement e.g., biomass yield, calorific value, pest resistance and value-added chemicals are intimately linked with the operation of the plant metabolic network, as it responds to genetic and environmental programming. QTL-mapping of metabolite levels (mQTL analysis) will lead to biochemical pathways and genes that can be associated with desirable traits [5][6][7]. To develop the mQTL approach, methods for screening the extensive genetic collections are a necessity and plant metabolomics technology has developed to an extent where such large-scale screens are possible. 
Metabolomics analysis usually involves the application of 1 dimensional proton nuclear magnetic resonance (1D 1 H-NMR) spectroscopy and mass spectrometry (MS) in a combination of unbiased "metabolite fingerprinting" of un-purified solvent extracts, with more targeted quantitative analysis of known compounds [8,9]. In metabolite fingerprinting, the use of chemometrics to mine datasets for "metabolite biomarkers", and correlative statistics to relate metabolite features to genetic markers are now established technologies [5,6,10,11]. Key factors in generating high quality data in large scale metabolomic fingerprinting experiments are experimental design, sampling and sample stability. This leads to spectral stability which is absolutely required for confidence in data mining. 1D 1 H-NMR is routinely used in plant metabolomics due to its high spectral reproducibility and low instrument drift [12]. However this relies on plant extracts that are comparable such that all peaks appear in consistent positions along the chemical shift scale and that peak resolution between samples is equivalent. Factors that impact on spectral quality and comparability between samples includes pH variation, differences in ionic strength and peak broadening due to the presence of paramagnetic and other metal cations [13][14][15]. These problems impact differentially on resonances from different compound classes and often need to be addressed prior to data collection. The use of buffered NMR solvents to normalise pH across samples is regularly used in plant metabolomics to align peaks [14][15][16][17], although as an alternative, new software algorithms exist to adjust for pH variation [18,19]. Complexation with chelators such as ethylenediaminetetraacetic acid (EDTA) addresses peak broadening from the presence of metal cations [14,17,20]. In extreme cases, peak broadening is highly variable across datasets and even can lead to apparent loss of peaks into the spectrum baseline. Hence, the development of robust protocols for sample handling and data collection are essential components of any mQTL screen, where many hundreds of samples are involved. Willow (and other tree species) present a range of problems to large-scale screening and metabolomics data collection, which has been established on more tractable species such as Arabidopsis [21][22][23], with other significant studies on Solanaceae [24,25], cereals [26,27] and Medicago [28]. Metabolite screening of perennial woody plants has been reported for loblolly pine (for milled stem tissue) [29], but generally the heterogeneity of tissue types and physical/chemical properties requires considerable re-thinking of the protocols developed for annual crops. In this paper we describe the development of new protocols that allow stable 1D 1 H-NMR and MS data collection on both leaf and stem tissue of SRC willow. The utility and robustness of the method is demonstrated in a study of source and sink metabolites in two willow biomass genotypes. We have also further developed the method for high throughput genetic screens, including automated quantitation using a bespoke 1D 1 H-NMR spectral library. 
Establishment of a Robust 1D 1 H-NMR-MS Protocol for Willow Metabolite Screening We had established a number of years ago that 1D 1 H-NMR profiling of extracts of freeze-dried Arabidopsis aerial tissue, made directly into deuterated methanol-water mixtures produced stable spectral fingerprints containing a range of primary and secondary metabolites that could define different genotypes [21,30,31]. When this method was applied to wheat flour, a small modification, to incorporate a brief 2 min/90 °C heat shock, was added to the protocol in order to denature hydrolytic enzymes that remained active in the NMR samples causing spectral instability, particularly in carbohydrate signatures [26]. This modified procedure has since been applied to over 100,000 samples of leaf, stem and seed tissues in our laboratory over recent years and has been described in detail [32,33]. The utility of this method is further enhanced as aliquots of the extract can be taken and diluted with non-deuterated solvent to provide parallel samples for mass fingerprinting by electrospray ionisation mass spectrometry (ESI-MS). These samples are totally compatible with the electrospray technique and can be infused directly into spectrometers and/or subjected to full LC-MS analysis. As the identical samples are used, correlative statistical analysis of 1D 1 H-NMR versus ESI-MS datasets has credibility and adds much confidence to biomarker discovery and structural determination (for example [34]). In initial experiments with willow, we utilised freeze-dried leaf and stem tissue, taken from three parts (top, middle, bottom) of the two biomass varieties, Tora and Resolution. Plant tissue was harvested, from field plots, in June in the middle of the rapid growth season, after coppicing in the previous February. It soon became apparent that 1D 1 H-NMR fingerprints generated by our standard protocol (extraction at 50 °C in 80:20 D2O:CD3OD) [32,33] suffered from two problems: some peaks were poorly resolved and secondly many signals (compounds) common to all tissues were misaligned relative to added d4-3-(trimethylsilyl)propionic acid (d4-TSP) internal calibration standard ( Figure 1). The degree to which these two problems manifested themselves varied across the dataset. Misalignment of peaks was not a simple linear shift that could easily be dealt with by adding a data processing step. Binning or "bucketing" the 1D 1 H-NMR spectra is a technique which is commonly utilised in metabolomics prior to downstream processing with statistical software. The technique reduces the resolution of the dataset to ensure that small changes in chemical shift between spectra do not yield false results from statistical processing of the data. The width (in ppm) of the "bucket" is chosen to try and ensure that a peak remains in its given bin or "bucket" despite small chemical shift variations between analyses. This can be achieved by using a user-defined fixed bucket width or via the use of intelligent bucketing [35] which uses an algorithm to set the optimum bucket width for particular peaks such that they are not split between buckets. However, the extent of the variation in chemical shift for the distinctive anomeric hydrogen signals of sucrose and α-glucose ( Figure 1) was such that application of normal data processing strategies resulted in these abundant metabolites residing is different spectral buckets (bins). 
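To make the bucketing step concrete, a minimal sketch of fixed-width bucketing is given below; it assumes a single spectrum held as paired chemical-shift and intensity arrays and is not the software actually used in this study. When peaks move between samples by more than the bucket width, the same metabolite can end up in different buckets, which is the alignment problem described above.

```python
import numpy as np

def bucket_spectrum(ppm, intensity, width=0.015, ppm_min=0.5, ppm_max=10.0):
    """Sum spectral intensities into fixed-width chemical-shift buckets."""
    edges = np.arange(ppm_min, ppm_max + width, width)
    # np.histogram with weights sums the intensity that falls into each bucket
    bucketed, _ = np.histogram(ppm, bins=edges, weights=intensity)
    centres = edges[:-1] + width / 2.0
    return centres, bucketed

# Example with a placeholder spectrum (real data would come from the spectrometer)
ppm = np.linspace(10.0, 0.5, 65536)               # chemical-shift axis in ppm
intensity = np.abs(np.random.randn(ppm.size))     # placeholder intensities
centres, buckets = bucket_spectrum(ppm, intensity)
print(f"{buckets.size} buckets of 0.015 ppm between 0.5 and 10 ppm")
```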
A fix based on processing with very wide bins (either via manual definition of the bucket size, or via intelligent bucketing) to encompass these shifts was not feasible as this resulted in signals from normally separated metabolites falling into the same bin, effectively reducing the high resolution spectra to a less useful, low resolution dataset with many uncertainties in metabolite annotation. The separate problem of poor resolution was also evident for a number of spectral regions particularly for the malate and citrate signals. In stem tissue samples, these signals could be easily observed but the degree of peak broadness varied for one sample to another depending on the harvest point of the willow stem. In leaves, the signals were so broad that they often seemingly disappeared into the baseline. The dual problem of variable line width and poor alignment meant that samples from different tissues or those taken from different parts of the plant could not easily be compared. A similar problem has previously been observed in extracts of fruit tissue such as tomato and fruit juices [36,37] that contain varying levels of malic and citric acids. In fruit juices, the problem was easily rectified by adding buffer directly to the liquid sample. In tomato tissues the problem was overcome by modifying the protocol to add a dry-down step after the initial extraction and removal of aliquots for ESI-MS, followed by re-dissolution of the NMR sample in deuterated phosphate buffer. This stabilised the 1D 1 H-NMR line shape and chemical shift of the organic acids as described by Kim et al. [38] and also realigned slight pH shifts in distinctive carbohydrate anomeric hydrogens. The willow spectra revealed that this plant also has high levels of citric and malic acids, but unfortunately, the relatively straight forward dry down/buffering solution to the problem was not completely successful (Table 1). It is known that willow is unusual in that it accumulates high levels of calcium oxalate in leaf tissue [39] and we reasoned that the 1D 1 H-NMR alignment problems were due to complex interactions of calcium ions with a variety of organic acids in the matrix, including malate and citrate as well as the 1D 1 H-NMRinvisible oxalate. To investigate this problem we carried out a detailed array of experiments as shown in Table 1, involving buffering at different pHs and ionic strengths and the addition of variable amounts of EDTA to complex the calcium ions. Initial trials were carried out on a dried down polar extract (80:20 H2O:CH3OH) of plant tissue. Reconstitution in 300 mM sodium phosphate buffer at pH6 failed to align the 1D 1 H-NMR peaks or to sharpen poorly resolved peaks such as those of citrate and malate. Increasing the ionic strength of the buffer to 600 mM still did not improve resolution. Trials were then carried out using EDTA to complex the Ca 2+ in the sample ( Table 1). Addition of 10 µL of a 3.2 mM solution of EDTA began to sharpen the pair of citrate doublets which appear between δ2.50 and 2.75. However the position of these peaks varied between samples. Adding increasing amounts (up to 100 µL) of the 3.2 mM solution of EDTA sharpened these peaks further but did not completely stabilise the chemical shift. Alternate strategies, to deal with Ca 2+ , such as precipitation as CaF2 following potassium fluoride addition [40] or removal by chelation with solid cation exchange resins [41] were also unsuccessful, failing to improve resolution or stability of peak position. 
An alternate solution to re-dissolution of the dried extract in aqueous buffer was to reconstitute the sample in the same ratios of deuterated methanol-water solvents as used to extract the plant. This improved the efficiency of reconstitution. Buffering of this solution via the addition of a small concentrated (10 µL, 2.6 M) "slug" of pH 6.0 buffer to the final sample appeared to improve the alignment of most signals in the spectrum, excluding malate and citrate. Increase of the pH of the concentrated buffer additive to 7.4 or 8.0 resulted in good alignment of these signals. Sharpening of the citrate and malate signals, such that they were of a comparable resolution across different tissues and genotypes, also required the addition of EDTA and after further experimentation it was found that a 10 µL addition of a stronger solution (32 mM) worked most effectively. The addition of this EDTA solution however, required further adjustments to buffer concentration to re-align some signals. It was found that the addition of a further 10 µL portion of the 2.6 M buffer such that the final solution was supplemented with 10 µL 32 mM EDTA and 20 µL 2.6 M potassium phosphate (pH 7.4) was optimum. In this way, a dataset was achieved within which all peaks from all tissue types were well resolved and aligned such that bucketing to 0.015 ppm reliably captured all the peaks in the same buckets between samples. By this approach we developed a protocol that produced stable, reproducible 1D 1 H-NMR spectra whilst retaining the ability to remove aliquots of the original extract for ESI-MS. To prevent introduction of EDTA and buffer salts into ESI-MS samples, concentrated chelator and buffer solutions were added at the end of the process only to the NMR sample. Representative spectra from stem and leaf tissues are shown in Figure 2. It can be seen that the organic acids are now well resolved and aligned, as are the anomeric hydrogens from common sugars. The signals from the Ca 2+ complex of EDTA are visible at 3.1 ppm (quartet) and 2.55 (singlet) [42,43] as abundant peaks, but do not interfere with those from endogenous metabolites. We can't rule out the possibility that EDTA was also complexing with other paramagnetic and diamagnetic metal ions but characteristic 1D 1 H-NMR peaks for e.g., Mg-EDTA (2.8 ppm) [42] or Mn-EDTA (2.8 ppm) [20] were not seen suggesting that Ca 2+ was the major cation responsible for chemical shift variation and peak broadening in willow tissues. Diamagnetic cations such as Ca 2+ , are commonly associated with chemical shift variation due to their ability to bind to metabolites such as citrate [40]. However, it is unusual for these diamagnetic cations to affect peak resolution which normally arises due to paramagnetic ion content. For example, studies in saliva showed that no peak broadening of the citrate peaks occurred due to the addition of additional Ca 2+ [44]. In willow tissues it appears that the variable organic acid content in leaf and stem tissues coupled with a high calcium oxalate presence, especially in leaves is influencing not just peak position but also resolution of both malate and citrate peaks, a situation that varies with the age of the tissue and which cannot be rectified by buffering alone, instead requiring a careful balance of metal chelator addition and pH adjustment. As the newly developed method involved a dry-down step, it also presented an opportunity to record the mass of extracted metabolites from each of the different tissue types. 
As shown in Table 2, the total mass of metabolites extracted from standard aliquots of freeze-dried milled willow tissue varied with the location of sampling. On the whole, approximately 30% of the dry mass of willow leaf was extractable, and this was consistent across both older and younger leaves. However, for stem tissue, not surprisingly, the percentage of extractable metabolites per unit dry weight of tissue decreased from ca. 32% in stem tissue taken from the top of the plant to just 12% in stem material harvested from the bottom of the plant, reflecting the maturity and hardness of the wood from top to bottom. For qualitative analysis and relative quantitative analysis, i.e., within a sample or across samples of the same tissue type, the lower amount of extractives is not an issue. However, for the calculation of carbon pools and flows in different tissues around the plant, the extractable mass becomes a factor in any mass-balance analysis. A further issue that came to light during the development of the method concerns the flavan-3-ol catechin, which occurs widely in the plant kingdom and is present at significant levels in willow samples. On standing in buffered deuterated aqueous solvents this compound undergoes slow hydrogen-deuterium exchange at the C-6 and C-8 positions. This results in loss of signal at δ6.09 (H-6) and δ6.00 (H-8). Although less rapid than hydroxyl or carboxyl hydrogen exchange, the exchange of these aromatic hydrogen atoms, via keto-enol tautomerism, was a fairly fast process and, as shown in Figure 3, was complete in 12 h at pH 7.4. The phenomenon of H/D exchange has previously been reported in response to heating samples containing flavonoid metabolites [45,46] and also in related anthocyanin molecules in acidified methanolic or aqueous solutions [47]. For the operation of the high throughput screen, varying degrees of exchange of the catechin H-6 and H-8 hydrogens have the potential to give false-positive results in multivariate analyses of large sets of spectra. This can be avoided either by "resting" the samples for 12 h after addition of the buffer solution, before data collection, or by removing the affected chemical shift "bins" from the spreadsheet of chemical shift versus intensity during data processing [32]. This will prevent false discovery of catechin as a biomarker. Other non-exchangeable catechin aromatic hydrogens at δ6.93, 6.92 and 6.85, together with the aliphatic double doublet at δ2.86 ( Figure 3), can be diagnostic for this compound and thus should emerge from multivariate analysis if levels are changing across a sample set. It should be noted that hydrogen-deuterium exchange in flavonoids only affects the buffered NMR sample. Samples for ESI-MS were removed before re-dissolution in NMR solvent and thus the flavonoids do not undergo any molecular weight shifts in this screen.

Analysis of Tora and Resolution Using the New Method

Willow stems and leaves from the two biomass varieties Tora and Resolution were analysed using the protocol described above. The choice to analyse two genetically related biomass willow varieties was made deliberately in order to test the robustness of the newly developed extraction and data collection protocol. Unlike many other biomass willows, these two varieties have a very similar phenotype, and metabolite changes due to genotype were expected to be subtle.
The ability of a protocol to separate spectra arising from these genotypes relied on high quality analytical data with a low variation due to the method itself. Average relative standard deviations, describing variation in technical replication, for abundant metabolites identified in the leaf and stem 1D 1 H-NMR spectra ranged from 2%-8% (Table 3). PCA of the resultant full 1D 1 H-NMR dataset (Figure 4), including all replicates, showed good clustering of the experimental data. Samples from technical and biological replicates for relevant samples clustered together and showed a lower variance compared to material from different sampling position or that from differing genotypes. Unsurprisingly the largest separation within the PCA model, in the direction of PC1 accounting for 42% of the total variance, was observed between leaf and stem samples (Figure 4a) irrespective of genotype or sampling point. PC2, accounting for 29% of the variance, described the separation within the leaf or stem cluster, due to sampling point (top, middle or bottom of the plant). The impact of sampling point was greatest in stem samples where samples harvested from the top of the plant formed a distinct cluster. When coloured according to genotype, PC4, which accounted for 3.5% of the total variance, separated the two biomass lines in the stem samples ( Figure 4b). In leaf samples, the two genotypes could be separated by PC5 accounting for 3% of the total model variance (Figure 4c). When leaf and stem samples were analysed separately (Figure 4d,e), clear clusters could be seen for sampling point in the direction of PC1 in both models. Separation due to genotype was evident in PC2. Interestingly, in stem tissue, the greater discrimination of samples was observed for tissues harvested from the bottom or middle of the plant. This discrimination was less evident in leaf samples where genotypes could be separated at all positional harvest points. Technical replication could also be assessed in the models resulting from separate tissue types ( Figure S1) and in general variance between the three technical replicates was lower than that observed between biological replicates. In order to determine the metabolites responsible for these distinct separations, a series of O-PLS models were constructed using a dummy matrix for separations due to tissue, sampling point or genotype ( Figure 5). Differences in the abundant metabolites between stems and leaves are shown in the OPLS S-plot in Figure 5b. Stem tissues typically contain higher glucose than leaves. In addition a number of amino acids are elevated including glutamine, asparagine, aspartate and GABA. The aromatic metabolite 2-phenylethylamine, a metabolite formed from phenylalanine and which is dominant in juvenile willow tissues is more abundant in stem tissues. Finally, signals relating to quinic acid at δ 1.845-2.073 are present in both tissues but are elevated in stem tissues and are also discriminatory metabolites. Contrastingly, leaf samples contain higher sucrose levels and elevated amounts of the organic acid malate. The abundant secondary metabolites, observed in leaves, included catechin and gallocatechin, while dihydromyricetin, the most abundant flavonoid in these Salix genotypes, was higher in leaves compared to stem samples. Finally, chlorogenic acid, an ester formed from caffeic and quinic acids was detected only in leaf samples. 
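The technical-replicate variation quoted at the start of this section (average relative standard deviations of 2%-8%) can be computed from a table of bucket or metabolite intensities along the following lines; this is a minimal sketch, and the assumption that replicates of a biological sample share a common label is an illustrative choice, not part of the published method.

```python
import pandas as pd

def mean_technical_rsd(intensities: pd.DataFrame, sample_id: pd.Series) -> pd.Series:
    """Average relative standard deviation (%) across technical replicates.

    intensities : rows = spectra, columns = metabolite (or bucket) intensities
    sample_id   : biological-sample label for each spectrum, so that the three
                  technical replicates of one sample share the same label
    """
    grouped = intensities.groupby(sample_id)
    rsd = 100.0 * grouped.std() / grouped.mean()   # RSD per metabolite per sample
    return rsd.mean()                              # averaged over all samples
```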
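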
Figure 5c,d shows the OPLS model that describes metabolite changes observed due to location in the plant, irrespective of tissue or genotype. As can be seen from the S-line plot in Figure 5d, a large number of signals are negative, indicating that the abundance of the majority of extractable polar metabolites is typically higher in young leaves and stems taken from the top of the plant. Metabolites which oppose this, and that have higher concentrations in older tissue from the base of the plant, include sucrose, citrate and malate. Finally, the model constructed to describe generic differences between the Tora and Resolution genotypes is shown in Figure 5e,f. Resolution typically contains higher levels of glutamine, asparagine, 2-phenylethylamine, glutamate and quinic acid. In contrast, Tora samples are generally higher in the major carbohydrates sucrose and glucose. In addition, dihydromyricetin, the major flavonoid in these samples, is elevated in the Tora genotype. The PCA and O-PLS models demonstrate that, utilising the new extraction protocol, samples from different willow genotypes, where tissue has been obtained from different locations of the plant, can be separated on the basis of their tissue type, harvest point and genotype. O-PLS S-plots detail the major metabolites responsible for these separations. However, it was difficult to ascertain which quantitative metabolite profiles across the sampling position of the plant were able to discriminate the genotypes and which, if any, showed contrasting profiles in the leaf versus stem tissue. Figure 6 shows the metabolite trajectories across the height of the plant, allowing differences in the profiles to be more easily discerned. In leaves, metabolite profiles (Figure 6a) which discriminate Tora from Resolution include those of leucine, aspartate and 2-phenylethylamine. These metabolites show a similar trajectory but are typically more abundant in one genotype compared with the other. For other metabolites, a difference between genotypes can be seen when tissue is harvested from a particular position of the plant. Clear differences in dihydromyricetin levels are observed when leaves are harvested from the top of the plant, but older leaves from the lower part of the plant are unable to discriminate the genotypes. Similar observations are seen for aspartate and glucose. In general, the major soluble carbohydrate concentrations decrease as leaves are sampled from the top to the bottom of the plants, while organic acid concentrations (malic and citric) are higher in the lower, older leaves. Similarly, the amino acids GABA, glutamine, valine, isoleucine and leucine show higher concentrations in these older leaves from the base of the plant. Contrastingly, alanine, glutamine and threonine levels reach their highest concentration in samples from the top of the plant. Figure 6b shows the same type of metabolite profiles obtained from stem tissue. As suggested by the O-PLS plots, the extracted levels of many metabolites decrease in stem tissue obtained from the lower part of the plant. In many cases, although the profile follows the same trajectory, the intensity is greater in material sampled from Resolution; examples here include asparagine, 2-phenylethylamine, threonine, isoleucine, lactate and glutamine. From this dataset, the only metabolite that consistently increased when sampling the lower part of the stem was sucrose.
This is in contrast to the profile observed in the leaves where sucrose was typically at its highest level when material was sampled from the top of the plant. Similarly the profiles of many amino acids and organic acids show contrasting profiles in the leaf and stem samples. The data described in Figure 6 was obtained via scaling the 1D 1 H-NMR dataset to a known concentration of internal standard (d4-TSP) which was present in the extraction solvent. Since 1D 1 H-NMR is a quantitative technique, irrespective of metabolite chemistry, scaling to the internal standard gives information regarding the absolute concentration of metabolite extracted from 15 mg of dried plant sample. However, from the data in Table 2 we know that the total amount of extractable metabolites is not consistent across all samples in the experiment. Whilst the mass of the soluble metabolome is fairly consistent in leaves and from stem samples obtained from the top of the plant, the amount of extractives obtained from older basal stem sections is considerably lower. Thus, while Figure 6 gives an overall picture of levels of each metabolite in each sample, it cannot describe relative changes within the soluble metabolite pool since some of these changes may be masked via a larger change in extractive yield. The new protocol described in this paper, incorporating a measurement of the extractives after dry down, allows the metabolomic 1D 1 H-NMR data to be normalised to a constant sample weight. This reveals the spatial variation in the dataset allowing metabolite changes within the soluble metabolite pool to be discerned. Figure 7 shows the effect of normalising the data back to a constant 3 mg weight of extractable material. The effect of the normalisation does not alter the direction of the leaf profiles (Figure 7a). This is to be expected since leaves harvested from different parts of the plant typically yielded the same amount of extractable metabolites. However, Figure 7b shows the effect of the normalisation of the stem data. Unlike the data displayed in Figure 6b, which described the diminishing concentrations of the majority of soluble metabolites down the stem, this plot now shows a range of contrasting profiles and represents the real soluble metabolite changes happening within the part of the tissue, irrespective of a changing, and presumably increasing, non-extractable portion of the tissue sample. There is an approximately three-fold difference in the amount of extractives obtained between stems sampled from the top and bottom parts of the plant. Thus, the profile of any metabolite change which is within a three-fold difference may reverse its trajectory when normalised. Those which showed greater than three-fold changes will continue to show the same trajectory although the magnitude of that difference will be attenuated. For the abundant soluble carbohydrates (glucose, fructose and sucrose) the profiles show a similar trajectory to that previously described. However, there has been a large effect on the malate and citrate profiles which now show that both these metabolites actually increase in concentration within the soluble metabolite pool as sampling proceeds from the top to the bottom of the plant. Similarly, we see that secondary products such as catechin, gallocatechin and dihydromyricetin increase in stem tissues obtained from the lower portion of the plant. 
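A minimal sketch of the normalisation to a constant 3 mg of extractable material described here might look as follows, assuming the binned intensities are held in a pandas DataFrame and the measured extract masses in a parallel Series; all names are illustrative only.

```python
import pandas as pd

def normalise_to_extract_mass(binned: pd.DataFrame,
                              extract_mass_mg: pd.Series,
                              target_mg: float = 3.0) -> pd.DataFrame:
    """Rescale TSP-referenced bucket intensities to a constant extract weight.

    binned          : bucket intensities, one row per sample
    extract_mass_mg : measured dry mass of extract for each sample (mg)
    target_mg       : common reference mass (3 mg, as used in the text)
    """
    factor = target_mg / extract_mass_mg   # per-sample scaling factor
    return binned.mul(factor, axis=0)      # multiply each row by its factor
```

Because a sample yielding more extract is scaled down and a sample yielding less extract is scaled up, profiles that differ by less than the roughly three-fold range in stem extract yield can change direction after this step, exactly as described above for malate and citrate.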
In terms of differences between genotypes, the normalisation of the dataset to a constant weight of extractable metabolites shows that one of the largest differences in profile intensity is now observed for the asparagine content in stems which is very clearly higher in the material sampled from Resolution. Examination of the direct infusion ESI-MS data from the top, middle and bottom sections of the two genotypes using PCA of the concatenated positive and negative ion spectra revealed that the data shape is in line with that seen for the 1D 1 H-NMR profiles ( Figure S2). Leaf and stem samples could be easily separated in the direction of PC1 (45%) while PC2 (25%) separated the stem data based on sampling location ( Figure S2a). When coloured by genotype, PC4 (5%) separated the stem data based on genotype ( Figure S2b) and PC5 (1%) discerned differences due to genotype in the leaf samples ( Figure S2c). When PCA models were constructed using stem or leaf data alone, the data further mirrored the clustering observed in PCA of the 1D 1 H-NMR data (Figure 4). In leaves ( Figure S2d), PC1 (81%) described the separation due to sampling point while PC2 (9%) separated the two genotypes. Samples taken from the top of the two different genotypes were easily differentiated. For the stem data only, (Figure S2e), the ESI-MS data again mirrored the 1D 1 H-NMR data (Figure 4e) with harvest location described by PC1 (58%) and genotype described in the direction of PC2 (32%). Interestingly, it was more difficult to separate samples by genotype when material from the top of the plants was analysed by ESI-MS compared to samples taken from older, lower parts of the plant. This mirrored the observations from the PCA models constructed from stem 1D 1 H-NMR data (Figure 4e). Contrastingly, in the leaf only ESI-MS PCA model (Figure S2d), the separation between middle and bottom harvest points was less discernible, when compared to the corresponding 1D 1 H-NMR PCA model (Figure 4d). However, on the whole the shape of the ESI-MS data matched that of the 1D 1 H-NMR data, demonstrating that correlation of 1D 1 H-NMR signal versus ESI-MS signal is a valid strategy for metabolite annotation. Construction and Application of a Bespoke Willow 1D 1 H-NMR Spectral Library for Automated Quantitation of Metabolites Provision of a list of metabolites in a sample with their concentrations is the output of choice for multidisciplinary projects where the data is to be mined against other trait or omics datasets or passed onwards for further statistical processing. The nature of 1D 1 H NMR data and the complexity of typical plant extract spectra with many overlapping peaks from multiple metabolites make manual quantitation difficult and time consuming. Chenomx NMR suite is a set of tools for identifying and quantifying metabolites from 1D 1 H-NMR spectra of mixtures [48], allowing for quantitation of metabolites even when some signals are overlapped with those from another metabolite. Matching and quantitation can be carried out in automation based on comparison to a library of pH sensitive signatures of authentic metabolites run at differing instrument field strengths. However, as it was developed for clinical metabolomics, the Chenomx library does not contain many common plant metabolites, especially the species specific secondary metabolites. Furthermore, there is no capacity to compare spectra which have been collected in D2O:CD3OD mixtures. 
While this was a problem with some earlier versions of the software, Version 7.6 allows users to build user-defined signatures based on their own extraction protocol and 1D 1 H-NMR data collection parameters. We have therefore constructed a library of signatures from all the abundant primary metabolites detected in Tora and Resolution willow leaves and stems and have supplemented this with signatures from key secondary metabolites such as flavonoids and phenolics and their glycosides, such as salicin and salicortin and triandrin, which are well documented in the Salix literature. To date, this bespoke library contains 90 signatures, 52 of which overlap perfectly with those obtained when using the newly developed protocols described above. As an example, matching and quantitation (in μmoles/g dry weight and mg/g dry weight) of the Tora and Resolution leaf and stem data was evaluated and is detailed in Tables S1 and S2. As can be seen by comparison with the data in Figure 6, the use of the Chenomx profiling software has increased the number of metabolites that we were able to quantify. As a means of comparison to the relative data obtained from binning, quantified data in mg/g d.w. have been plotted across tissue types in Figures S3-S6. The profiles of these concentrations agrees well with the majority of metabolites following the same trajectory as that obtained from plotting characteristic regions from the 1D 1 H-NMR directly. Based on this quantified metabolite data, metabolites showing significant (p < 0.05) differences between the Tora and Resolution genotypes could be identified in both stem and leaf tissues sampled at each part of the plant (Table 4). There is surprisingly little published comparative quantitative data on S. viminalis primary metabolites and thus it is difficult to compare the levels of individual metabolites or compound classes found in our study. Some other diverse Salix genotypes have been studied although often these studies have been sampled at different points in the developmental cycle, on other tissue types and are often subject to stresses or heavy metal treatments. Such examples include the assessment of amino acids in phloem and xylem of Salix species [49,50]. In the case of soluble sugars, glucose, sucrose and fructose have been described as the major soluble carbohydrates present in hydroponically grown, juvenile S. viminalis leaves [51] where levels reached 35 mg/g d.w. for glucose, 12.5 mg/g d.w. for fructose and 44 mg/g d.w. for sucrose. Our data from field grown tissue mirrors the profile in that glucose and sucrose levels were similar to each other in leaves harvested from the top of the plant and that fructose levels although still abundant were somewhat lower in concentration. The overall concentration of leaf soluble sugars appears lower in older field grown material compared to that reported for young plants. This is in agreement with data presented on Populus deltoides × nigra where similar levels of carbohydrates were reported to our own study [52]. In terms of organic acids, malate, citrate, ascorbate and quinate levels dominated the organic acids fraction of leaves in our study while major components in stems were ascorbate, malate, quinate and 2-oxoglutarate, the latter being highest from stem material harvested from the top of the plant. Malate and citrate levels (on a fresh weight basis) are reported in leaves of S. alba at 1.6 and 0.6 mg/g F.W. respectively [53]. Thus, our observations of 3-10 mg/g d.w. 
of citrate in leaves are broadly comparable. Similarly, results of 6-22 mg/g d.w. of malate in S. viminalis are comparable with levels observed on a fresh matter basis in S. alba leaves. Willow and poplar are well known for the diversity of phenolic glycosides present in stem tissues [54], although it is also recognized that levels of such metabolites vary over the growth season [55]. S. viminalis tissue is typically low in the salicinoids, during periods of active growth, compared to other varieties of willow such as S. purpurea [56]. Thus, as expected, we observed only small amounts of salicin (typically <1 mg/g d.w.) in this experiment. Additionally, the 1,4-substituted analogue triandrin was detected in all leaf and stem samples, consistent with previous findings [56] that it is a common component in S. viminalis. The aromatic regions of our spectra also contained a mixture of flavanols, with major components such as dihydromyricetin, catechin and gallocatechin. Levels of these compounds in our study ranged from 0.23-7 mg/g d.w. Such high levels of these compounds have previously been reported in stem tissues of e.g., S. caprea [57]. Conversion of quantified data to units of mg/g d.w. allowed a total concentration of quantified metabolites to be elucidated (Table S2). Of note here is the fact that, in leaf, the concentration of total quantified metabolites ranged from 75 mg/g d.w. to 93 mg/g d.w. and did not vary significantly by genotype or tissue position. This is in parallel with the data outlined in Table 2 relating to the variation in % extractable metabolites from leaf. However, 90 mg/g d.w. of quantified metabolites in leaf samples represents approximately 30% of the known extractable mass. Thus, in leaves, ~70% of polar extractives relate to unknowns that either have not yet been quantified or to substances that do not give signals in the 1D 1 H-NMR spectrum (Figure 8). Examples here would be inorganics such as phosphate, metal salts or oxalate (which is known to be high in willow leaves, [29]) or multiple low abundance metabolites that are below the level of detection in NMR. From Chenomx assignments, it is the latter which is most likely. When compounds are examined by their chemical classes (Figure 9), it is clear to see that the only class that changes in the absolute amount per gram of leaf tissue is the organic acids which are at their highest level in older leaves at the bottom of the plant. When metabolite concentrations were normalised to the metabolite pool, we can also see that total levels of amino acids, carbohydrates and aromatics are highest in young leaves from the top of the plant. In contrast, mass that is 1D 1 H-NMR invisible such as inorganic salts is lowest in young leaves. In stem tissues the absolute amount of metabolites that can be quantified per gram of plant tissue decreases ( Figure 8). However, within the pool the % of these quantifiable metabolites is relatively static. In terms of 1D 1 H-NMR invisible metabolites, these are lowest in material from the top of the plant and increase in older stem tissue, although even here the mass of such metabolites is lower than seen in leaf material ( Figure 9). In terms of stem organic acids, these show a similar behaviour in both genotypes with highest levels at the top of the plant. In contrast to leaves, organic acid concentrations are lowest from stem material collected from the bottom of the plant. 
Levels of total soluble carbohydrates and amino acids discriminate genotypes with Tora containing higher stem carbohydrate and Resolution higher stem amino acids. Total aromatic metabolites are similar in both genotypes with highest levels of these compounds isolated from younger tissue. The development of the Chenomx metabolite library in concert with the methods described for sample handling and data collection therefore enable a detailed list of metabolites to be generated in high throughput for comparison of metabolite pools and compound classes between samples and will enable future large scale metabolomics experiments, such as mQTL studies, in willow. Simplification of the Method for High Throughput 1D 1 H-NMR-MS Screening Above, after much optimisation we developed a robust protocol for 1D 1 H-NMR-MS screening of the willow metabolome. The protocol ( Figure S7a) was developed and deployed above with a dry-down step, for recording of extractable weight, allowing normalisation and study of the dynamics of the metabolite pool. However, for the large-scale screening of comparable tissues from genetic populations for mQTLs, where wet-lab processing steps are ideally kept to a minimum, the method was modified according to Figure S7b and the final entry of Table 1. Tissue was extracted directly into deuterated NMR solvent and the dry down/reconstitution step was removed. After removal of aliquots for ESI-MS, NMR samples were then modified with pH 7.4 phosphate buffer and EDTA, prior to spectral data collection. Analysis of the resultant 1D 1 H-NMR spectra showed that samples prepared without the dry down step contained higher levels of ascorbate and acetate. These were the only evident changes between the two methods. Comparison of the data, obtained by the two methods, by PCA ( Figure 10) showed that corresponding samples prepared by each method still clustered together and that the separation by harvest position or genotype was larger than any difference between the two modes of extract preparation. Plant Material Tissue from the two biomass varieties, Tora and Resolution, was harvested from the National Willow Collection at Rothamsted in June 2012 ( Figure S4). Both genotypes are Salix viminalis × S. schwerinii hybrids and are female and diploid. They are distantly related in that a sibling of Tora (Bjorn) is the male parent of both parents of Resolution. The original planting of Tora was in 2002, whilst that of Resolution was 2004. The plots had previously been coppiced in February 2012 and thus the material represented circa 4 months fresh regrowth from stools. The freshly coppiced plots had been treated with herbicide (amitrole, 20 L/ha) and nitrogen fertiliser in February 2012. Immediately after harvest, leaves and stems from each genotype were each divided into three samples representing bottom (1-30 cm), middle (31-60 cm) and top (61 cm and above) parts of each plant. Two similar sized plants were harvested and dissected thus producing two biological replicates of each genotype/tissue type. Samples were frozen in liquid nitrogen, then freeze-dried and milled to a powder in a cryo-mill. They were stored at −80 °C prior to analysis. Figure 10. Comparison of binned 1D 1 H-NMR data from extracts prepared by method "a" and method "b" (Figure S7). (a) PCA scores plot of willow stem 1D 1 H-NMR data coloured by method used to prepare NMR extracts; green: method "a"; blue: method "b". 
(b) PCA scores plot of willow leaf 1D 1 H-NMR data coloured by method used to prepare NMR extracts; green: method "a"; blue: method "b". Preparation of NMR-MS Samples for Willow, Incorporating a Dry-Down Step for Determination of Mass of Extracted Metabolites To triplicate aliquots (15.0 mg) of each freeze-dried, milled plant sample in 2 mL round bottom Eppendorf tubes, was added H2O-CH3OH (4:1) extraction solvent (1.0 mL). After mixing, the tubes were heated to 50 °C for 10 min, cooled and centrifuged. From each tube, supernatant (850 μL) was transferred to a clean Eppendorf tube and then heated to 90 °C for 2 min. The samples were then cooled to 4 °C for 30 min and then centrifuged. For ESI-MS, 50 μL of the supernatant was removed to a glass HPLC vial and diluted with 950 μL of H2O-CH3OH (4:1). For extract mass determination and subsequent 1D 1 H-NMR analysis, 700 μL of the supernatant were transferred to a clean pre-weighed Eppendorf tube and then evaporated in a vacuum concentrator overnight at 30 °C. After further drying (30 min) in a vacuum oven (room temperature), the weight was recorded and 700 μL of NMR solvent [D2O-CD3OD, 4:1 v/v, incorporating 0.01% w/v 2,2,3,3-d4-3-(trimethylsilyl)propionic acid (TSP)] was added. After dissolution at room temperature, 20 μL deuterated 2.6 M phosphate buffer, pH 7.4 [containing 4.19 g K2HPO4 and 0.808 g KH2PO4 in 10 mL D2O] was added along with 10 μL of EDTA solution [32 mM, containing 12 mg ETDA-Na2.2H2O in 1 mL D2O]. After mixing and standing for 30 min, the samples were centrifuged and 650 μL were removed to clean, dry 5 mm NMR tubes. Sample Preparation for High-Throughput 1D 1 H-NMR-MS Screening of Willow, Utilising Direct Extraction into Deuterated NMR Solvent To triplicate aliquots (15.0 mg) of each freeze-dried, milled plant sample in 2 mL round bottom Eppendorf tubes, was added D2O-CD3OD (4:1 v/v) incorporating 0.01% w/v TSP (1.0 mL). After mixing, the tubes were heated to 50 °C for 10 min, cooled and centrifuged. From each tube supernatant (850 μL) was transferred to a clean Eppendorf tube and then heated to 90 °C for 2 min. The samples were then cooled to 4 °C for 30 min and then centrifuged. For ESI-MS, 50 μL of the supernatant was removed to a glass HPLC vial and diluted with 950 μL of H2O-CH3OH (4:1). For NMR, 700 μL of the supernatant was removed to a clean Eppendorf tube and mixed with 20 μL deuterated 2.6 M phosphate buffer, pH 7.4 and 10 μL of 32 mM EDTA solution in D2O, as above. 650 μL of this buffered sample was transferred to a 5 mm NMR tube. 1D 1 H-NMR and Direct Infusion ESI-MS Data Collection and Data Analysis These were respectively carried out on an Avance 600 MHz NMR Spectrometer (Bruker Biospin, Coventry, UK) and an Esquire 3000 mass spectrometer (Bruker Daltonics, Coventry, UK) using parameters and settings as previously described [30]. Briefly, 1D 1 H-NMR spectra were acquired at 300 K using a 5 mm SEI probe. A water suppression pulse sequence (noesygppr1d) was utilised employing a 90° excitation pulse angle and a pre-saturation pulse during the relaxation delay of 5 s. Data were acquired using 128 scans of 65,536 data points across a sweep width of 12 ppm. 1D 1 H-NMR FIDs were zero filled to double their original size, and Fourier transformed with an exponential window function (0.5 Hz). Spectra were manually phased and automatically baseline corrected in Amix (Analysis of MIXtures, Bruker Biospin) using a 2nd order polynomial. 
1 H chemical shifts were referenced to d4-TSP at δ0.00 and spectra were automatically reduced to create an ASCII file containing integrated regions of equal width (0.015 ppm). Spectral intensities were scaled to the d4-TSP region (δ0.05 to −0.05). The ASCII file was imported into Excel for the addition of sampling/treatment details. The regions for unsuppressed water (δ4.865-4.775), d4-MeOH (δ3.335-3.285) and d4-TSP (δ0.05 to −0.05) were removed prior to importing the dataset into SIMCA-P 13.0 (Umetrics, Umea, Sweden) for multivariate analysis. Multivariate analysis (PCA and OPLS) was carried out using unit variance scaling. For construction of trajectory plots of individual metabolites, data from characteristic regions for known metabolites was combined to give a single intensity response for each metabolite. Technical replicates were averaged and errors displayed on the basis of 2 biological replicates. Annotation of peaks to individual metabolites was achieved via comparison to a library of authentic standards prepared in identical conditions to the test samples and run under identical 1D 1 H-NMR conditions. Automated Batch Quantification of Target Metabolites Batch quantification of metabolites in 1D 1 H-NMR spectra was achieved using the Chenomx NMR Suite 7.6 (Chenomx Inc., Edmonton, AB, Canada) [48]. A database of 90 metabolite signatures was built from spectra of authentic pure samples of common plant metabolites and willow-specific secondary products, by collecting spectra at 600 MHz on the same spectrometer and instrument settings in the pH 7.4 and EDTA modified solvent as above. The standard metabolites were quantified against the known concentration of reference compound (TSP) and fitted to record peak centres and coupling constants in the database. Quantitative profiling across the willow batched spectra was carried out using the Profiler module in the software, which superimposes a Lorentzian peak shape model for each database entry onto the analyte spectra, and reports a concentration for each matched metabolite in each spectrum. Every metabolite fit was manually inspected. Data for technical replicates were averaged and a mean concentration for each biological sample was tabulated. The output data table was examined by PCA (SIMCA-P, Umetrics, Umea, Sweden), to quality assure the Chenomx determined quantitations by means of the inbuilt biological and technical replication. Significance of metabolite concentration differences was determined using one-way ANOVA and was carried out in Microsoft Excel. A table of characteristic chemical shifts for metabolites identified from Tora and Resolution genotypes has been included as Table S3. Conclusions In summary, we have overcome a variety of technical challenges and developed a robust method for high throughput screening of the willow primary and secondary metabolomes, which gives 1D 1 H-NMR and ESI-MS data on the same samples with low variation due to technical replication. The method allows direct statistical comparison and correlation of stem (wood) and leaf samples from any part of the willow plant and across the two spectroscopic datasets, and this has been demonstrated via a range of statistical methods which are common in many metabolomics studies. The processing regime also allows for measurement of the extractable mass of the soluble metabolome, data that will be necessary for modelling metabolic flow from sources to sinks. 
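A rough Python equivalent of this processing chain (scaling to the d4-TSP region, removal of the water, CD3OD and TSP regions, unit-variance scaling and PCA) is sketched below; scikit-learn is used here only as a stand-in for the Amix and SIMCA-P steps, and the variable names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def preprocess_and_pca(centres, X, n_components=5):
    """Scale bucketed spectra to TSP, drop solvent regions and run PCA.

    centres : bucket centres in ppm
    X       : samples x buckets matrix of integrated intensities
    """
    tsp = (centres >= -0.05) & (centres <= 0.05)
    X = X / X[:, tsp].sum(axis=1, keepdims=True)      # scale to the d4-TSP region
    drop = (
        ((centres >= 4.775) & (centres <= 4.865)) |   # unsuppressed water
        ((centres >= 3.285) & (centres <= 3.335)) |   # residual CD3OD
        tsp                                           # the TSP region itself
    )
    X = X[:, ~drop]
    X = StandardScaler().fit_transform(X)             # unit-variance scaling
    return PCA(n_components=n_components).fit_transform(X)
```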
A streamlined adaption of the method for high-throughput screening was also refined and demonstrated to be robust. In addition to the quantification of metabolites via integration of characteristic bins in the processed data, we have automated quantitation of 52 metabolites in the 1D 1 H-NMR spectra, using Chenomx and show that the results are comparable. Either method enables rapid extraction of quantitative data from high throughput genetic screens, which we are now conducting across the extensive genotype collections held at Rothamsted. We would anticipate that the methods developed here are directly applicable to related species such as poplar, and potentially to many other woody biomass crops. Using samples taken from the two willow genotypes, we have also demonstrated that the 1D 1 H-NMR and ESI-MS datasets show the same trajectories when modelled by PCA, and thus we expect that meaningful NMR to MS structural information can be gleaned from combined analysis of these two datasets. Furthermore, as NMR is non-destructive, the samples are available for further spectroscopic investigation to follow up on metabolites of interest. We are now applying these methods to diversity and mapping populations, with a view to identifying mQTLs for biomass yield and other agronomic traits, including selection of lines for novel metabolite related properties. Studies in annotation of the ESI-MS data are also underway, including a very high resolution uHPLC-ESI-MS-MS study to further enhance the value of the screen. Details of this study will be published elsewhere.
Structural racism is associated with adverse postnatal outcomes among Black preterm infants Background Structural racism contributes to racial disparities in adverse perinatal outcomes. We sought to determine if structural racism is associated with adverse outcomes among Black preterm infants postnatally. Methods Observational cohort study of 13,321 Black birthing people who delivered preterm (gestational age 22–36 weeks) in California in 2011–2017 using a statewide birth cohort database and the American Community Survey. Racial and income segregation was quantified by the Index of Concentration at the Extremes (ICE) scores. Multivariable generalized estimating equations regression models were fit to test the association between ICE scores and adverse postnatal outcomes: frequent acute care visits, readmissions, and pre- and post-discharge death, adjusting for infant and birthing person characteristics and social factors. Results Black birthing people who delivered preterm in the least privileged ICE tertiles were more likely to have infants who experienced frequent acute care visits (crude risk ratio [cRR] 1.3 95% CI 1.2–1.4), readmissions (cRR 1.1 95% CI 1.0–1.2), and post-discharge death (cRR 1.9 95% CI 1.2–3.1) in their first year compared to those in the privileged tertile. Results did not differ significantly after adjusting for infant or birthing person characteristics. Conclusion Structural racism contributes to adverse outcomes for Black preterm infants after hospital discharge. Impact statement Structural racism, measured by racial and income segregation, was associated with adverse postnatal outcomes among Black preterm infants including frequent acute care visits, rehospitalizations, and death after hospital discharge. This study extends our understanding of the impact of structural racism on the health of Black preterm infants beyond the perinatal period and provides reinforcement to the concept of structural racism contributing to racial disparities in poor postnatal outcomes for preterm infants. Identifying structural racism as a primary cause of racial disparities in the postnatal period is necessary to prioritize and implement appropriate structural interventions to improve outcomes. INTRODUCTION Racial disparities in adverse perinatal and postnatal outcomes including prematurity, low birthweight, preterm comorbidities, infant mortality, and health care utilization have been previously described. [1][2][3][4][5] Racial disparities often persist despite adjusting for birthing person characteristics, including medical co-morbidities and socioeconomic factors. 2,4,5 Although genetic etiologies of racial disparities in perinatal outcomes were historically entertained, it is now understood that race is a socio-political construct without a genetic basis. 6,7 Once a historically under-recognized concept, racism in all its forms, structural, interpersonal, and internal, is now recognized as a core social determinant of health (SDH) with substantial health consequences for pediatric populations. 8,9 Structural racism is defined as systematically discriminatory laws and practices that have resulted in a disparate distribution of goods, services, and opportunities for racial groups. 10 Structural racism has been directly associated with poor perinatal outcomes including preterm birth (PTB), infant mortality, and small for gestational age (SGA) infants. 
[11][12][13][14][15][16][17] Structural racism has been quantified by measuring distribution of resources and opportunities such as employment, education, income, and incarceration. 11,[18][19][20] One such measure of structural racism is the Index of Concentration at the Extremes (ICE), a metric developed by Massey et al. and further modified by Krieger et al., that measures spatial social polarization by quantifying extremes of privilege among social groups using race and income data (ICE race + income). 20,21 In previous perinatal literature, ICE operationalized as a proxy of structural racism has been associated with preterm birth, infant mortality, and a combined outcome of neonatal mortality and severe preterm comorbidity. 15,[22][23][24] Less is known about how structural racism impacts preterm infants after their initial hospitalization. Previously, we described that Black preterm infants were at higher risk of frequent acute care visits, readmissions, and death after hospital discharge in their first year of life when compared to white preterm infants. 4,5 Despite adjusting for several medical and social covariates, our findings persisted, and we hypothesized that structural racism was a root cause of the racial and ethnic disparities observed. Although structural racism is associated with adverse perinatal outcomes like preterm birth and infant mortality for Black infants, less is known about how structural racism may continue to contribute to the health and wellbeing of Black preterm infants after hospital discharge. Thus, in this study, we investigate if structural racism, measured by ICE race + income, is associated with previously described adverse postnatal infant outcomes including frequent acute care visits, readmissions, and pre-and post-discharge mortality in the first year of life. Study population The data for our cohort study was drawn from birthing persons, a term that recognizes not all birthing people identify as women, who delivered liveborn infants in California between 2011 and 2017 (n = 3,448,707) using a birth cohort database. The database maintained by the Office of Statewide Health Planning and Development includes birth certificate, infant hospital/emergency department, birthing person hospital/emergency department, and infant death records up to the infant's first year of life. The sample was merged with census tract data available from the American Community Survey (ACS, 2011-2017) to generate ICE scores by census tract. The sample was restricted to live born singletons of non-Hispanic Black race/ethnicity birthing persons (n = 166,942) with gestational ages <37 weeks (n = 16,337), with records available for both infants and birthing person pairs (n = 13,321, Fig. 1). Non-Hispanic Black race/ ethnicity birthing persons were prioritized in the study as this group has been subject to extreme structural racism in the U.S. and a high risk of adverse obstetric and neonatal outcomes. Data from ACS regarding non-Hispanic white birthing persons was also used to calculate ICE race + income scores, although this group was not included in the study population. Exposure and outcomes Self-reported race and ethnicity was abstracted from the infant's birth certificate. Race and ethnicity were organized into the following groups for our analysis: non-Hispanic white (which we will refer to as "white") and non-Hispanic Black (which we will refer to as "Black"). 
The ICE metric measures Black-white disparities and thus Hispanic, Asian, American Indian and Alaska Native, Native Hawaiian and Pacific Islander, and multiracial infants were excluded. Birthing person demographic characteristics were obtained from the birth cohort database. The primary outcomes included two or more acute care and/or emergency department (ED) visits, based on a modified prior definition of frequent ED visits, 5,25 hospital readmission, less than 7 day mortality, and pre- and post-discharge mortality. All outcomes occurred in the first year of life and were obtained from the birth cohort database. Covariates Birthing person age was categorized into less than 18 years, 18-34 years, and greater than 34 years. Birthweight was used to assess in utero growth and was categorized into small for gestational age (SGA), average for gestational age (AGA), and large for gestational age (LGA), defined by less than 10th percentile, 10th-90th percentile, and greater than 90th percentile birth weights, respectively. 26 Birthing person characteristics included body mass index (BMI), categorized as underweight (less than 18.5 kg/m²), normal weight (18.5-24.9 kg/m²), overweight (25.0-29.9 kg/m²), and obese (30.0 kg/m² or greater). Other covariates, collected with ICD-9 and ICD-10 codes on hospital records for birthing people, included any smoking, alcohol, or illicit drug use, chronic or gestational hypertension, and chronic or gestational diabetes mellitus. Adequacy of prenatal care was a binary outcome defined by Kotelchuck et al. 27 Social factors included: highest level of completed education (less than high school education, high school graduate, and more than high school education), insurance coverage for delivery (public, private, or other), and participation in the federal supplemental nutrition assistance program for Women, Infants, and Children (WIC). Index of concentration at the extremes (ICE) ICE race + income scores were generated using race and income data for census tracts derived from the American Community Survey (2011-2017). ICE measures spatial social polarization by quantifying extremes of privilege among social groups in a single metric. 20,21 The following formula is used to calculate ICE: ICE_i = (A_i − P_i) / T_i, where A_i is the number of persons belonging to the most privileged extreme and P_i is the number of persons belonging to the least privileged extreme in the i-th census tract, and T_i is the total population of the i-th census tract. This study uses the combined race and income ICE measure as proposed by Krieger et al. 21 The most privileged race and income group was defined as non-Hispanic white individuals with annual income >$100,000 and the least privileged group was defined as non-Hispanic Black individuals with annual income <$25,000. Annual incomes of <$25,000 and >$100,000 represent the 20th and 80th percentiles of household income, respectively. ICE is a continuous variable that ranges from −1 to 1, where −1 represents least privileged and 1 is most privileged. ICE race + income scores were categorized into three tertiles based on the sample distribution (n = 13,321) of these measures, from tertile 1 (least privileged) to tertile 3 (most privileged). ICE scores are calculated and assigned to each census tract; thus, ICE scores represent the degree of racial and income segregation and inequity in the area in which each birthing person lives.
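A minimal sketch of the ICE race + income calculation and tertile assignment, assuming census-tract counts are held in a pandas DataFrame with hypothetical column names, is shown below; it is an illustration of the formula above, not the authors' code.

```python
import pandas as pd

def ice_race_income(tracts: pd.DataFrame) -> pd.Series:
    """ICE_i = (A_i - P_i) / T_i for each census tract.

    Assumed columns: 'nhw_income_gt100k' (non-Hispanic white persons with
    household income > $100,000), 'nhb_income_lt25k' (non-Hispanic Black
    persons with household income < $25,000) and 'total_pop'.
    """
    ice = (tracts["nhw_income_gt100k"] - tracts["nhb_income_lt25k"]) / tracts["total_pop"]
    return ice.clip(-1, 1)   # ICE is bounded between -1 and +1 by construction

def ice_tertiles(ice: pd.Series) -> pd.Series:
    """Tertile 1 = least privileged ... tertile 3 = most privileged."""
    return pd.qcut(ice, 3, labels=[1, 2, 3])
```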
Statistical analysis We computed summary statistics, including the mean, standard deviation, minimum and maximum, for each ICE race + income tertile for the entire study sample and for all California live births. In addition, we computed the proportion of covariates and outcomes under study. Moreover, we computed the percentage of adverse postnatal outcomes by ICE race + income tertile and used a chi-square test to examine differences among the tertiles. We then fit four sequential generalized estimating equations regression models with robust standard errors and an exchangeable correlation structure to generate risk ratios (RR) and 95% confidence intervals (CI) testing the association between the ICE race + income score for the census tract in which a birthing person lives and adverse postnatal outcomes. The first model was unadjusted, while the second was adjusted for infant sex, gestational age and growth. The third model further adjusted for birthing person age, BMI, smoking/drug or alcohol use, adequacy of prenatal care, as well as chronic or gestational hypertension and diabetes. The fourth model additionally adjusted for social factors (education, insurance coverage, and WIC participation). RESULTS Among the 3,448,707 live born infants in California between 2011 and 2017, 16,337 infants were non-Hispanic Black singletons born between 22 and 36 weeks gestation; of those, 13,321 had valid census tract and birth cohort data and thus were included in the study (Fig. 1 online). ICE race + income distributions for the entire California live birth population, calculated for each census tract, compared to the study sample are displayed in Table 1. In California, ICE race + income score distributions for each census tract ranged from −0.49 to +1, and similarly the ICE race + income score distributions for the census tracts in which the study sample of birthing persons lived ranged from −0.48 to +1. Demographic characteristics of Black birthing people and their infants are described in Table 2. In this sample, 19.4% of birthing people delivered very preterm (less than 32 weeks gestation) and 80.6% delivered late and moderately preterm (32-36 weeks gestation) infants. SGA infants were overrepresented at 14.4%, whereas LGA infants were underrepresented at 7% compared to their percentile definitions. Birthing people in this sample were mostly between 18 and 34 years of age (78.4%), overweight or obese (54.2%), did not smoke, use alcohol, or use drugs during pregnancy (82%), had Medi-Cal insurance (59.7%), participated in WIC (65.8%), and had adequate prenatal care (70.4%). Similar proportions of birthing people had at least a high school education compared to less than a high school education (49.8% vs 47.2%), and had pre-existing or gestational diabetes (41.0% vs 59.0%). Black birthing people living in the least privileged ICE race + income tertiles consistently had the highest percentage of adverse birth outcomes (Table 3). The proportions of acute care visits, readmissions, and mortality were significantly different by tertile, confirmed with a chi-squared test (p values < 0.001, 0.03, and 0.02, respectively), whereas the percentages of less than 7 day mortality and pre-discharge mortality did not vary significantly by tertile. Among all birthing people with preterm infants, Black birthing people in the least privileged ICE race + income categories, tertile 1 (RR 1.27, 95% CI 1.18-1.36) and tertile 2 (RR 1.21, 95% CI 1.12-1.30), were more likely to have infants who were seen at 2 or more acute care visits (Table 4).
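As a sketch of the sequential generalized estimating equations described in the statistical analysis above, the following Python example uses a log-link Poisson GEE, one common way of estimating risk ratios for binary outcomes; the link function, the clustering unit (census tract is assumed here) and all variable names are assumptions, since the excerpt does not specify them.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to hold one row per infant with the outcome (e.g. 'readmitted',
# coded 0/1), the exposure 'ice_tertile' (1-3, with 3 as the privileged referent)
# and the adjustment covariates of each model.
def fit_gee_rr(df: pd.DataFrame, formula: str) -> pd.DataFrame:
    model = smf.gee(
        formula,
        groups="census_tract",                     # clustering unit (assumed)
        data=df,
        family=sm.families.Poisson(),              # log link -> exp(coef) = RR
        cov_struct=sm.cov_struct.Exchangeable(),   # exchangeable working correlation
    )
    res = model.fit()                              # robust (sandwich) SEs by default
    out = np.exp(pd.concat([res.params, res.conf_int()], axis=1))
    out.columns = ["RR", "2.5%", "97.5%"]
    return out

# Model 2 of the sequence: exposure plus infant characteristics
# rr = fit_gee_rr(df, "readmitted ~ C(ice_tertile, Treatment(reference=3)) + "
#                     "infant_sex + gestational_age + growth_category")
```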
These findings persisted when adjusting for infant characteristics and birthing person characteristics, but after adjusting for social factors in model 4 the findings were attenuated (RR 1.12 95% CI 0.96-1.29). Similarly, birthing people in the least privileged tertiles were more likely to have infants who were readmitted to the hospital (model 1 RR 1.10 95% CI 1.01-1.20). Following adjustment for infant characteristics, point estimates were unchanged. When additionally adjusting for birthing person characteristics the least privileged group, tertile 1, continued to have increased risk for readmission (model 3 RR 1.21 95% CI 1.02-1.43) but no differences were found when additionally adjusting for social factors (model 4 RR 1.15 95% CI 0.96-1.38). In tertile 2, findings were attenuated when additionally adjusting for birthing person characteristics (model 3 1.11 95% CI 0.94-1.31). Black birthing people living in the least privileged areas (tertile 1 and 2) were more likely to have infants who died after hospital discharge (model 1 RR 1.92 95% CI 1.17-3.14, RR 1.88 95% CI 1.14-3.08). These findings persisted after adjusting for infant characteristics (model 2 RR 1.91 95% CI 1.16-3.11, RR 1.88 95% CI 1.14-3.08). When adjusting for birthing person characteristics the least privileged tertile continued to have a higher risk of infant mortality after discharge and tertile 2 was not different from the referent group (model 3 RR 4.09 95% CI 1.15-14.5, RR 2.67 95% CI 0.7-10.1). The sample size for model 4 was too small to analyze. No difference in ICE metrics were found between the least privileged group (tertile 1) and the most privileged group for <7 day mortality or before discharge mortality (model 1 RR 0.99 95% CI 0.75-1.30, RR 1.17 95% CI 0.93-1.48). Results did not significantly differ after adjusting for infant, birthing person, or social factors. DISCUSSION In this study, we found that structural racism, as measured by racial and economic segregation via the Index of Concentration at the Extremes (ICE), was associated with poor postnatal outcomes for Black preterm infants. Black birthing people who delivered preterm and lived in less privileged areas were consistently at higher risk for frequent infant acute care visits, rehospitalizations, and death after hospital discharge compared to those who lived in more privileged areas. Our study is consistent with both historical and scientific literature describing the negative impact of structural racism and segregation to the health and wellbeing of Black communities. 28 Historically, the U.S. has enacted deliberately and overtly discriminatory laws and practices, particularly transparently in the Jim Crow era, that have resulted in disparate access to social determinants of health (SDH) including but not limited to quality education, housing, employment, and wealth. 10,[29][30][31] Although enslavement and legalized discrimination have been abolished, the legacy of structural racism continues. In medical literature, previous studies have identified structural racism as a significant contributor to racial disparities in perinatal outcomes. Redlining, an example of a structurally racist policy, and the resultant segregation in the U.S. have been associated with poor perinatal outcomes, including preterm birth, low birthweight, low Apgar scores, increased likelihood of NICU admission, and preterm comorbidities like intraventricular hemorrhage. 
13,17,[32][33][34][35][36][37] Structural racism as measured by ICE has been associated with perinatal outcomes including PTB and IMR. 15 Fewer studies link structural racism and postnatal outcomes for preterm infants; we identified one study describing an association between neighborhood inequality and emergency department utilization for NICU graduates. 22 Another study described an association between ICE and a combined death and severe preterm comorbidity (necrotizing enterocolitis, intraventricular hemorrhage, retinopathy of prematurity, and bronchopulmonary dysplasia) outcome. 35 To our knowledge, ICE as a measure of structural racism has not been associated with infant postnatal healthcare utilization or infant mortality relative to discharge. Our study is consistent with previous studies and extends our understanding of the impact of structural racism beyond the hospital for infants who are born preterm. Additionally, it provides reinforcement to the concept of structural racism contributing to poor postnatal outcomes for preterm infants. Identifying structural racism as a primary cause of racial disparities in the postnatal period is the first step to prioritizing and implementing appropriate structural interventions to improve outcomes. Proposed pathophysiologic mechanisms and causal pathways linking segregation to poor perinatal outcomes include lowerquality care, stress exposure, socioeconomic disadvantage, and environmental toxins such as exposure to air pollution and lead. [38][39][40][41][42] Environmental factors are known to be associated with preterm birth and disproportionately burden Black communities. Similar prenatal and postnatal environmental and stress exposures may contribute to poor postnatal outcomes but we did not have relevant data to test this hypothesis in this study. Adult medical conditions, similar to birthing people medical conditions included in the study, are also impacted by structural racism. [18][19][20][21] Thus, covariates included may operate on the causal pathway from structural racism to adverse postnatal outcomes. Notably, adjusting for infant and birthing person characteristics did not significantly change the likelihood of adverse postnatal outcomes between groups. However, adjusting for SDH including insurance status, education, and participation in a federal income supplementation program, attenuated the risk in our last model, suggesting that the SDH chosen were important mediators of structural racism and poor postnatal outcomes. This is consistent with our framework of structural racism operating through the unequal distribution of SDH to impact postnatal outcomes. Our findings suggest that racism continues to negatively impact Black birthing people and their infants. 34 We cannot exclude the possibility of collinearity between these social determinants given their relationships with income. We did not find an association between structural racism and pre-discharge or less than 7-day mortality. Previous studies have found associations between structural racism and infant mortality in the first year of life, but we are not aware of any studies that examine associations between structural racism and less than 7 day mortality or mortality relative to discharge. [15][16][17][22][23][24] Previous studies have not shown racial disparities for in-hospital infant mortality, thus it is not surprising we did not find an association with structural racism for this outcome. 
1,4,43 As ICE only uses race and income, it measures a small component of structural racism and therefore does not fully capture the impact of structural racism and the lived experience of Black individuals in the U.S. Our study was limited by the variables in our datasets, and important SDH data regarding employment, housing, wealth, and environmental exposures were not available to us. Similarly, self-reports of racism and discrimination that would capture the lived experience of Black birthing people were not available. Our study is limited to the pre-pandemic era, and thus cannot be fully generalizable to pandemic or post-pandemic eras, but worsening societal inequities are worrisome for their impact on preterm infant outcomes. Strengths of this study include using large, diverse population datasets and a frequently used measure of structural racism to examine previously difficult-to-study and rare outcomes. 15,[19][20][21][22]44,45 Other populations that are historically marginalized, including Hispanic and Indigenous populations, have also experienced structural racism. Further investigation is needed to examine the impact of structural racism on preterm infant outcomes in these populations. Structural racism is a fundamentally systemic problem that can only be solved through fundamentally systemic solutions by those with the power to decide, the power to act, and the control over resources. 46 Structural racism is the result of disparate laws and practices that have resulted in disparate resources and opportunities for racialized groups, such as income and wealth inequality. 10 Systemic societal change to promote equity across federal, institutional, and interpersonal levels has the potential to improve health outcomes when all birthing persons and their children can live up to their highest potential. 46 For example, addressing structural racism through federal policies can include providing opportunities for basic income, housing, healthcare, and employment for historically marginalized groups. Medical institutions can evaluate and address current culture and practice differences, strengthen ties to community resources, and reduce additional financial burdens. 38,47 Both the federal government and medical institutions must examine how structural racism is operationalized within their spaces and redistribute and equitably share power for all by revising their laws, practices, policies, and culture to become actively antiracist. [47][48][49] CONCLUSION In our study, structural racism, measured by racial and economic segregation, was associated with adverse postnatal outcomes for preterm infants born to Black birthing people, including frequent acute care visits, readmissions, and post-discharge mortality. Future studies and interventions that prioritize dismantling structural racism have the potential to achieve equitable outcomes for preterm infants. DATA AVAILABILITY The datasets analyzed during the current study are available from the corresponding author on reasonable request.
Robustness Evaluation Strategy of Ubiquitous Power Internet of Things Based on Important Node Recognition This paper analyses the structure and characteristics of the ubiquitous power Internet of things (UP-IoT) from four levels: the perception layer, network layer, platform layer and application layer. The robustness of the UP-IoT is defined from the perspective of system structure, and the internal and external disturbance factors of robustness are analysed. According to the scale-free characteristics of complex networks, a robustness evaluation strategy for the UP-IoT based on the identification of important nodes is proposed. A set of robustness evaluation indexes, including degree centrality, betweenness centrality, closeness centrality, maximum connectivity and connectivity factor, is established to quantify the importance of nodes. The model in this paper is used to analyse a UP-IoT network model with 12 nodes and verify the feasibility of the evaluation strategy. The access layer has a large number of network nodes, which are widely distributed in various power plants, substations of 110kV and lower voltage grades, counties and their backups, power users, power emergency units and other scenarios, supporting data access and high-speed information interaction for local users. The network nodes in the convergence layer are mainly distributed in key scenarios such as 220kV substations, HVDC converter stations, regional power grid dispatching centers and backup dispatching centers, and their main functions are access control and flow control. The network nodes of the core layer mainly rely on the regional power grid dispatching centers, the provincial power grid dispatching center and its backup dispatching center, and substations of voltage grade 330kV and above, with high transmission speed, low delay and high reliability. The network nodes of the backbone layer are supported by the national power grid dispatching center, regional power grid dispatching centers and provincial power grid dispatching centers to realize high-speed transmission and exchange of cross-regional power grid communication. The platform layer can be regarded as the "background" of the UP-IoT. It is built based on big data and cloud computing technology. Its main role is to store and manage the data of each scene, count and analyze the information of each unit, calculate and verify the indicators of each link, and provide a compatible and shared operation control environment and big data support for internal and external application services. The platform layer of the UP-IoT mainly includes four aspects: the integrated "state grid cloud" platform, the enterprise operation support platform, the IoT management center, and the unified data platform of the whole business. The application layer is the "front desk" of data and user interaction in the UP-IoT. It fully processes and mines big data to make the potential value of the data apparent, presents the corresponding indicators and information to users in an intuitive and friendly way for interaction, and meets the application needs of various types of users inside and outside the industry. The characteristics of UP-IoT The original business systems of the smart grid were built based on the needs of each profession. Although they are vertically connected, they are horizontally independent, and cross-professional interaction and sharing of data and information is not ideal, forming a number of isolated business groups.
The UP-IoT will unify data acquisition and interaction standards, standardize interface modes, realize horizontal connection of business and cross-professional sharing of data, and open the "last kilometre" of smart grid communication with wireless communication technology. It will expand the application space of data and information through smart mobile terminal devices [10] with flexible access and friendly interfaces, and provide users with high-quality, efficient and simple services, so as to optimize business operation and personnel cooperation modes. Therefore, the construction of the UP-IoT will promote the emergence of new business forms and bring unprecedented opportunities for power grid development and enterprise transformation. The UP-IoT integrates sensing, access, transmission, computing, storage and other equipment into various scenarios of power systems at all levels, such as power plants, substations, dispatching centers and power users. Each scene is connected by the power network and the communication network, forming an intelligent service system with interconnection of elements, state awareness, open sharing and integrated innovation capability. These highly integrated scenarios of Internet of things devices become the elements that affect the function of the whole system. According to complex network theory, the scene units integrating a large number of ubiquitous devices and facilities in the power Internet of things are abstracted as nodes, and the power network and communication network connecting each scene are abstracted as edges, so as to facilitate the application of complex network theory to the study of the UP-IoT. The application of mobile Internet technology makes node access more convenient, networking more flexible and the topology more complex. With the continuous access of new nodes, the scale of the UP-IoT will continue to grow, and newly accessed nodes tend to connect with nodes of high connectivity. A few nodes usually have a large number of connections, while most nodes have a small number of connections. That is to say, the connection degree of nodes follows a power-law distribution, which reflects the scale-free characteristic of complex networks [11]. Robustness of UP-IoT The UP-IoT can be regarded as a generalized complex control system from the perspective of system theory. Its various levels and links, such as state perception, information extraction, edge computing, signal access, data transmission and application processing, are affected by the complex physical and network environments, as well as by interference from many known and unknown factors in actual operation. System parameter perturbation is inevitable, so it is necessary to take robustness as an evaluation index of system performance and introduce it into the UP-IoT. The robustness of the UP-IoT is mainly manifested in the ability to maintain stable network performance and normal operation of the system when the network suffers shocks or parameter disturbances. Since the UP-IoT is structurally divided into four levels, the disturbance can be decomposed by level, so the robustness of the UP-IoT can be described as follows: R = Σ_{i=1}^{n1} α_i x_i + Σ_{j=1}^{n2} β_j y_j + Σ_{k=1}^{n3} γ_k z_k + Σ_{l=1}^{n4} δ_l w_l (1) Among them, R is the robustness of the UP-IoT; x_i is the i-th disturbance factor affecting the perception layer, y_j is the j-th disturbance factor affecting the network layer, z_k is the k-th disturbance factor affecting the platform layer, and w_l is the l-th disturbance factor affecting the application layer. α, β, γ and δ are the weights corresponding to the different factors at the different levels, and n1, n2, n3 and n4 are the numbers of disturbance factors of each layer. This description reflects the system structure of the UP-IoT as a whole, so the analysis of the network structure is the key point in studying the robustness of the UP-IoT. Disturbance factors analysis According to their sources, the robustness disturbance factors of the UP-IoT can be divided into internal and external factors. Internal factors are mainly parameter perturbations of the system itself. After verification, the factors with great influence include: signal acquisition success rate, sensor node density, sensor node failure rate, sensor node signal-to-noise ratio, signal acquisition throughput, signal transmission delay, delay jitter, terminal packet loss rate, terminal error rate, and the available bandwidth and working frequency band of the terminal, etc. [12] External factors refer to impact damage, network attacks, natural disasters, etc. Because most nodes have a small number of connections, the UP-IoT shows strong robustness against random attacks or unexpected failures. Because a few nodes have a large number of connections, when the UP-IoT is maliciously attacked at these few nodes, once a node with a high number of connections fails, the network function will be severely hit, and network disintegration and paralysis may even occur. Evaluating network robustness in real time is therefore of great significance for formulating network security strategies and for network operation and maintenance. Robustness evaluation strategy According to the scale-free characteristic of complex networks, nodes with high connectivity are usually the key factors affecting network functions. Although these nodes are few in number, their importance is very high, and they should receive special attention in network security protection and resource allocation. However, node connectivity is only one factor in evaluating network robustness. Therefore, the robustness evaluation strategy of the UP-IoT is to establish a set of reasonable evaluation indicators, quantify the importance of network nodes by calculating the values of these indicators, determine from a macro perspective the "key few" nodes that have a great impact on network functions, and identify the key parts that determine the robustness of the network. Robustness evaluation indicator system The complex topological structure of the UP-IoT can be abstracted into nodes and the edges connecting them. Two nodes interact with each other through their connecting edge and are not restricted by fixed directions or weights. Accordingly, the model of the UP-IoT is constructed as an undirected and unweighted [13] network G = (V, E). The network is composed of |V| = N nodes and |E| = M edges. V is the set of nodes in the network, V = {v1, v2, v3, ⋯, vN}; E is the set of edges in the network, E = {e1, e2, e3, ⋯, eM}. If node vi is connected to node vj via an edge, then a_ij = 1; conversely, if a_ij = 0, then node vi is not connected to node vj. Since a node cannot be connected to itself, a_ii = 0 for any node vi. The importance degree of any node can be quantified through several key parameters of the node, including degree centrality [14], betweenness centrality [15], closeness centrality [16], maximum connectivity [17] and connectivity factor [18]. The importance degree of a node can be determined by computing and analyzing these key parameters for each node.
First, degree centrality is an indicator describing the number of edges directly connected to a node. This indicator reflects that the more connections a node has, the greater its communication capacity and influence range will be, and the higher its importance will be. However, it cannot distinguish the importance of nodes with the same number of connections. The degree centrality of node vi is expressed as: DC_i = k_i / (N − 1) (2) where k_i is the number of edges directly connected to node vi. Second, betweenness centrality is an indicator describing the number of shortest paths between pairs of other nodes that pass through a given node. This indicator reflects the influence of a node on the communication function of other nodes. If this indicator value of a node is high, it has a greater impact on the network communication function and its importance is higher. The number of shortest paths between nodes vs and vt is expressed as g_st, and the number of those shortest paths passing through node vi is expressed as g_st(i); then the betweenness centrality of node vi is expressed as: BC_i = Σ_{s≠i≠t} g_st(i) / g_st (3) Third, closeness centrality is an indicator describing the inverse of the shortest-path distances from a given node to the other nodes. This indicator reflects the degree of closeness between a node and the other nodes in the network. If this indicator value of a node is high, the node is more likely to be located in the center of the network and to be more important; otherwise, the node is marginalized by the network and has less influence. Let d_ij represent the shortest distance between node vi and node vj; then the closeness centrality of node vi is expressed as: CC_i = (N − 1) / Σ_{j≠i} d_ij (4) Fourth, the maximum connectivity is the ratio of the number of nodes in the maximum connected subnet to the number of nodes in all connected subnets after a node is damaged and fails. The larger the value is, the smaller the influence of the failed node on the network function, the stronger the robustness of the network, and the better the communication ability is maintained. When node vi is damaged, the number of nodes in the maximum connected subnet is expressed as N_max, and the number of nodes in all connected subgraphs without isolated nodes is expressed as N′; then the maximum connectivity of the network is expressed as: Φ = N_max / N′ (5) Fifth, the connectivity factor ϖ is the reciprocal of the number of subnets formed by network splitting after a node is damaged and fails. The larger ϖ is, the smaller the impact of the failed node on the network function, the lower the degree of network fragmentation, and the stronger the robustness of the network. When node vi is damaged, the network is divided into m sub-networks (isolated nodes included); then the connectivity factor of the network is expressed as: ϖ = 1 / m (6) 5 Establishment of network model and analysis of example Establishment of network model In terms of topological structure, the UP-IoT is compatible with the typical topology of the strong smart grid, and integrates classic structures such as star, tree, bus and ring. In order to ensure uninterrupted power supply and communication transmission and to enhance network robustness, important nodes in the strong smart grid usually adopt a ring networking mode to ensure 100% redundant backup of links. These structural features can be extracted and abstracted into a network model, as shown in figure 2. The network model is composed of 12 nodes according to the typical topology of the UP-IoT. Node 4, Node 5, Node 6 and Node 7 are in one group, and Node 6, Node 7 and Node 8 are in another group, respectively forming ring networks, and the two rings are tangent.
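The five indicators above map directly onto standard complex-network computations. The sketch below is a minimal Python illustration using networkx, not code from the paper; in particular, the treatment of isolated nodes in the maximum-connectivity and connectivity-factor definitions is not fully recoverable from the extracted text, so one reasonable reading is implemented.

```python
# Minimal sketch of the five node-importance indicators using networkx; this is an
# illustration, not the authors' implementation. The handling of isolated nodes in
# the maximum connectivity (Eq. 5) and connectivity factor (Eq. 6) follows one
# reasonable reading of the definitions above.
import networkx as nx

def node_importance(G, i):
    dc = nx.degree_centrality(G)[i]        # Eq. (2): k_i / (N - 1)
    bc = nx.betweenness_centrality(G)[i]   # Eq. (3), normalized by networkx
    cc = nx.closeness_centrality(G)[i]     # Eq. (4): (N - 1) / sum of distances

    # Damage node i and inspect the surviving connected sub-networks.
    H = G.copy()
    H.remove_node(i)
    comps = list(nx.connected_components(H))
    non_isolated = [c for c in comps if len(c) > 1]
    n_max = max((len(c) for c in non_isolated), default=0)
    n_all = sum(len(c) for c in non_isolated)
    max_conn = n_max / n_all if n_all else 0.0        # Eq. (5): N_max / N'
    conn_factor = 1.0 / len(comps) if comps else 0.0  # Eq. (6): 1 / m
    return dc, bc, cc, max_conn, conn_factor

# Tiny toy graph (not the 12-node model of the paper):
G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (4, 5), (5, 6)])
for v in sorted(G.nodes):
    print(v, node_importance(G, v))
```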
Node 1, Node 2, Node 3, Node 9, Node 10, Node 11 and Node 12 form star or tree links. The model contains classical structures such as ring networks, ring tangency and ring-chain connections. The maximum node degree is 4, and the average node degree is 2.1667. Validation and analysis of evaluation strategies According to the robustness evaluation strategy, the five evaluation indicators of each node in the network model are calculated one by one, including degree centrality DC, betweenness centrality BC, closeness centrality CC, maximum connectivity Φ and connectivity factor ϖ. The calculation results are shown in Table 1. Comparing the maximum connectivity of all nodes, the Φ values of Node 4 and Node 6 are equal and the minimum. Therefore, when Node 4 or Node 6 fails, the network functions are most affected; moreover, the degree centrality, betweenness centrality and closeness centrality values of Node 6 are the highest, indicating that Node 6 is the most important node in this network model. The connectivity factor ϖ of Node 1 and Node 10 is the minimum because of the star networking mode. When Node 1 or Node 10 fails, Node 2, Node 3, Node 11 and Node 12 become isolated nodes, indicating that this connection mode is not conducive to maintaining network functions and reduces network robustness. The betweenness centrality of Node 2, Node 3, Node 5, Node 9, Node 11 and Node 12 is 0, indicating that these nodes have no influence on the communication function of other nodes. Among them, the closeness centrality of Node 2, Node 3, Node 9, Node 11 and Node 12 is low, indicating that they are located at the edge of the network. Node 5 has a high closeness centrality, indicating that it is close to the core of the network. Although Node 5 forms ring connections with Node 4, Node 6 and Node 7, it does not lie on the shortest path between any pair of nodes, so the ring structure can improve the network robustness. From the above analysis, it can be seen that the importance of nodes cannot be distinguished by only one indicator. Through calculation and analysis of the five indicators of the network nodes, one can not only determine the importance of nodes but also grasp the distribution and location of nodes in the network, as well as information such as the networking mode and the influence of nodes on network functions. This can provide an important reference for further targeted network construction and an optimized allocation of resources. The five indicator values of each node in the network model are plotted as a line graph, as shown in figure 3. From the figure, we can intuitively see the change in the maximum connectivity of the network when a node fails. The most important node in the network can then be identified as Node 6 by comparing the other indicators. When Node 6 fails, the network robustness is the lowest. When some edge nodes, such as Node 2, Node 3, Node 9, Node 11 and Node 12, are attacked, the maximum connectivity and connectivity factor of the network remain at high values, and the network has high robustness. Conclusion In this paper, a network model is constructed based on the typical UP-IoT topology, and the feasibility of the robustness evaluation strategy proposed in this paper is verified by an example analysis, which lays a foundation for its promotion and application in actual networks. The UP-IoT has a huge scale. In the state of interconnection of everything, electrical equipment or substations can be abstracted as nodes in the network.
Through state awareness, the real-time state of the network and each node can be obtained. It is necessary to give full play to the functions of cloud computing, locate important nodes immediately, identify the key factors affecting network functions, and realize real-time evaluation of network robustness, so as to ensure the overall security and stability of the network and improve the disaster tolerance and fault tolerance of the UP-IoT.
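To make the example analysis above concrete, the sketch below builds a hypothetical 12-node topology and ranks nodes with the node_importance() helper from the earlier sketch. The exact edge list of Figure 2 is not reproduced in the extracted text, so the connecting edges chosen here are assumptions that merely satisfy the stated constraints (two tangent rings 4-5-6-7 and 6-7-8, star links around Node 1 and Node 10, 13 edges, maximum degree 4); the resulting values will not exactly reproduce Table 1.

```python
# Hypothetical reconstruction of the 12-node example; edges marked "assumed" are
# guesses consistent with the verbal description, not the paper's Figure 2.
import networkx as nx

edges = [
    (4, 5), (5, 6), (6, 7), (7, 4),   # ring 4-5-6-7
    (6, 8), (7, 8),                   # ring 6-7-8, tangent to the first ring
    (1, 2), (1, 3),                   # star around Node 1
    (10, 11), (10, 12),               # star around Node 10
    (1, 4), (6, 10), (8, 9),          # assumed connecting edges
]
G = nx.Graph(edges)

# Rank nodes by the maximum connectivity that remains after removing each node
# (uses the node_importance() helper sketched earlier); the nodes at the top of
# this ranking are the "key few" of the evaluation strategy.
ranking = sorted(G.nodes, key=lambda v: node_importance(G, v)[3])
print(ranking)
```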
v3-fos-license
2024-02-06T16:56:33.666Z
2024-02-02T00:00:00.000
267432419
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2024.1332098/pdf?isPublishedV2=False", "pdf_hash": "879e9a966794854bdd43429e87ebd0a7a197a208", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43567", "s2fieldsofstudy": [ "Education", "Linguistics", "Computer Science" ], "sha1": "4a8be3bb0caf57cb1d3c72ac232932424a2c9ec2", "year": 2024 }
pes2o/s2orc
The impact of internal-generated contextual clues on EFL vocabulary learning: insights from EEG With the popularity of learning vocabulary online among English as a Foreign Language (EFL) learners today, educators and researchers have been considering ways to enhance the effectiveness of this approach. Prior research has underscored the significance of contextual clues in vocabulary acquisition. However, few studies have compared the context provided by instructional materials and that generated by learners themselves. Hence, this present study sought to explore the impact of internal-generated contextual clues in comparison to those provided by instructional materials on EFL learners’ online vocabulary acquisition. A total of 26 university students were enrolled and underwent electroencephalography (EEG). Based on a within-subjects design, all participants learned two groups of vocabulary words through a series of video clips under two conditions: one where the contexts were externally provided and the other where participants themselves generated the contexts. In this regard, participants were tasked with either viewing contextual clues presented on the screen or creating their own contextual clues for word comprehension. EEG signals were recorded during the learning process to explore neural activities, and post-tests were conducted to assess learning performance after each vocabulary learning session. Our behavioral results indicated that comprehending words with internal-generated contextual clues resulted in superior learning performance compared to using context provided by instructional materials. Furthermore, EEG data revealed that learners expended greater cognitive resources and mental effort in semantically integrating the meaning of words when they self-created contextual clues, as evidenced by stronger alpha and beta-band oscillations. Moreover, the stronger alpha-band oscillations and lower inter-subject correlation (ISC) among learners suggested that the generative task of creating context enhanced their top-down attentional control mechanisms and selective visual processing when learning vocabulary from videos. These findings underscored the positive effects of internal-generated contextual clues, indicating that instructors should encourage learners to construct their own contexts in online EFL vocabulary instruction rather than providing pre-defined contexts. Future research should aim to explore the limits and conditions of employing these two types of contextual clues in online EFL vocabulary learning. This could be achieved by manipulating the quality and understandability of contexts and considering learners’ language proficiency levels. 
Introduction In recent years, online learning has gained popularity among self-directed learners due to its convenience and accessibility (Alzahrani, 2022). This flexible approach accommodates a wide range of subjects in today's era of lifelong learning (McAuley et al., 2010). Online English vocabulary learning is particularly favored for its crucial role in global communication (Hyunjeong and Mayer, 2018; Lee and Harris, 2018). English as a Foreign Language (EFL) learners, especially those with limited proficiency, prefer online vocabulary learning for its self-paced nature (Arispe and Blake, 2012). Vocabulary is progressively learned through mobile applications and online platforms that provide multimedia content for enhanced learning (Chen et al., 2018; Roy et al., 2019; Wang et al., 2021). As a result, EFL vocabulary learning in this digital age has shifted from traditional paper-based methods to online multimedia environments (Alhatmi, 2023). The question of how to effectively learn vocabulary online and optimize learning outcomes has drawn significant attention from researchers and educators (Mahmoudian, 2017; Yeh and Lan, 2018; Xu et al., 2020). It was long thought that vocabulary acquisition depends on mechanical repetition rather than a deeper understanding of word meanings. Researchers used to suggest that vocabulary could be retained through continuous repetition, transferring words from short-term to long-term memory (Atkinson and Shiffrin, 1968; Gairns and Redman, 1986). However, contemporary research highlights the importance of understanding the relationships between target words and contextual clues for EFL learners (Wang and Huang, 2017; Fukushima, 2019). It is now understood that contextual clues, such as words, phrases, or sentences in text, play a pivotal role in aiding learners in associating unfamiliar words with their prior knowledge, a crucial step in vocabulary acquisition (Lowell et al., 2020; Jabar and Mansor, 2021). They serve as a vital cue for the indispensable semantic processing phase of vocabulary learning (Xu, 2010). However, the impact of contextual clues on EFL vocabulary learning, especially in online environments, warrants further exploration.
The advantage of learning EFL vocabulary online Online vocabulary learning is beneficial for EFL learners primarily because digital materials provide multiple channels of information, enabling learners to make more effective use of their cognitive resources for meaningful learning (Mayer, 2017;Wolf, 2018).An increasing body of evidence suggests that exposure to digital language materials significantly enhances learners' vocabulary comprehension and acquisition when compared to traditional printed materials (Bui et al., 2020;Cong-Lem and Lee, 2020).For instance, video lectures, as a prominent form of online instructional resource, are highly favored by EFL learners due to their inherent advantages (Ramezanali and Faez, 2019;Kokoc et al., 2020;Wang and Lee, 2021).Video captions effectively synchronize audio-visual input channels and guide learners' attention, promoting deeper word processing and vocabulary acquisition (Montero Perez et al., 2015, 2018;Teng, 2019;Wang, 2019;Ouyang et al., 2020).Additionally, the presence of vivid instructor images in videos facilitates EFL learners' vocabulary mastery through social cues like gestures, promoting interaction and motivation (Drijvers et al., 2019;Andra et al., 2020;Zhu et al., 2022) and delivering extra semantic information for vocabulary comprehension in an efficient manner (Drijvers and Ozyurek, 2016;Pi et al., 2021). In addition to the inherent attributes of online materials, online vocabulary learning provides extensive communication opportunities for EFL learners through virtual chat rooms and network groups (Stashko, 2019).Increased interaction enhances learners' motivation and self-perception as capable speakers (Skidmore, 2023), leading to satisfactory vocabulary acquisition through active participation and the use of social networking tools (Polat et al., 2013;Teng et al., 2022).These social benefits extend to other forms of online vocabulary learning, including digital games and virtual reality (Acquah and Katz, 2020;Huang et al., 2022).Besides, these innovative methods reduce language anxiety by creating a supportive social environment and enhancing learners' autonomy through real-time interactivity (Jabbari and Eslami, 2019;Tseng et al., 2019;Tai et al., 2022).Consequently, learners gain confidence in their vocabulary development due to increased engagement (Calvo-Ferrer, 2017). 
Research regarding contextual clues in EFL vocabulary learning Contextual clues play a crucial role in vocabulary instruction, aiding learners in comprehending new words and grasping their semantic meanings (Wallace, 1982). Existing research suggests two ways of accessing contextual clues for semantic processing in vocabulary learning. First, related contexts can be provided by learning materials, such as example sentences accompanying unknown words (Liu and Mostow, 2013). Example sentences with translations in the learners' native language act as valuable scaffolding, especially for EFL learners with lower language proficiency (Jimenez and Kanoh, 2012; Pauwels, 2012). This promotes comprehensive vocabulary acquisition and facilitates subsequent review (Cheng and Good, 2009). Accordingly, researchers have attached great importance to the role of contextual clues provided by examples in establishing specific semantic connections within learners' prior cognitive schemas (Kaivanpanah and Rahimi, 2017; Elgort et al., 2018a; Butler, 2020). In contrast, another group of researchers advocates for internal-generated contextual clues created by EFL learners themselves, such as constructing sentences with new words. They emphasize the significance of generative semantic processing in improving vocabulary acquisition due to the variability of learners' backgrounds and individual experiences (Sun and Scardamalia, 2010; Wittrock, 2010). Given that understanding provided contexts relies on learners' prior language proficiency, example sentences may hinder semantic processing and contextual integration due to poor linguistic comprehensibility caused by the presence of unfamiliar words in the context (Bernardo and Harris, 2017; Chen et al., 2017; Elgort et al., 2018b). It further impedes learners' vocabulary acquisition if example sentences are created by automatic machine translation that lacks richness of expression (Hsiao and Hung, 2022). Learners might, therefore, achieve better performance by generating their own contextual clues and linking words to their existing semantic networks (Ding et al., 2017).
Existing studies have highlighted the crucial function of contextual clues, whether provided by materials (e.g., example sentences) or generated by learners (e.g., creating sentences), in EFL vocabulary learning. However, few studies have explored the differences between these two approaches. Some evidence comes from incidental vocabulary learning, where learners memorize words incidentally through reading materials (Sok and Han, 2020). Learners were assigned to one of three tasks after reading a passage: multiple-choice, fill-in-the-blank, or sentence creation (Folse, 2006; Ansarin and Bayazidi, 2016). The results showed the poorest vocabulary retention when learning by creating sentences, indicating that contextual clues provided by materials enhance vocabulary acquisition more effectively than those generated by learners. However, the emphasis on additional word-related tasks and repetition in different contexts could have influenced the results in incidental vocabulary learning (Rott et al., 2002). The evidence from the aforementioned studies remains inadequate to conclusively establish the superiority of external contextual cues provided by content and materials. A contrasting study by Hulstijn and Laufer (2001) on incidental vocabulary acquisition revealed that learners who were instructed to write compositions using target words demonstrated superior vocabulary acquisition compared to those who engaged in a fill-in-the-blank task. This outcome underscores the efficacy of internal-generated contextual cues created by learners. Further evidence on this matter is derived from intentional vocabulary learning, where learners acquire new words by directly studying vocabulary lists (Sok and Han, 2020). Intentional learning is considered crucial in EFL vocabulary instruction and has received much attention from researchers, as it is the most commonly employed strategy among learners to acquire lexical knowledge (Yamamoto, 2014; Webb et al., 2020). There is an increasing consensus suggesting that intentional learning often results in better recall and retention performance compared to incidental learning (Schmitt, 2000; Yamamoto, 2014; Wong et al., 2021; Panmei, 2023). However, consensus remains elusive in the realm of intentional vocabulary learning. Some studies have suggested that both external-provided and internal-generated contextual clues have equal effects on promoting vocabulary acquisition (Talebzadeh and Bagheri, 2012; Soleimani et al., 2015). On the other hand, other researchers advocate the advantages of internal-generated contexts (Zhang, 2009; San-Mateo-Valdehita, 2023). San-Mateo-Valdehita (2023) observed that Japanese learners achieved better performance and reported greater cognitive effort when learning Spanish vocabulary by creating their own sentences. However, this conclusion was drawn from a study on Spanish vocabulary, not EFL vocabulary learning. Zhang's (2009) experiment yielded similar results, indicating that English-major learners performed better when learning English vocabulary by constructing sentences rather than relying on example sentences provided by their instructor. Nevertheless, it is essential to acknowledge that most online EFL learners are non-majors who engage in informal self-directed vocabulary learning (Zourou, 2020; Pikhart et al., 2022). Their preferences for diverse language learning strategies stem from variations in vocabulary level and language proficiency compared to major learners (Shujing and Xie, 2007; Ma and Abdul Samat, 2022). Consequently, they might struggle with unknown words
in provided contexts (Sadeghi and Nobakht, 2014).Moreover, the sentences they construct may not be as high in quality as those produced by major learners due to their limited vocabulary and knowledge of sentence structures (Nishida, 2014;Song et al., 2022).Overall, the debate regarding the benefits of contextual clues, whether provided by materials or generated by learners, warrants further exploration. In addition to exploring behavioral performance, researchers argue that vocabulary acquisition can be predicted by learners' mental efforts and cognitive involvement during the learning process (Zarifi et al., 2021).Current evidence suggests that vocabulary comprehension involves a deep level of processing linked to cognitive functions, which aid in retaining new words in long-term memory with a lasting impact (Craik and Lockhart, 1972;Craik and Tulving, 1975).Given the critical role of contexts in EFL vocabulary comprehension, it is important to investigate the differences in mental efforts between external-provided and internal-generated contextual clues.Research has shown that learners achieve better learning performance when context-related tasks require them to exert greater mental effort to understand word meanings (Verhallen and Bus, 2009), especially in intentional learning settings that demand higher attention and engagement with lexical knowledge (Zhang et al., 2020).In essence, the extent to which a new word is remembered depends on the level of cognitive involvement, particularly the mental efforts invested when encountering contextual clues (Keating, 2008;Taheri and Golandouz, 2021).However, prior studies have not reached a consensus on whether external-provided or internal-generated contexts necessitate higher cognitive involvement and elicit greater mental efforts from learners (Zou, 2017;Gohar et al., 2018;Alavinia and Rahimi, 2019;Liu and Reynolds, 2022).Soleimani et al. (2015) even suggested that learners appear to engage in similar mental efforts and achieve comparable performance when viewing presented contextual clues compared to generating their contexts, which may be attributed to the fact that EFL learners often prefer high-quality, easily understandable examples to grasp word meanings (Xu, 2006;Webb, 2008).Therefore, it remains imperative to further explore the differing effects of contextual clues provided by materials and those generated by learners from the perspective of mental efforts, especially for non-major EFL learners. In addition, sustained attention has been identified as a critical predictor of online learning performance among learners (Chen and Wang, 2018).Focusing on lexical knowledge results in greater engagement in learning, which contributes to vocabulary acquisition (Ouyang et al., 2020).However, online learners often report difficulties in maintaining attention due to the lack of oversight and guidance (Valizadeh and Soltanpour, 2021).Attentional engagement in EFL vocabulary learning can be enhanced by increasing the frequency of exposure to words and task demands (Lai et al., 2017;Godfroid et al., 2018;Koval, 2019), which is distinct from the processes involved in viewing provided contexts and generating their contexts.Therefore, it is highly conceivable that learners may exhibit different levels of attentional engagement when acquiring lexical knowledge since they need to access contextual clues to understand unfamiliar words through various means.However, relevant studies have yet to explore this interesting and significant issue. 
In addition to the limited exploration of learners' mental efforts and attentional engagement during EFL vocabulary learning within the domain of contextual clues, another limitation of the existing literature is that researchers typically rely on behavioral self-reports after learning to assess cognitive activities.This limitation restricts the effectiveness of using semantic processing and contextual comprehension as predictors of vocabulary acquisition.Vocabulary comprehension is closely associated with deep cognitive functions and internal processing mechanisms (Crossley et al., 2009;Yousefi and Biria, 2018).However, behavioral measurements may not sufficiently capture learners' cognitive activities, particularly their mental efforts during the learning process, as they are unable to reveal learners' cognitive processes (Hulstijn, 1993;Yamada et al., 2014).Concerning attentional engagement, while several studies have explored learners' visual preferences during online vocabulary learning using eye-tracking, these investigations were not directly related to the topic of contextual clues (Godfroid et al., 2018;Ouyang et al., 2020;Wang and Pellicer-Sanchez, 2022).Furthermore, eye movement indicators are associated with learners' visual preferences but may not fully uncover their mental responses (Ding et al., 2022).Consequently, it is necessary to explore learners' cognitive activities (such as mental efforts and attentional engagement) during vocabulary learning using an immediate and accurate method.This would contribute significantly to our understanding of the differences between the two ways of accessing contextual clues from a deeper and internal perspective. Assessment of mental efforts and attentional engagement It has been established that electroencephalography (EEG) can provide insights into the processes related to attention and mental efforts during learners' cognitive activities, enabling a real-time examination through neural oscillations (Ko et al., 2017;Puma et al., 2018).Its reliability in assessing learners' mental efforts has been established in an educational context (Zhu et al., 2021).Additionally, EEG has been employed to investigate online learners' attentional engagement, given its sensitivity to variations in concentration (Chen et al., 2017;Chen and Wang, 2018).Therefore, EEG is a valuable tool for exploring learners' mental efforts and attentional engagement when learning EFL vocabulary online using different approaches to accessing contextual clues.This approach helps address the research gaps and provides insights into the impact of contexts on vocabulary acquisition by examining learners' neural activities. 
Stronger beta-band oscillations (14-30 Hz) are reportedly associated with active cognitive involvement and sustained mental efforts (Sprengel and Job, 2004;Lin and Kao, 2018).This correlation is most prominent in frontal and parietal regions (Howells et al., 2010;Bauer et al., 2016;Orun and Akbulut, 2019) and is linked to learners' self-control of cognitive processing and engaged mental efforts (McDonough et al., 2015;Stoll et al., 2016).Furthermore, an increase in alpha-band oscillations (8-13 Hz), especially in frontal and occipital regions, serves as an indicator of high cognitive loads when learners dedicate significant mental efforts to processing information (Meltzer et al., 2008;Wisniewski et al., 2017).Conversely, a decrease in alpha power is a sign of learners' visual concentration when they focus on external target objects (Freunberger et al., 2011;Klimesch, 2012).It has been reported that the association between alpha power and attentional engagement is most pronounced in parietal and occipital regions (Jensen et al., 2002;Marsella et al., 2017;Whitmarsh et al., 2017), which are associated with learners' processing and interpretation of visual information (Mazher et al., 2015).Another indicator related to attentional engagement is inter-subject correlation (ISC).ISC posits that there will be a greater degree of similarity in learners' neural activities when they focus on the same visual stimulus (Cohen et al., 2017;Poulsen et al., 2017).In other words, EEG signals exhibit stronger correlations across learners when they attend to the same auditory or visual information than when their attention is directed to a mentally demanding task with high internal processing requirements (Ki et al., 2016).ISC helps overcome the subjectivity of self-reporting in behavioral measurement by investigating attentional engagement through the calculation of the correlation of neural oscillations among learners (Cohen et al., 2018). The current study As previously mentioned, contextual clues, whether provided by materials or generated by learners, have been the focus of related studies due to the pivotal role of contexts in EFL vocabulary acquisition.However, little emphasis has been placed on comparing the differing effects of provided and generated contextual clues on vocabulary learning.Furthermore, the mechanisms by which contexts influence learners' EFL vocabulary learning processes remain unclear. Examining the neural underpinnings of cognitive activities, particularly mental efforts, and attentional engagement, during online vocabulary learning could enhance our understanding of how contextual clues are associated with vocabulary acquisition.Therefore, the present study compared two methods of accessing contextual clues (external-provided vs. internal-generated) and explores their effects on vocabulary learning among non-major EFL learners.Importantly, this study investigated the potential internal mechanisms, including mental efforts and attentional engagement, underlying these effects based on EEG technology. 
The current study conducted a within-subject experiment in which two groups of vocabulary words were taught to participants through online video clips.Regarding external-provided contextual clues, participants were presented with example sentences to gain contextual clues for vocabulary comprehension after watching videos containing new words.For internal-generated clues, participants created their sentences to generate contextual clues.Learning performance was evaluated through post-tests, including scores and reaction times in key-press responses after learning.Given that internal-generated contextual clues tap into learners' existing cognitive structures and may contribute to a better understanding of word meanings compared to provided contexts, the study formulated the following hypothesis regarding vocabulary acquisition: Hypothesis 1: Learners will achieve better learning performance when they learn vocabulary words with contextual clues they generated themselves rather than those provided by materials. In addition to behavioral indicators of learning performance, this study investigated learners' cognitive activities through EEG measurements during vocabulary learning.First, it assessed their mental efforts during word comprehension and integration of contextual clues by examining alpha and beta-band oscillations.Second, the study investigated learners' attentional engagement by analyzing alpha-band oscillations and ISC to explore the potential impact of different contextual clues on learners' attention when acquiring lexical knowledge from videos.As self-generating contextual clues represent a more demanding task that may motivate learners to exert greater mental efforts and engage more in attention, the study formulated two hypotheses regarding cognitive activity: Hypothesis 2: Learners will invest more substantial mental efforts, indicated by stronger alpha and beta-band oscillations when they comprehend vocabulary words with contextual clues generated by themselves compared to those provided by materials.Furthermore, stronger alpha-band oscillations will be most pronounced in frontal and occipital regions, whereas stronger beta-band oscillations will be most significant in frontal and parietal regions.Hypothesis 3: Learners who learn vocabulary words with internal-generated contextual clues will demonstrate higher attentional engagement, as indicated by weaker alpha-band oscillation and higher ISC while viewing videos compared to those who learn with contextual clues provided by materials.Furthermore, weaker alpha-band oscillations and higher ISC will be most pronounced in the parietal and occipital regions. 
Participants Twenty-nine non-major EFL students were recruited from a Chinese public university through an online advertisement; most were female (n = 21), with a mean age of 22.4 years (SD = 2.04), and they came from diverse academic backgrounds, including majors in educational technology, psychology, mechanical engineering, and others. All participants were native Mandarin Chinese speakers and reported having normal or corrected-to-normal vision and hearing. The learning material consisted of high-frequency vocabulary words taken from preparation books for the Graduate Record Examination (GRE) (Pratheeba and Krashen, 2013). All participants were required to have passed College English Test-6 (CET-6) and to have had no prior preparation for the GRE. CET-6 is the highest national English proficiency test for non-major students in China, and many undergraduates pass it with varying scores (Yang et al., 2013). This ensured a minimum level of English proficiency for participants to learn GRE words and complete the experimental tasks. Participants were not considered advanced English learners, as indicated by their pre-test scores (mean/maximum = 6.65/20, SD = 2.23). To provide context, advanced English learners in other studies typically scored an average of 105.29 out of 120 on the TOEFL (Moon et al., 2019). After the experiment, participants were compensated with 60 RMB as a token of appreciation. The study obtained informed consent from all participants and received approval from the Ethics Committee. Design and procedure The present study adopted a within-subjects design to control for the effects of prior knowledge. Each participant engaged in two experimental conditions categorized by the source of contextual clues: external-provided and internal-generated. In each condition, participants learned 40 vocabulary words through 40 video clips presented in random order. After viewing each video, participants were tasked with comprehending and remembering the word using contextual clues. For the external-provided contextual clue (condition a), an example sentence was displayed on the screen for 10 s without sound, and participants were instructed to read it silently. For the internal-generated contextual clue (condition b), participants had 10 s to create, in their mind, a sentence that included the word. No specific requirements regarding sentence structure, content, or grammaticality were imposed. They then moved on to the next word in the following video clip. The order of the conditions was counterbalanced using a Latin Square design. Participant No. 1 started with condition (a) and then proceeded to condition (b), while participant No. 2 followed the reverse order. The assignment of IDs was determined randomly based on the order of registration.
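As a concrete illustration of the session structure just described, the snippet below sketches how the counterbalanced condition order and the 40-trial blocks could be generated. This is not the authors' experiment code; the names, and the reduction of the Latin Square to simple alternation for two conditions, are assumptions for illustration.

```python
# Illustrative sketch of the counterbalanced within-subjects session plan described
# above (two conditions, 40 word videos each, 3 s video + 10 s context per trial).
import random

CONDITIONS = ("external_provided", "internal_generated")  # condition (a), condition (b)

def session_plan(participant_id, words_external, words_internal):
    word_sets = {"external_provided": words_external,
                 "internal_generated": words_internal}
    # Odd-numbered participants start with condition (a), even-numbered with (b).
    order = CONDITIONS if participant_id % 2 == 1 else CONDITIONS[::-1]
    return [{"condition": cond,
             "trials": random.sample(word_sets[cond], len(word_sets[cond])),
             "video_s": 3, "context_s": 10}
            for cond in order]
```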
Before the formal experiment, participants completed a pre-test to ensure that their English proficiency was not advanced, which might otherwise have affected their vocabulary acquisition strategies and performance compared to non-advanced learners. Subsequently, they filled out a personal information questionnaire, provided informed consent, and were informed about the laboratory requirements before washing their hair and wearing an electrode cap. Following these preparations, participants engaged in both experimental conditions, with EEG signals recorded throughout the experiment. After learning 40 words in the first experimental condition, participants finished a post-test and took a 5-min break to minimize carry-over and overload effects before enrolling in the other experimental condition. The entire experiment lasted approximately 1 h for each participant. The experimental procedure is depicted in Figure 1. Material Eighty words were randomly selected from the GRE vocabulary list, with each word taught through a three-second video clip. The English word and its Chinese translation were presented together on the left side of the screen. An instructor's image appeared on the right side, since this has been shown to facilitate vocabulary learning from video lectures (Drijvers et al., 2019). She pronounced the word in English and provided its main meaning in Mandarin Chinese. The instructor did not use gestures, and her orientation and gaze remained consistent across all video clips to avoid interference, because these non-verbal behaviors act as social cues that influence learners' attention and learning performance (Pi et al., 2020, 2021). The 80 video clips were randomly divided into two groups for use in the external-provided and internal-generated conditions (40 clips per group). There were no significant differences in video duration or in the number of letters in the words between the two groups [t(78) = 1.01, p = 0.317 > 0.05; t(78) = 1.13, p = 0.264 > 0.05]. To assess the difficulty of the two groups of vocabulary words, 10 undergraduates from various majors (excluding English) watched the video clips; according to an informal interview, the two groups were consistent in difficulty. Contextual clues for each word were presented following each video clip (Figure 2). For the external-provided contextual clue (condition a), example sentences for all words were independently sourced from online dictionaries (e.g., Youdao, Oxford, and Collins) by three English-major postgraduates. These dictionary-derived example sentences were considered of high quality and suitable for facilitating vocabulary comprehension (Friedman, 2009; Liu and Mostow, 2013). Two English professors further reviewed and revised these sentences to ensure their appropriateness. Then, the 10 undergraduates selected the best example sentence for each word based on comprehensibility. The example sentences were accompanied by their Chinese translations as contextual clues, with the word and its meaning highlighted in red. For the internal-generated contextual clue (condition b), participants were asked to create their own sentences, and only a red "?" was displayed on the screen.
Prior knowledge (pre-test) To assess learners' prior knowledge of the learning content, a pre-test consisting of 20 multiple-choice questions was conducted. Each question corresponded to one word randomly selected from the pool of 80 words. Participants were required to choose the correct Chinese translation for the word from four options. Each correct answer earned participants 1 point, while incorrect answers received 0 points. Learning performance (post-test) In each condition, 40 multiple-choice questions corresponding to the 40 words in that condition were used to assess participants' mastery of vocabulary through key-press responses. The questions were developed by the two English professors. Participants were asked to choose the most suitable option to fill in the gapped text based on the compatibility between the word's meaning and the context in the sentences. Each question included one correct option and three incorrect options, all derived from the 40 words within the respective condition. The frequency of occurrence for each word was balanced across questions. For example: Question: "I pray that such a ___ never comes again to anyone in the world." EEG recording and analysis A 64-channel EEG electrode cap arranged according to the international 10-20 system was placed on each participant's scalp to record EEG signals in conjunction with a brain amplifier (Jasper, 1958). The electrode impedance was kept below 5 kΩ after inserting conductive gel into each electrode with a blunt needle syringe. CPz served as the reference electrode during recording, and the ground electrode was placed at the GND position. The recording was filtered with a passband from 0.1 to 100 Hz. No low-quality signals were present in the EEG recording, and no further data filtering or trimming was applied. EEG data analysis was performed using MATLAB. The bilateral mastoids M1 and M2 were used as the average re-reference in offline analysis to prevent laterality bias (Teplan, 2002). The original EEG signals were filtered with a passband between 0.1 and 50 Hz to remove other artifact noise. Subsequently, EOG and eye-movement artifacts were eliminated by conducting independent component analysis (Mennes et al., 2010; Subasi and Gursoy, 2010). Algorithms in the software were used to flag and separate the epochs based on markers (Pi et al., 2021). The pre-processed EEG data were re-segmented into two time windows corresponding to two stages: (i) learning words from videos (0-3 s) and (ii) comprehending words with contextual clues (3-13 s). The pre-video interval (−1 to 0 s) served as the baseline for correction. A short-time Fourier transform (STFT) was used to separate out the alpha-band (8-13 Hz) and beta-band (14-30 Hz) oscillations and compute their power (μV²) by averaging over all scalp electrodes (Golden et al., 1973; Park et al., 2018; Krishnan et al., 2020). ISC was calculated from the within-subject and between-subject covariance by integrating the feature vectors (Ki et al., 2016; Cohen et al., 2017). ISC and alpha-band oscillations in stage (i) were adopted to investigate the attentional engagement of participants when they learned words from videos, while alpha- and beta-band oscillations in stage (ii) were used to explore their mental efforts when they comprehended words with contextual clues.
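The band-power and ISC computations described above were run in MATLAB. As a rough illustration of the same steps, the Python sketch below computes alpha- and beta-band power with a short-time Fourier transform and includes a much-simplified correlation-based stand-in for ISC; the sampling rate, window length, and the pairwise-correlation shortcut are assumptions rather than details taken from the paper.

```python
# Illustrative re-implementation of the band-power step described above; the study
# itself used MATLAB, and the parameters here (fs, window length) are assumptions.
import numpy as np
from scipy.signal import stft

FS = 500  # sampling rate in Hz (placeholder; not stated in the extracted text)

def band_power(signal, fs=FS, band=(8, 13), nperseg=None):
    """Mean spectral power of one channel within a frequency band, via STFT."""
    if nperseg is None:
        nperseg = fs  # 1-second windows
    f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    return float(np.mean(np.abs(Z[mask]) ** 2))

def alpha_beta_power(epoch, fs=FS):
    """epoch: array of shape (n_channels, n_samples). Returns channel-averaged
    alpha (8-13 Hz) and beta (14-30 Hz) power, as in stages (i) and (ii)."""
    alpha = np.mean([band_power(ch, fs, (8, 13)) for ch in epoch])
    beta = np.mean([band_power(ch, fs, (14, 30)) for ch in epoch])
    return alpha, beta

def pairwise_isc(signals):
    """Very simplified stand-in for ISC: mean pairwise Pearson correlation of one
    channel's time course across subjects. The paper uses correlated component
    analysis over feature vectors, which is more involved."""
    n = len(signals)
    corrs = [np.corrcoef(signals[i], signals[j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))
```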
Learning performance To assess differences in learning performance between the two types of contextual clues, paired-samples t-tests in SPSS 22.0 were conducted with scores and reaction times as the dependent variables. The data were tested using the Shapiro-Wilk normality test and met the normality assumption of the t-test (Pallant, 2016). The "Condition" (external-provided vs. internal-generated) served as the within-subjects independent variable. Cohen's d (small: 0.2-0.5; medium: 0.5-0.8; large: >= 0.8) was used to measure effect size for the t-tests according to Cohen (1988). The results revealed significant differences in learning performance between the two types of contextual clues (Table 1). With internal-generated contextual clues, participants achieved higher scores [MD = 3.42, t(25) = 3.72, p = 0.001 < 0.05, d = 0.73] and shorter reaction times [MD = −1.32, t(24) = −2.06, p = 0.05, d = 0.40] compared to the external-provided contextual clues. These results strongly support Hypothesis 1, which suggests that participants benefit more from generating their own contextual clues than from reading contextual clues provided by learning materials for vocabulary learning. EEG evidence To investigate differences in participants' mental efforts and attentional engagement between the two types of contextual clues, repeated-measures ANOVAs (2 × 10) in SPSS 22.0 were performed on EEG oscillations (alpha- and beta-band) and ISC as the dependent variables, with "Condition" (external-provided vs. internal-generated) and "Region" (Left frontal, Right frontal, Left fronto-central, Right fronto-central, Left parietal, Right parietal, Left occipital, Right occipital, Left temporoparietal, Right temporoparietal) as within-subject independent variables. Mauchly's test was employed to assess the assumption of sphericity for the repeated-measures analysis (Pallant, 2016). In cases where the assumption of sphericity was violated, Greenhouse-Geisser corrected significance values were reported. Effect sizes were measured using η² (small: 0.01-0.06; medium: 0.06-0.14; large: >= 0.14) for the ANOVAs (Cohen, 1988). In summary, these results were inconsistent with Hypothesis 3, which posited that contextual clues would influence participants' attentional engagement during video viewing. Weaker alpha-band oscillations and higher ISC were observed with external-provided contextual clues rather than with internal-generated contextual clues. Moreover, differences in alpha-band oscillations were significant in the left fronto-central, bilateral occipital, and temporoparietal regions but not in the parietal region as anticipated. ISC differences were observed in the left fronto-central and bilateral temporoparietal regions rather than in the expected parietal and occipital regions, with a contrasting ISC discrepancy found in the right frontal region. Taken together, our results partially supported Hypothesis 2, indicating that with internal-generated contextual clues, participants exerted greater mental efforts, as indicated by stronger alpha- and beta-band oscillations when comprehending vocabulary words compared to the external-provided contextual clues. However, the differences in alpha-band oscillations extended across the entire brain except the frontal and parietal regions. Besides, the differences in beta-band oscillations were observed in other regions but not in the frontal region as expected.
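For readers who want to reproduce the style of the paired comparison above outside SPSS, here is a minimal Python sketch; the made-up example data and the paired-differences convention for Cohen's d are illustrative assumptions, not the authors' analysis code.

```python
# Minimal sketch of the paired comparison reported above (the study used SPSS 22.0).
# Cohen's d is computed on the paired differences, one of several common conventions.
import numpy as np
from scipy import stats

def paired_comparison(internal, external):
    internal = np.asarray(internal, dtype=float)
    external = np.asarray(external, dtype=float)
    res = stats.ttest_rel(internal, external)
    diff = internal - external
    d = diff.mean() / diff.std(ddof=1)  # effect size on the paired differences
    return {"mean_diff": diff.mean(), "t": res.statistic,
            "p": res.pvalue, "cohen_d": d}

# Example with fabricated post-test scores (0-40 scale) for 26 participants:
rng = np.random.default_rng(0)
internal_scores = rng.integers(20, 41, size=26)
external_scores = internal_scores - rng.integers(0, 8, size=26)
print(paired_comparison(internal_scores, external_scores))
```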
Discussion The current study aimed to evaluate the impact of contextual clues provided by materials versus those generated by learners on online EFL vocabulary acquisition. Our findings suggested that internal-generated contextual clues brought more significant benefits for vocabulary acquisition compared to contexts provided by materials. Learners achieved higher learning performance, as indicated by better scores and shorter reaction times. EEG signals indicated increased mental efforts during word comprehension but reduced attentional engagement when viewing lexical knowledge from videos. To our knowledge, no study has hitherto compared the effects of contextual clues between external-provided and internal-generated sources on online vocabulary learning, especially for non-major EFL learners. Moreover, it is the first to adopt EEG to provide underlying neural evidence for the benefits of contextual clues from the perspectives of mental effort and attentional engagement. FIGURE The electrodes contained in each of the 10 brain regions, consistent with the textual description in the EEG recording and analysis section. The impact of contextual clues on EFL vocabulary learning performance The learning performance in vocabulary comprehension significantly differed between the external-provided and internal-generated contextual clues. Learners achieved higher scores and shorter reaction times when learning new words with self-created contexts. These results align with Zhang's (2009) findings, suggesting that sentence creation leads to better vocabulary acquisition than viewing example sentences, even for the non-major EFL learners in this study. According to constructivist learning theory, internal-generated contextual clues are derived from learners' cognitive structures associated with their prior experiences and knowledge (Chuang, 2021). This process enables learners to construct understandable semantic associations between new words and existing lexical resources, resulting in accurate subsequent retrieval (Crutcher and Ericsson, 2003). This advantage was further supported by the reaction times in the post-test, where learners were asked to recall word meanings. An increase in reaction times reflects higher task demands associated with more challenging recall and retrieval (Bachurina et al., 2022). The shorter reaction times associated with internal-generated contextual clues indicated efficient retrieval performance during recall, suggesting that learners had solidified semantic connections between unfamiliar words and existing schemas after generating contextual clues for them (Cook and Ausubel, 1970). However, these results contrast with several studies that advocate for equal benefits of generated and provided contextual clues (Talebzadeh and Bagheri, 2012; Soleimani et al., 2015), with some incidental vocabulary learning studies even presenting empirical evidence for the superior advantages of contexts presented by content and materials (Folse, 2006; Ansarin and Bayazidi, 2016). This inconsistency might be attributed to heterogeneity across studies in terms of the externally provided contextual clues. Importantly, the present study limited the richness and quantity of contexts to one example sentence for each word, whereas in previous studies, researchers provided a paragraph or more than one sentence, offering abundant contextual clues. This limitation might have mitigated evidence of the expected benefits of provided
contexts, given that a single example might not be sufficient for learners to fully grasp word meanings and usage (Frankenberg-Garcia, 2012). Indeed, example sentences can better leverage their unique advantages in vocabulary comprehension by providing various contextual clues for each word, particularly for words with multiple implications in different situations (Han and Song, 2011; Huang et al., 2019). However, learners may struggle to find multiple examples, as online English vocabulary learning instruments usually contain no more than one example sentence used for describing the context and explaining word meaning (Huang and Ku, 2016; Wang et al., 2021). Nevertheless, these results presented the first evidence of the superior benefits of internal-generated contextual clues on vocabulary acquisition compared to single-exposure contexts provided by materials.
The power of alpha-band oscillations across brain regions in two conditions when comprehending words with contextual clues. A 10 section bar graph plotting the power of alpha-band oscillations of participants when they comprehend words with contextual clues. Significantly higher alpha power occurs in the internal-generated condition in all brain regions, indicated by an obviously higher black bar on the right than the gray bar on the left in each section.
FIGURE 7 The power of beta-band oscillations across brain regions in two conditions when comprehending words with contextual clues. A 10 section bar graph plotting the power of beta-band oscillations of participants when they comprehend words with contextual clues. Significantly higher beta power occurs in the internal-generated condition in all brain regions except Left and Right frontal, indicated by an obviously higher black bar on the right than the gray bar on the left from the third to the tenth sections.
The impact of contextual clues on EFL learners' mental efforts during word comprehension The improved learning performance associated with the internal-generated contextual clues also stemmed from learners' significantly higher mental efforts during word comprehension compared to the external-provided condition. This study assessed learners' mental efforts using EEG frequencies in the alpha and beta bands, which are associated with learners' cognitive activities (Chikhi et al., 2022). Learners' alpha-band oscillations are stronger when they actively control their cognitive resources to handle tasks (Zoefel et al., 2011; Huycke et al., 2021). Increased alpha activities in the frontal and occipital regions represent the maintenance of mental efforts and a high level of working memory load, leading to subsequent cognitive fatigue (Meltzer et al., 2008; Wascher et al., 2014). Higher beta power, especially in the frontal and parietal regions, also reflects significant cognitive involvement in the current task when solving problems (Tschentscher and Hauk, 2016; Hubner et al., 2018). For example, learners' beta power increases as they devote their best mental efforts to satisfying high task demands and achieving better task performance (She et al., 2012). Greater mental efforts thus contribute to positive information processing in working memory (Jaquess et al., 2018; Zhu et al., 2021), which facilitates deeper semantic integration and vocabulary comprehension (Ender, 2016; Bohn et al., 2021).
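For readers who want to see how such band-power measures are typically derived from the raw signal, the short sketch below estimates alpha- and beta-band power with Welch's method. It is a generic illustration only: the sampling rate, the 8-13 Hz and 13-30 Hz band edges, the toy data, and the channel layout are assumptions, not the exact pipeline used in this study.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Approximate band power per channel.

    eeg: array of shape (n_channels, n_samples); fs: sampling rate in Hz;
    band: (low, high) frequency edges in Hz. The power spectral density is
    estimated with Welch's method and integrated over the requested band.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    df = freqs[1] - freqs[0]
    return psd[:, mask].sum(axis=-1) * df  # integrate PSD over the band

if __name__ == "__main__":
    fs = 250                                  # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((10, fs * 60))  # 10 channels, 60 s of toy data
    alpha = band_power(eeg, fs, (8, 13))      # assumed alpha band edges
    beta = band_power(eeg, fs, (13, 30))      # assumed beta band edges
    print("alpha power per channel:", np.round(alpha, 3))
    print("beta power per channel:", np.round(beta, 3))
```

In practice, such per-channel values would then be averaged within each of the ten regions and compared across conditions, but the averaging and statistics are omitted here for brevity.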
Significantly stronger oscillations in the alpha and beta bands were associated with internal-generated contextual clues compared to external-provided clues. The results indicated that learners devoted greater mental efforts to comprehending words when they self-created contextual clues. Given that the generative task, as a transfer of knowledge, enhances learners' autonomy and motivation in vocabulary learning (Laufer, 2001; Jilani and Yasmin, 2016), it is associated with mental efforts and the cognitive resources invested (Seufert, 2020). However, in the present study, a difference in alpha power was observed not only in the frontal and occipital regions that were expected but also in the remaining regions (i.e., the bilateral fronto-central, parietal, and temporoparietal regions). In contrast, beta power differed significantly between conditions in the bilateral fronto-central, parietal, occipital, and temporoparietal regions, but not in the frontal region, where a difference had been assumed. These results indicated the whole-brain effects of semantic processing when learners comprehended words with contextual clues in EFL vocabulary learning.
On the one hand, stronger alpha-band oscillations were observed in all regions with internal-generated contextual clues compared to external-provided contextual clues. Greater alpha power in the frontal and occipital regions revealed that learners devoted greater efforts to encoding word meanings, reflecting high demands for cognitive resources during internal processing (Meltzer et al., 2008; Benedek et al., 2011; Wascher et al., 2014). We also found higher alpha power in the fronto-central, parietal, and temporoparietal regions, which are relevant to semantic processes. The activation of the frontal and temporoparietal regions indicates successful semantic integration after learners match words with related contextual clues (Baumgaertner et al., 2002; Rempe et al., 2022), whereas neural activity in the fronto-central region results from functional coupling with the frontal region when increased executive control of semantic processing occurs (Rominger et al., 2020). It has been established that the process of semantic integration involves the retrieval of prior contextual clues in learners' long-term memory, resulting in alpha excitation associated with good semantic integration in the regions mentioned above (Ehrhardt et al., 2022). In addition, increased alpha power in the frontal, fronto-central, occipital, and parietal regions is also related to creative idea generation and thinking activities rather than resting states (Rominger et al., 2018; Barcelona et al., 2020; Rominger et al., 2022). This demonstrates learners' creative thinking and original ideas when they internally generate contextual clues, enabling them to devote greater mental efforts compared to acquiring knowledge generated by others (Wen, 2020). In short, the increased alpha-band oscillations in the above regions suggest better cognitive task performance, which requires higher working memory demands (Mahjoory et al., 2019). The process through which learners understand vocabulary words with internal-generated contextual cues is thus characterized by effective internal processing, substantial cognitive exertion, and seamless semantic integration.
On the other hand, stronger beta-band oscillations were observed in all regions except the frontal region with internal-generated contextual clues compared to the external-provided contextual clues. The activation of frontal beta activity is associated with executive function and cognitive control resulting from learners' active engagement in current cognitive tasks (Kropotov, 2009; Basharpoor et al., 2019). In other words, increased beta power in the frontal region represents a state of efficient cognitive functioning (Song et al., 2014). The comparable beta power in the frontal region between conditions demonstrated that learners had involved similar cognitive resources in vocabulary encoding and comprehension, whether with generated or provided contextual clues. However, higher parietal beta power was associated with internal-generated contextual clues, since learners' recall processes typically involve retrieving contextual clues from their prior knowledge, alongside cognitive involvement during semantic processing (Tschentscher and Hauk, 2016; Kaiser et al., 2017). Besides the expected regions, greater beta power was also observed in the fronto-central, occipital, and temporoparietal regions when learners generated contexts themselves. The higher beta power found in the fronto-central region was consistent with several language learning studies, suggesting learners' cognitive involvement and active processing (Subbaraj et al., 2014; Alimardani et al., 2021). Moreover, greater beta power in the occipital and temporoparietal regions has been associated with learners' perception of difficulty and increased tension due to high task demands, which often predict better task performance (Kakizaki, 1985; She et al., 2012). The above studies overlap in their assertion that the process of semantic integration and processing is similar to language grammar learning, driven by whole-brain functional connectivity communicated through beta-band oscillations (Kepinska et al., 2017). The interregional communication of brain activation from anterior to posterior is modulated by working memory demands, which is associated with learners' mental efforts and cognitive involvement (Sauseng et al., 2005; Fernandez et al., 2021). In brief, learners devote higher mental efforts to the process of contextual retrieval and integration when they comprehend vocabulary words with internal-generated contextual clues, owing to the high demands that the task places on cognitive resources.
The impact of contextual clues on EFL learners' attentional engagement in videos of lexical knowledge This study further investigated learners' attentional engagement when learning lexical knowledge from videos using alpha-band oscillations and ISC. A decrease in alpha-band oscillations in the parietal and occipital regions indicates active attention to external visual information (Sokoliuk et al., 2019; Son et al., 2023), while higher ISC caused by similar neural activities reveals learners' better attentional engagement when processing the same visual stimuli (Cohen et al., 2017, 2018). A high attention level suggests that learners are focusing on and processing presented visual information, contributing to its subsequent encoding and retrieval in memory (Kirkorian et al., 2016; Kruikemeier et al., 2018). As a result, learners achieve a high level of learning performance, marked by positive attentional engagement, especially in online self-directed environments (Chen et al., 2017; Wang et al., 2019).
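To make the ISC measure concrete, the following sketch computes a simplified leave-one-out inter-subject correlation on per-subject time courses. Published ISC analyses (e.g., the correlated components approach of Cohen et al.) are more involved; this minimal version, with hypothetical array shapes and names, only illustrates the underlying idea that stronger shared, stimulus-driven activity yields a higher ISC.

```python
import numpy as np

def leave_one_out_isc(data):
    """Simplified inter-subject correlation (ISC).

    data: array of shape (n_subjects, n_samples) holding one regional EEG
    time course per subject, all time-locked to the same video. For each
    subject, correlate their signal with the average of everyone else, then
    report the mean correlation across subjects.
    """
    data = np.asarray(data, dtype=float)
    rs = []
    for i in range(data.shape[0]):
        others = np.delete(data, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(data[i], others)[0, 1])
    return float(np.mean(rs))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shared = rng.standard_normal(1000)             # stimulus-driven component
    # Subjects who attend to the video share more of the common signal.
    attentive = shared + 0.5 * rng.standard_normal((20, 1000))
    distracted = 0.3 * shared + rng.standard_normal((20, 1000))
    print("ISC (attentive):", round(leave_one_out_isc(attentive), 3))
    print("ISC (distracted):", round(leave_one_out_isc(distracted), 3))
```

Running the toy example prints a clearly higher ISC for the "attentive" group, which is the intuition behind using ISC as an index of shared attentional engagement during video viewing.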
Interestingly, we found that learners showed significantly lower attentional engagement during viewing videos when learning with internal-generated contextual clues compared to learning with the external-provided contextual clues, based on the alpha power and ISC.In this respect, pronounced higher alpha power was associated with internal-generated contextual clues, especially in the left frontal, bilateral occipital, and temporoparietal regions, except for the parietal region we anticipated.Alpha power in the parietal region is associated with visual attention to external stimuli (Hutchinson et al., 2021).The comparable parietal alpha power between the two conditions showed that learners paid attention to videos and actively received lexical knowledge.Besides, the left frontal region plays a significant role in language processing (Plaza et al., 2009), and stronger frontal alpha activity reflects learners' internal processing, where they proactively inhibit unrelated visual stimuli and focus on key details (Benedek et al., 2011).Higher alpha power in the left frontal region demonstrated that learners engaged in a proactive process of eliminating irrelevant visual information from videos and then focused on the internal processing of crucial lexical knowledge.Stronger alpha power was also pronounced in the occipital region, indicating learners' inhibition of distracting visual input and the reallocation of sensory resources (van Diepen and Mazaheri, 2017).In addition, excited temporoparietal alpha is associated with efficient visual search resulting from perceptual prediction (Spaak et al., 2016), which suggests that learners can predict the spatial location of key knowledge in upcoming video clips due to its design consistency.These results suggest an interesting finding that when learners comprehend words with internal-generated contextual clues, learners actively selected important content for subsequent processing and filtered unrelated visual input via top-down attentional control mechanisms when learning lexical knowledge through videos.The active attentional control may be due to the generative task of self-creating contextual clues, which improves learners' goal-driven attention and results in prioritized processing of task-related information in working memory (Ravizza et al., 2021), suggesting that self-generating contextual clues might further play an essential function in subsequent visual tasks, which is similar to active cognitive control for reducing interference from distractors and maintaining priority attention to core concepts (Lavie et al., 2004). 
Secondly, the present study also found that higher ISC was associated with external-provided contextual clues rather than internal-generated contextual clues, especially in the right frontal, left fronto-central, bilateral parietal, occipital, and temporoparietal regions. The activation of the parietal and occipital regions reflects learners' processing of visual information (Mazher et al., 2015). A stronger ISC in these regions indicates that learners viewed the visual information with similar psychological perspectives and a shared understanding (Lahnakoski et al., 2014). Additionally, a greater ISC was found in the temporoparietal region, which is involved in bottom-up visual selection directed by stimuli (Corbetta and Shulman, 2002) and is associated with the initial recognition and acquisition of vocabulary words (Mills et al., 1993; Davis and Gaskell, 2009). These results demonstrate that learners in the external-provided condition exhibited higher consistency in recognizing and visually processing lexical knowledge, with their attention directed by the stimuli in the videos. Moreover, the fronto-central and temporoparietal regions are associated with language processing in phonological tasks (Seghier et al., 2004; De Carli et al., 2007). The left fronto-central region is designated for the auditory pre-attentive processing of word perception (Arunphalungsanti and Pichitpornchai, 2018), whereas the temporoparietal region plays an essential role in auditory comprehension by transforming auditory input into mental lexical representations (Bosseler et al., 2021). The higher ISC in these regions appears to indicate more significant auditory participation in attending to and recognizing lexical knowledge for learners when they learn with the external-provided contextual clues. In summary, when learners need to comprehend word meanings by viewing provided contextual clues, they focus on the learning content in videos and exhibit higher sensory engagement in lexical knowledge. The greater focus on external input information likely results from the receptive task of viewing presented contexts, prompting learners to attach great importance to all lexical knowledge from videos through a bottom-up mechanism (Kakvand et al., 2022). Intriguingly, in the present study, learners with higher attentional engagement with instructional videos yielded worse vocabulary acquisition compared to those whose attention to videos seemed lower. According to the uniquely higher ISC in the right frontal region in the internal-generated condition, we assumed that it might result from learners' earlier initial processing of lexical knowledge after recognizing unfamiliar words from videos. This finding aligns with prior language learning studies, which suggested a positive correlation between right frontal engagement and better acquisition and retention when learners preliminarily process language knowledge (Qi et al., 2019). This indicates that mind wandering during learning through videos is not always detrimental; some off-task thinking about topics can enhance knowledge acquisition (Kane et al., 2017). The results suggest an intriguing finding that greater attentional engagement might not necessarily predict better learning performance from video lectures, especially in online vocabulary learning. Instructional information with low complexity, such as lexical knowledge, might not require learners to devote excessive attentional resources to the learning content. Instead, learners' active processing after recognizing the key
knowledge is the most critical factor.In summary, the mixed results of this study suggest that selfgenerating contextual clues to understand word meanings motivate learners to exert greater mental efforts in semantic processing and lead to better contextual integration.This generative task further leads to a top-down attentional mechanism when learners learn lexical knowledge from videos and enables them to process important visual information as a priority.The active control of the cognition process by EFL learners consequently improves their online vocabulary learning performance. Significance and implications The present study has improved the current understanding of the influence of two ways of accessing contextual clues on online EFL vocabulary learning.Existing studies have primarily focused on whether presenting contexts along with words facilitates vocabulary acquisition (Bilgin and Tokel, 2019;Nielsen et al., 2022), with different means of accessing contextual clues largely understudied (Zhang, 2009;San-Mateo-Valdehita, 2023).This study compared two contextual clues, referring to their source: those provided by materials and those generated by learners.The results were drawn based on both behavioral and neural evidence, confirming the superiority of learning vocabulary with internal-generated contexts compared to externalprovided contexts. Importantly, our findings about contextual clues have meaningful implications for online vocabulary learning.Firstly, learners who generate contextual clues themselves appear to devote greater mental efforts to semantic processing and word comprehension compared to those who receive provided contextual clues.This contributes to better vocabulary learning performance, even with higher demands for cognitive resources.Learners are encouraged to self-create contextual clues for vocabulary words after recognizing them to achieve satisfactory acquisition.Secondly, self-generating contextual clues further enhance top-down attentional control and promote the priority processing of crucial visual input when learners view lexical knowledge through videos.Therefore, learners should pay selective attention to the presented learning content from online materials and actively process the most important knowledge. Our results also provide further implications for the design of online EFL vocabulary instructional instruments.On the one hand, compared to presenting contextual clues for each word, requiring learners to self-create contexts motivates their higher mental efforts and cognitive involvement.The greater engagement subsequently facilitates learners' better vocabulary acquisition and may further improve their persistence in autonomous online learning.For another, lexical knowledge in online materials should be re-considered and simplified to help learners gain crucial information at first sight.The refined content consequently frees learners from the distraction of unimportant input and facilitates optimal use of visual resources to preferentially process crucial information. 
Importantly, this study is the first to explore the neural underpinnings of learners as they engage in the comprehension of words using contextual clues that are provided by materials or generated internally, for semantic processing and contextual integration. We adopted EEG oscillations to explore learners' mental efforts, whereas previous studies about context investigated semantic comprehension using event-related potentials (ERPs), especially the N400 (Abel et al., 2018; Bell et al., 2019). Specifically, higher alpha- and beta-band oscillations were associated with the internal-generated contextual clues compared to the external-provided contextual clues, indicating learners' greater mental efforts and cognitive involvement in semantic processing as well as better semantic integration (Zoefel et al., 2011; Tschentscher and Hauk, 2016; Hubner et al., 2018; Huycke et al., 2021). Moreover, the present study further investigated learners' attentional engagement in lexical knowledge from videos as influenced by different ways of accessing contextual clues, which has never been considered in other studies on this topic. The outcomes of higher alpha power and lower ISC with internal-generated contextual clues compared to the external-provided contextual clues revealed learners' goal-directed attentional control and their selective visual processing of critical information when viewing videos (Cohen et al., 2017, 2018; Sokoliuk et al., 2019; Son et al., 2023). These results collectively suggest that EFL learners who generate contextual clues themselves engage more cognitive resources in semantic processing and word comprehension, whereas their attentional engagement in lexical knowledge when viewing videos remains relatively low. This pattern mainly stems from learners' autonomous cognitive processes, comprising a top-down attentional mechanism and active information processing, which enable them to attend to critical incoming lexical knowledge as a priority and then to integrate new words with related contexts in prior knowledge structures with mental effort. These findings provide a further explanation of the different effects between provided and generated contextual clues on online EFL vocabulary learning, as well as extending previous behavioral studies on this topic (Zhang, 2009; San-Mateo-Valdehita, 2023) by clarifying learners' cognitive activities in terms of attentional engagement and mental efforts.
Limitations and further work There are three limitations in this study that can be addressed in future research. First of all, we did not require participants to write down the sentences they created or type them on the screen. Even though we had reminded participants before the experiment to create sentences in English as contextual clues, we could not actually control the language they were thinking in. Sentence-making in Mandarin Chinese in their minds could potentially influence the effect of contextual clues on vocabulary acquisition, because generative tasks help learners understand and use vocabulary words through a process of exposure and contextual integration that nurtures their language proficiency (Ha and Bellot, 2020). In addition, the lack of recorded sentences precluded further analysis of their structure, content, and grammaticality. Given that this was a preliminary trial undertaken to explore the different effects of provided and generated contextual clues, we aimed to investigate learners' mental efforts and cognitive involvement when self-creating contexts rather than testing their language fluency or analyzing language forms and rules that require time to master. However, previous research suggested that appropriate contexts and accurate grammar help learners to use words well and build vocabulary knowledge, since they provide understandable contextual clues for words' semantic integration (Davidson and Ellis Weismer, 2017; Ko, 2019). It is highly conceivable that the quality of contextual clues, such as the richness of content, complexity of structure, and accuracy of grammar, would moderate EFL learners' vocabulary acquisition when they generate contexts themselves. Further study is warranted to understand the interactive effects of internal-generated contextual clues and the quality of contexts on vocabulary learning performance.
Secondly, the present study deliberately did not investigate participants' semantic processing when they comprehended word meanings through contextual clues, owing to differences in topic and focus. While EFL vocabulary acquisition relies on the understanding of contexts, enabling learners to derive word meanings through semantic relatedness (Chen et al., 2017; Joseph and Nation, 2018), it was challenging for us to ensure that every learner in the experiment could understand all example contexts due to their different prior experiences and cognitive schemas. This situation certainly occurs among EFL learners in their actual online learning processes. This leads us to a new assumption that the effects of provided contextual clues, compared to internal-generated ones, may vary based on learners' comprehension of contexts. Previous studies have explored the correlation between specific ERPs (e.g., N400) and successful semantic processing and contextual integration when learners derive word meanings from contexts (Abel et al., 2018; Bell et al., 2019). Future work should explore how the understandability of contexts and the extent of semantic integration influence the effectiveness of contextual clues provided by materials in online vocabulary acquisition.
Finally, participants were not stratified according to their individual differences, especially those relevant to language background. We did not collect their English usage background (e.g., study or travel abroad in an English-speaking country) or English exposure experience (e.g., interacting with English media and cultural products), which are positively related to EFL learners' language proficiency (Lu et al., 2021; Azzolini et al., 2022). While participants all met the recruitment requirements as non-major and non-advanced learners, and the within-subjects design excluded the interference of prior knowledge, language proficiency does affect learners' comprehension of contexts and the process of semantic integration (Yang et al., 2018). For example, learners with high proficiency tend to rely more on contextual clues to understand word meanings compared to those with lower proficiency levels (Alharbi, 2019). This kind of difference reflects another limitation of the study: varying language proficiency also leads learners to use different preferred strategies in language learning (Rao, 2016; Ma and Abdul Samat, 2022). A consistently employed preferred strategy facilitates vocabulary acquisition and retention for second language learners (Yang and Wu, 2015). In this case, the differences in patterns of language usage and in strategies for contextual clues (e.g., a preference for provided or generated contextual clues) might contribute to the observed differences in vocabulary acquisition across conditions in the current study. Additionally, learners who vary in language proficiency also show different attentional functioning when encountering target stimuli (Privitera et al., 2023). We can reasonably assume that learners with various prior English levels would show differing attentional engagement and active attentional control when they learn lexical knowledge from videos. This is also an interesting question on the topic of online EFL vocabulary learning, which is worth exploring in further work. Further studies should be conducted to assess the feasibility of different approaches to accessing contextual clues for EFL learners with varying English proficiency levels and learning preferences.
To sum up, self-generating contexts has been shown to be an effective method for semantic processing and contextual integration during vocabulary comprehension, especially when compared to simply viewing contexts provided by learning materials. Future research should continue to explore the advantages and limitations of these two types of contextual clues in online EFL vocabulary learning.
Options: A. calamity; B. slap; C. magnify; D. queasy. Participants received 1 point for each correct answer and 0 points for incorrect answers on each item, with a maximum score of 40 points in each condition. After choosing an option by pressing a key, the program automatically recorded reaction times and scores before proceeding to the next question. For both the external-provided and internal-generated contextual clues, the post-tests exhibited high split-half discrimination [t(24) = 6.08, p < 0.001; t(24) = 5.46, p < 0.001].
FIGURE 1 Experimental procedure. A flow diagram illustrating what each participant did in the experiment. The procedure in the flow diagram is in line with the textual description in section 2.2, namely design and procedure.
FIGURE 2 An example of contextual clues for vocabulary words. The left picture is an example of contextual clues in the external-provided condition, presenting the context of the word calamity. The text at the top is an example sentence: This would be a calamity for European bank. The text below is the translation of the example sentence in the participants' first language (Chinese): 这对欧洲银行来说将是一个灾难. The word calamity and its translation 灾难 are in red while the other words are in white. The right picture is an example of contextual clues in the internal-generated condition, presenting a red "?" on screen.
FIGURE 4 The power of alpha-band oscillations across brain regions in two conditions when learning words by videos. *p < 0.05, **p < 0.01, ***p < 0.001; MD is the mean difference between the two conditions (the same below). A 10 section bar graph plotting the power of alpha-band oscillations of participants when they learned words by videos. Each section corresponds to a brain region, from left to right namely Left frontal, Right frontal, Left fronto-central, Right fronto-central, Left parietal, Right parietal, Left occipital, Right occipital, Left temporoparietal and Right temporoparietal (the same below). Each section contains two bars; the gray one on the left represents the external-provided condition, whereas the black one on the right represents the internal-generated condition (the same below). Significantly lower alpha power occurs in the external-provided condition in the Left frontal, Left occipital, Right occipital, Left temporoparietal and Right temporoparietal regions, indicated by an obviously shorter gray bar on the left than the black bar on the right in the first, seventh, eighth, ninth, and tenth sections.
FIGURE 5 The ISC across brain regions in two conditions when learning words by videos. A 10 section bar graph plotting the ISC of participants when they learned words by videos. Significantly higher ISC occurs in the external-provided condition in the Left fronto-central, Left parietal, Right parietal, Left occipital, Right occipital, Left temporoparietal and Right temporoparietal regions, indicated by an obviously higher gray bar on the left than the black bar on the right in the third, fifth, sixth, seventh, eighth, ninth, and tenth sections. However, a higher black bar on the right than the gray bar on the left in the second section indicates significantly lower Right frontal ISC in the external-provided condition.
TABLE 1 Mean and standard deviation of learning performance in two conditions.
From Biomarkers to the Molecular Mechanism of Preeclampsia—A Comprehensive Literature Review Preeclampsia (PE) is a prevalent obstetric illness affecting pregnant women worldwide. This comprehensive literature review aims to examine the role of biomarkers and understand the molecular mechanisms underlying PE. The review encompasses studies on biomarkers for predicting, diagnosing, and monitoring PE, focusing on their molecular mechanisms in maternal blood or urine samples. Past research has advanced our understanding of PE pathogenesis, but the etiology remains unclear. Biomarkers such as PlGF, sFlt-1, PP-13, and PAPP-A have shown promise in risk classification and preventive measures, although challenges exist, including low detection rates and discrepancies in predicting different PE subtypes. Future perspectives highlight the importance of larger prospective studies to explore predictive biomarkers and their molecular mechanisms, improving screening efficacy and distinguishing between early-onset and late-onset PE. Biomarker assessments offer reliable and cost-effective screening methods for early detection, prognosis, and monitoring of PE. Early identification of high-risk women enables timely intervention, preventing adverse outcomes. Further research is needed to validate and optimize biomarker models for accurate prediction and diagnosis, ultimately improving maternal and fetal health outcomes.
Introduction Preeclampsia (PE) is one of the most common obstetric illnesses during pregnancy [1], affecting around 3-8% of pregnant women worldwide and making it a leading cause of gestation-related morbidity and mortality [2]. Proteinuria and new-onset hypertension after 20 weeks of gestation are the hallmark symptoms of PE [3]. However, PE might affect multiple organ systems (respiratory, hepatic, urinary, neuroendocrine, and circulatory), leading to fetal growth restriction, preterm birth, and other adverse fetal outcomes. These symptoms accompanying hypertension allow the recognition of PE according to the criteria of numerous gynecological and obstetrician societies [4,5]. Although the clinical manifestations of PE usually do not appear before week 20 of gestation, the molecular pathways leading to its onset are believed to occur relatively early in pregnancy [6]. At present, the etiology of the disease, and thus the effective prediction of preeclampsia before it strikes, is still not fully determined. The evidence indicates that the complexity of the pathophysiology and etiology of PE varies between early- and late-onset preeclamptic cases. The early-onset type, known as preterm preeclampsia and developing before week 34 of gestation, is recognized as a consequence of defective placentation, whilst maternal etiology dominates in the occurrence of the late onset of preeclampsia (i.e., term PE occurring after week 34 of gestation) [7]. Due to its multifaceted etiology and pathogenesis, it is impossible to find a single good candidate to predict the occurrence of preeclampsia. Therefore, a constellation of biomarkers including biochemical, ultrasound, and physiologic maternal features such as BMI, age, or smoking status is considered to foresee the probability of the risk of occurrence of PE [8].
The Fetal Medicine Foundation (FMF) created a first-trimester screening paradigm that integrates maternal variables with biochemical indicators (PlGF and PAPP-A) and biophysical markers (uterine artery pulsatility index, UtA-PI, and MAP). PE screening for this subtype is therapeutically beneficial, since this model predicts preterm PE at a detection rate of 75% with a 10% false-positive rate (FPR). However, the detection rate for predicting term PE is only 41%, and at present, no biochemical markers improve the predictive value of the algorithm for the prediction of PE based on the USG markers and maternal risk factors [9]. Therefore, the search for new biochemical as well as molecular predictors of preeclampsia is still a "hot" topic of research investigations. At present, some biochemical markers such as soluble FMS-like tyrosine kinase 1 (sFlt-1), PP-13, GDF15, and ADAM12 [10], as well as molecular markers, e.g., cell-free ribonucleic acid circulating in maternal blood [11], are considered as candidates for non-invasive tests to increase the efficacy of risk classification for PE pregnancies. Perhaps in the future, these biomarkers will allow physicians to implement preventive measures for high-risk women.
This review aims to synthesize and summarize the current state of knowledge regarding the biomarkers allowing the prediction of the late onset of PE. The review specifically focuses on biochemical markers tested in maternal blood or urine and provides an in-depth analysis of their molecular mechanisms for the diagnosis of PE.
Pathogenesis of PE Sufficient blood flow to the placenta is essential for a correct outcome of pregnancy. During normal implantation, the placental trophoblasts (of fetal origin) invade the uterus, inducing the remodeling of the uterine spiral arteries and making them wide, low-resistance ones. This provides adequate placental perfusion to nourish the growing fetus.
It is believed that the pathogenesis of PE is marked by defective and inadequate trophoblast invasion into the maternal decidua and its arteries. The incomplete spiral artery remodeling leads to improper placental perfusion. The placental cells, living under prolonged starvation and a hypoxic environment, start to secrete into the maternal bloodstream numerous factors, including sFlt-1 or soluble endoglin (sEng), driving extensive maternal endothelial dysfunction [3,12] and supporting the maternal immune response, the generation of oxidative stress, as well as the activation of the maternal coagulation system. All of these processes support the development of hypertension and proteinuria and very often lead to failure of organs other than the kidneys (Figure 1) [13][14][15]. PE may be classified as early-onset and late-onset depending on the timing, pathophysiology, and clinical implications.
Early-Onset PE Preeclampsia detected before 34 weeks of pregnancy is referred to as being of the early-onset type [16]. Based on timing, the condition may be categorized as early-onset preeclampsia, requiring delivery prior to 34 weeks of gestation. This type of preeclampsia is linked to insufficient trophoblast invasion together with a strong maternal inflammatory reaction (Figure 1) [3,13,15]. It is known that immunologic abnormalities, observed as an altered profile of T helper (Th) lymphocytes and an elevated level of CD19+CD5+ B lymphocytes, support the PE phenotype. The CD19+CD5+ B lymphocytes are the major source of polyreactive antibodies, especially angiotensin II type 1 receptor (AT1R) autoantibodies [17]. The incorrect gestational switch of T helper lymphocytes from the Th2 to the Th1 subpopulation supports inflammation, as Th1 cells secrete numerous pro-inflammatory cytokines, including interleukin-12 and interleukin-18. Moreover, the changes in the profile of natural killer (NK) cells, as well as their communication with the placental cells, differ between preeclamptic and normotensive cases. Natural killer cells are one of the subpopulations of uterine cells localized at the maternal-fetal interface, and they are involved in early placental development, specifically trophoblast invasion and remodeling of the spiral arteries [18][19][20]. Some studies demonstrate that the communication between the NK immunoglobulin-like receptor (KIR) and human leukocyte antigen (HLA)-C presented on the trophoblast is impaired [19]. Moreover, the deficiency of maternal CD56+/NKp46+ and CD56+bright/NKp46+ cells is a hallmark of preeclampsia; indeed, the depletion in the levels of both fractions of NK cells occurs three to four months before the onset of the disease [19,21,22].
Late-Onset PE Late-onset PE may be induced due to maternal genetic predisposition to cardiovascular and metabolic diseases [12]. The diseased placenta releases factors causing widespread damage to the endothelium of the maternal organs such as the kidneys, brain, and liver. Pathologic analysis of the adrenal glands and liver has shown infarction, necrosis, and hemorrhage. The kidneys may reveal the presence of severe glomerular endotheliosis, and the heart may show endocardial necrosis. Podocyturia and a reduced glomerular filtration rate (by 40%) have been observed in women with PE. The autopsy findings of women who died from eclampsia also revealed cerebral edema and intracerebral parenchymal hemorrhage. Research reveals that women with PE showed impaired endothelium-dependent vasorelaxation along with a subtle rise in pulse pressure and blood pressure before the onset of hypertension and proteinuria (Figure 1) [13][14][15]. Severe PE may also become an underlying factor for the appearance of the HELLP (hemolysis, elevated liver enzymes, low platelets) syndrome, eclampsia (seizures), and/or restricted fetal growth (Figure 1) [23].
Biomarkers and Their Molecular Pathway Development and Prediction of PE The elucidation of the pathophysiology of preeclampsia has led to the development of various assays that estimate the maternal concentrations of biochemical markers (angiogenic or anti-angiogenic factors); these assays may further aid in the administration of improved diagnosis [7].
Research suggests that differences in the plasma concentrations of pro-angiogenic (TGF-β and PlGF) and anti-angiogenic (sFlt-1 and sEng) factors are associated with PE. Serum samples collected at the time of delivery revealed a significant rise in sFlt-1 concentration and a reduced PlGF concentration in PE compared to the controls. These disproportionate levels of anti-angiogenic factors (sFlt-1 and sEng) and pro-angiogenic factors (PlGF, VEGF, and TGF-β) cause maternal endothelial dysfunction, further leading to the development of renal endotheliosis, hypertension, and blood coagulation [3,13,24].
The most promising screening method for an early diagnosis and prognosis of PE during the third trimester of gestation is to estimate a combination of biomarkers such as PlGF, sFlt-1, and sEng, with improved sensitivity and specificity [3].
Pregnancy-Associated Plasma Protein-A (PAPP-A) PAPP-A (pappalysin-1) is a high-molecular-weight (200 kDa, 1547 amino acids) glycoprotein synthesized by the placental trophoblasts and secreted into the maternal bloodstream. PAPP-A interacts with insulin-like growth factors and is significant for the growth of the placenta and fetus. PAPP-A is a metalloproteinase (containing a Zn2+-binding site) that is involved in the proteolytic cleavage of the insulin-like growth factor binding protein (IGFBP), eventually regulating local insulin-like growth factor (IGF) action; in this way it acts as a growth-promoting enzyme essential for fetal and placental development. Furthermore, PAPP-A is also involved in rapid and rigorously controlled growth and development processes such as bone remodeling and peak bone mass accrual (during puberty), folliculogenesis, wound healing, and atherosclerosis [24][25][26].
PAPP-A is a fetoplacental-specific molecule that is being used as a biomarker for predicting PE [25]. In pregnant women, PAPP-A predominantly circulates as a PAPP-A/proMBP heterotetrameric complex (99%; proMBP: proform of eosinophil major basic protein). The inhibition of the proteolytic activity of PAPP-A by proMBP serves to prevent a significant increase in IGFBP-4 proteinase activity. However, a local increase in IGFBP-4 proteinase activity is significant for the development of the placenta. The concentration of the PAPP-A protein is lower in the first trimester and gradually increases throughout the gestation period, with concentrations increasing by 100-fold during the first trimester and 10,000-fold during the third trimester compared to the levels observed in non-pregnant women. After delivery, the PAPP-A concentration rapidly drops to basal values [25].
PAPP-A is a biochemical marker used earlier for the screening of chromosomal abnormalities and fetal Down syndrome. Placental pathology can decrease the PAPP-A concentrations. Studies have reported that a decreased concentration of PAPP-A in the first trimester may be associated with abnormal placentation or placental dysfunction and the development of subsequent PE [26,27]. The study conducted by Luewan et al. revealed that pregnancies with reduced PAPP-A concentrations were significantly associated with an increased risk of early-onset preeclampsia. Furthermore, they also confirmed that reduced PAPP-A concentrations, at a cut-off of < 10th percentile, may be used to predict preeclampsia (with 26.1% sensitivity and a 9.2% false-positive rate) [26,27]. It was also observed that during the early second trimester, the PAPP-A concentration of pregnant women developing PE may decrease to one third of the values observed in women without PE. However, the diagnostic utility of PAPP-A concentration during the early second trimester has not been demonstrated. Interestingly, the level of the PAPP-A protein increased with the course of preeclamptic gestation, reaching the highest concentration in the third trimester of pregnancy; indeed, mild and severe PE cases demonstrated a 1.5-fold increase in PAPP-A concentration in comparison to the values observed in healthy pregnancy [25].
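To illustrate how a percentile cut-off such as the <10th-percentile rule quoted above translates into a detection rate and a false-positive rate, the sketch below applies that rule to simulated PAPP-A multiples of the median (MoM). All numbers here are synthetic and chosen arbitrarily; only the logic of deriving the two screening statistics is meant to be informative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Entirely synthetic MoM values: controls centred near 1.0 MoM, PE pregnancies
# shifted slightly lower, loosely mimicking the reduced first-trimester PAPP-A
# reported in the literature (the effect size here is arbitrary).
controls = rng.lognormal(mean=0.0, sigma=0.4, size=5000)
pe_cases = rng.lognormal(mean=-0.25, sigma=0.4, size=250)

# Screen-positive rule: PAPP-A below the 10th percentile of the unaffected
# (control) distribution.
cutoff = np.percentile(controls, 10)

sensitivity = np.mean(pe_cases < cutoff)          # detection rate among PE cases
false_positive_rate = np.mean(controls < cutoff)  # ~10% by construction

print(f"cut-off: {cutoff:.2f} MoM")
print(f"detection rate (sensitivity): {sensitivity:.1%}")
print(f"false-positive rate: {false_positive_rate:.1%}")
```

The point of the toy example is simply that a fixed-percentile cut-off pins the false-positive rate while the detection rate depends on how far the affected distribution is shifted, which is why single markers such as PAPP-A alone reach only modest sensitivities.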
At present, it is recognized that the determination of PAPP-A concentration along with other biochemical markers, maternal factors, and Doppler ultrasound may be used as an early marker for the screening of PE [26,27]. A combination of screening methods such as Doppler PI, PAPP-A, inhibin A, and PlGF showed a detection rate of 100% for early-onset PE; however, for PE (in general), the detection rate was only 40% with a false-positivity rate of 10% [25].
Placental Growth Factor (PlGF) and Vascular Endothelial Growth Factor (VEGF) PlGF and VEGF are effective pro-angiogenic factors secreted by the trophoblast cells. PlGF is a glycosylated dimeric protein playing a significant role in placental angiogenesis during early pregnancy and inducing the growth, differentiation, and invasion of trophoblasts into the maternal decidua [28]. The circulating PlGF concentration is prominently increased during pregnancy; in the first trimester, the concentration of this factor is low, and it becomes elevated from the 11th to 12th week onwards, reaching a peak at the 30th week of gestation, after which it declines [29]. PlGF belongs to the VEGF family, and it is primarily expressed in the placenta; however, small concentrations also appear in several other tissues, such as the heart, skeletal muscles, lungs, liver, bone, and thyroid [29]. VEGF plays a key role in the maintenance of endothelial cell function, specifically the fenestrated endothelium (found in the brain, liver, and glomeruli—the major organs affected by PE). Both PlGF and VEGF are bound by the endogenously secreted anti-angiogenic factor sFlt-1. sFlt-1 selectively binds to PlGF and VEGF, thereby inhibiting the binding of PlGF and VEGF with their membrane receptors [3,25] (Figure 2). This influences the levels of free PlGF and VEGF circulating in the maternal bloodstream. Moreover, the level of both particles is regulated by the partial pressure of oxygen in the environment [30]. Under a hypoxic environment, cells, including placental cells, activate the hypoxia-inducible factor-1 (HIF-1) [31][32][33]. This transcription factor is responsible for activating the expression of genes coding for proteins implicated in the process of angiogenesis, including PlGF, VEGF, as well as their receptor, i.e., Flt-1 [34]. Interestingly, although the gene coding for PlGF is under the control of HIF-1α, its level presents a negative correlation to the level of HIF-1α, and this phenomenon seems to be dependent on the type of cells [31]. In placental cells, the level of PlGF is downregulated, but VEGF is upregulated under hypoxic conditions [35,36]. This might explain why, under the preeclamptic condition, strongly related to a low level of oxygen (i.e., about 2% O2), the PlGF concentration is depleted. This might also explain why the total level of VEGF is elevated in preeclampsia. However, the free fraction of this particle is significantly depleted in the preeclamptic maternal bloodstream, as a consequence of the binding of this particle to its soluble receptor, i.e., sFlt-1 [37].
Figure 2. The Disrupted Balance: Angiogenic Imbalance in Preeclampsia. NOTE: Imbalance of angiogenesis in preeclampsia. In a normal pregnancy, the balance and stability of blood vessels are regulated by appropriate levels of vascular endothelial growth factor (VEGF) and transforming growth factor-1 (TGF-β1) signaling. However, in the case of preeclampsia, there is an excessive release of two antiangiogenic proteins, sFlt-1 and sEng, by the placenta. These proteins act to inhibit the signaling of VEGF and TGF-β1 in the blood vessels. Consequently, this disruption leads to the dysfunction of endothelial cells, characterized by a decrease in the production of nitric oxide. Created with BioRender.com.
Studies have indicated that PlGF concentrations can be used for an early diagnosis of PE with a detection rate of 90% and a fixed false-positive rate of 5% [3]. The increased sFlt-1 concentration in PE is significantly correlated with disease severity. Both PlGF and sFlt-1 concentrations may diagnose PE at the end of the first trimester of gestation [3,25,38]. Additionally, the ratio of sFlt-1 to PlGF seems a good predictive factor of PE; PE women demonstrate a significantly higher sFlt-1/PlGF ratio than healthy controls [25,39]. Moreover, Rana et al. also suggested that an sFlt-1/PlGF ratio ≥85 might be a marker of the early onset of PE and predicted adverse maternal and fetal outcomes. An sFlt-1/PlGF ratio of ≤38 in women at 24-37 weeks of gestation can be a reliable measure to diagnose the absence of PE (negative predictive value: 99.9%) [3,38].
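As a purely illustrative aside, the two cut-offs cited above can be expressed as a simple triage rule on the measured concentrations. The function below is our own sketch: the function name and the intermediate "indeterminate" band are not taken from the cited studies, and the code is not a clinical tool.

```python
def sflt1_plgf_triage(sflt1_pg_ml: float, plgf_pg_ml: float) -> str:
    """Illustrative triage based on the sFlt-1/PlGF ratio cut-offs cited in the text.

    A ratio <= 38 (at 24-37 weeks) has been reported to rule out PE with a very
    high negative predictive value, while a ratio >= 85 has been associated with
    early-onset PE and adverse maternal/fetal outcomes. Intermediate values are
    labelled 'indeterminate' here purely for illustration.
    """
    if plgf_pg_ml <= 0:
        raise ValueError("PlGF concentration must be positive")
    ratio = sflt1_pg_ml / plgf_pg_ml
    if ratio <= 38:
        return f"ratio {ratio:.1f}: PE unlikely (high negative predictive value)"
    if ratio >= 85:
        return f"ratio {ratio:.1f}: consistent with early-onset PE / adverse outcome risk"
    return f"ratio {ratio:.1f}: indeterminate - follow-up testing suggested"

if __name__ == "__main__":
    # Hypothetical concentrations in pg/mL.
    print(sflt1_plgf_triage(1500, 120))   # low ratio
    print(sflt1_plgf_triage(9000, 60))    # high ratio
```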
Thus, PlGF is a key molecule in the prediction and diagnosis of PE. However, an estimation of a combination of PlGF, VEGF, and sFlt-1 has shown promising results in tracing the changes in the placental vasculature and the damage to the endothelium before and during PE. These molecules have been estimated in studies for screening PE [25].
Soluble FMS-Like Tyrosine Kinase-1 (sFlt-1) sFlt-1 is an antiangiogenic soluble protein that mediates its antagonistic effect by binding to and inhibiting the pro-angiogenic proteins PlGF and VEGF, thus inducing endothelial dysfunction [3]. sFlt-1 is a splice variant of the membrane-bound FMS-like tyrosine kinase-1 (Flt-1) receptor (also known as VEGFR-1). sFlt-1 is a 100 kDa protein lacking the transmembrane and intracellular portions of the active Flt-1 protein [32]. The major source of the sFlt-1 protein in the maternal circulation is the placenta. sFlt-1 is a key factor in the development of PE. The biological actions of VEGF and PlGF are inhibited by increased sFlt-1 levels, thereby inducing the development of PE. The sFlt-1 concentration was found to be extremely elevated in the circulation of women with PE, and this raised concentration existed before the development of hypertension and proteinuria [40]. The serum concentration of sFlt-1 in PE may increase to up to five times the concentration circulating in normotensive women [24,28,41]. The plasma sFlt-1 concentration increased at 6-10 weeks in women with PE compared to the levels observed in normal pregnancies. Moreover, the plasma sFlt-1 concentrations in early-onset PE and late-onset PE showed elevated levels at the 26th and 29th weeks of gestation, respectively, compared to the levels observed in normal pregnancies [24].
Endothelium-Derived Nitric Oxide (NO) Endothelium-derived NO is a gaseous molecule acting as a potent vasorelaxant that participates in multiple physiological and pathophysiological functions such as angiogenesis, neovascularization, vessel tone regulation, and regulation of systemic blood pressure. NO acts as a central mediator and modulates the effect of various angiogenic factors (VEGF, PlGF, and TGF-β) to stimulate normal endothelial migration and proliferation. The expression of eNOS is found to be upregulated by the angiogenic factors (TGF-β, VEGF, PlGF) [23,42,43].
In endothelial cells, VEGF binds with its receptor (VEGFR) and activates an endothelial-cell-specific isoform of an NO-producing enzyme, endothelial nitric oxide synthase (eNOS). This activation of eNOS (Ca2+/calmodulin-regulated) occurs through (i) the phosphorylation of NOS (phosphatidylinositol-3-OH-kinase-Akt-mediated pathway), (ii) increased calcium flux induction, and (iii) recruitment of heat-shock protein-90. After activation, eNOS catalyzes the conversion of L-arginine to L-citrulline and NO (Figure 3). The produced NO stimulates neovascularization via angiogenesis as well as vasculogenesis. However, in PE, this protective signaling pathway of VEGF is compromised because of an increased circulating concentration of sFlt-1 and sEng and a decreased expression of PlGF [33,42]. An elevated concentration of circulating sEng and sFlt-1 present in preeclamptic women may oppose the NO-dependent vasodilatation stimulated by VEGF, TGF-β, and PlGF, consequently leading to the development of the hypertension observed in PE patients. Research reports that a decreased NO concentration significantly contributes to the pathogenesis of PE. The sFlt-1- and sEng-induced inhibition of eNOS activation indicates the molecular basis for an increased mean arterial pressure [23]. Therefore, the attenuation of a VEGF-dependent activation of eNOS may inhibit angiogenesis and induce hypertension [23] (Figure 3).
In summary, an elevated sFlt-1 concentration increases peripheral vascular resistance, which subsequently increases blood pressure. The increased sequestering of VEGF by sFlt-1 may also disturb the glomerular filtration barrier and cause glomerular endothelial injury, leading to proteinuria [44]. Furthermore, sFLT1 mRNA expression was also found to be increased in the placenta of women with PE. Studies on animal models suggest that administering exogenous sFlt-1 to experimental animals leads to glomerular endotheliosis, proteinuria, and hypertension [3].
Figure 3. NOTE: Regulation of endothelial nitric oxide synthase (eNOS). Upon binding of VEGF to its receptor (VEGFR2), the receptor dimerizes and activates its tyrosine kinase activity, leading to autophosphorylation of intracellular domains. This event triggers a series of signaling pathways that modulate NO synthesis. Firstly, VEGFR2 activation stimulates the PI3K/Akt pathway, resulting in an increase in intracellular Ca2+ levels, which induce the binding of calmodulin (CaM) to endothelial nitric oxide synthase (eNOS), facilitating its activation. Additionally, VEGFR2 signaling activates PLCγ, leading to the conversion of PIP2 into DAG and IP3. IP3 acts as a secondary messenger contributing to the elevation of intracellular Ca2+ levels. On the other hand, DAG activates PKC, which plays a role in downstream signaling events. Hsp90, a molecular chaperone, is recruited to the activated VEGFR2 complex and assists in the proper folding and stabilization of eNOS, ensuring its functional integrity and preventing degradation. The coordinated actions of Ca2+, calmodulin, and Hsp90 lead to the activation of eNOS, enabling the conversion of molecular oxygen and L-arginine to produce NO. The generated NO mediates various biological effects, including enhanced vascular permeability, vasorelaxation, and maintenance of endothelial cell survival. Created with BioRender.com.
Placental Protein 13 (PP-13) PP-13 is another fetoplacental-specific molecule that is being used as a biomarker for predicting preeclampsia [25]. PP13 is amongst the 56 identified placental proteins described to date. PP13 is a carbohydrate-binding protein (a 32 kDa homodimer protein, 139 amino acid residues) belonging to the galectin family and synthesized in the syncytiotrophoblast [12,45,46]. The structural and functional characteristics of PP-13 are vital in placental development and regulatory pathways [46]. It is involved in early placentation; however, it also plays a major role in the maintenance of pregnancy at different stages of gestation (viz., trophoblast invasion, maternal-fetal immune tolerance, embryo implantation, and vascular remodeling). The specificity of the conserved carbohydrate recognition domain for β-galactoside-containing glycoconjugates plays an important role in implantation and embryogenesis [46]. PP-13 can bind the β-actin located in trophoblastic cells, which enables their migration to the placental bed, along with enhancing the secretion of prostacyclins required for spiral artery remodeling during early placentation. PP-13 also initiates apoptosis in maternal T cells for effective placentation and implantation. PP-13 also plays a significant role in trophoblast differentiation and syncytialization, which helps in the secretion of immune proteins and placental hormones necessary for embryo development and immune tolerance. Research showed a low concentration of serum/plasma PP-13 during the first trimester of pregnancy; however, these concentrations gradually increase as gestation progresses. Evidence suggests the presence of a low serum concentration of PP-13 in PE [12,45,46].
In summary, elevated sFlt-1 concentration increases peripheral vascular resistance, which subsequently increases blood pressure.The increased sequestering of VEGF by sFlt-1 may also disturb the glomerular filtration barrier and cause glomerular endothelial injury, leading to proteinuria [44].Furthermore, sFLT1 mRNA expression was also found to be increased in the placenta of women with PE.Studies on animal models suggest that Regulation of endothelial nitric oxide synthase (eNOS).Upon binding of VEGF to its receptor (VEGFR2), the receptor dimerizes and activates its tyrosine kinase activity, leading to autophosphorylation of intracellular domains.This event triggers a series of signaling pathways that modulate NO synthesis.Firstly, VEGFR2 activation stimulates the PI3K/Akt pathway, resulting in an increase in intracellular Ca 2+ levels, which induce the binding of calmodulin (CaM) to endothelial nitric oxide synthase (eNOS), facilitating its activation.Additionally, VEGFR2 signaling activates PLCγ, leading to the conversion of PIP2 into DAG and IP3.IP3 acts as a secondary messenger contributing to the elevation of intracellular Ca 2+ levels.On the other hand, DAG activates PKC, which plays a role in downstream signaling events.Hsp90, a molecular chaperone, is recruited to the activated VEGFR2 complex and assists in the proper folding and stabilization of eNOS, ensuring its functional integrity and preventing degradation.The coordinated actions of Ca 2+ , calmodulin, and Hsp90 lead to the activation of eNOS, enabling the conversion of molecular oxygen and L-arginine to produce NO.The generated NO mediates various biological effects, including enhanced vascular permeability, vasorelaxation, and maintenance of endothelial cell survival.Created with BioRender.com. It has been reported that in normal pregnant women, median serum PP-13 concentrations increase from 166 pg/mL to 202 pg/mL and 382 pg/mL in the first, second, and third trimesters, respectively [46].Most of the research studies have reported that a reduced serum concentration of PP-13 during the first trimester increases the risk of developing PE.Research reveals that serum PP-13 concentrations in patients who developed early-onset PE (estimated during their first trimester) were significantly lower than those with normal gestation (specificity: 80%, sensitivity: 100%).A study conducted by Vasilache et al. aimed at using PP-13 for the prediction of PE and showed a specificity of 0.83 (95% CI) and a sensitivity of 0.53 (95% CI) [47].Another study also reported that the maternal PP-13 mRNA expression was significantly reduced in PE patients (28%) compared to the levels observed in the control group (76%), with a highly statistically significant difference (P < 0.0001) [48].These results suggest that serum PP-13 concentrations estimated during the first trimester may serve as a promising marker for the risk assessment of PE.Thus, estimating PP-13 concentration during the first trimester and using it as a screening marker for PE may help in the identification of women predisposed to develop early-onset PE [12,45,46]. Consequently, PP-13 may be considered a strong predictive factor in early-onset PE.Assessment of PP-13 (in the first trimester) in combination with uterine artery Doppler ultrasound may increase the prediction rate to 90% [12,25,46]. 
Growth Differentiation Factor 15 (GDF15) GDF-15, a member of the TGF-β superfamily, is also known as a macrophage inhibiting cytokine-1 (MIC-1).It is produced in the placenta; however, it is also secreted in response to stress and is upregulated during cellular injury and inflammation.GDF-15 has also been recognized to possess a cardio-protective function [49,50]. Research has demonstrated that GDF-15 concentrations increase with gestational age and are dysregulated in PE.During the 30th-34th weeks of gestation, GDF-15 concentrations were found to be higher in women subsequently developing PE than in women with normal pregnancy; nevertheless, the difference was relatively minor.Furthermore, studies focused on assessing serum GDF-15 concentrations in women with PE have reported discrete findings with no change, decreased concentrations, and significantly increased concentrations.In the absence of consistent findings, the utility of GDF-15 concentrations (as an individual prediction biomarker) in clinical practice seems to be unlikely [49][50][51].However, when used in combination with sFlt-1 and PlGF, GDF-15 may be a promising biomarker for the prediction of PE [50]. A disintegrin and Metalloprotease 12 (ADAM-12) ADAM-12 is a multidomain glycoprotein derived from the placenta that possesses proteolytic and cell-adhesion activities.ADAM-12 controls the migration and invasion of trophoblasts during placental development.Hence, it is a key constituent in controlling the growth and development of the placenta and the fetus [52][53][54].ADAM-12 occurs in the form of ADAM-12-L (long) and ADAM-12-S (short).ADAM-12-S, the secreted form of ADAM-12, possesses a proteolytic activity against IGFBP-3, which is believed to stimulate growth by promoting the IGF-I and IGF-II levels.ADAM-12-S is found in maternal serum beginning from the first trimester of pregnancy and increasing throughout gestation [52]. Studies report that the serum ADAM-12 concentrations in women predisposed to develop PE were significantly lower (P < 0.05) than those observed in women with normal pregnancy [25,52,54,55].However, the available research indicates a modest predictive efficiency of ADAM-12 for PE [25,54,55].Research suggests using a combination of screening methods for predicting PE.However, a combination of screening methods such as PAPP-A, β-hCG, PlGF, and ADAM-12 showed a detection rate of 44% only, with a false positivity rate of 5% [24]. β-Human Chorionic Gonadotropin (β-hCG) hCG is a glycoprotein hormone with two non-covalently associated subunits, α and β.It is synthesized by placental trophoblasts.The free β-subunit can be either produced directly by trophoblast cells, become dissociated from hCG into free subunits (α and β), or become nicked by macrophages or neutrophils.The serum hCG concentration reaches a peak at 8-10 weeks of pregnancy, after which it declines and obtains a plateau at 18-20 weeks of pregnancy [56]. A reduced serum hCG concentration during early pregnancy may be an indication of an impaired invasion of trophoblast cells; consequently, a reduced serum hCG concentration may act as a biomarker for delayed implantation and impaired placental development.These factors may further contribute to the development of PE [57]. A study conducted by Asvold et al. 
showed that hCG concentrations were inversely associated with the risk of developing PE in a dose-dependent manner. Women with hCG concentrations of <50 IU/L had a four times higher risk of developing severe PE compared with women with hCG concentrations ≥150 IU/L (measured on day 12 after transfer of cleavage-stage embryos (2- to 4-cell stage) in pregnancies after IVF treatment) [57]. However, the study also reported that a single measurement of hCG concentration during early pregnancy may not serve as a potent biomarker for individual prediction of PE risk [57]. Other studies also demonstrate inconsistent findings. A few studies indicate that increased hCG and β-hCG concentrations during the second trimester of gestation were associated with a higher risk of PE [25,56,58], whereas another study revealed the absence of any statistically significant association between PE and β-hCG concentration [59]. Overall, these studies suggest that the potential use of β-hCG as a biomarker for the prediction of PE shows a low detection rate with reduced sensitivity [25,57,58,60].

Inhibin Alpha (Inhibin-A)

Inhibin-A is a placenta-derived glycoprotein hormone belonging to the TGF-β superfamily. Inhibin-A is involved in trophoblast differentiation and proliferation, embryo implantation, and endometrial decidualization; it thus supports fetal growth and the maintenance of pregnancy. The Inhibin-A concentration reaches its first peak at 8-10 weeks of gestation, becomes stable at 14-30 weeks of gestation, and later increases gradually during the third trimester and onwards, reaching its highest level at delivery [61,62].

Studies demonstrate that an increased Inhibin-A concentration during pregnancy is significantly associated with PE [61][62][63][64]. Serum Inhibin-A levels may become elevated in women with PE owing to the abnormal invasion and proliferation of trophoblasts in the uterine vessels in response to the repair of ischemic damage. The consequent damage and repair may lead to functional changes on the surface of the PE placenta, contributing to an increase in the serum Inhibin-A concentration [61,65]. Studies suggest that Inhibin-A may be useful for the detection of PE [62][63][64]; however, its predictive sensitivity as a stand-alone biomarker is relatively low, and it is therefore recommended to be used in combination with other biomarkers for the best predictive outcome [25,62,63].

Soluble Endoglin (sEng)

sEng is another placenta-derived molecule: a 65 kDa soluble form of the homodimeric transmembrane glycoprotein endoglin (Eng). It is an antiangiogenic factor that acts as a co-receptor for TGF-β1 and TGF-β3, and it is highly expressed in endothelial cells and trophoblasts. TGF-β is an anti-inflammatory growth factor, and prolonged exposure of endothelial cells to TGF-β stimulates the expression of the eNOS gene and protein [23]. sEng modulates TGF-β signalling by acting as an endogenous TGF-β1 inhibitor. An elevated concentration of sEng inhibits the TGF-β signalling pathway, eNOS activation, and vasodilation, thereby interrupting homeostatic mechanisms essential for sustaining vascular health [23,42]. Consequently, it antagonizes and impairs the biological action of the proangiogenic factor TGF-β, an action analogous to sFlt-1 antagonizing VEGF. This further indicates that sEng leads to impaired TGF-β signalling in the vasculature, thereby altering vascular permeability and leading to hypertension [3,15,23,24,66]. Studies show that the serum sEng concentration was upregulated in PE [15,23,24]. Venkatesha et al.
reported that the elevated serum sEng concentration in PE individuals correlated with the severity of the disease. The sEng expression in PE was four times that of normal pregnancy (P < 0.01). Furthermore, compared with gestational age-matched controls, sEng concentrations were three, five, and ten times higher in women with mild PE, severe PE, and HELLP syndrome, respectively [23]. Another study comparing the sEng concentration (ng/mL) in women with PE and normotensive pregnant women revealed a significantly higher sEng concentration in women with PE during the second trimester (MD: 5.554, P < 0.001) and the third trimester (MD: 31.006, P < 0.001). During the first trimester, the concentration of sEng was higher in women with PE; however, the difference was statistically non-significant (MD: 1.105, P = 0.06). Furthermore, the sEng concentrations were significantly higher in both early-onset and late-onset PE (P < 0.05) [55]. In addition, studies in an animal model showed that the effect of sEng was augmented by co-administration of sFlt-1, causing severe PE with HELLP syndrome and restricted fetal growth [22]. An elevated sEng concentration may be observed before the onset of clinical symptoms, and its concentration may be associated with the severity of the disease; measuring the sEng concentration may therefore allow an early prediction and diagnosis of PE [66][67][68].

The present review attempted to summarize the significant biomarkers discussed in the sections above, but a major hindrance to drawing conclusions is the fact that the biomarker values were collected from different countries. Most of the previous literature had limited or incomplete information regarding either PE or the biomarkers. Hence, only a limited number of publications (a minimum of two to a maximum of four papers) reported PE onset together with biomarker levels; these papers are represented in Table 1, which provides the baseline characteristics of the denoted populations. From Table 1, it can be observed that there was not much difference in the maternal serum PAPP-A and PlGF levels documented between the American and Chinese populations. The predominant angiogenic markers that were well demarcated through the literature survey were PAPP-A, PlGF, sFlt-1, and sEng. Most of the reviewed literature discussed the usefulness of PAPP-A and PlGF to a greater extent; in addition, larger multicentric trials have been performed for sFlt-1 and sEng. Hence, the baseline details of the substantial angiogenic biomarkers confined to PE are exhibited in Table 1 together with country information. Table 2 provides details of combination biomarkers useful for PE. Since the data were limited in nature, the existing literature provided the baseline details of the sFlt-1:PlGF ratio in a clinical scenario through a multicentric study along with sensitivity and specificity.
Table 3 summarizes the control size, sample size, and control and sample values for each examined biomarker in the literature assessed. This is presented to facilitate understanding of the broad geographical differences between populations in the levels of significant biomarkers of PE. It is apparent that PlGF levels were roughly 2.3 times lower in PE patients than in the control group, and in our analysis the greatest fall in PlGF was observed among German patients, followed by the Indian, American, and Chinese populations. Figure 3 illustrates the increasing trend of all represented biomarkers except PAPP-A with respect to the studied nation.

In general, the first-trimester combined algorithm for PE, which combines maternal characteristics with mean arterial blood pressure, mean uterine artery resistance, and circulating PlGF to stratify risk, was recently discussed in [69]. Clinical guidelines for applying a PE risk score through the NICE and ACOG guidelines were also discussed. The study concluded that an sFlt-1:PlGF ratio ≤ 38 is a strong "rule out" test, with a 99.3% negative predictive value for preeclampsia developing within a week. A PlGF concentration below 100 pg/mL represents a positive screen in women with suspected preeclampsia at or before 35 weeks, achieving a 96% sensitivity and a 98% negative predictive value for preeclampsia developing within two weeks. PlGF testing has been shown to reduce the time to diagnosis, adverse maternal outcomes, outpatient attendances, and costs to the healthcare service.

Another significant study [70] showed that PAPP-A and PlGF MoM (multiples of the median) values were significantly reduced among early-onset PE cases (0.57 and 0.60), followed by preterm PE (0.63 and 0.67), all PE (0.74 and 0.74), and gestational hypertension (0.89 and 0.86) cases, relative to controls (0.99 and 1.00), for first-trimester PAPP-A and PlGF, respectively. In addition, the study showed that a combination of maternal characteristics with PAPP-A and PlGF can provide reasonable performance for PE screening in the first trimester. In the second trimester, PlGF was found to be a better predictor of PE than the sFlt-1:PlGF ratio before 20 weeks of gestation.

Conclusions

The existing comprehensive research suggests that, from the variety of biomarkers that can be applied in clinical settings to diagnose PE, three were found to be sensitive enough either to detect or to rule out the condition: PlGF alone, PAPP-A combined with PlGF, and the sFlt-1:PlGF ratio.
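For readers who want to see how the two screening cut-offs quoted above behave together, a minimal sketch is given below. It is purely illustrative: the function name, structure, and example values are ours and are not taken from [69] or from any clinical guideline; only the numerical cut-offs (ratio ≤ 38 for rule-out, PlGF < 100 pg/mL at or before 35 weeks for a positive screen) come from the text above, and in practice they are applied alongside clinical assessment rather than in isolation.

```python
# Minimal sketch (not from any cited guideline): encodes the two triage
# thresholds summarized above as a plain decision helper. Names and example
# values are hypothetical; cut-offs are those quoted in the text.

def triage_suspected_pe(sflt1_pg_ml: float, plgf_pg_ml: float,
                        gestational_weeks: float) -> str:
    """Classify a suspected-preeclampsia work-up using the quoted cut-offs.

    - sFlt-1:PlGF ratio <= 38           -> PE unlikely within one week (NPV ~99.3%)
    - PlGF < 100 pg/mL at <= 35 weeks   -> screen positive (sensitivity ~96%)
    """
    ratio = sflt1_pg_ml / plgf_pg_ml
    if ratio <= 38:
        return "rule-out: PE unlikely to develop within 1 week"
    if gestational_weeks <= 35 and plgf_pg_ml < 100:
        return "screen positive: elevated risk of PE within 2 weeks"
    return "indeterminate: continue clinical surveillance"


if __name__ == "__main__":
    # Hypothetical example values, for illustration only.
    print(triage_suspected_pe(sflt1_pg_ml=6000, plgf_pg_ml=80, gestational_weeks=32))
```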
Future Perspectives

Consequently, larger prospective studies focused on screening the best predictive biomarkers with better predictive values (when used alone or in combination) should be carried out. Moreover, research to fully understand the molecular mechanisms of these biomarkers in the development of PE should also be conducted. Such a specific prediction strategy for detecting PE in early gestation may help in identifying high-risk women (predisposed to develop PE) so that a specific preventative intervention/therapy may be provided. Furthermore, an early diagnosis may also alleviate anxiety and redundant therapy/interventions in women at low risk of developing PE. Perhaps it is worth asking whether these markers will not only serve to predict early PE but also, in the future, serve to differentiate between the early and late forms, since we know that the late form of PE runs with a disruption of these markers in only a fraction of cases. Further studies will help answer this question.

Figure 1. Two stages of preeclampsia pathogenesis. NOTE: Preeclampsia has a two-stage pathogenesis. Preclinical Stage 1 is characterized by abnormal placentation, resulting in the emission of soluble factors into the maternal blood, which then causes systemic endothelial dysfunction and hypertension (Stage 2). Created with BioRender.com.

Figure 2. The Disrupted Balance: Angiogenic Imbalance in Preeclampsia. NOTE: Imbalance of angiogenesis in preeclampsia. In a normal pregnancy, the balance and stability of blood vessels are regulated by appropriate levels of vascular endothelial growth factor (VEGF) and transforming growth factor-β1 (TGF-β1) signaling. However, in the case of preeclampsia, there is an excessive release of two antiangiogenic proteins, sFlt-1 and sEng, by the placenta. These proteins act to inhibit the signaling of VEGF and TGF-β1 in the blood vessels. Consequently, this disruption leads to the dysfunction of endothelial cells, characterized by a decrease in the production of nitric oxide. Created with BioRender.com.

Figure 3. Regulation of Endothelial Nitric Oxide Synthase (eNOS) by VEGF Signaling Pathways. NOTE: Upon binding of VEGF to its receptor (VEGFR2), the receptor dimerizes and activates its tyrosine kinase activity, leading to autophosphorylation of intracellular domains. This event triggers a series of signaling pathways that modulate NO synthesis. Firstly, VEGFR2 activation stimulates the PI3K/Akt pathway, resulting in an increase in intracellular Ca2+ levels, which induce the binding of calmodulin (CaM) to endothelial nitric oxide synthase (eNOS), facilitating its activation. Additionally, VEGFR2 signaling activates PLCγ, leading to the conversion of PIP2 into DAG and IP3. IP3 acts as a secondary messenger contributing to the elevation of intracellular Ca2+ levels. On the other hand, DAG activates PKC, which plays a role in downstream signaling events. Hsp90, a molecular chaperone, is recruited to the activated VEGFR2 complex and assists in the proper folding and stabilization of eNOS, ensuring its functional integrity and preventing degradation. The coordinated actions of Ca2+, calmodulin, and Hsp90 lead to the activation of eNOS, enabling the conversion of molecular oxygen and L-arginine to produce NO. The generated NO mediates various biological effects, including enhanced vascular permeability, vasorelaxation, and maintenance of endothelial cell survival. Created with BioRender.com.

Table 1. Global-wise reported concentrations of maternal serum biomarker levels associated with early-onset and late-onset PE, corresponding to the trimester (gestational weeks).

Table 2. Comparison of prediction rates (%) for combinations of serum biomarkers for PE.

Table 3. Country-wise distribution of the average values of the significant biomarkers for detecting PE.
v3-fos-license
2018-10-21T21:47:41.039Z
2018-09-21T00:00:00.000
52916757
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.5935/abc.20180167", "pdf_hash": "42c006ad806ec96ba56874549dbf598e3bdc733e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43569", "s2fieldsofstudy": [ "Medicine" ], "sha1": "42c006ad806ec96ba56874549dbf598e3bdc733e", "year": 2018 }
pes2o/s2orc
Comparison of Cardiac and Vascular Parameters in Powerlifters and Long-Distance Runners: Comparative Cross-Sectional Study Background Cardiac remodeling is a specific response to exercise training and time exposure. We hypothesized that athletes engaging for long periods in high-intensity strength training show heart and/or vascular damage. Objective To compare cardiac characteristics (structure and function) and vascular function (flow-mediated dilation [FMD] and peripheral vascular resistance [PVR]) in powerlifters and long-distance runners. Methods We evaluated 40 high-performance athletes (powerlifters [PG], n = 16; runners [RG], n = 24) and assessed heart structure and function (echocardiography), systolic and diastolic blood pressure (SBP/DBP), FMD, PVR, maximum force (squat, bench press, and deadlift), and maximal oxygen uptake (spirometry). A Student’s t Test for independent samples and Pearson’s linear correlation were used (p < 0.05). Results PG showed higher SBP/DBP (p < 0.001); greater interventricular septum thickness (p < 0.001), posterior wall thickness (p < 0.001) and LV mass (p < 0.001). After adjusting LV mass by body surface area (BSA), no difference was observed. As for diastolic function, LV diastolic volume, wave E, wave e’, and E/e’ ratio were similar for both groups. However, LA volume (p = 0.016) and BSA-adjusted LA volume were lower in PG (p < 0.001). Systolic function (end-systolic volume and ejection fraction), and FMD were similar in both groups. However, higher PVR in PG was observed (p = 0.014). We found a correlation between the main cardiovascular changes and total weight lifted in PG. Conclusions Cardiovascular adaptations are dependent on training modality and the borderline structural cardiac changes are not accompanied by impaired function in powerlifters. However, a mild increase in blood pressure seems to be related to PVR rather than endothelial function. Introduction Exercise training induces cardiovascular adaptations secondary to changes in blood pressure as well as other hemodynamic and metabolic changes in response to physical exertion. These adaptive changes can induce left ventricular (LV) hypertrophy in the long run. 1 Some authors claim that borderline physiological and anatomical changes occur as part of an adaptive process of high-performance training and they have sparked off debate on their implications. 2 They postulate that volume overload generally increases LV pumping ability producing eccentric hypertrophy while, in contrast, pressure overload decreases ventricular cavity size producing concentric hypertrophy. Moreover, peripheral vascular resistance (PVR) is an important factor of cardiac overload by specifically modulating LV afterload. Furthermore, the endothelium is central to vasodilation by producing nitric oxide (NO), which is a vasodilator and has a direct effect on PVR. Therefore, it is important to highlight that after exercise there is a stimulation of NO production and eNOS phosphorylation, which contributes directly to a reduction in PVR. 3,4 Aerobic exercise increases shear stress leading to increased release and synthesis of NO and higher active muscle vasodilation. 5 LV pressure overload is reduced over time. 6 However, high-intensity resistance training such as weightlifting and powerlifting involves a number of very slow-speed contractions that produce transient mechanical compression of resistance vessels, increasing PVR and LV pressure overload during exercise. 
7 It has been postulated that chronic increase in afterload induces the parallel addition of new sarcomeres in the myocardium leading to concentric ventricular hypertrophy. 8 Yet, this form of ventricular hypertrophy has not been demonstrated in strength training athletes, 9 and it is thus an inconsistent finding. Given the limited body of evidence in support of these cardiovascular adaptations as well as concerning endothelial function and PVR in strength athletes, this study aimed to compare structural and functional cardiac changes in powerlifters and long-distance runners. Secondarily, we compared endothelium-dependent vasodilation and PVR in these athletes. Our hypothesis is that athletes engaging in high-intensity strength training for long periods of time show changes in cardiac structure associated with reduced cardiac function when compared to long-distance runners. Furthermore, long-time exposure to high-intensity strength training could lead to a reduction of endothelial function caused by pressure overload. Study participant selection and groups The study convenience sample comprised 40 male individuals aged 18-40 years. We selected athletes of powerlifting (powerlifters group [PG], n = 16) and long-distance (over 10 km) running events (runners group [RG], n = 24). Eligible athletes were those competing for at least 3 years. Individuals with any medical condition in the preceding 6 months; those not competing in the preceding 6 months; those on use of illicit (doping) substances in the last 12 months; or those who refused to sign an informed consent were excluded. The study sample was recruited using an open invitation at training sites (gyms, health clubs and sports centers) and selected after applying the inclusion criteria. Participants were assessed as follows: on the first visit they underwent blood pressure assessment, echocardiographic assessment, brachial artery flow-mediated dilation (FMD), PVR assessments. In addition, they were administered a comprehensive questionnaire with questions about training including time of training experience; performance timeline; any awards/prizes; current training routine (volume, intensity, and duration of weekly training sessions, frequency of competitive participation, rest times, etc.) among others. On the next day, they underwent a maximum load test; and on the last visit (48 hours later), they underwent a maximum oxygen uptake test. All assessments were carried out within the same period of time (8 a.m. to 11 a.m.). Blood pressure assessment Blood pressure measurements were taken using a semi-automatic blood pressure monitor (OMROM 705CP), with the participant in a seated position with both feet on the floor, after a 10-minute rest; the cuff was placed and adjusted to the arm circumference. In a completely quiet room, blood pressure measurements were taken in duplicate on both arms, and the higher value of these readings was used in the study. Echocardiographic examination Transthoracic echocardiographic examinations were performed by an echocardiography specialist (G.B.G.). An ultrasound device (EnVisor CHD, Philips, Bothell, WA, USA) equipped with a sector transducer probe (2-4 MHz) was used to obtain longitudinal, cross-sectional, two-dimensional 2-and 4-chamber, and M module images. Continuous-wave, pulsed-wave, and color Doppler techniques were used to examine ventricular tissues and walls. All images were stored and sent to a second echocardiography specialist (D.P.K.) for blind evaluation of images. 
Body surface area (BSA) was calculated using Du Bois method. 10 Brachial artery flow-mediated dilation and peripheral vascular resistance We used a high-resolution two-dimensional Doppler ultrasound device (EnVisor CHD, Philips, Bothell, WA, USA) equipped with a high-frequency (7-12 MHz) linear vascular transducer probe and electrocardiographic imaging and monitoring software. FMD measurements were taken with the participants in the supine position, and a properly fitting pressure cuff was placed on the arm 5 cm above the cubital fossa. 11 Baseline brachial artery longitudinal diameters were assessed. Following that, the occlusion cuff was inflated to 50 mmHg above the systolic blood pressure (SBP) for 5 minutes and then deflated. Brachial artery diameters were measured for 60 seconds after deflation of the cuff. All analyses were performed offline and brachial artery measurements were made at the end of diastole (at R-wave peak on the electrocardiogram). FMD responses were expressed as percentage change from the baseline brachial artery diameter. PVR was calculated from mean blood pressure (MBP) and baseline blood flow obtained in the FMD test (PVR = MBP/ baseline blood flow in mmHg/cm.s -1 ). Maximum load test Maximum strength was assessed in the one-repetition maximum test (1-RM) for the squat, bench press and deadlift exercises, which are specifically performed at competitions, and through the total sum of these three exercises (total load). Distance runners attended a familiarization session within 48 hours of the test when the order of strength exercises and proper performance were introduced. For the 1-RM, the participants performed the maximum number of repetitions with the proposed load, up to a maximum of 10 repetitions. Exercise loads were increased according to Lombardi (1989) up to a point where participants were able to perform only one repetition with a maximum of 3 attempts to achieve the maximum load. Maximum oxygen uptake Maximum oxygen uptake (VO 2 peak or VO 2 max) was assessed through cardiopulmonary exercise test on a treadmill with respiratory gases collected (VO2000 model, Inbramed, Porto Alegre, Brazil). Powerlifters attended a familiarization session within 48 hours of the test where test procedures were introduced (Bruce protocol and mask placement for gas collection). The highest value, either VO 2 peak or VO 2 max was recorded at the end of the test as VO 2 max. Statistical analyses We performed the Shapiro-Wilk test to test normality of the data and homogeneity of variance was tested using Levene's test. All results are described as mean ± SD and confidence interval. We conducted Student's t Test for independent samples to assess differences between groups and calculated Pearson's linear correlation coefficients (α = 0.05 for all tests). All statistical analyses were performed using SPSS Statistics (version 21 for Windows). Results The participants had similar age and height (Table 1). However, all anthropometric measurements for PG were greater compared to distance RG. In turn, Table 2 shows loads for the squat, bench press, and deadlift exercises and total load (total sum of these three exercises). For all types of exercises, weight loads were higher in PG than RG as expected. The total load was greater by ~133% in PG than RG. The differences remained unchanged when loads were adjusted for body mass. Table 3 shows hemodynamic and cardiopulmonary parameters. 
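The derived quantities used in these methods (BSA, FMD, and PVR) are simple closed-form calculations; a short sketch follows. The Du Bois coefficients shown are the commonly cited ones and should be treated as an assumption, since the paper cites the method but does not restate the formula; the FMD and PVR expressions follow the definitions given in this section, and all example values are hypothetical.

```python
# Illustrative helpers for the derived quantities described above.
# The Du Bois coefficients are the commonly cited ones
# (BSA = 0.007184 * weight_kg**0.425 * height_cm**0.725); treat them as an
# assumption. FMD and PVR follow the definitions given in the Methods.

def bsa_du_bois(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m^2) by the Du Bois method (assumed coefficients)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def fmd_percent(baseline_diameter_mm: float, peak_diameter_mm: float) -> float:
    """Flow-mediated dilation as percentage change from the baseline diameter."""
    return 100.0 * (peak_diameter_mm - baseline_diameter_mm) / baseline_diameter_mm

def pvr(mean_bp_mmhg: float, baseline_flow_cm_s: float) -> float:
    """Peripheral vascular resistance = MBP / baseline blood flow (mmHg/(cm/s))."""
    return mean_bp_mmhg / baseline_flow_cm_s

# Hypothetical example values:
print(bsa_du_bois(95.0, 178.0))                          # ~2.1 m^2
print(fmd_percent(4.10, 4.45))                           # ~8.5 %
print(pvr(mean_bp_mmhg=97.0, baseline_flow_cm_s=65.0))   # mmHg/(cm/s)
```

Echocardiographic measures such as LV mass and LA volume reported later in the paper are indexed by dividing by a BSA value of this kind.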
Powerlifters had higher resting SBP (~10%) and resting DBP (~12%); the absolute differences between the two groups were 13.6 mmHg and 10.1 mmHg, respectively. Resting heart rate was higher in PG compared to RG (~19%, Δ15.7 bpm). VO 2 max was much higher in RG than PG (~65%): the highest VO 2 max value among powerlifters was lower than the lowest VO 2 max value among runners. Table 4 shows the echocardiographic results. As for cardiovascular adaptations, aorta diameter, left atrium (LA) diameter, right ventricle diameter, LV systolic diameter, and LV diastolic diameter were similar in both groups. However, PG showed greater interventricular septum thickness (Δ2.4 mm) and posterior wall thickness (Δ1.2 mm). They also showed greater LV mass (Δ46.5 g), but this difference disappeared after adjusting for BSA. As for diastolic function, LV diastolic volume, transmitral E wave, e' wave, and E/e' ratio were similar in both groups. However, LA volume (~22%), and LA volume adjusted for BSA (~40%) were found in PG, when compared to RG, but they were all within normal ranges. Although PG showed some degree of anatomical remodeling and different diastolic function parameters compared to RG, systolic function reflected in LV systolic volume, ejection fraction, and ejection fraction calculated by Simpson's rule were similar in both groups. Of the 40 participants, 9 (22.5%) had physiological ventricular hypertrophy in response to exercise; 10 (all powerlifters) had interventricular septum thickness greater than 11 mm. Of the 27 participants with LV mass greater than 225 g and LV mass adjusted by BSA greater than 115g/m 2 , 13 (82%) were PG and 14 (63%) RG. The correlations between training parameters and echocardiographic and cardiopulmonary variables in PG are displayed in Table 5. There was a direct correlation between interventricular septum thickness and weight load in the deadlift, squat, and total load. Interestingly, no correlation was found with time of exposure, i.e., duration in years of strength training among powerlifters. SBP levels were directly correlated with training intensity; and DBP showed a stronger correlation with duration of strength training. For runners, interventricular septum thickness and resting heart rate were inversely correlated with VO 2 max and duration of strength training (Table 6). Finally, FMD measurements were directly proportional to training intensity (% 1-RM) in PG and weight load for the squat (Table 7). For RG, no correlation of FMD values was found with cardiopulmonary variables and resting heart rate. Furthermore, FMD values were correlated with duration of powerlifting training (years) and daily duration of training session. However, this same correlation was not seen among runners. 12 Discussion Our study found that, compared with long-distance runners, powerlifters showed greater interventricular septum thickness, LV posterior wall thickness and LV mass. However, after adjusting for BSA, no difference was observed in LV mass.Cardiac function was similar in powerlifters and runners. Together, these parameters suggest that specific cardiac remodeling may occur as a result of training, but with no impairment of cardiac functions. A major finding of our study was similar FMD measurements in both powerlifters and runners despite PVR being higher in powerlifters. 
Although our findings are comparative and derive from a cross-sectional design, they suggest that high-intensity strength training does not necessarily cause damaging cardiovascular changes, as has been generally believed.

Cardiac parameters

Regarding cardiac parameters (anatomical structure, and diastolic and systolic function), the echocardiographic assessments showed increased interventricular septum thickness with slight or no chamber diameter reduction and a slight increase in posterior wall thickness in powerlifters compared to runners. These changes may arise because powerlifting involves a great amount of slow-speed contractions using high loads close to the maximum 13 in daily training sessions, leading to LV pressure overload. As for cutoff values, several studies of high-performance athletes have used cutoff values for pathological hypertrophy of 12-13 mm for maximum interventricular septum thickness and 55-60 mm for end-diastolic dimension, as described below. Whyte (2004) examined 306 British elite male athletes (judo, n = 22; skiing, n = 10; pole vault, n = 10; kayak, n = 11; rowing, n = 17; cycling, n = 11; power lifters, n = 29; triathlon, n = 51; modern pentathlon, n = 22; middle distance, n = 45; rugby, n = 30; tennis, n = 33; swimming, n = 19) and found interventricular septum thickness > 13 mm in ~3.0% of them. Riding (2012) examined 836 athletes (soccer, n = 586; basketball, n = 75; volleyball, n = 41; and handball, n = 35) and found interventricular septum thickness > 12 mm with typical features of concentric left ventricular hypertrophy. A further cohort drawn from multiple disciplines (wrestling, n = 14; judo, n = 13; luge, n = 13; field hockey, n = 13; table tennis, n = 11; pentathlon, n = 7; weight-lifting, n = 7; golfing, n = 6; baseball, n = 5; triathlon, n = 3; motor-racing, n = 3; body-building, n = 3; other modalities, n = 72) showed interventricular septum thickness > 13 mm in 1.1% of the athletes. Moreover, 45% and 14% of the athletes studied exhibited end-diastolic dimension > 55 mm and > 60 mm, respectively. Thus, if we use these cutoffs, despite some anatomical cardiac changes, none of the study participants showed cardiac dimensions consistent with pathological hypertrophy. However, it is important to note the strong correlation between the weight loads lifted in the squat and total load and cardiac dimensions including septum thickness, posterior wall thickness, and LV mass. Yet again, a possible explanation is that powerlifting involves a great amount of slow-speed contractions using high loads close to the maximum, leading to pressure overload. [9][10][11][12][13][14][15][16][17] With regard to LV mass, Gardin et al. 18 reported values of 225 g and 115 g/m² adjusted by BSA in individuals chronically exposed to pressure overload. LV mass was also measured in our study, and we found values of 282 g and 135 g/m² among powerlifters. Interestingly, runners also showed high LV mass (236 g and 128 g/m² adjusted by BSA). Regardless of the training modality, cardiac remodeling occurred in response to exercise training in both groups. Though still controversial, echocardiographic measurements indexed to BSA allow comparison of individuals of different body sizes. BSA is affected by fat mass, and fat mass is neither correlated with nor predictive of LV mass. 19 An alternative approach is to adjust echocardiographic parameters for lean mass. However, accurate measurements are not widely available, and substitute methods such as skin-fold thickness measurements are relatively inaccurate.
20,21 Diastolic function assessment in the study revealed consistently normal values in long-distance runners. 22 In contrast, lower LA volume and transmitral A-wave velocity measures were found in powerlifters although these values were within normal limits. The difference of LA volume measures between both groups was ~22%, and it was even more pronounced after adjustment for BSA (~40%). D'Andrea et al., 23 and coworkers have assessed LA volume and BSA-indexed LA volume in 350 endurance athletes and 245 strength athletes. 23 For BSA-indexed measures, these authors defined values between 29 and 33 mL/m 2 as mild LA enlargement and values greater than 33 mL/m 2 as moderate LA enlargement. Thus, our results were all below the cutoff values set in D'Andrea et al., 23 As for LV systolic function assessed through estimates of ejection fraction and ejection fraction calculated by Simpson's rule, the echocardiographic assessment showed values within the normal range in all cases. Blood pressure The association of aerobic training with lower resting blood pressure is well established. 24,25 But a growing body of evidence shows that strength training can have a similar effect on blood pressure, 26 though there is not yet a consensus in the literature. 27 However, high-intensity strength training has been reported to negatively affect blood pressure. A meta-analysis showed that training modalities that basically consist of strength training (powerlifting, bodybuilding, and Olympic weightlifting) are associated with a higher risk of high blood pressure with mean SBP of 131.3 ± 5.3 mmHg and mean DBP of 77.3 ± 1.4 mmHg. 28 These values are consistent with those found in our study (SBP 130.0 ± 8.2 and DBP 82.1 ± 6.9 mmHg). Vascular function FMD measurements were similar in both powerlifters and runners. This is an interesting finding given that these two training modalities have different biomechanical and metabolic characteristics. Exercise training has been shown as an effective means for the improvement of endotheliumdependent vasodilation capacity. 29 Among high-performance athletes, long-distance runners with above average normal cardiac function show lower arterial stiffness, lower oxidative stress, and increased endothelium-dependent dilation 30 capacity when compared to sedentary individuals of the same age. 31 These data suggest that outstanding cardiac performance in athletes may be associated with improved vascular function induced by aerobic exercise training. It is well known that aerobic exercise improves endothelial function by producing increased shear stress on the vessel walls during exercise. 32 Yet, it has been suggested that strength training can increase hemodynamic stress due to the mechanical compression of blood vessels during active movements together with excessive vascular tension produced during strength exercises. 7 Thus, we can speculate that high-intensity strength training could acutely affect endothelium-dependent vasodilation and lead to permanent damage in the long run. In this regard, impaired vascular function has been demonstrated in strength athletes, though it appears to be related to the use of anabolic agents rather than an effect of training. 33,34 Heffernan et al. found increased forearm reactive hyperemia in healthy young individuals after 6-month strength training. 
35 The most likely explanation for increased endothelium-dependent dilation in strength training is the assumption of the mechanical compression of resistance vessel walls during exercise, followed by blood flow release after cessation of exercise, producing a sharp increase in vessel wall shear stress. 36 Although training modalities involve different stimuli (running training: increased continuous blood flow; strength training: intermittent compression of the muscles and restoring blood flow) they ultimately produce the same effects on vessel wall shear stress. It is important to note that, despite increased blood pressure levels and greater posterior wall thickness and LV mass found in our study among powerlifters, they showed no cardiac and endothelial function impairment when compared to runners and all the parameters were above average. Therefore, high blood pressure found in powerlifters seems to be related to increased PVR rather than endothelial function impairment. Study strengths and limitations The key strengths of our study are the use of a homogeneous sample (within each group) and that all echocardiographic images were assessed by two independent examiners, one of them blinded. However, our data should be interpreted with caution due to some limitations including the small sample size (due to recruitment challenges as anabolic steroid use is common among powerlifters and few met our inclusion criteria), and the challenge of recruiting a sample of untrained healthy subjects; however, all parameters evaluated were compared with those findings of other studies and/or current guidelines. Conclusion Our study showed that cardiac remodeling seems dependent on training modalities and not on structural difference, as in BSA-indexed LV mass in both powerlifters and long-distance runners. Systolic and diastolic functions were preserved in both modalities. Powerlifters showed higher resting blood pressure, which can be explained by increased PVR. However, FMD measurements were similar in both groups studied and were well above average. Although our findings are comparative in nature and derive from a cross-sectional design, it is possible to speculate that high-intensity strength training for a significant number of years (~5 years or more) may be associated to borderline structural cardiac changes, though they are not accompanied by reduced cardiac function. Author contributions Conception and design of the research: Silva DV, Lehnen AM; Acquisition of data, Analysis and interpretation of the data, Statistical analysis and Writing of the manuscript: Silva DV, Waclawovsky G, Kramer AB, Stein C, Eibel B, Grezzana GB, Schaun MI, Lehnen AM; Obtaining financing: Waclawovsky G, Lehnen AM; Critical revision of the manuscript for intellectual content: Waclawovsky G, Eibel B, Grezzana GB, Schaun MI, Lehnen AM. Potential Conflict of Interest No potential conflict of interest relevant to this article was reported. Sources of Funding There were no external funding sources for this study. Study Association This article is part of the thesis of master submitted Diego Vidaletti Silva, from Instituto de Cardiologia -Fundação Universitária de Cardiologia (IC/FUC). Ethics approval and consent to participate This study was approved by the Ethics Committee of the Instituto de Cardiologia do RS / Fundação Universitária de Cardiologia under the protocol number #417492. All the procedures in this study were in accordance with the 1975 Helsinki Declaration, updated in 2013. 
Informed consent was obtained from all participants included in the study.
v3-fos-license
2020-03-13T14:41:56.977Z
2020-03-12T00:00:00.000
212681731
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/s13075-020-2137-y", "pdf_hash": "cf7143af0e74ef3b775e6103de45e3ee3932b995", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43570", "s2fieldsofstudy": [ "Medicine" ], "sha1": "cf7143af0e74ef3b775e6103de45e3ee3932b995", "year": 2020 }
pes2o/s2orc
Identification of molecules associated with response to abatacept in patients with rheumatoid arthritis Background Abatacept (ABA) is a biological disease-modifying antirheumatic drug (bDMARD) for rheumatoid arthritis (RA). The aim of this study was to identify molecules that are associated with therapeutic responses to ABA in patients with RA. Methods Peripheral blood was collected using a PAX gene Blood RNA kit from 45 bDMARD-naïve patients with RA at baseline and at 6 months after the initiation of ABA treatment. Gene expression levels of responders (n = 27) and non-responders (n = 8) to ABA treatment among patients with RA at baseline were compared using a microarray. The gene expression levels were confirmed using real-time quantitative polymerase chain reaction (RT-qPCR). Results Gene expression analysis revealed that the expression levels of 218 genes were significantly higher and those of 392 genes were significantly lower in the responders compared to the non-responders. Gene ontology analysis of the 218 genes identified “response to type I interferon (IFN)” with 24 type I IFN-related genes. RT-qPCR confirmed that there was a strong correlation between the score calculated using the 24 genes and that using OAS3, MX1, and IFIT3 (type I IFN score) (rho with the type I IFN score 0.981); the type I IFN score was significantly decreased after treatment with ABA in the responders (p < 0.05), but not in the non-responders. The receiver operating characteristic curve analysis of the type I IFN score showed that sensitivity, specificity, and AUC (95% confidence interval) for the responders were 0.82, 1.00, and 0.92 (0.82–1.00), respectively. Further, RT-qPCR demonstrated higher expression levels of BATF2, LAMP3, CD83, CLEC4A, IDO1, IRF7, STAT1, STAT2, and TNFSF10 in the responders, all of which are dendritic cell-related genes or type I IFN-related genes with significant biological implications. Conclusion Type I IFN score and expression levels of the nine genes may serve as novel biomarkers associated with a clinical response to ABA in patients with RA. Background Rheumatoid arthritis (RA) is characterized by chronic inflammatory polyarthritis, which leads to the destruction of the joints causing pain and disability [1]. Cytotoxic T lymphocyte-associated antigen 4 immunoglobulin fusion protein (CTLA4-Ig, abatacept (ABA)) is a biological disease-modifying antirheumatic drug (bDMARD) for RA. T cells are activated by the interaction of HLA class II molecules on antigen-presenting cells (APCs) with a T cell receptor (TCR) on the surface of T cells in the presence of CD80/86 on APCs and CD28 on T cells. CTLA4-Ig inhibits the activation of T cells by selectively modulating the CD80/86-CD28 interaction [2]. Abatacept is as efficacious as other bDMARDs in terms of clinical, structural, and functional outcomes [3]. In a recent meta-analysis, it was found that the risk of serious infections in humans was lower for treatments using ABA than that using other bDMARDs [4]. The prediction of therapeutic responses to ABA could considerably help identify patients that can benefit from the treatment. Whole blood transcriptomic profiling using microarrays has been widely used to investigate the action mechanisms and identifying appropriate biomarkers predicting the efficacy or safety of various drugs or treatment. Microarrays have been applied to some bDMARDs including ABA [5][6][7][8][9] to realize precision medicine for RA. 
Although some promising data have been reported, endeavor to develop novel biomarkers is still required. Here, we report the results of our study to identify molecules associated with therapeutic responses of ABA for patients with RA using a microarray. Patients A total of 168 RA patients who fulfilled the 2010 American College of Rheumatology/European League Against Rheumatism classification criteria for RA [10] and who received ABA for the first time were enrolled in this multicenter, prospective cohort study from Keio University, Saitama Medical University and Tokyo Medical and Dental University from June 2010 to December 2012 [11]. Blood samples for the microarray and RT-PCR were collected from 129 of the 168 patients. Forty-five of the 129 patients were bDMARD-naïve, and they were enrolled in this study. All patients had active RA despite the use of conventional synthetic disease-modifying antirheumatic drug (DMARD) for at least 3 months. Treatment efficacy was evaluated using the European League Against Rheumatism (EULAR) response criteria [12]. Patients were observed for 6 months after the initiation of ABA treatment. This study was registered at the University Hospital Medical Information Network Clinical Trials Registry (UMIN000005144). This study was approved by the Ethics Committee of the Tokyo Medical and Dental University Hospital (#836 and #M2015-553-01) and the other participating institutions. All subjects provided written informed consent. RNA extraction Blood from the patients was collected in PAXgene Blood RNA tubes (PreAnalytiX) at baseline and at 6 months after the initiation of ABA treatment. Total RNAs were extracted using PAXgene Blood RNA Kits (PreAnalytiX) following the manufacturer's instructions. The total RNA quantity and quality were determined using a NanoDrop-1000 spectrophotometer (Thermo Fisher Scientific) and an Agilent 2100 Bioanalyzer (Agilent Technologies). Microarray experiment Cy3-labeled complementary RNAs (cRNAs) were synthesized using Quick Amp Labeling Kits (Agilent). The cRNAs were hybridized at 65°C for 17 h to Whole Human Genome 44 K Microarrays (Agilent, Design ID: 014850). After washing, the microarrays were scanned using an Agilent DNA microarray scanner (Agilent). The intensity values of each scanned feature were quantified using Agilent Feature Extraction Software (Agilent). Microarray data analysis Signal intensity was adjusted using quantile normalization plus ComBat to reduce the batch effect [13,14]. After excluding poorly annotated probes and low signal probes (average signal < 100), 10,420 probes were extracted for further statistical analysis. We implemented a functional genomic analysis using the PANTHER Overrepresentation Test. The reference list included all Homo sapiens genes, and the annotation dataset was obtained from the GO Ontology database (released November 30, 2016). Real-time quantitative polymerase chain reaction analysis Real-time qPCR (RT-qPCR) analysis was performed using a Custom RT2 Profiler PCR Array (QIAGEN) and RT2 qPCR Primer Assays (QIAGEN) according to the manufacturer's instructions. cDNA was generated using 400 ng of total RNA. Real-time PCR was performed with a Roche Lightcycler 480 (Roche Diagnostics) using 4 ng cDNA per reaction. The thermal profile was as follows: denaturation (95°C, 1 min) and amplification (45 cycles; 95°C, 15 s; 60°C, 1 min). The second derivative maximum method was used to determine the crossing point (Cp) values. 
The relative expression of the target gene was normalized to 18S rRNA (QIAGEN).

Statistical analysis

The primary objective of this study was to identify novel molecules associated with therapeutic responses to ABA for patients with RA, and the secondary objective was validation of the results of the previous study [9]. Fisher's exact test and Student's t test were used to compare categorical and continuous variables between two groups, respectively. The differences in gene expression at baseline obtained using the microarray and RT-qPCR were analyzed using Welch's t tests; p < 0.05 was considered statistically significant. The type I IFN score was calculated using the Z-score method [15]. The correlation between the IFN signature with 24 genes and that with a smaller number of genes was analyzed by Spearman's correlation test. The optimal cut-off value for discriminating the responders and non-responders to ABA treatment was determined by receiver operating characteristic (ROC) curve analysis.

Clinical characteristics of the patients at baseline

Of the 45 bDMARD-naïve patients with RA from whom blood samples for microarray research were obtained, 27 were classified as good responders (described as responders hereafter, 60.0%); 10, as moderate responders (22.2%); and 8, as non-responders (17.8%) using the EULAR response criteria [12]. In order to extract response-associated molecules efficiently, we compared baseline data of the responders and non-responders (Table 1). There was no significant difference in age, sex, prevalence of rheumatoid factor and anti-cyclic citrullinated peptide (CCP) antibody, disease activity, and the use of prednisolone (PSL) between the two groups. In the responder group, the disease duration tended to be longer and methotrexate (MTX) was used more frequently. (Table 1 note: values are expressed as the mean ± SD; Fisher's exact test and Student's t test were used to compare categorical and continuous variables between the two groups, respectively; p < 0.05 was considered statistically significant. N.S., not significant; RF, rheumatoid factor; CCP, cyclic citrullinated peptide; DAS28-CRP, disease activity score in 28 joints using C-reactive protein; PSL, prednisolone; MTX, methotrexate.)

Genes associated with clinical response to ABA treatment

To identify novel biomarkers associated with clinical responses to ABA treatment, we compared gene expression levels at baseline between the responders and the non-responders. The expression levels of 218 genes were significantly higher and those of 392 genes were significantly lower in the responders compared to the non-responders (p < 0.05, false discovery rate (FDR) < 0.333, and fold change > 1.3) (Supplementary data 1). Gene ontology (GO) analysis of the 218 genes identified "response to type I interferon (IFN) (GO:0034340)" with 24 type I IFN-related genes: BST2, GBP2, IFI27, IFI35, IFI6, IFIT1, IFIT2, IFIT3, IFITM1, IFITM3, IRF7, ISG15, ISG20, MX1, MX2, OAS1, OAS2, OAS3, OASL, RSAD2, STAT1, STAT2, TRIM56, and XAF1 [16]. Twelve of the 24 type I IFN-related genes were elevated (p < 0.05, without conditions on FDR or fold change) in the responders compared to the moderate responders plus non-responders (n = 18) (Supplementary Table 1 and Supplementary data 2), and the GO analysis again identified "response to type I interferon (IFN)." The GO analysis of the 392 genes downregulated in the responders did not identify a specific group of genes. The previously reported genes associated with therapeutic response to ABA, namely elongation arrest and recovery-related genes and CD56-specifically expressed genes [9], were not included among the over- or under-expressed genes.
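As a rough illustration of the kind of per-probe screen described above, the sketch below applies Welch's t-test between responders and non-responders, a false-discovery-rate correction, and a fold-change filter with the thresholds quoted in the text (p < 0.05, FDR < 0.333, fold change > 1.3). It is not the authors' pipeline: the Benjamini-Hochberg procedure for the FDR, the use of linear-scale intensities for the fold change, and all names are our assumptions.

```python
# Minimal sketch (assumptions noted above), not the authors' pipeline.
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def screen_probes(expr: pd.DataFrame, responders: list, non_responders: list,
                  p_cut: float = 0.05, fdr_cut: float = 0.333,
                  fc_cut: float = 1.3) -> pd.DataFrame:
    """expr: probes x samples matrix of normalized, linear-scale intensities."""
    r = expr[responders]
    n = expr[non_responders]
    # Welch's t-test per probe (unequal variances), as in the Methods
    t_stat, p = ttest_ind(r, n, axis=1, equal_var=False)
    fold = r.mean(axis=1) / n.mean(axis=1)            # responder / non-responder
    fdr = multipletests(p, method="fdr_bh")[1]        # BH-adjusted p-values (assumed)
    out = pd.DataFrame({"p": p, "fdr": fdr, "fold_change": fold}, index=expr.index)
    up = out[(out.p < p_cut) & (out.fdr < fdr_cut) & (out.fold_change > fc_cut)]
    down = out[(out.p < p_cut) & (out.fdr < fdr_cut) & (out.fold_change < 1 / fc_cut)]
    return pd.concat([up.assign(direction="up"),
                      down.assign(direction="down")]).sort_values("p")
```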
Type I IFN score and treatment response to ABA

To evaluate the association of the type I IFN signature with treatment response to ABA, we calculated the type I IFN score using the average values of the Z-scored 24 type I IFN genes, as reported by Kennedy et al. [15]. The type I IFN score of the responders was significantly higher than that of the non-responders (p < 0.005, Fig. 1).

[Table 1 note: values are expressed as the mean ± SD. Fisher's exact test and Student's t test were used to compare categorical and continuous variables between the two groups, respectively; p < 0.05 was considered statistically significant. N.S., not significant; RF, rheumatoid factor; CCP, cyclic citrullinated peptide; DAS28-CRP, disease activity score in 28 joints using C-reactive protein; PSL, prednisolone; MTX, methotrexate.]

In order to reproduce the type I IFN score with fewer genes, we compared the type I IFN score calculated using the 24 genes with scores created from combinations of subsets of these genes (Supplementary Fig. 1A). We found a strong correlation between the 24-gene score and the score created using OAS3, MX1, and IFIT3 (rho = 0.981 with the 24-gene type I IFN score), designated as the type I IFN score hereafter (Supplementary Fig. 1B). To confirm the expression levels obtained by the microarray analysis and their association with the treatment response to ABA, we performed RT-qPCR; we quantified the expression levels of OAS3, MX1, and IFIT3 to calculate the type I IFN score using the same RNA samples used for the microarray analysis. The type I IFN score determined by RT-qPCR was significantly higher in the responders than in the non-responders (p < 0.0005, Fig. 2). We also compared the type I IFN score at baseline and at 24 weeks after the initiation of ABA treatment. The type I IFN score determined by RT-qPCR significantly decreased in the responders after treatment with ABA, albeit by only 15% (p < 0.05, Fig. 2); this was not observed for the non-responders.

Other treatment response-associated molecules confirmed by RT-qPCR

Since type I IFN is primarily produced by plasmacytoid dendritic cells (pDC), we selected, from among the 218 genes, dendritic cell-related genes or type I IFN-related genes with significant biological implications for quantification by RT-qPCR, as follows: BATF2, LAMP3, and CD83 are related to dendritic cell activation and maturation [17-19]; TNFSF10, BTLA, and IDO1 are expressed on dendritic cells (DCs) [20-24]; CLEC4A has a role in the production of type I IFN by pDC [25]; and STAT1, STAT2, and IRF7 have roles in type I IFN production signaling [26-28]. The expression levels of these 10 genes measured by RT-qPCR were significantly higher in the responders than in the non-responders, except for BTLA (Fig. 3). We also compared gene expression levels among patients with different disease activities at baseline and at 24 weeks after the initiation of ABA treatment and found that none of the genes was associated with disease activity at either time point (data not shown). Finally, we compared the expression levels of these 10 genes before and after treatment with ABA using RT-qPCR. The expression of LAMP3 and STAT1 significantly decreased after treatment with ABA; however, the percentage reduction was relatively small (LAMP3, 41.7%; STAT1, 17.4%; Fig. 3b, g).
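As a small illustration of how the RT-qPCR-based score above could be computed, here is a hedged Python sketch. The 2^-deltaCp form for relative expression normalized to 18S rRNA is a common convention assumed here (the text does not state the exact formula), and all variable names are hypothetical.

```python
import numpy as np
from scipy.stats import zscore, spearmanr

def relative_expression(cp_target, cp_reference):
    """Relative expression from crossing points, normalized to 18S rRNA (2^-deltaCp assumed)."""
    return 2.0 ** -(np.asarray(cp_target, float) - np.asarray(cp_reference, float))

def type1_ifn_score(expression_by_gene):
    """Average of per-gene Z-scores across patients (Kennedy et al. [15] style)."""
    z = np.vstack([zscore(values) for values in expression_by_gene.values()])
    return z.mean(axis=0)

# Hypothetical per-patient Cp arrays:
# score3 = type1_ifn_score({
#     "OAS3": relative_expression(cp_oas3, cp_18s),
#     "MX1": relative_expression(cp_mx1, cp_18s),
#     "IFIT3": relative_expression(cp_ifit3, cp_18s),
# })
# rho, _ = spearmanr(score_24_genes, score3)   # agreement with the 24-gene score
```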
Discussion

In this study, we demonstrated that the type I IFN score and the expression levels of BATF2, LAMP3, CD83, CLEC4A, IDO1, IRF7, STAT1, STAT2, and TNFSF10 are associated with a good clinical response to ABA in patients with RA.

The family of type I IFNs, which consists of IFN-alpha and IFN-beta, has an important role in regulating the immune response [28, 29]. High expression of the type I IFN signature has been found in 22 to 65% of patients with RA [27, 30] but was not associated with disease activity [31]. It has been reported that the type I IFN signature is highly expressed in the pre-clinical phase of RA together with increased levels of anti-CCP antibody and rheumatoid factor [32, 33]. In addition, IFN-treated patients often develop arthritis as an adverse drug reaction [34-36], which indicates that increased levels of type I IFN, triggered by a viral infection or other immunological stimuli, may be involved in the pathogenesis of pre-clinical or early RA. It has also been reported that arthritis was mitigated in interferon alpha/beta receptor alpha chain-deficient mice and in interferon regulatory factor-1-deficient mice [37, 38]. These reports, together with our data, may indicate that ABA exerts its clinical efficacy through the reduction of type I IFN activity in patients with RA.

[Fig. 1 caption: Comparison of type I IFN scores between responders and non-responders. The type I IFN score was calculated using the average values of the Z-scored 24 type I IFN genes, as reported by Kennedy et al. [15]. Responders to abatacept showed a higher type I IFN score than non-responders (p < 0.005, Mann-Whitney U test).]

[Fig. 2 caption: Type I IFN score using RT-qPCR at baseline and 24 weeks after the initiation of abatacept treatment. The expression levels of OAS3, MX1, and IFIT3 were determined by RT-qPCR to calculate the type I IFN score for the same RNA samples used for the microarray analysis. p < 0.05 was considered statistically significant. *p < 0.05, **p < 0.01, ***p < 0.001. R, responders; N, non-responders; N.S., not significant.]

[Fig. 3 caption: Comparison of mRNA expression levels of the selected genes at baseline between the responders and the non-responders. Expression levels of BATF2, LAMP3, CD83, TNFSF10, BTLA, CLEC4A, IDO1, STAT1, STAT2, and IRF7 were determined using RT-qPCR and compared between the responders and the non-responders (a-j). p < 0.05 was considered statistically significant. *p < 0.05, **p < 0.01, ***p < 0.001. R, responders; N, non-responders; N.S., not significant.]

The expression levels of genes related to the activation of dendritic cells, BATF2, LAMP3, and CD83, showed significant differences between responders and non-responders at baseline. LAMP3 was one of the differentially expressed genes between RA and osteoarthritis patients [39], and CD83 was expressed in more than 20% of pDCs in the RA synovium [40]. In addition, early-stage RA patients had elevated levels of soluble CD83 in plasma [41]. Since CD83 is expressed as a membrane-bound form on mature dendritic cells and as a soluble form in plasma, further studies are warranted to evaluate the predictive ability of CD83 mRNA or protein for responses to ABA treatment or to other treatments in patients with RA. Comparing the backgrounds at the start of ABA treatment, the percentage of MTX users differed between responders and non-responders. Recently, it has been reported that the expression level of type I IFN is higher in patients who do not respond to methotrexate [42].
As there was no difference in the type I IFN scores among MTX users and non-users in both responders and non-responders in this study (data not shown), the cause of the difference in type I IFN expressions between the responders and the non-responders is not attributed to the percentage of MTX use. This study has some limitations. First is the small sample size. The association of IFN signature with therapeutic response to ABA identified between responders (n = 27) and non-responders (n = 8) was supported by the comparison between the responders vs moderate-plus no-responders (n = 18). Second, we did not have validation cohort, and the risk of over-fitting of models should be considered. Our results need to be confirmed in a future study. Third, we could not validate the results of the previous study, in which the signature scores of elongation arrest and recovery-related genes, and CD56-specifically expressed genes were significantly elevated in non-responders [9]. The characteristics of the patient population analyzed and the definition of therapeutic response applied may account for the difference between the studies. Conclusion Type I IFN score and expression levels of the nine genes-BATF2, LAMP3, CD83, TNFSF10, CLEC4A, IDO1, STAT1, STAT2, and IRF7-may serve as biomarkers for predicting the clinical responses to ABA treatment in patients with RA. Additional file 1: Figure S1A, B. Correlation between the IFN signature with 24 genes and the IFN signature with a smaller number of genes from the 24 genes. Figure S2. The receiver operating characteristics curve of the type I IFN score for the responders. (PPTX 65 kb) Additional file 2: Table S1. Clinical characteristics of EULAR responders vs moderate and non-responders at baseline. Additional file 3. List of genes with higher and lower expression levels in EULAR good responders compared to non-responders. Additional file 4. List of genes with higher and lower expression levels in EULAR good responders compared to moderate and non-responders. Availability of data and materials The datasets generated and/or analyzed during the current study are not publicly available due to future analysis plans but are available upon request under the condition of collaboration. Ethics approval and consent to participate The protocol was approved by the Institutional Review Board of Tokyo Medical and Dental University (#836, #115, and #M2015-553) and by the respective boards of other participating institutions. Written informed consent was obtained from each patient. The study was performed in compliance with ethical guidelines for epidemiological research in Japan and the Helsinki Declaration (revised in 2008). Consent for publication Not applicable. Competing interests WY-K received honoraria from Eisai Co., Ltd., Asahikasei Pharma Corp. HY is currently an employee of Nippon Boehringer Ingelheim Co., Ltd. TT received Grants from Astellas Pharma Inc., Chugai Pharmaceutical Co, Ltd., Daiichi
v3-fos-license
2014-10-01T00:00:00.000Z
2010-08-16T00:00:00.000
15787641
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/1743-422X-7-191", "pdf_hash": "e3b39ab352b290eab188e94d013112666a565910", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43571", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "e3b39ab352b290eab188e94d013112666a565910", "year": 2010 }
pes2o/s2orc
Bioelectrical impedance analysis in clinical practice: implications for hepatitis C therapy BIA and hepatitis C Background Body composition analysis using phase angle (PA), determined by bioelectrical impedance analysis (BIA), reflects tissue electrical properties and has prognostic value in liver cirrhosis. Objective of this prospective study was to investigate clinical use and prognostic value of BIA-derived phase angle and alterations in body composition for hepatitis C infection (HCV) following antiviral therapy. Methods 37 consecutive patients with HCV infection were enrolled, BIA was performed, and PA was calculated from each pair of measurements. 22 HCV genotype 3 patients treated for 24 weeks and 15 genotype 1 patients treated for 48 weeks, were examined before and after antiviral treatment and compared to 10 untreated HCV patients at 0, 24, and 48 weeks. Basic laboratory data were correlated to body composition alterations. Results Significant reduction in body fat (BF: 24.2 ± 6.7 kg vs. 19.9 ± 6.6 kg, genotype1; 15.4 ± 10.9 kg vs. 13.2 ± 12.1 kg, genotype 3) and body cell mass (BCM: 27.3 ± 6.8 kg vs. 24.3 ± 7.2 kg, genotype1; 27.7 ± 8.8 kg vs. 24.6 ± 7.6 kg, genotype 3) was found following treatment. PA in genotype 3 patients was significantly lowered after antiviral treatment compared to initial measurements (5.9 ± 0.7° vs. 5.4 ± 0.8°). Total body water (TBW) was significantly decreased in treated patients with genotype 1 (41.4 ± 7.9 l vs. 40.8 ± 9.5 l). PA reduction was accompanied by flu-like syndromes, whereas TBW decline was more frequently associated with fatigue and cephalgia. Discussion BIA offers a sophisticated analysis of body composition including BF, BCM, and TBW for HCV patients following antiviral regimens. PA reduction was associated with increased adverse effects of the antiviral therapy allowing a more dynamic therapy application. Background Bioelectrical impedance analysis (BIA) has been introduced as a non-invasive, rapid, easy to perform, reproducible, and safe technique for the analysis of body composition [1]. It is based on the assumption that an electric current is conducted well by water and electrolyte-containing parts of a body but poorly by fat and bone mass. A fixed, low-voltage, high-frequency alternating current introduced into the human body or tissue is conducted almost completely through the fluid compartment of the fat-free mass [2]. BIA measures parameters such as resistance (R) and capacitance (Xc) by recording a voltage drop in applied current [3]. Capacitance causes the current to lag behind the voltage, which creates a phase shift. This shift is quantified geometrically as the angular transformation of the ratio of capacitance to resistance, or the phase angle (PA) [4]. PA reflects the relative contribution of fluid (resistance) and cellular membranes (capacitance) of the human body. By definition, PA is positively associated with capacitance and negatively associated with resistance [4]. PA can also be interpreted as an indicator of water distribution between the extra-and intracellular space, one of the most sensitive indicators of malnutrition [5,6]. Objective The primary objective of the present study was to prospectively evaluate effects of antiviral therapy on BIAderived PA as a simple method for the estimation of body cell mass (BCM), body fat (BF), extracellular mass (ECM), and total body water (TBW) in 37 patients with chronic HCV infection. 
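As a small illustration of the phase-angle definition given above (the angular transformation of the reactance-to-resistance ratio), a minimal Python sketch is shown below; the numerical values in the example are purely illustrative and are not measurements from this cohort.

```python
import math

def phase_angle_degrees(resistance_ohm, reactance_ohm):
    """Phase angle from whole-body resistance (R) and capacitive reactance (Xc)."""
    return math.degrees(math.atan2(reactance_ohm, resistance_ohm))

# Illustrative values only: R = 520 ohm and Xc = 55 ohm give a phase angle of about 6 degrees.
print(round(phase_angle_degrees(520.0, 55.0), 1))
```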
Patient population The study was performed on a consecutive case series of 37 patients with chronic HCV infection (October 2008 -September 2009). Inclusion criteria were age ≥ 18 years, chronic HCV infection, and a liver biopsy performed within the last 6 months. Exclusion criteria included decompensated liver disease, peripheral oedema, pre-existent malnutrition, decreased albumin levels (< 3.4 g/dl), hepatocellular carcinoma (HCC), active alcohol abuse, co-infection with HBV or HIV, chronic renal failure (GFR < 50 ml/min./1.73 m 2 ), and overt diabetes. Treated patients were divided into 2 groups according to HCV genotype and duration of antiviral therapy. All patients underwent baseline laboratory measurements. Full written informed consent was obtained from all subjects before entry into the study, and the clinic's ethics committee approved the protocol. All of the treated HCV patients received pegylated interferon-α (1.5 mg/kg body weight weekly s.c.) and ribavirin (12 mg/kg body weight daily p. o.) as antiviral therapy and completed the 24 or 48 week cycle with the starting dose. Patients with the need of dose adjustment were excluded in order to avoid effects of the dose on alterations in body composition. In addition, none of the included patients needed supportive medication with granulokine or epo. Moreover, no patient received other antiviral or steatosis-inducing drugs. Occurrence and severity of side effects was monitored by a study nurse who was blinded to the results of BIA measurements. Virology All HCV patients had a positive anti-HCV status (CMIA anti-HCV, Abbott Laboratories, Wiesbaden, Germany), positive HCV-RNA in serum, and increased liver enzymes. HCV genotyping was performed with INNO-LIPA HCV II kits (Siemens Healthcare Diagnostics, Marburg, Germany) according to the manufacturer's instructions. Amplicor-HCV-Monitor (Perkin-Elmer, Norwalk, Connecticut, USA) was used to quantify HCV-RNA levels in serum. The detection limit was < 615 copies/ml. BIA measurement procedures BIA was performed by a registered study nurse (M. N.). Impedance measurements were taken after 10 minutes of rest with a BIA impedance analyzer (BIA 101, Akern Bioresearch, Florence, Italy). Briefly, two pairs of electrodes were attached on the right hand and right foot with the patient in supine position, with legs slightly apart, and the arms not touching the torso [4] (Figure 1). Calculation of TBW, BF, and BCM was performed as previously described elsewhere [24][25][26]. Statistical analysis Statistical analysis was performed using the SPSS 11.5 system (SPSS Incorporation, Chicago, Illinois, USA). Continuous variables are presented as means ± standard deviation (SD) whereas categorical variables are presented as count and proportion. Comparison between groups were made using the Mann-Whitney U test or the Student's test for continuous variables, and the χ 2 or Fisher's exact probability test for categorical data. A pvalue < 0.05 was considered to be statistically significant. Multiple comparisons between more than two groups of patients were performed by ANOVA and subsequent least-significant difference procedure test. Spearman's correlation coefficient was calculated for testing the relationship between different quantities in a bivariate regression model. Table 1 shows the baseline characteristics of 37 patients with chronic HCV infection and 10 therapynaïve subjects with HCV infection (5 with genotype 1 and 5 with genotype 3). 
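The between-group and before/after comparisons described in the statistical analysis paragraph above can be sketched as follows. This is only an illustration of the named tests; the use of a paired test for the before/after comparison is an assumption, since the text does not specify it.

```python
from scipy import stats

def compare_two_groups(values_group1, values_group2):
    """Continuous variable between two independent groups (Mann-Whitney U, two-sided)."""
    return stats.mannwhitneyu(values_group1, values_group2, alternative="two-sided")

def compare_categorical(table_2x2):
    """Categorical variable between two groups, e.g. [[a, b], [c, d]] counts (Fisher's exact test)."""
    return stats.fisher_exact(table_2x2)

# Hypothetical before/after comparison within one treatment group (paired test assumed):
# stat, p = stats.wilcoxon(body_fat_before, body_fat_after)
```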
Patients' demographic data

Genotype 1 was present in 15 patients (8 males, 7 females, mean age 48.1 ± 12.6 y), whereas 22 patients had genotype 3 (10 males, 12 females, 37.5 ± 9.5 y). Patients with genotype 3 were treated for 24 weeks, whereas subjects with genotype 1 received antiviral therapy for 48 weeks. Virological response was observed in 73.3% of patients with genotype 1 and in 86.3% of patients with genotype 3. In addition, we performed ultrasound examinations to exclude ascites and used FibroScan to measure the extent of liver fibrosis; however, we found no positive correlation between BIA measurements and liver stiffness (data not shown).

Body weight is significantly reduced in patients with genotype 1 receiving antiviral treatment for 48 weeks

As demonstrated in Figure 2A, body weight significantly decreased in patients with genotype 1 following antiviral treatment for 48 weeks (78 ± 13.1 kg before therapy versus 71 ± 15.3 kg after therapy; p < 0.001). Body weight was also reduced in subjects with genotype 3 receiving antiviral medication for 24 weeks, though not statistically significant (75.5 ± 20.7 kg before therapy versus 68.5 ± 21 kg after therapy; n.s.). In contrast, almost no alteration in body weight was observed in the control group, irrespective of the genotype (genotype 1: 88.8 ± 3.1 kg at baseline, 87.4 ± 12.3 kg after 48 weeks; genotype 3: 86.6 ± 2.1 kg at baseline, 85.2 ± 2.2 kg after 24 weeks; n.s.).

[Table 1 note: values are presented as means ± SD. Genotype 1 was present in 15 patients with hepatitis C, whereas 22 patients had genotype 3. Additionally, a group of 10 subjects with untreated HCV was used as a control. No relationship was found between BIA measurements and laboratory data.]

Body cell mass is reduced in HCV patients after antiviral therapy

In HCV genotype 1 patients, BCM decreased from 27.3 ± 6.8 kg before antiviral treatment to 24.3 ± 7.2 kg (p = 0.02; Figure 2C). We also observed a significant reduction in BCM in patients with HCV genotype 3 (27.7 ± 8.8 kg before versus 24.6 ± 7.6 kg after treatment; p = 0.01). Again, no changes in BCM were observed in untreated HCV patients (genotype 1: 28.0 ± 2.9 kg at baseline versus 26.6 ± 3.3 kg after 48 weeks; genotype 3: 27.2 ± 3.5 kg at baseline versus 26.0 ± 3.3 kg after 24 weeks; p > 0.5).

Determination of extracellular mass revealed no significant alterations in patients infected with hepatitis C following antiviral regimens

As depicted in Figure 3A, ECM changed neither in HCV genotype 1 (28.1 ± 4.4 l before and 27.7 ± 5.2 l after therapy; p > 0.05) nor in HCV genotype 3 patients (27.4 ± 5.2 l before and 28.1 ± 6.0 l after therapy; p > 0.05). Similarly, no significant changes in ECM were detected within the untreated HCV cohort for either genotype.

Total body water is significantly reduced in HCV patients with genotype 1 following antiviral treatment for 48 weeks

TBW was reduced in patients with genotype 1 following antiviral treatment for 48 weeks (41.4 ± 7.9 l pre-therapy vs. 40.8 ± 9.5 l post-therapy; p < 0.01; Figure 3B), whereas no significant alterations could be observed for HCV genotype 3 patients (40.3 ± 10 l pre-therapy vs. 40.4 ± 9.3 l post-therapy; n.s.). In addition, no significant changes in TBW were present in patients with untreated HCV infection (genotype 1: 41.2 ± 1.3 l at baseline, 40.8 ± 0.8 l after 48 weeks; genotype 3: 39.0 ± 1.5 l at baseline, 38.2 ± 1.7 l after 24 weeks; n.s.).
Adverse effects of antiviral treatment are more prominent in HCV-infected patients with alterations in body composition In a further sub-analysis we found a reduction in BF and BCM to a similar degree in both HCV genotypes following antiviral therapy -without any correlation to the recorded adverse effects of antiviral treatment (Table 2). Interestingly, a decrease in TBW was more often accompanied with episodes of fatigue and cephalgia in patients with genotype 1. Moreover, we observed that a decline in PA was more often associated with flu-like symptoms -as revealed for patients with genotype 3. We speculate that this may be related to a delayed dehydration in this cohort of patients. Discussion BIA has been used for the assessment of malnutrition in patients with liver cirrhosis. In this setting, use of BIA has been demonstrated to offer a considerable advantage over other widely available but less accurate methods like anthropometry or the creatinine approach [27]. Despite some limitations in patients with ascites, BIA is a reliable bedside tool for the determination of BCM in cirrhotic patients. Pirlich and colleagues, however, demonstrated that removal of ascites had only minor effects on BCM as assessed by BIA [28]. In a recently published study by Antaki et al., BIA was used for the evaluation of hepatic fibrosis in patients with chronic HCV infection [23]. The aim was to assess whether BIA can differentiate between minimal and advanced liver fibrosis in a cohort of 20 HCV-infected patients. The authors found no significant differences with respect to PA, R, or Xc for the whole body and the right upper quadrant measurements in any axes -irrespective if minimal or advanced fibrosis was present. Furthermore, Romero-Gomez and co-investigators found that in HCV patients infected by genotype 3a, hepatic steatosis correlated significantly with intrahepatic HCV-RNA load. However, in genotype 1, hepatic steatosis was associated with host factors such as leptin levels, BMI, percentage of BF, and visceral obesity [29]. Following antiviral treatment, we found a significant reduction in body fat in patients with genotype 3. Interestingly, major alterations in BMI were not present. We suggest a loss in fatty tissue, which might be compensated e.g. by increased water storage. Although we have no evidence for this mechanism, as we did not further investigate this issue. For clinical purpose, body fat comprises an intrinsic risk factor for diabetes, hyperlipidemia, NAFLD, and cardio-vascular diseases whereas a higher body cellular mass is not associated to known health risks. In addition, analyzing TBW by BMI method may further improve to predict a patient's hydration level while ECM contains the metabolically inactive parts of the body components including bone minerals and blood plasma. In a further cross-sectional analysis by Delgado-Borrego and colleagues comparing 39 HCV-positive with 60 HCV-negative orthotopic liver transplant (OLT) recipients, the authors found by BIAderived measurements that HCV infection and BMI were independent predictors of insulin resistance (IR), respectively. HCV infection was associated with a 35% increase in IR [30]. The present study was conducted to investigate whether BIA can be used to monitor changes or alterations in body composition parameters in patients with chronic HCV infection following antiviral therapy for 24 or 48 weeks. 
Although compromised by the small sample size, our results suggest that bioelectrical impedance analysis does have the sensitivity required to distinguish significant differences in patients with chronic HCV infection with respect to body weight, BF, BCM, and TBW, in part related to the genotype. We also included a control group with untreated HCV infection, whereas several studies of BIA in healthy subjects have shown mean PA values ranging from 6.3 to 8.2° [21,31]. Our findings for PA in untreated HCV patients fell within that range. It should be noted that BIA can be affected by both BMI and age. A higher BMI is known to correlate with a higher PA, possibly secondary to the effect of adipose tissue on resistance [32]. Other studies have suggested a gradual decrease in PA with age [31,33]. Our results did not show a correlation of PA with gender, age, or biochemical and virologic response rates (data not shown) in either group, probably due to the small sample size. However, to the best of our knowledge, this is the first study demonstrating alterations in body composition measured by BIA in patients with chronic HCV infection following antiviral treatment.

The identification of prognostic factors in patients infected with HCV is of considerable importance for the clinical management of this disease. The current study was performed to investigate whether BIA-derived phase angle or alterations in body composition can predict or monitor the outcome of antiviral therapy in HCV-infected patients.

[Table 2 note: symptoms of fatigue and cephalgia were more evident in patients with genotype 1, whereas flu-like symptoms were more present in patients with genotype 3 following antiviral treatment (*p < 0.05).]

Our study demonstrates that a reduction in PA was clinically more often accompanied by episodes of flu-like syndromes in patients with genotype 3, whereas symptoms such as fatigue and cephalgia were more evident after a decline in total body water in patients with genotype 1 (Table 2). This information would be helpful in patient management and may imply, for example, that fluid support should be planned or adjusted in patients with genotype 1 during antiviral treatment, whereas flu-like symptoms in patients with genotype 3 should be treated earlier with, e.g., acetaminophen. As a step toward further understanding the clinical applications of BIA-derived assessments, we propose that similar studies with larger sample sizes are needed to validate the prognostic significance of PA and TBW determinations in patients infected with HCV. Investigations into other non-invasive modalities for the assessment of alterations in body composition in patients with hepatitis C infection should be pursued.
v3-fos-license
2019-05-27T06:51:38.343Z
2019-04-23T00:00:00.000
150198703
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://se.copernicus.org/articles/11/37/2020/se-11-37-2020.pdf", "pdf_hash": "6dc6f3e5d474cd019afc56d10d3c09e3a0c80688", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43572", "s2fieldsofstudy": [ "Geology" ], "sha1": "b2aa142d3a49ecefaa5ad86f0fbf4a6ffdf8e843", "year": 2020 }
pes2o/s2orc
Can subduction initiation at a transform fault be spontaneous? . We present an extensive parametric exploration of the feasibility of “spontaneous” subduction initiation, i.e., lithospheric gravitational collapse without any external forcing, at a transform fault (TF). We first seek candidates from recent subduction initiation events at an oceanic TF that could fulfill the criteria of spontaneous subduction and retain three natural cases: Izu–Bonin–Mariana, Yap, and Matthew and Hunter. We next perform an extensive exploration of conditions allowing for the spontaneous gravitational sinking of the older oceanic plate at a TF using 2-D thermomechanical simulations. Our parametric study aims at better delimiting the ranges of mechanical properties necessary to achieve the old plate sinking (OPS). The explored parameter set includes the following: crust and TF densities, brittle and ductile rhe-ologies, and the width of the weakened region around the TF. We focus on characterizing the OPS conditions in terms of (1) the reasonable vs. unrealistic values of the mechanical parameters and (2) a comparison to modern cases of subduction initiation in a TF setting. When modeled, OPS initiates following one of two distinct modes, depending mainly on the thickness of the overlying younger plate. The astheno-sphere may rise up to the surface above the sinking old plate, provided that the younger plate remains motionless (verified for ages ≥ 5 Myr, mode 1). For lower younger plate ages (typically ≤ 2 Myr Introduction The process of spontaneous subduction deserves to be explored again following recent discoveries during Ocean Drilling Project 375 in the backarc of the proto-Izu-Bonin subduction zone.The nature and age of the basaltic crust drilled there appeared to be similar to those of the forearc basalts underlying the boninites of the present Izu-Bonin forearc (Hickey-Vargas et al., 2018).The consequences of this discovery are controversial since they are supposed to support the concept of spontaneous subduction for some authors (Arculus et al., 2015;Stern and Gerya, 2018), whereas for other authors, they do not (Keenan and Encarnación, 2016;Lallemand, 2016). 
The notion of "spontaneous subduction" originates from two observations: (1) Uyeda and Kanamori (1979) first described the Mariana-type extreme subduction mode, in which an old oceanic plate sank, driven by its weight excess, into a vertical slab in association with backarc extension. (2) A few years later, in the early 1980s, analysis of volcanic rocks from Bonin Island and deep sea drilling (Leg 60) in the adjacent Izu-Bonin-Mariana (IBM) subduction zone forearc revealed rocks called boninites that combined the characteristics of arc lavas and MORB (mid-ocean ridge basalt; Natland and Tarney, 1981; Bloomer and Hawkins, 1983). A conceptual model was then proposed by Stern and Bloomer (1992) reconciling these observations, in which an old plate may sink into the mantle under its own weight along the weak boundary formed by a transform fault. Numerical models (Hall and Gurnis, 2003; Gurnis et al., 2004) first failed to support this process of spontaneous subduction and concluded that a tectonic force was required to initiate subduction. Later, they finally succeeded in simulating spontaneous subduction in specific contexts, such as lithospheric collapse around a plume head (Whattam and Stern, 2015) or the conjunction of a large density contrast with a very weak fault zone between the adjacent lithospheres (Leng and Gurnis, 2015).

In this study, we adopt the definition of Stern and Gerya (2018): spontaneous subduction is caused by forces originating at the subduction initiation site and not elsewhere (Fig. 1b). They define three different settings where spontaneous subduction may develop: passive margin, transform fault (TF), or plume head. The only Cenozoic examples that were attributed by Stern and Gerya (2018) to potential sites of spontaneous subduction initiation, i.e., IBM and Tonga-Kermadec, correspond to the TF setting (Fig. 1a). In these two examples, the relics of the subduction initiation stage date back to the Eocene and are thus subject to controversy. We first recall the natural examples for which oceanic TFs or fracture zones might have evolved into a subduction zone. Then, numerical models addressing subduction initiation processes in a similar context are analyzed before developing our own numerical approach. The range of parameters allowing for spontaneous subduction initiation in our models will finally be compared with the reasonable values characterizing the natural processes.

1.1 From oceanic transform faults or fracture zones to subduction in nature

Table 1 and Fig. 1 summarize Cenozoic settings where oceanic TFs or fracture zones underwent deformation that sometimes evolved into subduction and at other times did not. The regions are classified in Table 1 such that the older plate (OP) underthrusts the younger in the first group (Fig. 1b, c, d: IBM, Yap, Matthew and Hunter, Mussau, Macquarie, and Romanche), the downgoing plate is the youngest in the second group (Fig. 1d, e: Hjort, Gagua, Barracuda and Tiburon), and finally those for which it appears to be impossible to determine the relative age of one plate with respect to the other at the time of initiation (Fig.
1f, Gorringe, St Paul and Owen).The analysis of all these natural cases shows that the 3-D setting and far-field boundary conditions are likely to play a major role in subduction initiation and on the selected age (old/young) of the subducting plate.Earlier studies showed that compression prevailed in the upper plate at the time of initiation for most of them, while it is unknown for IBM and Yap.In these two regions, subduction started more than 20 Myr ago (Hegarty and Weissel, 1988;Ishizuka et al., 2011), but, soon after they were initiated, they un-derwent one of the strongest episodes of subduction erosion on Earth (Natland and Tarney, 1981;Hussong and Uyeda, 1981;Bloomer, 1983;Lallemand, 1995), so all remnants of their forearc at the time of initiation were consumed (Lallemand, 2016, and references therein).Geological evidence of the stress state at initiation is thus either subducted or deeply buried beneath the remnant Palau-Kyushu Ridge.To date, some authors (e.g., Ishizuka et al., 2018;Stern and Gerya, 2018) still argue that spreading, i.e., extension, occurred over a broad area from the backarc to the forearc at the time of subduction initiation.Backarc extension concomitant with subduction initiation under compressive stress is compatible, as exemplified by the recent case of Matthew and Hunter at the southern termination of the New Hebrides subduction zone (Patriat et al., 2015, Fig. 1d).There, the authors suggest that the collision of the Loyalty Ridge with the New Hebrides Arc induced the fragmentation of the North Fiji Basin (Eissen spreading center and Monzier rift), whose extension yielded, in turn, a compressive stress along the southern end of the transform boundary (or STEP fault), accommodating the trench rollback of the New Hebrides trench.It is important to note that the geodynamic context of the Matthew and Hunter region is very similar to the one of the IBM protosubduction (Deschamps andLallemand, 2002, 2003;Patriat et al., 2015;Lallemand, 2016).Rifting and spreading in a direction normal to the TF has been documented at the time of subduction initiation.Since the conditions of spontaneous subduction do not require compressive stress, but rather the sinking of the oldest plate under its weight excess, and because of the lack of geological records of what happened there, we consider that IBM and Yap subduction initiation might be either spontaneous (Fig. 1b) or forced (Fig. 1c).To decipher between these two hypotheses, we conduct a series of numerical simulations. 
Modeling of spontaneous subduction initiation at a transform fault in previous studies Numerical experiments have shown that old plate sinking (OPS) could spontaneously occur for a limited viscosity contrast between lithospheres and the underlying asthenosphere (Matsumoto and Tomoda, 1983) in a model neglecting thermal effects.However, without imposed convergence, subduction initiation failed when thermal diffusion was taken into account, even in the most favorable case of an old and thick plate facing a section of asthenosphere (Hall and Gurnis, 2003;Baes and Sobolev, 2017), unless the density offset at the TF was emphasized by including a thick and buoyant crust at the younger plate (YP) surface (Leng and Gurnis, 2015).In most cases showing the instability of the thick plate, lateral density contrasts at the TF are maximized by imposing at the TF an extremely thin younger plate (0 or 1 Myr old at the location where instability initiates) in front of a thicker plate, whose age is chosen between 40 and 100 Myr, either in 2-D (Nikolaeva et al., 2008) or 3-D (Zhu et al., 2009(Zhu et al., , 2011;;Zhou et al., 2018).For similar plate age pairs, Gerya et al. (2008) showed that successful spontaneous initiation requires the OP slab surface to be sufficiently lubricated and strongly weakened by metasomatism to decouple the two adjacent plates as plate sinking proceeds, while the dry mantle is supposed to be moderately resistant to bending.Assuming such "weak" rheological structure, OPS triggering occurs and results in an asthenosphere rise in the vicinity of the subduction hinge, which yields a fast spreading (from a few centimeters per year to > 1 m yr −1 ).It has been described as a "catastrophic" subduction initiation (Hall and Gurnis, 2003).This catastrophic aspect is hampered when thicker YPs are considered (10 to 20 Myr old), when crustal and mantle rheologies are less weak, and when shallow plate weakening develops progressively through time, e.g., by pore fluid pressure increase with sea water downward percolation in a low-permeability matrix (Dymkova and Gerya, 2013). These previous numerical studies have helped to unravel the conditions leading to OPS without any imposed external forcing.Nevertheless, recent incipient subduction zones, the most likely to correspond to initiation by spontaneous sinking at a TF, are not all associated with a significant plate age offset at plate boundaries (Matthew and Hunter, Yap, Table 1).We thus propose a new investigation of the conditions of OPS to address the following three questions.What are the mechanical parameter ranges allowing for OPS, especially for the TF settings that are the closest to spontaneous subduction conditions?Are these parameter ranges reasonable?Are the modeled kinetics and early deformation compatible with natural cases observations? We choose a simplified setup, without fluid percolation simulations and in 2-D, to allow for a broad parameter exploration with an accurate numerical resolution. 
Model setup

The numerical model solves the momentum, energy, and mass conservation equations, assuming that rocks are incompressible, except for the thermal buoyancy term in the momentum equation and for the adiabatic heating term in the energy equation (extended Boussinesq approximation). As shear heating has been shown to significantly improve strain localization within the subduction interface (Doin and Henry, 2001; Thielmann and Kaus, 2012), it is included in the heat conservation equation, as well as a uniform heat production (Table 2). The simulation box, 2220 km wide and 555 km thick, is chosen to be large enough to simulate convective interactions between the shallow lithospheres and the deep mantle that may be involved in the process of subduction initiation (Fig. 2). Density (ρ) is assumed to be temperature and composition dependent:

ρ(C, T) = ρ_ref(C) [1 − α (T − T_s)],   (1)

where ρ_ref is the reference density at the surface, C is composition (mantle, oceanic crust, or weak material; Sect. 2.3), α is the thermal expansion coefficient, T is temperature, and T_s is the surface temperature (Table 2). For the mantle, ρ_ref is fixed to 3300 kg m−3, while ρ_ref for the oceanic crust and the weak material is varied from one experiment to another (Sect. 2.4).

Rheology

We combine a pseudo-brittle rheology with a non-Newtonian ductile law. The pseudo-brittle rheology is modeled using a yield stress, τ_y, increasing with depth, z:

τ_y = C_0 + γ(C) ρ g z,   (2)

where C_0 is the cohesive strength at the surface (Table 2), γ is a function of composition C, ρ is density, and g is the gravity acceleration. The parameter γ represents the yield strength increase with depth and can be related to the coefficient of internal friction of the Coulomb-Navier criterion (Sect. 2.5). To simplify, we tag γ as the brittle parameter. The relationship between the lithostatic pressure ρgz and the normal stress σ_n applied on the brittle fault will be derived in Sect. 2.5.1. The brittle deviatoric strain rate is computed assuming the following relationship (Doin and Henry, 2001): ε̇ = ε̇_ref (τ/τ_y)^n_p, where ε̇ is the second invariant of the deviatoric strain rate tensor, ε̇_ref is a reference strain rate, and n_p is a large exponent (Table 2). In the plastic domain, strain rates are close to zero if τ < τ_y but become very large as soon as stress exceeds the yield stress τ_y. Recalling that τ = ν ε̇, the plastic viscosity, ν_b, is written as follows:

ν_b = (τ_y / ε̇) (ε̇ / ε̇_ref)^(1/n_p).   (3)

A dislocation creep rheology is simulated using a non-Newtonian viscosity ν_d, defined by

ν_d = B_0 exp[(E_a(C) + V_a ρ g z) / (n R T)] ε̇^((1−n)/n),   (4)

where B_0 is a pre-exponential factor, E_a is the activation energy depending on composition C, V_a is the activation volume, n is the non-Newtonian exponent, and R is the ideal gas constant (Table 2). The effective viscosity ν_eff is computed assuming that the total deformation is the sum of the brittle and ductile deformations. Note that the brittle behavior acts as a maximum viscosity cutoff. Regarding strain rate, a minimum cutoff is set to 2.6 × 10−21 s−1, but no maximum cutoff is imposed.

[Fig. 2 caption, partial: ... (Ribe and Christensen, 1994; Arcay, 2012). The red dotted line represents the hot thermal anomaly (ΔT = +250 °C) imposed in some experiments. (b) Close-up on the TF structure. L_w is the width at the surface of the younger plate and of the older plate (aged A_y and A_o Myr, respectively) over which the oceanic crust is assumed to have been altered and weakened by the TF activity. The meaning of labels 1 to 4 is given in Sect. 2.3.]
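The viscosity combination described in this section can be illustrated with the short Python sketch below. It follows Eqs. (2)-(4) as reconstructed above; all numerical parameter values are placeholders (the actual values are in Table 2, which is not reproduced in this excerpt), and the harmonic combination of the brittle and ductile viscosities is one standard reading of "total deformation is the sum of brittle and ductile deformations", not necessarily the code's exact implementation.

```python
import numpy as np

R_GAS = 8.314            # J mol-1 K-1
EPS_MIN = 2.6e-21        # s-1, minimum strain-rate cutoff quoted in the text

def effective_viscosity(strain_rate, temperature_k, depth_m, rho, g=9.81,
                        C0=1.0e6, gamma=1.6, B0=1.0e5, Ea=465.0e3, Va=1.0e-5,
                        n=3.0, n_p=30.0, eps_ref=1.0e-15):
    """Sketch of the brittle (Eqs. 2-3) / ductile (Eq. 4) viscosity combination.
    Parameter values are illustrative placeholders, not the paper's Table 2 values."""
    eps = max(strain_rate, EPS_MIN)
    tau_y = C0 + gamma * rho * g * depth_m                          # Eq. (2): yield stress
    nu_b = (tau_y / eps) * (eps / eps_ref) ** (1.0 / n_p)           # Eq. (3): brittle viscosity
    nu_d = (B0 * np.exp((Ea + Va * rho * g * depth_m) / (n * R_GAS * temperature_k))
            * eps ** ((1.0 - n) / n))                               # Eq. (4): dislocation creep
    return 1.0 / (1.0 / nu_b + 1.0 / nu_d)                          # strain rates add at common stress

# Example call for a point at 30 km depth and 900 degC:
# effective_viscosity(1e-15, 900.0 + 273.0, 30e3, 3300.0)
```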
Initial thermal structure and boundary conditions We investigate a wide range of lithosphere age pairs, the younger plate (YP) age, A y , varying from 0 to 40 Myr, and the older plate (OP) age, A o , from 5 to 150 Myr (Table 3), to cover the plate age ranges observed in nature (Table 1).The thickness of a lithosphere is here defined by the depth of the 1200 • C isotherm, z LB (A), classically estimated using the half-space cooling model (Turcotte and Schubert, 1982) by where κ is the thermal diffusivity (Table 2) and A is the plate age.However, the half-space cooling model, as well as some variations of it such as the global median heat flow model (GDH1; Stein and Stein, 1992), have been questioned (Doin et al., 1996;Dumoulin et al., 2001;Hasterok, 2013;Qiuming, 2016).Indeed, such conductive cooling models predict too cold young oceanic plates (by ∼ 100 to 200 • C) compared to the thermal structure inferred from high-resolution shear wave velocities, such as in the vicinity of the East Pacific Rise (Harmon et al., 2009).Similarly, worldwide subsidence of young seafloors is best modeled by taking into account, in addition to a purely lithosphere conductive cooling model, a dynamic component, likely related to the underlying mantle dynamics (Adam et al., 2015).Recently, Grose and Afonso (2013) have proposed an original and comprehensive model for oceanic plate cooling, which accurately reproduces the distribution of heat flow and topography as a function of seafloor age.This approach leads to young plates (< 50 Myr) 100 to 200 • C hotter than predicted using the half-space cooling and Parsons and Sclater (1977) models, especially in the shallowest part of the lithosphere.This discrepancy notably comes from, first, heat removal in the vicinity of the ridge by hydrothermal circulation and, second, the presence of an oceanic crust on top of the lithospheric mantle that insulates it from the cold (0 • C) surface and slows down its cooling and thickening.Taking into account these two processes reduces the surface heat flows predicted by the GDH1 model by 75 % (Grose and Afonso, 2013).Our study focuses on young oceanic plates that are the most frequent at TFs (A y 60 Myr, Table 1).Therefore, we calculate lithospheric thicknesses z LB (A) as 0.75 of the ones predicted by half-space cooling model.Moreover, plates warmer than predicted by the half-space cooling model are consistent with the hypothesis of small-scale convection occurring at the base of very young oceanic lithospheres, i.e., younger than a threshold encompassed between 5 and 35 Myr (Buck and Parmentier, 1986;Morency et al., 2005;Afonso et al., 2008).An early small-scale convection process would explain short-wavelength gravimetric undulations in the plate motion direction in the central Pacific and east-central Indian oceans detected at plate ages older than 10 Myr (e.g., Haxby and Weissel, 1986;Cazenave et al., 1987).Buck and Parmentier (1986) have shown that the factor erf −1 (0.9) ∼ 1.16 in Eq. 
(5) must be replaced by a value encompassed between 0.74 and 0.93 to fit the plate thicknesses simulated when early small-scale convection is modeled, depending on the assumed asthenospheric viscosity. This is equivalent to applying a corrective factor between 0.74/1.16 ≈ 0.64 and 0.93/1.16 ≈ 0.80, which is consistent with the lithosphere thicknesses inferred from heat flow modeling by Grose and Afonso (2013). Between the surface and z_LB(A), the thermal gradient is constant. The transform fault, located at the middle of the box top (x = 1110 km), is modeled by a stair-step function joining the isotherms of the adjacent lithospheres (Fig. 2). We test the effect of the TF thermal state, which should be cooled by conduction in the case of an inactive fracture zone, in a few simulations (Sect. 3.3). Moreover, we test the possible influence of the asthenospheric thermal state at initiation, either uniform over the whole box or locally marked by thermal anomalies resulting from the small-scale convection observed in a preliminary computation of mantle thermal equilibrium (Fig. 2). The results show that the process of subduction initiation, whether it succeeds or fails, does not significantly depend on the average asthenospheric thermal structure. Nevertheless, in a few experiments, we impose at the start of the simulation a thermal anomaly mimicking a small plume head ascending right below the TF, 200 km wide and ∼75 km high, whose top is located at 110 km depth at the start of the simulation (Fig. 2). The plume thermal anomaly ΔT_plume is set to 250 °C (Table 3). Regarding boundary conditions, slip is free at the surface and along the vertical sides. We test the effect of the box bottom condition, either closed and free-slip or open to mantle in- and outflows. When the box bottom is open, a vertical resistance against flow is imposed along the box base, mimicking a viscosity jump to a viscosity 10 times higher than above (Ribe and Christensen, 1994; Arcay, 2017). The results show that the bottom mechanical condition does not modify the future evolution of the fracture zone. The thermal boundary conditions are depicted in Fig. 2.

Lithological structure at simulation start

The TF lithological structure is here simplified by considering only three different lithologies: the vertical layer forming the fault zone between the two oceanic lithospheres (label 1 in Fig. 2), assumed to be the weakest material in the box; the oceanic crust (label 3); and the mantle (label 4). In all experiments, the Moho depth is set to 8.3 km for both oceanic lithospheres, and the width of the vertical weak zone forming the fault (1) is equal to 8.3 km. The depth of the weak vertical zone (1) depends on the chosen older plate age, A_o: it is adjusted to be a bit shallower than the OP base, by ∼15 to 30 km. Furthermore, we want to test the effect of the lateral extent of this weakening outside the fault gouge, L_w (label 2 in Fig. 2). Indeed, depending on the type of TF, the weak zone width may be limited to ∼8 km, such as for the Discovery and Kane faults (Searle, 1983; Detrick and Purdy, 1980; Wolfson-Schwehr et al., 2014), implying L_w = 0 km in our model, or, in contrast, the weak zone width may reach 20 to 30 km, such as for the Quebrada or Gofar TFs (Searle, 1983; Fox and Gallo, 1983); thus, L_w can be varied up to 22 km. In most experiments, we impose the same value for the lateral extent of crust weakening on both lithospheres: L_w(A_o) = L_w(A_y), except in a few simulations.
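As a worked illustration of the plate-thickness scaling discussed in Sect. 2.2 above (Eq. 5 with the 0.75 reduction factor), the following sketch computes the depth of the 1200 °C isotherm as a function of plate age. The thermal diffusivity value is a generic placeholder, since Table 2 is not reproduced in this excerpt.

```python
from math import sqrt
from scipy.special import erfinv

SECONDS_PER_MYR = 3.156e13
KAPPA = 1.0e-6           # m2 s-1, placeholder thermal diffusivity
CORRECTION = 0.75        # reduction applied to the half-space cooling prediction

def plate_thickness_km(age_myr):
    """Depth of the 1200 degC isotherm: 0.75 * 2 * erfinv(0.9) * sqrt(kappa * age)."""
    z_half_space_m = 2.0 * erfinv(0.9) * sqrt(KAPPA * age_myr * SECONDS_PER_MYR)
    return CORRECTION * z_half_space_m / 1.0e3

# With these placeholder values, a 5 Myr old plate is ~22 km thick and a 40 Myr old plate ~62 km.
```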
Parametric study derived from force balance The first-order forces driving and resisting subduction initiation at a transform fault indicate which mechanical parameters would be worth testing to study OPS triggering.Without any external forcing, the unique driving force to consider is (1) the plate weight excess relative to the underlying mantle.Subduction is hampered by (2) plate resistance to deformation and bending; (3) the TF resistance to shearing; and (4) the asthenosphere strength, resisting plate sinking (e.g., McKenzie, 1977;Cloetingh et al., 1989;Mueller and Phillips, 1991;Gurnis et al., 2004).We vary the mechanical properties of the different lithologies forming the TF area to alter the incipient subduction force balance.The negative plate buoyancy ( 1) is related to the plate density, here dependent only on the thermal structure and plate age A (Sect.2.2) since we do not explicitly model density increase of metamorphized (eclogitized) oceanic crust.Nonetheless, we vary the crust density, ρ c , imposed at the start of simulation along the plate surface to test the potential effect on plate sinking.We also investigate how the density of the weak layer forming the interplate contact, ρ TF , which is not well known, may either resist plate sinking (if buoyant) or promote it (if dense).The plate strength and flexural rigidity (2) are varied in our model by playing on different parameters.First, we test the rheological properties of the crustal layer both in the brittle and ductile realms, by varying γ c and E c a (Eqs. 2 and 4).Second, the lithospheric mantle strength is varied through the mantle brittle parameter, γ m , that controls the maximum lithospheric stress in our model.Third, we vary the lateral extent (L w ) of the shallow lithosphere weakened domain, related to the crust alteration likely to occur in the vicinity of the TF. We study separately the influence of these six mechanical parameters (ρ c , ρ TF , γ c , E c a , γ m , L w ) for most plate age pairs.The TF strength (3) is often assumed to be quite low at the interplate contact (Gurnis et al., 2004;Gerya et al., 2008).We thus fill the TF "gouge" with the weak material (labeled 1 in Fig. 2) and, in most experiments, set it as γ TF = 5 × 10 −4 .In some experiments, we replace the weak material filling the TF gouge by the more classical oceanic crust (labeled 3 in Fig. 2) to test the effect of a stiffer fault.In that case, γ TF = γ c = 0.05 and L w = 0 km: the TF and both plate surfaces are made of gabbroic oceanic crust (Table 3).Note that when γ c = γ TF = 5 × 10 −4 , the weak layer and the oceanic crust are mechanically identical, and the weak layer then entirely covers the whole plate surface (L w = 1100 km).Similarly, as the activation energy E c a is the same for the oceanic crust and the weak material, assuming a low ductile strength for the TF is equivalent to covering the whole plate surface by the weak layer (setting L w = 1100 km). Solid Earth, 11, 37-62, 2020 www.solid-earth.net/11/37/2020/Apart from the six main physical properties that are repeatedly tested (Sect.2.5), we perform additional experiments for a limited number of plate age combinations to investigate a few extra parameters.In this set of simulations, we vary the asthenosphere resistance competing against plate sinking (4), either by changing the asthenospheric reference viscosity at the lithosphere base or by inserting a warm thermal anomaly simulating an ascending plume head (Fig. 
2).We also test the influence of the lithosphere ductile strength that should modulate plate resistance to bending (2) by varying the mantle activation energy, E m a .At last, we further explore the TF mechanical structure (3) by imposing an increased width of the TF weak gouge, and different thermal structures of the plate boundary forming the TF. 2.5 Ranges of investigated physical properties 2.5.1 Brittle properties for oceanic crust, transform fault and mantle lithologies The brittle parameter γ in Eq. ( 2) is related to the tectonic deviatoric stress, σ xx , and to the lithostatic pressure, σ zz (Turcotte and Schubert, 1982): σ xx = γ σ zz .One may derive the relationship under compression between γ and the classical coefficient of static friction, f s , defined by f s = τ/σ n , where τ is the shear stress along the fault (Turcotte and Schubert, 1982): where λ is the pore fluid pressure coefficient, ρ w is the water density, and p w is the pore fluid pressure, assuming that p w = ρ w gz if λ = 0 and p w = ρgz if λ = 1.The brittle parameter γ moderately depends on the average density in the overlying column, ρ (Fig. S1 in the Supplement).The internal friction coefficient, f s , initially considered approximately constant (f s ∼ 0.6 to 0.85; Byerlee, 1978) is suggested to vary with composition from recent experimental data.For a dry basalt, f s would be encompassed between 0.42 and 0.6 (Rocchi et al., 2003;Violay et al., 2012).Assuming high pore fluid pressure in the oceanic crust (λ ≥ 0.45), γ c from Eq. ( 6) is then close to 0.8 (Fig. S1).If the oceanic crust is altered by the formation of fibrous serpentine or lizardite, f s decreases to 0.30 (Tesei et al., 2018), entailing γ c ∼ 0.05 if the pore fluid pressure is high (λ = 0.9), which we consider the minimum realistic value for modeling the crustal brittle parameter (Fig. 3a).In the presence of chrysotile, f s may even be reduced to 0.12 at low temperature and pressure (Moore et al., 2004), which would reduce γ c to ∼ 0.01 (for λ = 0.9), deemed as the extreme minimum value for γ c .Note that relationship between the presence of fluid and its effect on the effective brittle strength (λ value) depends on the fault network and on the degree of pore connectivity, which may be highly variable (e.g., Carlson and Herrick, 1990;Tompkins and Christensen, 1999). At mantle depths, the effect of pore fluid pressure on brittle strength is more questionable than at crustal levels.To simplify, we suppose the pore fluid pressure p w to be very low, close to zero, assuming that the lithospheric mantle is dry in absence of any previous significant deformation.The coefficient of internal friction from Eq. ( 7) for a dry mantle decreases from f s = 0.65 (Byerlee, 1978) to f s ∼ 0.35 or 0.45 if peridotite is partly serpentinized (Raleigh and Paterson, 1965;Escartín et al., 1997), leading to γ m between 2.8 and 0.8.However, assuming γ m = 2.8 would lead to an extremely high lithospheric strength (∼ 1 GPa at only 11 km depth) since our rheological model neglects other deformation mechanisms.We thus restrict the maximum γ m to 1.6, which has been shown to allow for a realistic simulation of subduction force balance for steady-state subduction zones (Arcay et al., 2008).The most likely interval for γ m is eventually [0.8-1.6] (Fig. 
3b). The mantle brittle parameter γ_m might decrease to ∼0.15 (f_s = 0.12) if chrysotile is stable, which is nevertheless unexpected at mantle conditions. Lower γ_m values are considered unrealistic, even if γ_m = 0.02 has been inferred to explain plate tectonic convection (in the case of a mantle devoid of a weak crustal layer; Korenaga, 2010).

Crust and transform fault densities

The oceanic crust density is varied from the classical value for a wet gabbro composition in the pressure-temperature conditions prevailing at the surface (2920 kg m−3; Bousquet et al., 1997; Tetreault and Buiter, 2014). Crust density in the blueschist facies reaches 3160 kg m−3, but we also try higher densities by imposing a mantle value. This would correspond to crust eclogitization and to the heaviest crust, maximizing the column weight within the older plate (OP) to promote its gravitational instability (Fig. 3c). Rocks forming the fault "gouge" are likely to be vertically highly variable in composition, possibly rich in buoyant phases such as serpentine and talc close to the surface (e.g., Cannat et al., 1991) and more depleted in hydrous phases at deeper levels. Below the Moho, down to its deepest portion, the fault may be composed of a mix of oceanic crust and altered mantle (Cannat et al., 1991; Escartín and Cannat, 1999). The density of the fault gouge is thus likely to increase from the surface toward the deeper part of the fault, from a hydrated gabbro density to a mantle density. We thus test ρ_TF values spanning from a gabbroic density to a mantle one (Fig. 3d). Note that these densities correspond to reference values at surface conditions (T = 0 °C and P = 0 kbar), knowing that density is here a function of temperature through the coefficient of thermal expansion (Table 2).

Activation energy for the crust

The most realistic interval for the crustal activation energy E_a^c can be defined from experimental estimates E_a^exp for an oceanic crust composition. Nonetheless, E_a^exp values are associated with a specific power law exponent, n, in Eq.
Activation energy for the crust The most realistic interval for the crustal activation energy E c a can be defined from experimental estimates E exp a for an oceanic crust composition. Nonetheless, E exp a values are associated with a specific power law exponent, n, in Eq. (4), while we prefer to keep n = 3 in our numerical simulations for the sake of simplicity. Therefore, to infer the E c a interval in our modeling using a non-Newtonian rheology, we assume that, without external forcing, mantle flows will be comparable to sublithospheric mantle convective flows. The lithosphere thermal equilibrium obtained using a non-Newtonian rheology is equivalent to the one obtained with a Newtonian ductile law if the Newtonian E a is equal to the non-Newtonian E a multiplied by 2/(n + 1) (Dumoulin et al., 1999). As sublithospheric small-scale convection yields strain rates of the same order as plate tectonics (∼ 10 −14 s −1; Dumoulin et al., 1999), this relationship is used to rescale the experimentally measured activation energies in our numerical setup devoid of any external forcing. We hence compute the equivalent activation energy as E c a = E exp a (n + 1)/(n e + 1), where n e is the experimentally defined power law exponent. The activation energy E exp a in the dislocation creep regime is encompassed between the one for a microgabbro, 497 kJ mol −1 (Wilks and Carter, 1990, with a non-Newtonian exponent n e = 3.4), and the one of a dry diabase, i.e., 485 ± 30 kJ mol −1 (Mackwell et al., 1998, with n e = 4.7 ± 0.6). For a basalt, E exp a has been recently estimated to 456 kJ mol −1 (Violay et al., 2012, with n e ∼ 3.6). Lower values inferred for other lithologies are possible but less likely, such as for a wet diorite (E exp a = 212 kJ mol −1, n e = 2.4; Ranalli, 1995), and are used to define the lower bound of the "yellow" range for E c a (Fig. 3e). A few experiments have shown that E exp a can be as low as 132 kJ mol −1 (n e = 3) if hornblende and plagioclase are present in high proportions (Yongsheng et al., 2009). This activation energy, as well as the one of a wet quartzite (E exp a = 154 kJ mol −1, n e = 2.3; Ranalli, 1995), though used in numerous thermomechanical modeling studies of subduction, is considered an unrealistic value in a TF setting. Nevertheless, a low plate ductile strength promoted by a thick crust has been suggested to favor spontaneous subduction initiation at a passive margin (Nikolaeva et al., 2010). We choose not to vary the crustal thickness but to test instead, in a set of experiments, the effect of a very low crustal activation energy (equal to 185 kJ mol −1, Fig. 3e). Distance from the transform fault of crust weakening Regarding the lateral extent of the weak material, L w, we test values in agreement with the observed large or relatively small TFs (L w ≤ 20 km, as described in the previous section) and increase them up to the extreme value of 50 km (Fig. 3f). The simulation results prompt us to perform experiments in which both lithospheres are entirely covered by the weak layer (L w ∼ 1110 km) to achieve the conditions of spontaneous subduction initiation. Numerical code and resolution The models are performed using the thermo-chemo-mechanical convection code developed by Christensen (1992), which is based on an Eulerian and spline finite element method. Conservation equations are solved to obtain two scalar fields, temperature and stream function (Christensen, 1984). The simulation box is discretized into 407 × 119 nodes. The resolution is refined in the x and z directions in the area encompassing the TF, i.e., between 966 and 1380 km away from the left-hand box side and for depths shallower than 124 km, where node spacings are set to 1.67 km. Outside the refined domain, node spacing is 10.5 km in both directions. The tracer density is uniform over the simulation box (∼ 3.2 per km 2), verifying that at least nine tracers fill the smallest meshes. This numerical discretization has been tested and validated in a previous study (Arcay, 2017). Note that because the total pressure is not directly solved by the code of Christensen (1992), the lithostatic pressure is used instead in Eq. (4).
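The activation-energy rescaling described above can be made concrete with a short script; it only assumes the Newtonian-equivalence relation of Dumoulin et al. (1999) quoted earlier, so that E c a = E exp a (n + 1)/(n e + 1), and it uses the experimental values listed in this section.

```python
# Rescaling of experimentally derived activation energies (exponent n_e) to the model's
# n = 3 power law, assuming E_newtonian = E * 2/(n+1), hence E_model = E_exp * (n+1)/(n_e+1).
N_MODEL = 3

def rescale(e_exp_kj, n_e):
    """Equivalent activation energy (kJ/mol) for the n = 3 rheology."""
    return e_exp_kj * (N_MODEL + 1.0) / (n_e + 1.0)

experimental = {
    "microgabbro (Wilks & Carter, 1990)":  (497.0, 3.4),
    "dry diabase (Mackwell et al., 1998)": (485.0, 4.7),
    "basalt (Violay et al., 2012)":        (456.0, 3.6),
    "wet diorite (Ranalli, 1995)":         (212.0, 2.4),
    "wet quartzite (Ranalli, 1995)":       (154.0, 2.3),
}
for name, (e_exp, n_e) in experimental.items():
    print(f"{name:38s} E_exp = {e_exp:5.0f} kJ/mol (n_e = {n_e}) -> E_a^c = {rescale(e_exp, n_e):5.0f} kJ/mol")
# the wet-quartzite value rescales to ~187 kJ/mol, close to the 185 kJ/mol tested above
# as the very low crustal activation energy
```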
The original code has been adapted to allow for the simulation of three different lithologies within the simulation box (Doin and Henry, 2001;Morency and Doin, 2004): the mantle, oceanic crust, and a weak layer that would mimic an altered or hydrated and, hence, weakened region around a TF, with specific densities and rheologies (see Sect. 2.3).Composition is tracked by markers advected along flow lines using a fourth-order Runge-Kutta scheme (van Keken et al., 1997). Results Here, we summarize first the experiments without OPS and then the simulations showing spontaneous gravitational instability of the OP.Next, we detail the effect of the different mechanical and geometrical parameters.Table 3 compiles the experiments explicitly quoted in the main paper.The exhaustive list of simulations performed in this study can be found in the Supplement: the experiments are compiled as a function of the plate age pair imposed at the TF in Table S1 in the Supplement, while they are ranked according to the simulated deformation regime in Table S2. Overview of simulated behaviors other than old plate sinking We obtain numerous behaviors different from OPS, varying as a function of (1) the plate age pair (A y , A o ) and (2) the combination of densities, rheological parameters and the weak layer lateral extent (L w ).This large simulation set shown in Fig. 4 represents ∼ 73 % of the 302 experiments presented in this study, which do not show a clear OPS.First, no tectonic deformation is modeled in many experiments, i.e., deformation only occurs within the asthenosphere below the plates but is almost totally absent at shallower depths where plate cooling takes place (Fig. 4.1).This is notably obtained if the YP is too old, that is, for A y ≥ 3 up to 17 Myr depending on the physical parameter set (Fig. 6). Second, we observe the YP ductile dripping, leading to the plate dismantlement, corresponding to a series of several fast lithospheric drips, soon after the simulation start (Fig. 4.2), modeled when ductile strengths are low.The OP is not affected and solely cools through time. Third, a transient retreat of the YP is modeled, in very few experiments, while the OP remains motionless (Fig. 4.3).This occurs if the YP is very young (A y ≤ 2 Myr) and if the TF density, ρ TF , is low (equal to the gabbro density).Because of its buoyancy, the weak material forming the TF rises up to the surface as soon as simulation starts.This fast vertical motion (velocities ≥ 50 cm yr −1 ) is partly transmitted horizontally and deforms the weaker and younger plate, triggering a backward motion.Velocities vanish as plate cooling proceeds. Fourth, the YP sinking is triggered in some models (Fig. 4.4).The gravitational instability of the YP is very similar to the one expected for a thick plate spontaneous sinking (as sketched in Fig. 1).The polarity of the YP sinking depends on the density imposed for the TF interface (ρ TF ) and on whether (or not) the weak layer covers the YP surface (L w > 0 km).The duration of YP spontaneous sinking is always very brief (< 0.5 Myr): either the process does not propagate fast enough to compete against plate cooling and strengthening (Fig. 4.4b) or the diving YP segment is limited by the imposed length L w of lithosphere recovered by the weak material (Fig. 4.4a). Fifth, in one experiment, a double subduction initiation is observed: while YP sinking initiates, the OP also becomes unstable and starts sinking when a wider portion of weak and dense material (L w = 50 km) is included (Fig. 
4.5).Nevertheless, the OP slab rapidly undergoes slab break off once the L w long weak segment has been entirely subducted (Fig. 4.5, 0.62 Myr), which we deem as too short to represent a successful OPS initiation since the subducted slab length is limited to 50 km. Sixth, the vertical subduction of the YP initiates at the TF when the TF material is as dense as the mantle and vertically drags the YP into the mantle (Fig. 4.6).The motion can be transmitted away from the TF up to 500 km backward but systematically entails a YP stretching at the surface, as the slab is young and soft (A y ≤ 7 Myr).This prevents subduction from lasting more than 1.5 Myr.Moreover, plate cooling frequently freezes the downward YP flow (Fig. 5.6, bottom row). Finally, in ∼ 40 % of experiments in which OPS initiation appears to start, the process freezes up and does not evolve into a developed plate sinking.The OP bending stops very early, typically in less than ∼ 0.4 Myr (Fig. 4.7), especially for OP older than 80 Myr.The velocities within the OP then vanish quite fast (Fig. 4.7a).OPS also aborts even when the mechanical decoupling does occur at the TF if hot mantle flows are too slow and/or if the lateral extent of the weak material L w is narrow (Fig. 4.7b). Modes of OPS triggering Spontaneous subduction is modeled when one of the two lithospheres is gravitationally unstable, which occurs if the total lateral density offset (vertically integrated) at the plate boundary is not balanced by plate, mantle, and TF resistance to deformation, as summarized in Sect.2.4.We observe the spontaneous sinking of the OP for quite various pairs of lithosphere ages (Fig. 5), which mostly depends on the chosen set of rheological parameters and on the presence of the weak layer at the whole plate surface.When simulated, OPS occurs following one of two basic ways, later called mode 1 and mode 2. Mode 1 happens in approximately one-half of OPS cases (Fig. 5a) and is the closest to the mechanism envisioned in the spontaneous subduction concept (Fig. 1b).The mantle flow generated by the OP sinking triggers an asthenospheric upwelling focusing along the weak TF "channel" up to the surface ("asthenosphere invasion" in Fig. 1b), while the YP remains mostly motionless.The subduction process develops due to a fast hinge rollback.As mantle velocities are huge, exceeding tens of meters per year in many cases, the asthenosphere catastrophically invades the box surface, filling a domain that is soon larger than 200 km, as depicted in Fig. 5a. In mode 2, asthenosphere invasion does not occur at the surface and is often limited to the YP Moho.Mantle flow induced by OP bending drags the YP toward the OP (Fig. 5b, c).As a consequence, a significant mass of dense crust is transferred from the top of the YP to the one of the OP, where the accumulated crust builds a crustal prism that loads the OP, amplifying its bending and sinking.This phenomenon is observed in numerous cases, systematically if the YP age is 2 Myr (Table 3), and in several cases when A y is either 0 or 5 Myr (simulations S1a to S2b, S22j-k).In both initiation modes, velocities at the slab extremity are very high (14.6 cm yr −1 in simulation S1a, 0 vs. 2, up to ∼ 180 cm yr −1 in simulations S10a, 0 vs. 80 and S11a, 0 vs. 
100).The duration to form a slab longer than ∼ 200 km is less than 1.5 Myr.The kinetics of the OPS process modeled in this study are consequently always very fast.This swiftness most likely comes from the significant weakness that must be imposed in our modeling setup to obtain OPS triggering (see Sect. 3.3.2). Influence of tested parameters The regime diagrams displayed as a function of the plate age pair (A y , A o ) sum up our main results obtained as a function of the assumed rheological set, density field, and the lithological distribution at the surface (oceanic crust vs. TF weak material; Fig. 6).These eight regime diagrams bring out the respective influence of the main physical parameters tested in this paper, especially for deciphering conditions allowing for OPS.YP dismantlement, basically occurring when the ductile crust is softened, is not represented in the regime diagrams (discussed at the end of the section). Transform fault and oceanic crust densities Densities strongly affect the evolution of the TF system.If the TF weak medium is buoyant (ρ TF = ρ c = 2920 kg m −3 ), the TF material rises up to the surface forming a small and localized buoyant diapir that pushes laterally on the younger lithosphere (Fig. 4.3).The YP either shortens if it is weak enough (A y ≤ 2 Myr, Fig. 6a) in a backward motion or starts sinking if the YP thickness is intermediate (2 < A y < 20 Myr).On the other hand, a heavy material filling the TF gouge (ρ TF = 3300 kg m −3 ) inverts the aforementioned mechanics by pulling the YP downward at the TF to form a vertical subduction (Fig. 4.6, labeled YPVSI for "YP vertical subduction initiation" in Table 3).Note that when the fault density ρ TF is very high, the oceanic crust density, ρ c , buoyant or not, does not actually affect the mode of YP deformation (compare diagrams b and d in Fig. 6). Lateral extent of the weak material The results presented in Sect.3.3.1 are obtained when the weak material is localized at the TF only (L w = 0 km).Assuming that the weak material laterally spreads out away from the TF (L w > 0 km), the mode of YP vertical subduction switches to YP sinking by gravitational instability.This is observed when young plates are modeled on both sides of the TF (A y < 5 Myr, A o < 40 Myr, Fig. 6c).The boundary between the dense weak material and the buoyant and stronger oceanic crust more or less acts as a "secondary" plate boundary, decoupling the two lithological parts of the YP, which does not occur if there is no buoyancy contrast between the crust and the weak material (Fig. 6e). Moreover, we observe that enlarging the weak domain enables OPS in some cases if the YP is very thin (A y ≤ 2 Myr), regardless of the oceanic crust density (Fig. 6c, e), although OPS aborts fast, as OP subduction is limited to the weakened length L w (set to 50 km, Fig. 6e).Simulations show that OP sinking is enhanced if L w is much wider than expected in nature (L w ≥ 50 km, Fig. 
3f, g, h).Otherwise, the backward propagation of bending is hindered, which stops the OPS process.We conclude that a very wide area of crust weakening on both sides of the TF is a necessary condition to simulate OPS.We quantify more accurately for different pairs of plate ages with minimum length L w , allowing for a developed OPS in the Supplement (Sect.S2).These age pairs are selected to cover a wide range of YP ages (2 to 20 Myr).We find that the domain of weakened crust to impose in the vicinity of the TF is too large to be realistic, at least for classical mantle rheology, with the only exception being the setting with a very thin YP (A y = 2 Myr).These results suggest the strong resistant characteristic of thick YP in OPS triggering. Crust brittle strength What is the threshold in crust weakening enabling OPS?A usual value of the crust brittle parameter (γ c = 0.05) does not allow for OPS (Fig. 6a to e).Our simulations show that if γ c is 100 times lower (γ c = 5 × 10 −4 ), OPS can initiate for numerous plate age pairs if the whole crust is mechanically weak (L w = 1100 km, Fig. 6f) but such a brittle parameter seems unrealistic.To determine the threshold in γ c allowing for OPS, we choose a high plate age offset, 2 vs. 80, the most propitious for OPS (keeping L w = 1100 km).We determine that the threshold in γ c is encompassed between 10 −3 and 5 × 10 −3 (simulations S18b, c, d, and e), which is still less than the lower bound of acceptable γ c ranges (Fig. 3a).We hypothesize that for a small plate age offset, the threshold in γ c would have to be even lower to observe OPS triggering. Plate bending and mantle brittle parameter Surprisingly, a very low crust brittle parameter is not sufficient for simulating OPS for some large plate age offsets, such as for (A y , A o ) = 10 vs. 100 or 5 vs. 120 (simulations S41a and S29b, Table 3, Fig. 6f).A mechanism is thus hindering OPS.We assume that thick OPs are too strong to allow for bending.We test this by reducing the mantle brittle parameter, γ m , that affects the maximum lithospheric stress in our brittle-viscous rheology, from 1.6 (Fig. 6f) to 0.1 (Fig. 6g) and 0.05 (Fig. 6h).The domain of the plate age pair where OPS can occur is then greatly enlarged toward much lower plate age offsets.We note that in most experiments showing "mantle weakening"-induced OPS, OPS stops by an early slab break off, once the infant slab reaches 200 to 300 km length because the reduced slab strength cannot sustain a significant slab pull (Fig. 5c). In a limited set of experiments, we determine the threshold in γ crit m below which OPS occurs.This threshold depends on the OP age: γ crit m ∼ 0.06 for the plate age pair 10 vs. 40 (simulations S37c to f) but the threshold is higher (γ crit m ≥ 0.1) for the plate age pair 10 vs. 50 (sim.S38a).The thicker the OP is, the easier the OPS triggering, as one may expect.We next compare experiments in which the OP and the YPs are both progressively thickened by considering the following age pairs: 10 vs. 50 (sim.S38a), 15 vs. 60 (S47a), 20 vs. 80 (S51a-e), and 25 vs. 100 (S57a-b).The experiments show that γ crit m is ≥ 0.1, ≥ 0.1, ∼ 0.07, and ∼ 0.06, respectively.Hence, the plate rigidity has to be reduced as YP thickness increases, despite the joint OP thickening, down to extremely weak γ m ranges (γ crit m 0.1, Fig. 
3). Despite the driving influence of a thicker OP, thickening the YP impedes OPS in a much stronger way. Moreover, we test different means to lower the OP rigidity. For four plate age pairs for which OPS aborts (5 vs. 35, 7 vs. 70, 7 vs. 80, and 7 vs. 90), we decrease the mantle ductile strength by lowering the activation energy E m a (Table 2) but keep the mantle viscosity at 100 km depth and the mantle brittle parameter (γ m = 1.6) constant. We find that lowering E m a instead of the mantle brittle parameter is much less efficient for obtaining OPS (Table S1). Finally, our results suggest that the resistance to OPS mainly comes from the OP flexural rigidity and from the YP thickness and stiffness, in agreement with previous studies (Nikolaeva et al., 2010; Zhou et al., 2018). (Fig. 6, caption fragment: in panel f, the boundary between the "no subduction" and "OPS" domains corresponds to the relationship A o /A y 2.5 ≳ 0.75 Myr −1.5; when OPS is simulated (panels e to h), the A o -A y conditions prevailing at subduction initiation inferred for Yap, IBM, and Matthew and Hunter (Table 1) are superimposed on the regime diagrams.) Ductile strength decrease The main effect of imposing a decrease in the crust and TF ductile strength (lowering E c a to 185 kJ mol −1) is to trigger the fast dismantlement of the YP by lithosphere dripping if the YP is young (A y = 2 Myr, Fig. 4.2). Otherwise, a low E c a has no effect on the deformation of the two plates. One exception appears in simulation S14e, in which the weak ductile strength triggers mode 2 OPS. In this particular setup, both lithospheres are very thin ((A y, A o) = 2 vs. 5) and could be regarded as "crustal" plates because the mantle lithosphere is very thin or almost absent. In this simulation, the YP strength profile is actually similar to the other cases yielding mode 2 OPS (see Sect. S4 in the Supplement), which should explain why decreasing E c a allows for OPS in this unique case. The YP destabilization and dripping result from the high crust density (ρ c = 3300 kg m −3) assumed in the experiments performed with a reduced E c a. Indeed, in experiments using a usual crustal density (ρ c = 2920 kg m −3), YP vertical subduction is obtained instead (simulation S15g, 2 vs. 10, for instance). Plume-like thermal anomaly The thermal anomaly simulating an ascending plume head below the TF produces effects very similar to those of a reduced E c a: no effect if plates are older than 2 Myr, and YP dismantlement if A y = 2 Myr and if the crust is dense (ρ c = 3300 kg m −3). Otherwise, for a normal crust density, a short stage of YP vertical subduction occurs after plume impact (2 vs. 10, simulation S15h). The hot thermal anomaly never triggers OPS in our modeling, contrary to other studies, even if we have investigated large plate age contrasts (2 vs. 40, sim. S17j, and 2 vs.
80, S18k) as well as small age offsets and plates younger than 15 Myr (Table S1). To obtain a successful plume-induced subduction initiation, it has been shown that the plume buoyancy has to exceed the local lithospheric (plastic) strength. This condition is reached either when the lithosphere friction coefficient is lower than ∼ 0.1 (Crameri and Tackley, 2016), and/or when the impacted lithosphere is younger than 15 Myr (Ueda et al., 2008), or when a significant magmatism-related weakening is implemented (Ueda et al., 2008) or assumed (Baes et al., 2016) in experiments reproducing modern Earth conditions. We hypothesize that if the mantle brittle parameter were sufficiently decreased, we would also achieve OPS by plume head impact. Besides, lithosphere fragmentation is observed by Ueda et al. (2008) when the plume size is relatively large in relation to the lithosphere thickness, in agreement with our simulation results showing dismantlement for a significantly young (A y = 2 Myr) and thin lithosphere. 3.3.7 Additional tests on OPS conditions: fault gouge strength and width, transform fault vs. fracture zones, and asthenosphere viscosity We sum up in this section the extra experiments performed to clarify the mechanisms involved in OPS triggering. The detailed results are described in Sect. S3 in the Supplement. We first test the necessity of the fault softness to simulate OPS by inverting the respective brittle parameters of the oceanic crust and the TF in models that originally displayed OPS (thus setting, for the inversion experiments, γ TF = 0.05 while γ c = 0.0005). We find that a very low TF strength is critical to model OPS. We next wonder whether OPS (when not modeled) could be triggered by widening the fault gouge from the surface to the bottom of the fault (domain 1 in Fig. 2), by setting the fault width to 20 km instead of 8.3 km in experiments that did not initially show OPS. The simulations show that OPS still does not occur, even if the mechanical decoupling is maximized (γ TF decreased to 5 × 10 −5). The mechanical interplate decoupling alone is hence not sufficient to trigger OPS, at least for a 20 km wide fault. Subsequently, we investigate the possible role of the TF thermal structure. The interplate domain is assumed to be very thin and is modeled by a stair-step function (Sect. 2.2). In nature, this setup would correspond to an active transform fault. If the fault is instead inactive (fracture zone), the thermal state of the plate boundary is likely to be cooled by thermal conduction, and possibly stronger and more resistant to plate decoupling. We test the effect of the TF thermal structure for the two plate age pairs for which OPS is simulated when crust weakening is assumed (γ c = 0.0005) by widening the TF thermal transition (from 11 up to 70 km), keeping the weak material forming the fault gouge at the center of the thermal transition in all cases. All these experiments show OPS. We hereby verify that the fault gouge weakening, governed by the soft material brittle properties, is independent of temperature and, to first order, independent of the fault activity in our 2-D setup. We finally test whether a decrease in asthenosphere strength could help OPS triggering by unbalancing the OP weight excess. The experiments show that asthenospheric velocities and OP deformation are slightly amplified but still not enough to trigger OPS.
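Throughout these tests, the plate age pair enters mainly through the thermal thickness of each lithosphere. As a rough illustration only (not necessarily the initial thermal state of the simulations; the mantle temperature, diffusivity, and proxy isotherm below are assumed values), a standard half-space cooling profile gives the following plate thicknesses as a function of age.

```python
# Illustrative half-space cooling geotherms for young and old plates (a sketch only).
from math import erf, sqrt

KAPPA = 1.0e-6          # m2 s-1, assumed thermal diffusivity
T_MANTLE = 1350.0       # degC, assumed mantle temperature
SECONDS_PER_MYR = 3.15e13

def temperature(z, age_myr):
    """Half-space cooling temperature (degC) at depth z (m) for a plate of given age."""
    return T_MANTLE * erf(z / (2.0 * sqrt(KAPPA * age_myr * SECONDS_PER_MYR)))

def thickness(age_myr, t_iso=1200.0, dz=500.0):
    """Depth (km) of the t_iso isotherm, a rough proxy for the plate thermal thickness."""
    z = 0.0
    while temperature(z, age_myr) < t_iso:
        z += dz
    return z / 1e3

for age in (2, 7, 20, 80, 120):
    print(f"A = {age:3d} Myr -> ~{thickness(age):5.1f} km to the 1200 degC isotherm")
```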
Analysis: mechanism and conditions of OPS triggering We derive the main processes involved in successful OPS triggering from the results presented in Sect. 3. In the following paragraphs, we discuss the relationship between the different thermomechanical parameters and their influence on the forces driving and resisting OPS, which are summed up in Table 4. OPS mode 1 vs. mode 2 The velocity fields show that modes 1 and 2 differ by the resulting YP kinematics. When mode 1 occurs, the YP remains almost motionless with respect to the underlying asthenospheric flows (speeds ≤ 1 and ≥ 20 cm yr −1, respectively; see Supplement Fig. S7). In contrast, velocities during mode 2 are closer between the YP and the asthenosphere, where speeds are high (between 25 and 100 cm yr −1). Moreover, within the set of simulations showing OPS (Sect. 3.2), we find that mode 2 occurs in simulations where the YP age is 2 Myr for various rheological sets, or if A y = 5 Myr provided that the mantle brittle strength is reduced. In all these experiments, the strength of the YP bottom part is the closest to the asthenospheric one (viscosity ratios of 10 2 to 10 3; Fig. S8 in the Supplement). In contrast, the focusing of asthenosphere flows toward the weak TF is observed when the viscosity offset between the YP and the underlying mantle exceeds 10 2. We hypothesize that mode 2 results from a strong coupling between the YP and the asthenosphere, which is related to the asthenosphere ductile strength (Table 4). The particular (though not meaningful) case of A y = 0 Myr is addressed in the Supplement (Sect. S4). The mode 2 OPS may be envisioned as an asymmetric double-sided subduction (Gerya et al., 2008; see the sketch in Fig. 5). In this subduction mode, the sinking of the thick OP drives the YP downward flow at the proto-slab surface because plate decoupling at shallow levels does not occur. The shallow interplate decoupling is hence not required in mode 2, since the YP is easily dragged by the asthenosphere. In contrast, asthenospheric flows related to OPS in mode 1 might not be able to drag the YP because of the high viscosity offset between the YP and the asthenosphere. The asthenosphere upwelling along the TF ("upwelling force" in Table 4) would then result from the need to decouple the respective motions of the two plates, i.e., to accommodate OP downwelling and hinge retreat whereas the YP is almost motionless. Simulations by Gerya et al. (2008) suggest that lubrication of the thick OP by metasomatism is a way to force plate decoupling and to model one-sided subduction. (Table 4: relationship between the investigated thermomechanical parameters and the main forces involved in the spontaneous subduction initiation modeled in this study; a parameter increased from one experiment to another is marked as either favoring OPS (plus sign), hampering OPS (minus sign), or not affecting OPS triggering (zero); empty cells indicate that the relation between parameter and force cannot be inferred from our results, while a question mark means that the assumed relationship is not clear.) The boundary between OPS and the absence of subduction can be defined for a normal mantle brittle strength γ m = 1.6 (Fig. 6f) using simulations in which OPS aborts (such as simulations S25a, 5 vs. 35; S29b, 5 vs. 120; or S33a, 7 vs. 80, Fig. S3 in the Supplement). We observe a dichotomy in the OPS domain boundaries. On the one hand, for a thick OP (A o > 100 Myr), OPS is prevented if the YP is not extremely thin (plate age younger than 5 Myr). On the other hand, for a thinner OP (A o ≤ 100 Myr), we experimentally show that the OPS condition corresponds to the following relationship: A o /A y 2.5 ≳ 0.75 Myr −1.5 (Fig. 6f). In both cases, the influence of A y is either strong or predominant. The YP age is the major determining factor in the TF evolution compared to the OP age (separately considering the cases where the mantle brittle strength is reduced), which confirms the conclusion derived in Sect. 3.3.4 on the highly resistant effect of the YP thickness. This hindering effect results from two processes. On the one hand, high A y ages yield low pressure gradients across the TF, due to a density contrast that decreases with YP aging (e.g., Hall and Gurnis, 2003). On the other hand, YP aging increases the YP strength, competing against asthenosphere upwelling in the vicinity of the TF in OPS mode 1 and against YP stretching far away from the TF to accommodate YP dragging in mode 2 (Table 4). As a result, the conditions that are the most propitious for OPS correspond to TFs where the thinner lithosphere is as young as possible.
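This dichotomy can be condensed into a simple criterion. The sketch below applies only to the Fig. 6f regime diagram (weak crust, γ m = 1.6), and the age pairs used for illustration are simulation pairs quoted in Sect. 3, not the Table 1 values for the natural cases.

```python
# Empirical OPS domain read off Fig. 6f (weak crust, gamma_m = 1.6), as described above:
# a thick OP (A_o > 100 Myr) requires an extremely thin YP (A_y < 5 Myr), while for
# A_o <= 100 Myr the boundary follows A_o / A_y**2.5 >= 0.75 Myr^-1.5.
def ops_allowed(a_y_myr, a_o_myr):
    """Rough OPS criterion for the Fig. 6f regime diagram (not a general rule)."""
    if a_o_myr > 100.0:
        return a_y_myr < 5.0
    return a_o_myr / a_y_myr**2.5 >= 0.75

for a_y, a_o in [(2, 80), (5, 35), (7, 80), (10, 100)]:
    print(f"A_y = {a_y:2d} Myr, A_o = {a_o:3d} Myr -> OPS expected: {ops_allowed(a_y, a_o)}")
```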
Parameters resisting and promoting old plate sinking OPS is triggered if the pressure gradients at the TF related to the density offset exceed the plate and mantle resistance to deformation. Density contrasts are maximized when the YP is thin, which partly explains the dominant role of the age of the YP (the "proto-overriding" plate), compared to A o, on subduction initiation, as already underlined in other studies (Nikolaeva et al., 2010; Dymkova and Gerya, 2013). Our results show that plate instability is essentially promoted by three mechanical conditions: when low brittle strengths are assumed for (1) the oceanic crust and (2) the mantle, and (3) if the TF allows for plate decoupling. A weak brittle crust (1) enhances the fast propagation of deformation at shallow depths, which cannot be obtained in our modeling by crust ductile softening (contrary to Nikolaeva et al., 2010, in a passive margin setup). Moreover, the lowering of the crust brittle strength must extend far away from the TF to allow for OPS. Although the minimum spatial scale of crust softening depends on the plate age pair, we find that it is generally on the order of one to several hundred kilometers. A low brittle mantle strength (2) strongly promotes not only OP bending and sinking, by limiting the plate flexural rigidity, but also YP deformation close to the plate boundary, where asthenosphere upwelling focuses in mode 1. Finally, the TF must also be weak to enable mechanical decoupling between the neighboring plates (3). The amplitude of the preceding processes is regulated by five of the six physical parameters investigated in this study, as the activation energy E c a does not actually affect OPS triggering (Fig. 3). Clearly, OPS cannot be simulated for a realistic set of physical parameters (Fig. 6a). To achieve OPS, the cursors controlling the plate mechanical structure have been tuned beyond the most realistic ranges ("yellow" domain, Fig. 3) for at least two parameters, and beyond reasonable values for at least one parameter ("red" domain, Fig.
6e to h). Nevertheless, combining different unlikely ("yellow") parameter values (for ρ TF and L w) does help to achieve OPS under slightly less extreme mechanical conditions, as only one parameter then has to be pushed into the unrealistic ("red") range (ρ c, Fig. 6e). Note, however, that the plate age intervals showing OPS are then extremely narrow (A y < 3 Myr, A o < 25 Myr) and are not consistent with the three potential candidates of natural OPS. 2-D versus 3-D setup Subduction initiation at a TF is here simplified as a 2-D process, whereas the fault strike-slip basically implies that it takes place as a 3-D phenomenon. For instance, the setting of Matthew and Hunter used to be a subduction-transform edge propagator ("STEP") fault 2 Myr ago (Patriat et al., 2015), where 3-D mantle flows associated with the Australian plate subduction likely affected the TF structure and evolution, possibly favoring subduction initiation. This case exemplifies the role of 3-D far-field tectonics during subduction infancy (Table 1) and the potential role of deep mantle flows. Upward and downward mantle flows, even far away from the initiation site, have been shown to be able to initiate subduction in 2-D models (Lu et al., 2015; Baes and Sobolev, 2017). On the other hand, 3-D studies by Boutelier and Beckett (2018) and Zhou et al. (2018) showed that subduction initiation depends on along-strike variations in plate structure. However, the strike-slip kinematics of an active TF have, up to now and to our knowledge, never been taken into account in subduction initiation simulations. We show that a TF thermal state cooler than that modeled by a stair-step function does not hinder OPS. However, our results verify that in 2-D, without simulating the TF strike-slip, the process of spontaneous OPS has to occur at simulation onset to avoid the impeding effect of plate stiffening with further conductive cooling. Including the TF motion in 3-D experiments would compete against this strengthening effect in the area near the active spreading center. Finally, one may argue that a 3-D setup would intrinsically facilitate OPS propagation at a transform fault. Plate sinking might initiate at the location where the offset in plate thickness is maximum (in the vicinity of a ridge spreading center) and then propagate away from this point (Zhou et al., 2018). However, as we focus on subduction initiation strictly speaking and not on subduction propagation, the use of a 2-D setup should remain meaningful to unravel the conditions of spontaneous sinking for a given plate age pair, setting aside the problem of the transform fault slip.
S6).For the selected plate age pair, the threshold in crustal brittle parameter turns out to increase from ∼ 0.0025 without free surface to ∼ 0.0175.Hence, the necessary crust weakness that must be imposed to model OPS may be overestimated by a factor ∼ 7.This result agrees with previous studies showing that the free surface condition promotes the triggering of one-sided subduction in global mantle convection models (Crameri et al., 2012).Nevertheless, note that the threshold enabling OPS when the free surface is taken into account is still an unlikely value, since it is close to the limit of the extremely low range of the crust brittle parameter ("red" domain, Fig. 3). Initiation swiftness and influence of elastic rheology In a TF or fracture zone numerical setting without any external forcing, if subduction initiation has to occur, it can only take place at simulation onset because plate cooling results in a fast stiffening of oceanic lithospheres and, second, quickly attenuates the plate density offset (Hall and Gurnis, 2003). The process of subduction initiation modeled in our study systematically occurs very briefly after the simulation start, in less than 1 to 1.5 Myr.This quite "catastrophic" way of initiation has also been simulated in less than 0.8 Myr for other tectonic settings or triggering modes, such as passive margins (Nikolaeva et al., 2010;Marques and Kaus, 2016) or plumeinduced mantle flows (Lu et al., 2015), using rheological conditions very similar to the ones assumed in this study.The initiation process is slightly slowed down but remains fast (duration < 3 Myr) when the necessary weakness of the plate's stronger part is not fully imposed at simulation onset but progressively develops due to damaging or water-related weakening effects (Hall and Gurnis, 2003;Gurnis et al., 2004;Gerya et al., 2008;Dymkova and Gerya, 2013).Moreover, such unrealistically high velocities at plate sinking onset may result at least in part from the 2-D setup since, in a 3-D setup, the along-strike propagation slows down the initiation process; however, speeds of hinge retreat remain significantly high (between 13 and 20 cm yr −1 in Zhou et al., 2018).In addition, by neglecting elastic deformation, the amount of plate and interplate weakening required to trigger OPS may be excessive (Farrington et al., 2014).Nonetheless, the potential effect of elasticity on the OPS kinetics is not clear.On the one hand, including elasticity could slow down OPS initiation by increasing the threshold in the strength contrast, as aforementioned.On the other hand, the incipient subduction has been shown to remain as fast as modeled in the present study in elasto-visco-plastic models testing different modes of subduction initiation (Hall and Gurnis, 2003;Thielmann and Kaus, 2012;Baes et al., 2016).Previous modeling of subduction initiation including elasticity showed that the elastic flexure was a basic term of the subduction force balance (McKenzie, 1977;Hall and Gurnis, 2003;Gurnis et al., 2004).In our model, plate bending occurs by viscous (and brittle) deformations, as in numerous approaches to subduction that have successfully reproduced topographies and the strain and stress patterns observed in natural cases (e.g., Billen and Gurnis, 2005;Buffett, 2006).However, if elasticity might compete against subduction initiation by limiting the localization of lithospheric shearing, it may also help incipient subduction through the following release of stored elastic work (Thielmann and Kaus, 2012;Crameri and 
Tackley, 2016). Consequently, the threshold in mechanical parameters necessary to achieve OPS would probably be offset if elasticity were included. Weakening of the oceanic mantle lithosphere Our results show that OPS is strongly facilitated if the lithospheric mantle is weak. The necessary strength decrease depends on the plate age pair and on other mechanical parameters, such as γ m; a first-order estimate of the necessary mantle weakening is computed by comparing cases showing OPS to those in which OPS fails (Sect. S5 in the Supplement). Such a weakening of the oceanic mantle lithosphere could be related to (1) low-temperature Peierls plasticity (Goetze and Evans, 1979), which enhances the deformation of the slab and plate base (Garel et al., 2014); (2) dislocation-accommodated grain-boundary sliding (GBS; Hansen et al., 2012); (3) grain-size reduction when diffusion linear creep is activated; or (4) fluid-related weakening (e.g., Drury, 2005). Peierls plasticity (1) limits the ductile strength in a high-stress regime at moderately high temperatures (≲ 1000 °C; Demouchy et al., 2013) but requires a high differential stress (> 100 to 200 MPa) to be activated. Similarly, the GBS power law regime (2) operates if stresses are > 100 MPa, for large strain and low temperature (< 800 °C). In our experiments, the simulated deviatoric stress is generally much lower than 100 MPa (Sect. S5 in the Supplement). For such low stresses, neither the Peierls nor the GBS creeps would be activated; hence we do not expect a major change in our results if they were implemented. Grain-size sensitive (GSS) diffusion linear creep (3) can strongly localize deformation at high temperature (e.g., Karato et al., 1986). In nature, GSS creep has been observed in mantle shear zones in the vicinity of a fossil ridge in Oman, in contrast, at rather low temperature (≲ 1000 °C; Michibayashi and Mainprice, 2004), forming very narrow shear zones (< 1 km wide). However, the observed grain-size reduction of olivine is limited to ∼ 0.2-0.7 mm, which cannot result in a noticeable viscosity reduction. A significant strength decrease associated with GSS linear creep requires additional fluid percolation once shear localization is well developed within the subcontinental mantle (e.g., Hidas et al., 2016). The origin of such fluids at great depth within a young oceanic lithosphere is not obvious. Furthermore, GSS linear creep may only operate at stresses < 10 MPa (Burov, 2011), which is not verified in our simulations (Sect. S5 in the Supplement). In our models, mantle weakening is achieved by decreasing the mantle brittle parameter γ m to mimic the weakening effect of hydrated phases (4), such as talc or serpentine minerals (Sect. 2.5.1). Dymkova and Gerya (2013) show that the percolation of sea water down to ∼ 25 km depth during early OP deformation can enable the thick plate bending, assuming a low porosity (≤ 2.5 %) and a low mantle matrix permeability (10 −21 m 2) to significantly increase the pore fluid pressure. In our approach, a high pore fluid pressure ratio (λ > 0.5) is associated with a low mantle brittle parameter (γ m < 1, Fig. S1), for which OPS is modeled for a broad range of plate ages (Fig. 6g-h), in agreement with the results by Dymkova and Gerya (2013). However, the low permeabilities assumed by Dymkova and Gerya (2013) are questioned by recent experiments of mantle hydration at ridges and of water percolation in a peridotite, and by estimates from a peridotite aquifer (Dewandel et al., 2004; Godard et al., 2013; Farough et al., 2016). These studies rather infer permeabilities between 10 −19 and 10 −16 m 2, which would hamper high pore fluid pressures and, eventually, plate bending. Comparison between OPS model requirements and natural cases When analyzing the results of our parametric study (Sect. 4), the striking conclusion is that none of the realistic sets of parameters allowed for spontaneous subduction. Figure 6e, f, g, and h show that, even if one of the plates is extremely young (< 7 Myr), the oceanic crust should be very dense (ρ c = 3300 kg m −3), as well as drastically weakened (γ c = 5 × 10 −4) at considerable distances from the TF (L w ≥ 50 km), to satisfy the OPS necessary conditions. Assuming that such rather extreme conditions were fulfilled, OPS must develop (1) at simulation onset, before plate cooling, so that the gravitational instability is maximal, and (2) catastrophically in terms of the kinetics of the process, with sinking rates of ≥ 15 up to 180 cm yr −1. As depicted in Sect.
1, only two natural cases, IBM and Yap, attest to the subduction initiation of an old oceanic plate beneath a young one at a TF, which later evolved into a mature subduction during the Cenozoic. The ranges of ages for both plates at the time of initiation (Table 1) have been plotted in Fig. 6f, g, and h. The plate ages at Yap subduction initiation are incompatible with the conditions of OPS inferred from our modeling results, suggesting that spontaneous subduction of the thicker plate is highly unlikely there. Only IBM falls into the OPS domain based on the age pairs at onset. There, the initial state of stress is unknown, as both plates' edges have been consumed in subduction, which leaves open the question of whether the old Pacific plate sank spontaneously. Now, the question is as follows: to what extent are the rheological parameters and the characteristics of subduction initiation satisfied? Beyond the unrealistic values of crustal densities and brittle properties, the expected sinking rates and asthenosphere rise are high. The slab typically reaches a 200 km length within ∼ 1 Myr, so the remnants of the resulting "forearc crust" should be restricted to a very short time span. The argument of Arculus et al. (2015), according to which new findings of juvenile 52-48 Myr old oceanic crust in the Amami-Sankaku Basin, far away from those already found in the IBM forearc (the so-called "forearc basalts"), confirm the spontaneity of the subduction initiation and the wide extent of the asthenosphere invasion, was refuted by Keenan and Encarnación (2016), since younger juvenile oceanic crust cannot be used as a test for early uplift in pre-subduction initiation basement rocks. A second argument comes from the boninitic nature of the primary embryonic arc, combining MORB and slab-derived hydrous fluid signatures (Ishizuka et al., 2006). Those boninites erupted between 51 and 44 Ma in the present Bonin arc and forearc (Ishizuka et al., 2011; Reagan et al., 2019) and between 48 and 43 Ma in the present Mariana forearc (Johnson et al., 2014). This time span appears incompatible with the swiftness of the processes required in our models. An alternative geodynamic scenario satisfying both the magmatic and the tectonic constraints has been proposed by Lallemand (2016). The kinematic change of the Pacific plate motion following the Izanagi slab break off at approximately 60-55 Ma (Seton et al., 2015) created enabling conditions for convergence across a major TF or fracture zone. Compressive deformation progressively localizes until subduction starts at approximately 52 Ma. At approximately the same time, the occurrence of the Oki-Daitō plume has produced the splitting of the remnant arcs brought by the younger plate (Ishizuka et al., 2013). Oceanic basalts, called FABs (forearc basalts), spread along axes perpendicular to the nascent trench (Deschamps and Lallemand, 2002, 2003). Boninites erupt as soon as hydrous fluids from the subducting plate metasomatize the shallow asthenosphere beneath the newly formed oceanic crust. Later, tectonic erosion along the IBM trench removes the frontal 200 km of the upper plate, exposing, in the present forearc, basalts and arc volcanics initially formed far from the trench.
As observed along the Hjort trench, subduction may start as soon as compressive tectonic forces are applied across a TF, but one should note that the subducting plate is the youngest there (Table 1; Abecassis et al., 2016).IBM and Yap cases likely fall in this "forced" category as mentioned above, but we lack direct field evidence, as they were both deeply eroded along their overriding edge. Failure of old plate sinking is not excluded Among the numerous parameter values tested in this study, especially those within reasonable (green) ranges, we have observed that most of them led to incipient subduction of either the young or the old plate but failed soon after (Fig. 6a to e).We have compiled in Table 1 several cases of potential subduction initiation along TFs or fracture zones that either failed (Romanche, Gagua, Barracuda, Tiburon, Saint Paul and Owen) or were just initiated, so we still ignore how it will evolve (Matthew and Hunter, Mussau, Macquarie, and Gorringe).The advantage of studying the aborted cases is that we still have access to the deformation that accompanied subduction initiation, and compression was always recorded in the early stages (see the references in Table 1).These incipient subduction areas are either restraining bends along transform faults or underwent changes in plate kinematics from strike-slip to transpression.A major limiting factor is the cooling of adjacent plates, as the distance from the spreading center or the plume increases, inhibiting their flexural capacities. Conclusions We perform a large set of 2-D thermomechanical simulations to study the conditions of spontaneous sinking of the older plate at a TF by investigating broad intervals of plate ages and by paying special attention to the mechanical parameter ranges allowing for OPS.OPS is simulated notably if the oceanic crust is dense and mechanically soft far away from the TF on both sides of the plate boundary.Our results confirm that the OP resistance to bending and the YP thickness are the most significant factors preventing OPS.Reducing the brittle properties of the oceanic lithosphere is thus the most efficient way to trigger OPS, compared to a softening by lowering the ductile strength, imposing a hot thermal anomaly or reducing the asthenospheric viscosity.When these extreme conditions are imposed, two processes of OPS are obtained, depending mainly on the assumed YP thickness.They can be summed up as (1) an OP rapid sinking that is decoupled from the YP kinematics and associated with a significant rise in the asthenosphere toward the subducting slab hinge and (2) a dragging of the YP by the sinking OP that is considered a two-sided subduction mode.In all cases, whatever the mode, OPS occurs in less than 1.5 Myr, that is, in an extremely short time span, and only if the initial mechanical setup is adjusted beyond reasonable limits for at least one key thermomechanical parameter.In addition, we find that neither the thermal structure and blurring of the transform fault area nor a plume head impact are able to affect OPS triggering in our modeling setup.Our study highlights the predominant role of a lithospheric weakening to enlarge the combination of plate ages allowing for OPS. 
From the parametric study, we conclude that OPS cannot be simulated for a realistic combination of mechanical parameters. By comparing our modeling results to the most likely natural cases where spontaneous subduction at a TF has been invoked, we find that, even though extreme mechanical conditions were assumed, the plate age setting at Yap should prevent OPS. Regarding the case of Izu-Bonin-Mariana, in addition to the weakness of the geological arguments, the kinetics of subduction initiation evidenced by geological records is not compatible with the catastrophic mode systematically simulated in our experiments. We finally conclude that the spontaneous instability of the thick OP at a TF is an unlikely process of subduction initiation in modern Earth conditions.
Acknowledgements. We thank Marguerite Godard, Martin Patriat, and Andrea Tommasi. We are grateful to Stéphane Arnal, Fabrice Grosbeau, and Josiane Tack for the maintenance and development of the lab cluster of computing nodes on which all numerical experiments were performed. We are also grateful to Anne Delplanque, who drew Fig. 1. We warmly thank Ben Maunder and an anonymous reviewer for their thorough and very constructive comments that significantly improved the article, as well as Susanne Buiter for handling the editing.
Review statement. This paper was edited by Susanne Buiter and reviewed by Ben Maunder and one anonymous referee.
Figure 1. Various tectonic settings leading to vertical motion and/or convergence at transform plate boundaries, as detailed in Table 1. The convergent black heavy arrows represent far-field tectonic forces. The red light arrows outline the sense of motion of one plate with respect to the other. The red crosses and dots in circles indicate transform motion. The thicker plate is the older one.
Figure 3. Physical properties tested in this study and investigated ranges. (a) Brittle parameter for the oceanic crust, γ c; (b) brittle parameter for the mantle, γ m; (c) oceanic crust density, ρ c; (d) density of the weak medium forming the TF, ρ TF; (e) activation energy of the oceanic crust, E c a, assuming a non-Newtonian exponent n = 3 in Eq. (4); (f) lateral extent of the weak domain on both flanks of the TF, L w. The parameter intervals vary from realistic ranges (in green) to extreme values (in yellow). They are still extended beyond these values, up to unrealistic ranges, to achieve the conditions allowing for spontaneous subduction (in red).
Figure 4. Illustration of the different simulated behaviors, OPS apart: close-up on the transform fault. (1) Absence of plate deformation (simulation S37x, Table 3). (2) Young plate dripping and dismantlement (simulation S17f). (3) YP retreat (simulation S16c). (4) Initiation of YP transient sinking (simulation S16b, panel a, and simulation S36b, panel b). (5) Simultaneous initiation of YP and OP sinking processes (simulation S14n). (6) Initiation of YP vertical subduction at the TF (simulation S17o). (7) OP sinking initiation that soon aborts (simulation S33a, panel a, and simulation S16a, panel b). No vertical exaggeration. The velocity scale depicted in green is specific to each simulation. The parameter boxes are color coded as a function of the investigated ranges depicted in Fig. 3.
Figure 5. Illustration of OPS: mode 1 in simulation S27c (panel a) vs. mode 2 in simulations S14i (panel b) and S22j (panel c). No vertical exaggeration. The parameter boxes are color coded as a function of the investigated ranges depicted in Fig. 3. Note that the velocity scale in panel (c) is specific to each snapshot. The dashed lines in the middle sketch are a schematic outline of the stream function, and the green arrows illustrate velocities.
Financial support. This research has been supported by the CNRS-INSU (National Institute of Universe Science) program "TelluS-SYSTER" (2015 and 2016).
Table 1. Oceanic TFs or fracture zones where potential subduction initiated during Cenozoic times.
Table 2. Constant names and values.
Table 3. List of simulations quoted in the text. See the data in the Supplement for the complete simulation list. (a) If one value only is indicated, the oceanic crust (3) in Fig. 2 is assumed to have the same brittle parameter as the weak material forming domains (1) and (2). (b) If only one value is indicated, then ρ c = ρ TF. (c) When one value only is indicated, L w is identical on both plates. (d) The weak material is imposed at the fault zone (1) only (Fig. 2). "tw": thermal transition width at the plate boundary. T p: temperature anomaly within the plume head. ν ref: reference viscosity at the lithosphere-asthenosphere boundary (2.74 × 10 19 Pa s). OPS: older plate sinking. YPVSI: YP vertical subduction initiation (as in Fig. 5.6). YP retreat: backward drift of the younger plate, as sketched in Fig. 4.3. YPS: younger plate sinking, as sketched in Fig. 4.4b. Double SI: double subduction initiation (as in Fig. 4.5). SB: slab break off. YPD: young plate dragging and sinking into the mantle.
Figure 2. Boundary conditions, initial thermal state, and material distribution at simulation start. One white isotherm every 200 °C.
If the box bottom is open, a vertical resistance against flow is imposed to simulate a viscosity jump approximately 10 times higher below the simulation box.
Biomass pretreatment: a critical choice for biomass utilization via biotechnological routes The necessary biomass pretreatment step, to render the material accessible to the relevant enzyme pool, has been under thorough investigation as the production of biomass syrups, via enzymatic hydrolysis, with high sugars concentrations and yields and low inhibitors concentrations requires the pretreatment to be both efficient and low cost. A good choice for biomass pretreatment should be made by considering: (i) the possibility to use high biomass concentration; (ii) a highly digestible pretreated solid by either increasing the biomass superficial area or decrease in crystallinity or both; (iii) no significant sugar degradation into toxic compounds; (iv) yeast and bacterial fermentation compatibility of the derived sugar syrups; (v) lignin recovery; (vi) operation in reasonably sized and moderately priced reactors and (vii) minimum heat and power requirements [1]. Considering the most known pretreatments, such as diluted acid, hydrothermal processes, steam explosion, milling, extrusion, and ionic liquids, different pretreatment methods produce different effects on the biomass in terms of its structure and composition [2]. For example, the hydrothermal, steam explosion and acidic pretreatments conceptually remove mainly the biomass hemicellulose fraction whereas alkaline pretreatments remove lignin. On the other hand the product of a milling-based pretreatment retains the biomass initial composition. Furthermore, cellulose crystallinity is not significantly reduced by pretreatments based on steam, or hydrothermal, or acidic procedures, whereas ionic liquid-based techniques can shift crystalline cellulose into amorphous cellulose, substantially increasing the enzymatic hydrolysis rates and yields. As such, the choice of pretreatment and its operational conditions as well as the composition of the enzyme blend used in the hydrolysis step, determines the hexose and pentose sugars composition, the concentration and toxicity of the resulting biomass syrups. The activity profile of the enzyme blend and the enzyme load for an effective saccharification may also vary according to the pretreatment. Indeed, a low hemicellulase load can be used for a xylan-free biomass and a lower cellulase load will be needed for the hydrolysis of a low crystalline and highly amorphous pretreated biomass material. As the pretreatment choice will also be affected by the type of biomass, the envisaged biorefinery model will need to consider the main types of biomass that will be used for the biorefinery operation so as to select an appropriate, and versatile pretreatment method [3]. Considering the biorrefinery concept which broadens the biomass derived products, the C6 sugars could be fermented into ethanol, while the C5 stream could be used for the production, via biotechnological routes, of a wide range of chemicals with higher added value. To date, sugarcane and woody biomass, depending on the geographic location, are strong candidates as the main renewable resources to be fed into a biorefinery. However, due to major differences regarding their physical properties and chemical composition, the relevant pretreatments to be used in each case are expected to be selective and customized. Moreover, a necessary conditioning step for wood size reduction, prior to the pretreatment, may not be necessary for sugarcane bagasse, affecting the pretreatment energy consumption and costs. 
Moreover, the choice of pretreatment should take into account the foreseen utilization of the main biomass molecular components (cellulose, hemicelluloses and lignin). It is important to point out that lignin can be used as a valuable solid fuel or as a source of aromatic structures for the chemical industry. Sugarcane is one of the major agricultural crops when considering ethanol production, especially in tropical countries. In Brazil, sugarcane occupies 8.4 million hectares, which corresponds to 2.4% of the country's farmable land. The gross revenue of this sector is about US$ 20 billion (54% as ethanol, 44% as sugar, and 2% as bioelectricity) [4]. In addition, up to 50% of all vehicles in Brazil are flex-fuel cars, which corresponds to approximately 15 million cars [5]. Given the above, Brazil is an important player in this scenario, and, consequently, sugarcane bagasse and straw are promising feedstocks for biomass ethanol. Brazil produced, in 2008, 415 million tons of sugarcane residues, 195 million tons of sugarcane bagasse, and 220 million tons of sugarcane straw, whereas the forecast for the 2011 sugarcane production is 590 million tons, which would correspond to 178 million tons of bagasse and 200 million tons of straw [6]. Currently, in Brazil, R&D on the use of biomass via biotechnological routes has been focused mainly on agricultural residues such as sugarcane residual biomass.
Advantages and disadvantages of different types of pretreatments: Acid pretreatment. Pretreatment with dilute sulfuric acid has been reported as one of the most widely used processes due to its high efficiency. This pretreatment removes and hydrolyzes up to 90% of the hemicellulose fraction, rendering the cellulose fraction more accessible to hydrolytic enzymes. However, it presents important drawbacks: the need for a neutralization step that generates salts, and the degradation of biomass sugars into inhibitors of the subsequent fermentation step, such as furfural from xylose degradation. The removal of inhibitors from the biomass sugar syrups adds cost to the process and generates a waste stream.
Additionally, mineral acids are corrosive to the equipment, calling for sturdier construction materials and entailing higher maintenance costs. Acid recovery is also costly. Although acid pretreatment is widely available and the knowledge built up on this subject is extensive, it has important and costly drawbacks. In addition, the environmental problems caused by its waste streams have motivated the search for other options for the pretreatment of lignocellulosic materials. Mechanical pretreatments. Mechanical pretreatments of biomass aim primarily to increase the surface area by reducing the feedstock particle size, combined with defibrillation or a reduction in the degree of crystallinity. This approach facilitates the accessibility of enzymes to the substrate, increasing saccharification rates and yields. The most studied mechanical pretreatments are milling processes, mainly ball-milling, which has a high energy consumption, and wet disk-milling [7,8]. Another mechanical treatment to be considered is extrusion, even though this process involves additional thermal and/or chemical pretreatments. Liquid hot water (LHW) pretreatments. The liquid hot water (LHW) pretreatment is based on the use of pressure to keep water in the liquid state at elevated temperatures (160-240 ºC). This process changes the biomass native structure by the removal of its hemicellulose content alongside transformations of the lignin structure, which makes the cellulose more accessible to the subsequent enzymatic hydrolysis step. Unlike steam-explosion treatment, LHW does not use rapid decompression and does not employ catalysts or chemicals. Nevertheless, as with the acid treatment, LHW depolymerizes hemicelluloses into the liquid fraction. In this case, sugars are removed mostly as oligosaccharides, and the formation of the inhibitors furfural and 5-hydroxymethylfurfural (HMF) occurs at a slightly lower level, depending on the process conditions. To avoid the formation of inhibitors, the pH should be kept between 4 and 7 during the pretreatment, because at this pH hemicellulosic sugars are retained in oligomeric form and monomer formation is minimized. The removal of hemicellulose also results in the formation of acetic acid in the liquid fraction. LHW and steam pretreatments are attractive from a cost-savings perspective, as they do not require the addition of chemicals such as sulfuric acid, lime, ammonia, or other catalysts. Moreover, the reactors do not require high-cost materials or intensive maintenance, owing to the low corrosion potential of these processes. Additionally, these treatments do not alter the biomass glucan content, as a glucose recovery rate of 97% was observed for sugarcane bagasse that was pretreated by both methods. The main differences between the features of the two treatments relate to hemicellulose extraction, which is higher for LHW, and the biomass load, which is higher for the steam pretreatment, with the obvious corresponding advantages and disadvantages. In contrast to steam pretreatment, LHW allows for a higher pentosan recovery associated with a lower formation rate of inhibitors. Steam-explosion pretreatment. The main advantages of steam explosion relate to the possibility of using coarse particles, thus avoiding a biomass-size conditioning step, the non-requirement for exogenous acid addition (except for softwoods, which have a low acetyl group content in the hemicellulosic portion), a high recovery of sugars, and the feasibility of industrial implementation.
Moreover, the soluble stream rich in carbohydrates derived from hemicellulose, in the form of oligomers and monomers, may be easily removed and used as feedstock for the production of higher added-value products such as enzymes and xylitol. Other attractive features include less hazardous process chemicals and conditions, the potential for significantly lower environmental impact, and lower capital investment. The fact that the steam-explosion process does not require previous grinding of the raw biomass is an important feature, considering that the energy required to reduce the particle size before the pretreatment (pregrinding) can represent up to one-third of the total energy required in the process. The main drawbacks related to steam-explosion pretreatment are the enzyme and yeast inhibitors generated during the pretreatment, which include furfural and hydroxymethylfurfural; the formation of weak acids, mostly acetic, formic, and levulinic acids, the latter two being derived from the further degradation of furfural and hydroxymethylfurfural; and the wide range of phenolic compounds produced due to lignin breakdown. Several detoxification methods have been developed in order to reduce the inhibitory effect, which represent additional costs in the overall process. Other limitations of this method include the incomplete disruption of the lignin-carbohydrate matrix. Ionic liquid (IL) pretreatment. ILs are able to disrupt the plant cell wall structure by solubilizing its main components. This class of salts is also able to alter cellulose crystallinity and structure, rendering the resulting amorphous cellulose prone to enzymatic saccharification at high rates and yields. Indeed, this combination of effects generates a pretreated material that can be easily hydrolyzed into monomeric sugars when compared to other pretreatment technologies, also rendering the enzymatic attack faster, as the initial hydrolysis rate is greatly increased [9,10]. Nevertheless, ILs are still too expensive to be used for biomass pretreatment at the industrial scale. Even so, among innovative and promising biomass pretreatment technologies, the use of ILs stands out. This versatile class of chemicals can be tailored to suit the selective extraction and recovery of the biomass components, such as the recovery of a cellulose- and hemicellulose-rich material in an amorphous form, which is prone to enzymatic hydrolysis with high yields and rates. Additionally, the possibility of recovering the extracted lignin broadens and increases the efficiency of biomass use. Alkaline pretreatment. In the alkaline process, the biomass is soaked in an alkaline solution and mixed at a mild, controlled temperature for reaction times ranging from hours to days. It causes less sugar degradation than the acidic pretreatments. The necessary neutralizing step, prior to the enzymatic hydrolysis, generates salts that can be partially incorporated into the biomass. Besides removing lignin, washing the pretreated material also removes inhibitors, salts, furfural, and phenolic acids. This pretreatment, in which sodium hydroxide has been the most studied reagent, is similar to the Kraft pulping process used in the pulp and paper industries. The main effect of alkaline pretreatments is the removal of biomass lignin, thereby reducing the steric hindrance of hydrolytic enzymes and improving the reactivity of polysaccharides. The addition of air/oxygen to the reaction mixture dramatically improves delignification.
The alkali pretreatment also causes partial hemicellulose removal, cellulose swelling, and partial cellulose decrystallization. Conclusion Several factors must be taken into account when choosing a biomass pretreatment, with a view to the most advantageous use of the solid and liquid streams resulting from the subsequent enzymatic hydrolysis step. The resulting sugar syrup stream and the lignin stream, in either solid or liquid form, must be carefully considered for the deployment of a fully integrated biorefinery, in which biomass is used as a source of fuels and chemicals in a sustainable and environmentally friendly way.
v3-fos-license
2024-03-08T06:16:03.072Z
2024-03-06T00:00:00.000
268260779
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.202306901", "pdf_hash": "00bd6b09888a5c79b9c6bbef5bf6a0e8ebdd677b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43576", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "sha1": "1a845f26e0331fe9e7659bc644495be0f75680a2", "year": 2024 }
pes2o/s2orc
Magnetic‐Assisted Control of Eggs and Embryos via Zona Pellucida‐Linked Nanoparticles Abstract Egg and embryo manipulation is an important biotechnological challenge to enable positioning, entrapment, and selection of reproductive cells to advance into a new era of nature‐like assisted reproductive technologies. Oviductin (OVGP1) is an abundant protein in the oviduct that binds reversibly to the zona pellucida, an extracellular matrix that surrounds eggs and embryos. Here, the study reports a new method coupling OVGP1 to magnetic nanoparticles (NP) forming a complex (NPOv). NPOv specifically surrounds eggs and embryos in a reversible manner. Eggs/embryos bound to NPOv can be moved or retained when subjected to a magnetic force, and interestingly only mature‐competent eggs are attracted. This procedure is compatible with normal gamete function, in vitro fertilization, and embryo development, resulting in the birth of healthy offspring. The results provide in vitro proof‐of‐concept that eggs and embryos can be precisely guided in the absence of physical contact by the use of magnets. Introduction ART require manipulation of gametes and embryos without compromising their ability to fertilize and develop. [4] Nanoparticles are being developed for tagging or drug delivery to increase the odds of implantation, [29] but not for egg/embryo handling during ART. Despite their potential utility in biomedicine, the modes of administration, dose and duration, size, shape, charge, and composition of the surface coating of NP raise concerns of potential toxicity. [30][35] Nevertheless, there needs to be an emphasis on safety in exploring interactions of NP and biological reproductive materials (sperm, eggs, or embryos) to improve the efficiency of ART. Nanotechnology has great potential in reproduction, [36] but the development of practical applications in ART remains challenging. [30] Our group previously reported that recombinant OVGP1 (oviductin-Ov, an endogenous protein of the oviductal fluid) binds to the surface of the ZP. [37] Here we report the development of NP binding to the ZP via the OVGP1 protein (NPOv). This strategy enables efficient binding between the ZP and NP in eggs and embryos with no effect on gamete function or embryo development. We also provide insight into the use of the NPOv complex in ART, including vitrification, egg/embryo positioning, and lab-on-chip assays.
Safe Affinity-Based Technique to Externally Label Eggs and Embryos with Nanoparticles (NP) Successful conjugation of pig and rabbit recombinant OVGP1 protein to NP, designated NPOv, was confirmed by immunoblot.When probed with anti-OVGP1 (porcine) and penta-histidine (porcine and rabbit) antibodies, bands were present with the expected 75 kDa molecular mass (Figure 1A-middle panels).The conjugation to NP was stable for at least 31 days (Figure 1A-right panels) and NPOv was present uniformly on the ZP (Figure 1B) as the embryo developed from 2-cells to blastocyst (Figure 1B) (tested with porcine eggs and embryos).NPOv distribution was assessed in eggs because of the homogeneity of their size (≈150 μm).The ZP area covered by NP was significantly lower when eggs were incubated with control NP compared to those incubated with NPOv after incubation for 0.5, 1, and 6 h (p < 0.05).The maximum area covered by NP (≈40%) was obtained after 1 h of incubation in the NPOv-20 group (Figure 1B).These results provide evidence that NP specifically bound to the ZP when conjugated to OVGP1, although a low level of non-specific binding of control NP was observed that was dependent on time of incubation. To determine if NPOv bound to eggs had any effect on gamete function, we determined oxidative phosphorylation and glycolytic rates with OCR (oxygen consumption rate) and ECAR (extracellular acidification rate), respectively, after co-incubation with NPOv using an extracellular flux analyzer.No differences were observed in OCR and ECAR parameters between NPOv-eggs (10 and 20 groups) and the control group (without NPOv) (Figure 1C; Figure S1A, Supporting Information).NPOv incubation did not affect either sperm metabolism or motility (Figure S1B,C, Supporting Information).OVGP1 has been reported to be involved ZP hardening, [38] a phenomenon which could impede sperm penetration during fertilization.To investigate this possibility, we assessed ZP hardening by a digestion assay and observed no effect of the presence of NPOv on digestion times (control = 45.76 ± 18.5 s; NPOv = 43.22 ± 15.77 s; p > 0.05) (Figure 1C).To further assess eggs competence, eggs bound to NPOv were fertilized in vitro (Figure 1D) and their developmental rates were compared to a control group.One that was not incubated with NPOv and another where NPOv were mechanically separated from eggs by gently pipetting until detachment of the NP to test any detrimental effect of NP separation.The presence of NPOv around the ZP at fertilization did not affect egg penetration, monospermy, and IVF efficiency (representing the final number of putative zygotes per 100 penetrated eggs) (p > 0.05) (Figure 1D upper graph).No differences were observed between experimental groups in terms of embryo development (cleavage and blastocyst rates) and quality (diameter and number of cells per blastocyst) (p > 0.05) (Figure 1D lower graphs). 
Effect of NPOv on Reproductive Performance and Body Weight at Birth in Rabbits The OVGP1 rabbit protein was conjugated with NP and tested on two embryo stages (Figure 2A): zygote-2-cells and late morula/early blastocyst.Following an additional 72 h for zygote-2-cells and 24 h for late morula/early blastocyst of in vitro culture, embryos exposed to NPOv exhibited a comparable developmental capacity to reach the hatching blastocyst stage when compared to the control group (92.0 ± 5.43% vs 91.0 ± 3.90%) (p > 0.05) (Figure 2B).These findings were similar to observations with pigs.Thus, early development was not perturbed by NPOv binding in rabbits and pigs.To further test the safety of NPOv, pronuclear embryos were collected 14 h post-insemination, exposed to NPOv and transferred to foster mothers (Figure 2B, lower row).At day 2 and 6, respectively of the transfer, oviducts and uterine horns were collected for tissue evaluation.No histological abnormalities were found and cell proliferation, assessed by Ki-67, was similar between groups (p > 0.05) (Figure 2C).The expression of CD3, a marker of inflammation, was not affected by the presence of NPOv (p > 0.05), although the female reproductive tract that did not receive embryos (non-gest) showed a significant increase in CD3 positive cells compared with the rest of the groups (p ≤ 0.0001) (Figure 2C; Figure S2, Supporting Information).For the remaining foster mothers, there were no differences in terms of implantation (57/105 and 52/90 for (+)NPOv and (−)NPOv, respectively), offspring rate (54/105 and 43/90 for (+)NPOv and (−)NPOv, respectively), or body weight at birth between NPOv and control groups (p > 0.05) (Figure 2D). Safe Magnetic Force-Based Technique to Manipulate Eggs and Embryos To take advantage of the intrinsic superparamagnetic parameters of the NP used in NPOv, we developed an assay to isolate eggs/embryos by magnetic force.For this purpose, we used a magnetic stand to evaluate the number of attracted and nonattracted eggs/embryos under varying times and doses of NPOv.After ZP enclosed eggs were incubated with NPOv for 0.5, 1, and 6 h, ∼70% to 90% of the NPOv were attracted by magnetic force (Figure 3A, left graph).The magnetic attraction of eggs incubated with NP alone increased with time (15.7% at 0.5 h; 20.0% at 1 h and 77.2% at 6 h) (Figure 3A, left graph).These results together with those shown in Figure 1 collectively suggest that at 30 min, 25% of the ZP was covered by NPOv and most of the NPOv eggs were attracted by a magnetic force.In the case of embryos, 2 h of NPOv-incubation was necessary for most NPOv-embryos (≈90%) to be attracted by magnets (Figure 3A, middle graph).We determined that some eggs were not attracted (≈20%) by the magnetic field.As the ZP is modified during egg maturation, [39] we speculate that incompetent eggs would be less able to bind OVGP1.To test this hypothesis, we determined the maturation stage of attracted and non-attracted eggs.Most of the eggs that were not attracted were immature (88% vs 12%), whereas most of those attracted were mature (74% vs 26%) (p < 0.001) (Figure 3A right graph).These data document that specific binding of NPOv to the surface of the ZP correlates with their maturation state and more mature eggs are attracted by magnetic forces. 
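As a rough consistency check of the association reported above between egg maturation state and magnetic attraction, the snippet below runs a chi-square test on a 2x2 contingency table. The counts are hypothetical, chosen only to reproduce the quoted proportions (about 74% of attracted eggs mature, about 88% of non-attracted eggs immature) over roughly 200 oocytes; they are not the study's raw data.

```python
# Chi-square test of independence between attraction (rows) and maturation
# (columns). Counts are hypothetical and only mimic the percentages quoted
# above; they are not the raw data of the study.
from scipy.stats import chi2_contingency

table = [
    [111, 39],  # attracted:     ~74% mature, ~26% immature
    [6,   47],  # non-attracted: ~12% mature, ~88% immature
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```

With counts of this magnitude the test returns p well below 0.001, in line with the significance level reported above.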
To confirm the competence of NPOv coated eggs isolated by magnetic force, IVF was performed.Eggs were incubated for 30 min with NPOv, isolated by magnets and compared to nontreated eggs.Development to cleavage (58.03% to 68.03%) and to blastocysts (21.86% to 29.46%) was similar in both groups (p > 0.05) (Figure 3B).Using RNA-seq, we investigated any effect on the transcriptomes of blastocysts.A total of 15957 transcripts were detected by strand-specific RNA-seq of which 67 (14 up-regulated and 53 down-regulated) were differentially expressed (p adj< 0.1).Only 46 genes (7 up-regulated, 39 downregulated) were differentially expressed at log 2 FC (fold change) ≤1.5 (Figure 3B; Figure S3, Supporting Information).Additionally, oxidative stress was not altered in eggs (evaluated by intracellular ROS levels, p > 0.05) or embryos (evaluated by oxidative stress related-genes, p > 0.05) when subjected to a magnetic field (Figure S4, Supporting Information).Based on these observations, binding magnetic NPOv to the ZP surrounding eggs and embryos for isolation, had no detrimental effect on development and a very minor effect on the transcriptome (<0.3%). Although NPOv did not adversely impact fertilization and early development, we also investigated their removal from the ZP.We tested two protocols: chemical (trypsin) and/or mechanical (pipetting).Trypsin incubation for 75 min removed most NPOv from the ZP and reduced magnetic attraction from 100% to less than 20% of eggs (Figure 3C upper panels).Mechanical treatment (with or without trypsin) reduced magnetic attraction of NPOv coated eggs to 0% after 75 times of gentle pipetting (Figure 3C lower panels).There were no differences in zona matrix thickness between control and trypsin treated groups (19.05 ± 1.77 μm (n = 22) and 19.94 ± 1.54 μm, respectively (n = 23), p = 0.282). 
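The thresholds used above to call differentially expressed genes (adjusted p < 0.1, then a |log2 fold change| cut of 1.5) amount to a simple table filter. The sketch below assumes a DESeq2-style results table with 'padj' and 'log2FoldChange' columns; the file name and column names are assumptions, not the study's actual files.

```python
# Minimal sketch of the two-step DEG filter described above, assuming a
# DESeq2-style results table. File and column names are assumptions.
import pandas as pd

res = pd.read_csv("blastocyst_rnaseq_results.csv", index_col=0)

deg_loose = res[res["padj"] < 0.1]                                 # 67 DEGs reported above
deg_strict = deg_loose[deg_loose["log2FoldChange"].abs() >= 1.5]   # 46 DEGs reported above

up = (deg_strict["log2FoldChange"] > 0).sum()
down = (deg_strict["log2FoldChange"] < 0).sum()
print(f"{len(deg_loose)} DEGs at padj<0.1; {len(deg_strict)} at |log2FC|>=1.5 "
      f"({up} up, {down} down)")
```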
Figure 1. Nanoparticles (NP) conjugated with OVGP1 can attach to the zona pellucida (ZP) of eggs and embryos without impairing gametes, in vitro fertilization, or embryo development. A) OVGP1 was successfully conjugated to NP. Schematic (left panel) showing the truncated porcine OVGP1 (pOVGP1t) and rabbit OVGP1 (rOVGP1) recombinant proteins and the NP used in the study. After conjugation of the OVGP1 recombinant proteins to NP (NPOv), the resultant protein samples were separated by electrophoresis and analysed by western blot (WB) (polyclonal antibody anti-OVGP1) (left WB). The Load lane indicates pOVGP1t (6 μg; ≈75 kDa); the Unbound lane represents the medium collected after the conjugation of NP, where the presence of a 75 kDa band means that not all the protein was conjugated; the Wash 1 and Wash 2 lanes correspond to the two washes of the NP performed after NPOv conjugation; the Eluate lane is the sample containing only pOVGP1t bound to the NP. The middle WB shows the truncated porcine pOVGP1t and rabbit rOVGP1 recombinant proteins before conjugation and the eluted fraction from 20 μL of NP conjugated to pOVGP1t (Nano-pOVGP1t) and NP conjugated to rOVGP1 (Nano-rOVGP1). M = molecular marker (kDa). The stability of the NPOv conjugation was evaluated over time (right WBs); each time point indicates a sample with NP bound to OVGP1 after the days indicated. WBs are representative of experiments repeated three times. B) NPOv are able to attach to the ZP of porcine eggs and embryos. Schematic (left panel) showing NPOv incubation with eggs and embryos. The graph and bright-field microscope images (middle panels) show the distribution of NPOv around the ZP surface (%), which varies between the control (NP without OVGP1) and NPOv (10 and 20) groups at each time point (p < 0.05). The 10 and 20 groups indicate the volume (μL) of NP suspension added to a well of 500 μl. Data are presented as mean ± SEM (n = 3 replicates). Qwin software (Leica Microsystems Ltd., Barcelona, Spain) was used to evaluate the distribution of NP. Scale bar, 25 μm. Scanning electron microscope (SEM) images (right images) show NPOv (white dots) distributed around the ZP of an egg (Scale bar, 25 μm). C) Eggs are not compromised by the presence of NPOv. The metabolism of the eggs (oxygen consumption rate-OCR, pmol/min, and extracellular acidification rate-ECAR, milli-pH/min) was similar regardless of the presence and quantity of NPOv (p > 0.05). Metabolic measurements were performed with a Seahorse XFe96 (Agilent Seahorse analyzer, Agilent Technologies). Data are presented as mean ± SEM (n = 3 replicates, 20 eggs per group and replicate). The ZP digestion time was not affected by NPOv presence (Mann-Whitney U test, p > 0.05). Data are presented as mean ± SD. Eggs in the control group were incubated without NPOv. D) In vitro fertilization (IVF) and embryo development were not affected by NPOv. Mature oocytes, after incubation with NPOv (and after NPOv were removed by mechanical separation using gentle pipetting), were subjected to IVF (n = 7 replicates), showing similar output (penetration, monospermy, efficiency; %) between groups (Chi-Square test, p > 0.05). Cleavage (%), blastocyst rate (%), and blastocyst quality (diameter, μm, and number of cells per blastocyst; data expressed as mean ± SD) showed no differences between groups (Chi-Square test and Kruskal-Wallis test, p > 0.05) (n = 10 replicates).
Biophysical Characterization of Magnetic Force-Based Manipulation Technique in ART After establishing an efficient method of egg/embryo isolation by NPOv and magnets, we focused on the dynamical behavior of NPOv-eggs under different magnetic forces (Figure 4A,B; Figure S5, Supporting Information). NPOv-eggs were exposed to magnetic fields previously characterized by a theoretical model (Figure S6, Supporting Information). To estimate the effective range of different magnets, NPOv-eggs in PBS media were located on the axis of three magnets (designated S-02-02, S-02-05, S-03-06) at distances at which they were barely attracted (Figure 4A). The time needed for the NPOv-eggs to reach the magnet was recorded. Figure 4B (first row of graphs) shows the effective range (in mm) and the capture time (in seconds) obtained for each magnet. Both parameters increased with the size of the magnet. The trajectory graphs (Figure 4B, second, third, and last rows) show the dynamics of NPOv-egg movement from their starting points to attachment on the magnets. NPOv-eggs initially move at a slow speed and an almost null acceleration (i.e., the net force on the particle is nearly zero), independent of the magnet used. Therefore, the attracting magnetic force, which increases when approaching the magnet, is partially balanced by resistance forces. In this case, two different friction forces are postulated: a drag proportional to the speed, due to movement in fluid, and a constant frictional force due to the contact between the surfaces of the petri dish and the egg. The buoyant force exerted on the egg is less than gravity, and the NPOv-egg is touching the surface of the dish. When the distance between the magnet and the NPOv-egg decreases, the magnetic force increases. The ferromagnetic response of the NPOv-egg is linear for low magnetic force and becomes non-linear for high magnetic force due to the magnetic force produced by the particle itself. Because of this non-linear response, an abrupt change in the acceleration of the particle was observed. The distance at which this drastic change in acceleration occurs depends on the external magnetic field and the number of ferromagnetic NP on the surface of the ZP surrounding the egg. Once fully evaluated, the ability of magnets to effectively isolate NPOv-eggs may be of interest for ART. We suggest that magnetic pipettes would be superior to conventional aspiration for egg/embryo isolation (Figure 4C). In this method, NPOv-eggs/embryos were placed in media (PBS or vitrification media) and thereby the structures were manipulated based on the magnetic field with a high efficacy of nearly 100% in all cases, independently of the structure (egg or embryo), species (rabbit or pig), or media (PBS or vitrification) (Figure 4C, upper row). Moreover, the time consumed to move eggs between wells was reduced using the magnetic pipette in comparison with the aspiration pipette in both media used, PBS (p < 0.05) and vitrification (p < 0.001), except for the one-egg group when using the vitrification medium (p = 0.396) (Figure 4C, lower row).
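To make the force balance sketched above concrete, the short script below integrates a one-dimensional toy model of an NPOv-egg pulled toward a magnet against a viscous drag proportional to its speed and a constant sliding friction with the dish. The 1/d^4 force law and every parameter value are illustrative assumptions for the sketch, not quantities fitted or reported in this study.

```python
# Toy 1D model of the force balance described above: magnetic pull versus
# viscous drag and constant sliding friction. The force law F(d) ~ 1/d^4 and
# all parameter values below are assumptions chosen only for illustration.
m      = 2e-9    # effective egg mass [kg] (assumed)
c      = 1.4e-6  # Stokes-like drag coefficient [kg/s] (assumed)
f_fric = 1e-12   # constant sliding friction force [N] (assumed)
k      = 3e-22   # magnetic force scale [N m^4] (assumed)

def f_mag(d):
    """Assumed attractive force magnitude at distance d [m] from the magnet."""
    return k / d**4

dt, t, t_max = 5e-4, 0.0, 300.0
d, v = 2.0e-3, 0.0                      # start 2 mm from the magnet, at rest
while d > 75e-6 and t < t_max:          # stop near the egg radius or after t_max
    net = f_mag(d) - c * v - f_fric     # net force while moving toward the magnet
    if v == 0.0 and net < 0.0:          # pull too weak to overcome static friction
        break
    v = max(v + (net / m) * dt, 0.0)    # explicit Euler update of the speed
    d = max(d - v * dt, 1e-6)           # egg moves toward the magnet
    t += dt

print(f"illustrative capture time ~ {t:.1f} s from 2 mm")
```

The qualitative behaviour matches the description above: the egg creeps slowly far from the magnet, where the pull barely exceeds friction, and accelerates sharply as the distance closes.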
We describe a new technology to attach magnetized NP to the zona matrix for isolation of eggs and embryos. OVGP1 is the most abundant protein in the oviduct and uterus that binds to the ZP in a wide range of mammals. [42] The N-terminal domain is highly conserved among species and is responsible for binding to the ZP. The C-terminal domain varies among mammals and dictates its order-specific function, including the ability to penetrate through the zona matrix. [37] We used recombinant OVGP1 proteins (pOVGP1t, 481 aa; rOVGP1, 475 aa) attached to NP, which prevents internalization. Our studies demonstrate that short-term incubation of eggs/embryos with NPOv is sufficient for targeted binding to the outer aspect of the zona matrix. The attachment of NPOv or its presence in media was detrimental neither to fertilization nor to embryonic development. Even after removal of NPOv, eggs could be efficiently fertilized and developed in vitro to blastocysts. Co-incubation of NP with eggs and embryos had no harmful effects. [24,25,43] NPOv-embryos transferred in vivo into rabbit uteri had no adverse morphological effect on the endometrium at implantation [44,45] and there was no effect on the number of offspring or body weight at birth. Despite advances, developmental rates of in vitro produced embryos are suboptimal [46,47] and it is important to assess microfluidics and micro-nanoscale devices [6] that approximate in vivo conditions. These technologies need to precisely guide eggs/embryos through pre-designed medical equipment organized in 2 and 3 dimensions [48] and can be applied to ART. Magnetic NP permit manipulation of eggs/embryos without physical contact. With a brief co-incubation of eggs with NPOv, most in vitro matured eggs can be retrieved by a magnetic field. Most eggs recovered were competent to be fertilized in vitro, while those that failed to be attracted by the magnet were immature, which is a further advantage of this technology. Selecting cells with magnets can have adverse effects depending on the magnetic field intensity, [49,50] but with relatively weak magnets we have not observed any adverse effects on eggs and embryos. Eggs rescued by magnetic force and fertilized in vitro have normal development, and their transcriptomes are not significantly different from controls. We further demonstrated that with simple pipetting or very light trypsin treatment, the eggs lose their ability to be attracted to the magnetic field. Thus, the magnetic-attraction property acquired by the presence of NPOv in the outer part of the ZP is reversible.
The trajectory and speed of egg movement were characterized to explore the automatization of the movement of reproductive cells by magnetic force.Following the protocol described here, all eggs behaved similarly and with similar kinetics, so magnetic manipulation of eggs could be a crucial tool to guide cells toward pre-designed organizations in 3D cell cultures (i.e., oviduct-on-achip).Moreover, cell manipulation using magnetic labels offers high purity, selectivity, and recovery rates for cell separation. [48]rocesses including in vitro maturation, in vitro fertilization, embryonic culture, and development or vitrification require the manipulation of eggs and embryos to provide required media and reagents.Here we provide evidence in proof-of-concept experiments that eggs can be easily and quickly manipulated by magnetic pipettes using different conditions to avoid manipulation by aspiration.This may be an advantage for the manipulation of groups of eggs and embryos recovered from a single female for processing.However, additional studies are necessary to establish the safety of the intervention on human subjects and other animal species. In summary, we have determined that ferric NP attached to OVGP1 (NPOv) and bound to the ZP can manipulate eggs and embryos.Using NPOv, competent matured eggs were retained by an external magnetic field for use in ART while noncompetent eggs were discarded.We have demonstrated that NPOv-eggs/embryos can be moved in a desired direction which augurs well for use in state-of-art microfluidic technologies (i.e., a complete on-chip IVF platform).In addition, we have demonstrated that eggs and embryos can be manipulated using magnetic pipettes to avoid mechanical aspiration.We have thus demonstrated a robust, non-toxic technique that can potentially be used for gamete selection, embryo culture, and represents a new paradigm that facilitates entrance into a new era of ART. Experimental Section Reagents: Unless mentioned, reagents used were provided by Sigma-Aldrich (Madrid, Spain). OVGP1-Nanoparticles Conjugation: Recombinant truncated porcine and rabbit OVGP1 plasmid expression, protein production, and purification were previously described. [37]Carboxyl-Modified Paramagnetic particles (-COOH) (Estapor) (NP) with a diameter of 0.365 μm and a concentration of 1 mg ml −1 were used in this study.A magnetic rack (Cytiva Life Sciences MagRack6, Fisher Scientific) was used to handle NP. 
10 μl of NP, previously washed twice in 500 μl of milli-Q water after a vigorous vortex agitation, were used for conjugation. First, the NP surface was activated by re-suspending in 240 μl of activation buffer (sodium phosphate 100 mM, pH 6.2), with 30 μl of conjugation buffer 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide HCl or EDC (ProteoChem, Hurricane, UT, USA) (50 mg ml −1 diluted in water) and 30 μl of conjugation buffer Sulfo-NHS (50 mg ml −1 diluted in water) (ProteoChem, Hurricane, UT, USA). Afterward, gentle agitation for 20 min at room temperature (RT) was performed. Subsequently, NP were washed twice in 500 μl of coupling buffer (sodium bicarbonate 0.1 M, pH 8). Then, NP were incubated with 6 μg of pOVGP1t in 300 μl of coupling buffer (sodium bicarbonate 0.1 M, pH 8) at RT for 2 h. Finally, NP were washed twice in 20 mM sodium phosphate buffer and kept in this medium at 4 °C until use, in a final volume of 400 μl. Electrophoresis and immunoblot with a polyclonal anti-OVGP1 antibody (Abcam, Cambridge, United Kingdom) were performed to test the conjugation. The ImageQuant LAS 500 image analyzer was used to detect the protein. Porcine Gamete Collection, In Vitro Fertilization (IVF), and Embryo Production: Cumulus-oocyte complexes (COCs) were obtained from ovaries (prepubertal females) collected at a local slaughterhouse and processed. [51] Briefly, COCs were collected by aspiration from antral follicles (3-6 mm diameter) and washed in Dulbecco's PBS with 1 mg ml −1 polyvinyl alcohol (DPBS-PVA). The COCs (50-55 per well) were incubated in 500 μl of NCSU-37a medium, previously balanced for 3 h at 38.5 °C/5% CO 2, for 20-22 h in a Nunc four-well dish. Subsequently, the COCs were transferred to 500 μl of NCSU-37b medium free of eCG, hCG, and dibutyryl cAMP, where they were incubated for another 20-22 h under the same conditions. [52] Porcine IVF was performed with sperm (sperm-rich fraction from proven fertile boars) previously selected by discontinuous Percoll gradient, and the concentration was adjusted to 1.5 × 10^6 sperm/ml. [53] For porcine embryo culture, putative zygotes were transferred to embryo culture medium NCSU-23a for 24 h at 38.5 °C, 5% CO 2, and 7% O 2 . At 48 h post-insemination (hpi), the cleavage rate was evaluated, and 2-4 cell embryos were transferred to embryo culture medium NCSU-23 until day 7 post-insemination (dpi), when the blastocyst formation rate was evaluated. [52] Collection of Rabbit Embryos and In Vitro Development: Fifteen nulliparous New Zealand White females underwent superstimulation using a combination of FSH (Corifollitropin alfa, 3 μg, Elonva, Merck Sharp & Dohme S.A.) and hCG (7.5 IU). [54] After 72 h of superstimulation, the females were inseminated with pooled semen from New Zealand bucks of proven fertility. Ovulation was induced with 1 μg buserelin acetate (Suprefact; Hoechst Marion Roussel, S.A., Madrid, Spain). Females were euthanized in two groups, at 22 h (n = 9) and 72 h (n = 6) after artificial insemination, and the reproductive tracts were immediately removed. Zygotes and 2-cell embryos, and late morulae and early blastocyst embryos, were recovered by washing each uterine horn with 10 ml of DPBS containing 0.2% (wt/vol) bovine serum albumin (BSA). The collected embryos were counted and evaluated according to IETS criteria. At 22 h, only zygotes (two corpuscles and two pronuclei) and 2-cell embryos were categorized as suitable embryos, while at 72 h, embryos at the late morula and early blastocyst stages, exhibiting a homogeneous cell mass, a spherical mucin layer and ZP, were categorized as suitable embryos.
Figure 3. NPOv incubation with eggs/embryos results in an efficient (and reversible) union, attracting them when subjected to a magnetic force without impairing further development. A) Eggs/embryos-NPOv respond to a magnetic force. Eggs/embryos were co-incubated with NPOv (upper panel). At each time point, eggs/embryos were placed in a 1.5 ml tube and subjected to a magnetic field (magnet, MagRack6, Cytiva). Thereafter, non-attracted and attracted eggs/embryos were removed and counted. For the control group, eggs/embryos were incubated with NP in the absence of Ov. The percentage of attracted eggs (left graph) was ≈80% for each time point evaluated (0.5, 1, and 6 h) (n = 5 replicates, a total of 322 eggs) (Chi-Square test, p < 0.001). In the case of embryos, the greatest level of attraction was observed after 2 h of incubation (middle graph) (n = 4 replicates, a total of 120 embryos) (Chi-Square test, p < 0.001). The non-attracted and attracted eggs were fixed and stained (Hoechst 33342) to evaluate the level of maturation (right panel). More than 85% of non-attracted eggs corresponded to immature oocytes (n = 4 replicates, a total of 203 oocytes) (Chi-Square test, p < 0.001). B) Embryo development of attracted eggs (NPOv-eggs) has a similar performance to eggs not subjected to a magnetic force. After IVF, zygotes were cultured in NCSU-23 media and evaluated for cleavage (48 hpi) and blastocyst development (7 dpi). The gene expression of the embryos was analysed by RNA-seq. To determine the differentially expressed genes (DEGs), the volcano plot shows Log 2 FC (x-axis) against the p-value (y-axis) of significant DEGs (p ≤ 0.05 and Log 2 FC ≥1, red circles) (n = 4 replicates, each replicate composed of a pool of ten blastocysts per group). C) The egg-NPOv union is reversible after chemical (trypsin) and/or mechanical (gentle pipetting) treatments. Trypsin removes the NPOv, reducing the attraction when subjected to a magnetic field from 100% to <20% of eggs in 75 min (upper panel, right graph) (n = 3 replicates, a total of 120 eggs, Chi-square test, p < 0.05). Mechanical treatment (combined or not with trypsin) reduced the fraction of attracted eggs to 0% (lower panel, right graph) (n = 3 replicates, a total of 90 eggs, Chi-square test, p < 0.05).
NPOv and Egg/Embryos Incubation and Attachment Evaluation by Microscope: Eggs or embryos were co-incubated with NPOv (10 or 20 μl) in 500 μl of DPBS-BSA in a Nunc 4-well dish for at least 20 min at 38.5 °C (porcine) or at 22-25 °C (rabbit). The percentage of the porcine egg surface covered by NPOv was measured with a computer-assisted image analyzing system (Q5501W, Leica Microsystem Imaging Solutions Ltd, Cambridge, United Kingdom) using Leica Qwin Pro, Version 2.2 software. This system acquires high-definition digital images of the sample. The areas corresponding to the NPOv and to the complete egg were selected by automatic thresholding of the grey levels of the NP and of the egg, respectively. For scanning electron microscopy (SEM), groups of 10-20 in vitro matured and denuded porcine eggs were fixed in 2% glutaraldehyde at 4 °C for 2 h, followed by three washes in DPBS. The evaluation was carried out using an ApreoS scanning electron microscope (Thermo Fisher Scientific).
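The coverage measurement described above (automatic grey-level thresholding of the NP and of the whole egg with Leica Qwin) can be approximated with open-source tools. The sketch below uses scikit-image Otsu thresholding as a stand-in for the Qwin workflow; the file name, the assumptions that the egg is brighter than the background and that NP appear darker than the ZP, and the use of whole-egg projected area rather than the ZP annulus are all simplifications, not the study's actual procedure.

```python
# Open-source approximation (not the Leica Qwin workflow used in the study) of
# the NP-coverage measurement: grey-level thresholds for the egg and for the
# NP signal, then the covered area fraction. Assumptions noted in the lead-in.
from skimage import io, filters, morphology

img = io.imread("egg_brightfield.tif", as_gray=True)   # hypothetical image file

# Segment the whole egg (assumed brighter than the background).
egg_mask = img > filters.threshold_otsu(img)
egg_mask = morphology.remove_small_holes(egg_mask, area_threshold=10_000)

# Within the egg, take the darkest pixels as NP signal (assumption).
np_mask = egg_mask & (img < filters.threshold_otsu(img[egg_mask]))

coverage = 100.0 * np_mask.sum() / egg_mask.sum()
print(f"NP coverage of segmented egg area: {coverage:.1f}%")
```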
Metabolic Activity Analysis in Gametes: Metabolic measurements were assessed after gametes-NPOv co-incubation.Seahorse XFe extracellular flux analyzer (Agilent Technologies, Inc., CA, EE.UU) with 96-well cell culture plate was used to measure metabolic status by real-time oxygen consumption rate (OCR, pmol of min −1 ) and extracellular acidification rate (ECAR, milli-pH min-1).The seahorse assay plate was equilibrated and calibrated with sterile water (sensor cartridge placed on top) overnight at 37 °C in absence of CO 2 .The Real-Time ATP Rate Assay kit was employed.After calibration, the 96-well plate was loaded with eggs (20 eggs per well) or sperm (1 × 10 6 sperm/well) in a final volume of 50 μl per well.For background corrections, four wells were left without cells.After 20-30 min of reading, a report was generated, and the data were collected and analyzed. Assessment of ZP Digestion: Matured eggs without cumulus cells were washed twice in PBS before being placed in 50 μl droplets of 0.5% pronase (wt/v in PBS).The dissolution of the ZP was continuously observed under a stereomicroscope.The time required for complete dissolution of the ZP was recorded. Evaluation of Porcine Oocyte Maturation, IVF, and Embryo Development: For the evaluation of oocyte maturation or IVF output, the cells were fixed for 15 min in 10% glutaraldehyde in PBS, stained for 15 min with 1% Hoechst 33342 in PBS, washed in PBS and mounted on glass slides for evaluation by epifluorescence microscopy.An oocyte was considered mature (egg) when the nucleus was in metaphase II stage and the first polar body was extruded.In the case of IVF, three parameters were calculated: penetration rate (%) (percentage of eggs with one or more male pronuclei of total eggs); monospermy rate (%) (percentage of penetrated eggs with only one male pronuclei), and efficiency rate (%) (percentage of eggs that were penetrated and monospermic from number of eggs inseminated). For embryo culture evaluation the cleavage rate (at 48 hpi) and the blastocyst rate (on day 7 dpi) were evaluated.On day 7 dpi blastocysts were photographed, and image analyses were performed by ImageJ software for diameter analysis.Furthermore, blastocysts were fixed for 15 min in 10% glutaraldehyde in PBS, stained for 15 min with 1% Hoechst 33342 in PBS, washed in PBS, and mounted on glass slides for later evaluation of number of cells by epifluorescence microscopy. Evaluation of Rabbit In Vitro Embryo Development: Zygotes-2 cell embryos and late morulae-early blastocyst embryos were cultured in vitro for 72 and 24 h, respectively.The culture was conducted in four-well multidish plates with 500 μl of TCM199 supplemented with 10% fetal bovine serum.The incubation temperature was maintained at 38.5 °C in a 5% CO 2 air environment.Following the in vitro culture, the embryos were morphologically evaluated for their developmental progression up to the hatching blastocyst stage using a stereomicroscope. Implantation Rate, Offspring Rate, and Body Weight at Birth in the Rabbit Model: Zygotes and 2-cell embryos (NPOv and control) were transferred according to the previously described method. [55]Briefly, ovulation was induced in 16 receptive females (determined by vulva color) through the administration of 1 μg i.m. of buserelin acetate (Hoescht, Marion Roussel, Madrid, Spain).On the day of the embryo transfer, foster mothers were anesthetized by an i.m. 
injection of 4 mg kg −1 of xylazine (Bayer AG, Leverkusen, Germany), followed 5-10 min later by an intravenous injection into the marginal ear vein of 0.4 ml kg −1 of ketamine hydrochloride (Imalgene 500, Merial SA, Lyon, France). During laparoscopy, 3 mg kg −1 of morphine hydrochloride (Morfina, B. Braun, Barcelona, Spain) was administered intramuscularly. The embryo transfer was performed by laparoscopy, introducing the zygote-2 cell embryos into the oviducts (≈10 per oviduct). After the transfer, females were treated with antibiotics (4 mg kg −1 of gentamicin every 24 h for 3 days, 10% Ganadexil, Invesa, Barcelona, Spain) and analgesics (0.03 mg kg −1 of buprenorphine hydrochloride [Buprex, Esteve, Barcelona, Spain] every 12 h for 3 days and 0.2 mg kg −1 of meloxicam [Metacam 5 mg mL −1 , Norvet, Barcelona, Spain] every 24 h for 3 days). For the evaluation of the inflammatory response to NP in the oviduct and uterus (see below), each foster mother (n = 6) received NPOv zygote-2 cell embryos in one oviduct and control zygote-2 cell embryos in the other. The transfers to either the right or left oviduct were randomized. In the remaining recipient females (n = 10, with 105 and 90 embryos transferred for (+)NPOv and (-)NPOv, respectively), the survival rate was assessed using laparoscopy, following the aforementioned procedure. The assessment included recording the implantation rate (number of implanted embryos on day 14 out of the total embryos transferred) and the birth rate (offspring born/total embryos transferred). Additionally, the body weight of the offspring was measured at birth. Inflammatory Response in the Oviduct and Uterus of Rabbits: Two and six days following the transfer of zygote-2 cell embryos, six foster mothers were euthanized, and reproductive tracts were collected from each experimental group (NPOv and control). Oviducts and uterine horns were fixed in 10% buffered formalin for 24 h, processed, and embedded in paraffin. Sections (3 μm) from the samples were then stained with hematoxylin and eosin (H&E) for conventional histopathological evaluation and to determine embryo implantation. To determine changes in the proliferative rate of epithelial cells from the oviducts, an indirect polymer-HRP labelled immunostaining for Ki-67 was performed. Measurement of Intracellular ROS (Reactive Oxygen Species) Levels: The oxidative stress of porcine eggs was analyzed by intracellular ROS (reactive oxygen species) formation using a DCFDA/H2DCFDA-Cellular ROS Assay Kit (ab113851, Abcam, USA), performed according to the manufacturer's instructions. Briefly, denuded eggs were incubated with 20 μM DCFDA in PBS containing 0.1% BSA for 45 min at 38.5 °C in the dark. After incubation, eggs were washed twice in PBS and then placed on glass slides. The fluorescent signal (in individual eggs) was evaluated immediately by a fluorescence microscope (Leica DMC6200). The fluorescent intensity was analysed using Leica Application Suite X software after subtraction of the background intensity and normalization to control eggs. Analysis of NPOv-Egg Tracking When Subjected to a Magnetic Field: The NPOv-egg was under the effect of an external magnetic field produced by commercially available neodymium magnets (Supermagnete, Gottmadingen, Germany), which were previously characterized by measuring the axial component of the magnetic field (13610.93 Teslameter and 13610.01 Axial Hall Probe, PHYWE Systeme GmbH & Co. KG, Göttingen, Germany) (Figure S6, Supporting Information). The dynamics of the NPOv-eggs were recorded under a stereomicroscope coupled to a camera and later digitalized (Tracker 6.0.8 software, https://physlets.org/tracker/). The temporal evolution of the position, as well as the velocity and the acceleration, were plotted as a function of the position, where velocity and acceleration were calculated by using finite differences. NPOv-Egg/Embryo Movement through a Magnetic Pipette: The efficiency of displacement of porcine eggs and rabbit embryos between wells using magnetic fields was evaluated in PBS and vitrification media (Kitazato, BioPharma, Shizuoka, Japan). First, NPOv-eggs/embryos were placed in four-well plates and the number of attached structures was counted in both media when the magnet was immersed in them. Moreover, the time spent to move porcine NPOv-eggs (in groups of 1, 10, and 20 eggs) from well 1 to well 4 (three egg passages in total) using magnetic versus aspiration pipettes (The Stripper, CooperSurgical) was evaluated. Additionally, NPOv-eggs were subjected to vitrification (Kitazato, BioPharma, Shizuoka, Japan) in groups of 1, 3, and 5 eggs to compare the time with each pipette. The vitrification process ends with the attachment of NPOv-eggs to the cap of the magnetic pipette in the last step, or with positioning of the NPOv-eggs in the Cryotop (Kitazato, BioPharma, Shizuoka, Japan) using the aspiration pipette.
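A minimal version of the tracking analysis described above (polynomial smoothing of the digitized position, then velocity and acceleration by central finite differences) can be written in a few lines of NumPy. The file name, column layout, and polynomial degree below are assumptions, not the study's Tracker output format.

```python
# Sketch of the tracking analysis described above: smooth x(t), then estimate
# velocity and acceleration by central finite differences. File name, column
# layout, and polynomial degree are assumptions.
import numpy as np

t, x = np.loadtxt("npov_egg_track.csv", delimiter=",", unpack=True)  # t [s], x [mm]

x_smooth = np.polyval(np.polyfit(t, x, deg=6), t)   # smoothing polynomial (degree assumed)
v = np.gradient(x_smooth, t)                        # velocity [mm/s], central differences
a = np.gradient(v, t)                               # acceleration [mm/s^2]

print(f"effective range ~ {x[0] - x[-1]:.2f} mm, capture time ~ {t[-1] - t[0]:.2f} s")
print(f"peak speed ~ {abs(v).max():.2f} mm/s, peak acceleration ~ {abs(a).max():.2f} mm/s^2")
```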
Statistical Analysis: Statistical analysis for sperm parameters was performed using SAS University Edition (SAS, 2016) software. All the motion parameters were compared with the mixed model of SAS. For other results, statistical analysis was performed using IBM SPSS v.23 (SPSS Inc., Chicago, IL, USA). Pearson's Chi-squared test was used to analyze percentage data (penetration, monospermy, efficiency, cleavage, and blastocyst rate). For embryo diameter and embryo cell number, the Kruskal-Wallis test was used after a normality test by Kolmogorov-Smirnov. For Seahorse results, a Shapiro-Wilk test was performed for normality assessment; since both parameters showed a normal distribution, an ANOVA test was used. For ZP digestion time data, a Kolmogorov-Smirnov test was carried out for normality evaluation and the Mann-Whitney U test for group comparison. For ROS data, the Shapiro-Wilk test was carried out for normality evaluation and the Kruskal-Wallis test for group comparison. A generalized linear model (GLM) including the rabbit embryo group [(+)NPOv and (-)NPOv] as a fixed effect was used. The error was designated as having a binomial distribution, using a probit link function. Binomial data for in vitro development, implantation rate, and offspring rate at birth were assigned as 1 if positive development had been achieved or 0 if it had not. Also, a GLM was fitted for body weight analysis, including the experimental group as a fixed effect and common litter as a random effect. Differences were considered statistically significant at p < 0.05. Ethics Approval Statement: The procedures involving the porcine species were approved by the Ethical Committee of the University of Murcia on 1 June 2020 (reference project PID2019-106380RB-I00 and ethical committee reference 567/2019). The animal study protocol involving rabbits was reviewed and approved by the "Universitat Politècnica de València" Ethical Committee prior to initiation of the study (research code: 2018/VSC/PEA/0116). Animal experiments were conducted in an accredited animal care facility (code: ES462500001091). All experiments were performed in accordance with relevant guidelines and regulations set forth by Directive 2010/63/EU EEC.
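For the binomial outcomes, the probit-link GLM described in the Statistical Analysis above can be reproduced with statsmodels. The data frame below rebuilds 0/1 implantation outcomes from the counts reported earlier (57/105 implanted for (+)NPOv and 52/90 for (-)NPOv); the exact model specification used in the study may differ, so this is a sketch of the approach rather than its implementation.

```python
# Sketch of a binomial GLM with a probit link for 0/1 implantation outcomes,
# rebuilt from the counts reported above (57/105 NPOv, 52/90 control). The
# study's exact model specification may differ.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "group":     ["NPOv"] * 105 + ["control"] * 90,
    "implanted": [1] * 57 + [0] * 48 + [1] * 52 + [0] * 38,
})

model = smf.glm("implanted ~ C(group)", data=data,
                family=sm.families.Binomial(link=sm.families.links.Probit()))
print(model.fit().summary())
```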
Figure 2. Reproductive performance and oviduct/uterine tissues are not affected after NPOv embryo transfer (rabbit model). A) Experimental design of NPOv application in the rabbit model. The schematic representation shows the incubation of embryos (zygote-2-cell and late morula stages) with NPOv after in vivo collection, followed by in vitro development or transfer to host mothers. B) In vitro development of NPOv embryos shows a similar performance to their counterparts without NPOv. The first row of images displays the zygote-2 cell stages and their in vitro development into blastocysts in the (-)NPOv and (+)NPOv groups (Scale bar, 0.1 mm). The second row of images shows late-morula embryos and their in vitro development into blastocysts in the (-)NPOv and (+)NPOv groups (Scale bar, 0.1 mm). A total of 111 zygote-2 cells and 109 morulas were used. The assessment of in vitro embryo development revealed similar results in both the zygote-2 cell and morula stages between the two groups ((-)NPOv and (+)NPOv) (p > 0.05). Data are expressed as mean ± SEM. C) The female reproductive tract is not affected after NPOv-embryo transfer. A total of 120 (-)NPOv and (+)NPOv zygote-2 cells were transferred (d0) to 6 foster mothers, with the (+)NPOv zygote-2 cells being placed in one oviduct and the (-)NPOv zygotes in the other oviduct. After 2 and 6 days (d2 = 3 females and d6 = 3 females) post-transfer, oviducts and uterine horns were collected for tissue evaluation. Representative histology micrographs with hematoxylin and eosin staining are shown. Box plots in the left graph indicate the rate (%) of proliferative cells (analyzed by Ki67), showing no differences between groups (p > 0.05). Box plots in the right graph indicate the inflammatory reaction (CD3 positive cells, %) in oviduct and uterine tissues. The results indicate no differences between the (-)NPOv and (+)NPOv groups (p > 0.05). Black arrows in the histological images indicate embryonic tissue. D) In vivo development of NPOv-embryos shows a similar performance to that of their counterparts without NPOv. Images show the laparoscopic embryo transfer procedure, implantation evaluation, and litter. A total of 195 zygotes, 105 for (+)NPOv and 90 for (-)NPOv, were transferred to ten recipients (five per group). The transfers from the in vivo trial were conducted in two replicates. Regarding the in vivo results, consistent with what was observed in vitro, no differences were found in terms of implantation and live births (p > 0.05). Additionally, the body weight of the kits was similar between both groups (p > 0.05).
Figure 4. NPOv-egg/embryo responsiveness to a controlled magnetic force. A) Scheme of the experimental design where NPOv-eggs were subjected to different magnetic fields [neodymium magnets: S-02-02 (2 mm ø × 2 mm height), S-02-05 (2 mm ø × 5 mm height), S-03-06 (3 mm ø × 6 mm height)] under a stereomicroscope to record the tracking of the movement (from the start of the movement caused by the magnetic attraction to the attachment to the magnet). The time (s), distance (x, mm), velocity (v, mm s−1), and acceleration (a, mm s−2) of the NPOv-eggs were analyzed. An example of the position as a function of time for a particle initially located at ≈2 mm from an S-02-02 magnet is provided (dots in the top-right graph), together with the position after smoothing by a polynomial fit (solid line in the same graph). The velocity and acceleration of the particle, obtained using central finite differences, are shown as a function of its position (dots in the middle- and bottom-right graphs); the solid lines were calculated by analytical differentiation of the fitted x(t) curve. B) Kinematics of NPOv-eggs under the action of the magnets presented in (A), where each column corresponds to the results obtained with a different magnet: S-02-02 (left column), S-02-05 (middle column), and S-03-06 (right column). Magnitudes of the effective range and the capture time (mean ± SD) for NPOv-eggs (graphs on the top row), position as a function of time (graphs on the second row from the top), velocity as a function of distance (graphs on the third row), and acceleration as a function of the distance (graphs on the bottom row) are shown for each magnet. C) NPOv-eggs/embryos are effectively attracted by a magnetic pipette, which is less time consuming than conventional pipettes. The aspiration pipette is the conventional method to move eggs/embryos in ART. A new system is provided using a magnetic pipette coupled to the NPOv system to move eggs/embryos by a controlled magnetic force. The magnetic pipette was brought close to eggs/embryos submerged in PBS or vitrification media, and the number of attached structures was counted. NPOv-eggs/embryos respond in a highly efficient manner (graphs on the right, upper row). The time needed to move eggs was reduced by the use of the magnetic pipette in comparison with the aspiration pipette in both PBS (n = 4 replicates; 416 eggs) and vitrification media (n = 5 replicates; 45 eggs) (lower row).
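The smoothing-plus-analytical-differentiation step mentioned in the caption can be sketched as follows (Python with NumPy; the polynomial degree and file name are illustrative assumptions rather than the values used by the authors):

import numpy as np

t, x = np.loadtxt("npov_egg_track.csv", delimiter=",", unpack=True)

# Smooth x(t) with a polynomial fit, then differentiate the fit analytically
x_fit = np.poly1d(np.polyfit(t, x, deg=6))   # degree 6 is an arbitrary choice
v_fit = x_fit.deriv(1)                       # velocity, mm/s
a_fit = x_fit.deriv(2)                       # acceleration, mm/s^2

t_dense = np.linspace(t.min(), t.max(), 500)
table = np.column_stack([t_dense, x_fit(t_dense), v_fit(t_dense), a_fit(t_dense)])
print(table[:5])   # first few rows: t, x, v, a along the smoothed trajectory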
v3-fos-license
2024-02-11T16:02:41.229Z
2024-02-01T00:00:00.000
267586943
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://assets.cureus.com/uploads/original_article/pdf/224114/20240209-1865-i8qgt3.pdf", "pdf_hash": "0fedd403d00e46e2419c659b038cf8250df99e11", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43577", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "sha1": "e100f07a3654e85c8ec938081b729f315d5e30a7", "year": 2024 }
pes2o/s2orc
Enhancing Postoperative Cochlear Implant Care With ChatGPT-4: A Study on Artificial Intelligence (AI)-Assisted Patient Education and Support Background: Cochlear implantation is a critical surgical intervention for patients with severe hearing loss. Postoperative care is essential for successful rehabilitation, yet access to timely medical advice can be challenging, especially in remote or resource-limited settings. Integrating advanced artificial intelligence (AI) tools like Chat Generative Pre-trained Transformer (ChatGPT)-4 in post-surgical care could bridge the patient education and support gap. Aim: This study aimed to assess the effectiveness of ChatGPT-4 as a supplementary information resource for postoperative cochlear implant patients. The focus was on evaluating the AI chatbot's ability to provide accurate, clear, and relevant information, particularly in scenarios where access to healthcare professionals is limited. Materials and methods: Five common postoperative questions related to cochlear implant care were posed to ChatGPT-4. The AI chatbot's responses were analyzed for accuracy, response time, clarity, and relevance. The aim was to determine whether ChatGPT-4 could serve as a reliable source of information for patients in need, especially if the patients could not reach out to the hospital or the specialists at that moment. Results: ChatGPT-4 provided responses aligned with current medical guidelines, demonstrating accuracy and relevance. The AI chatbot responded to each query within seconds, indicating its potential as a timely resource. Additionally, the responses were clear and understandable, making complex medical information accessible to non-medical audiences. These findings suggest that ChatGPT-4 could effectively supplement traditional patient education, providing valuable support in postoperative care. Conclusion: The study concluded that ChatGPT-4 has significant potential as a supportive tool for cochlear implant patients post surgery. While it cannot replace professional medical advice, ChatGPT-4 can provide immediate, accessible, and understandable information, which is particularly beneficial in special moments. This underscores the utility of AI in enhancing patient care and supporting cochlear implantation. Introduction The field of otolaryngology has experienced significant advancements in integrating artificial intelligence (AI) and telehealth technologies.Among these developments, Chat Generative Pre-trained Transformer (ChatGPT), an AI-driven language model has emerged as a promising tool, revolutionizing patient care and information dissemination.ChatGPT's role has been evolving in otolaryngology, particularly in postoperative care, highlighting its potential to enhance patient outcomes and healthcare accessibility [1,2].Telehealth has transformed the landscape of postoperative care, enabling remote monitoring, consultation, and patient education.This is especially crucial in cochlear implantation, a complex surgical intervention requiring meticulous post-surgical management. 
The postoperative phase is critical for patient recovery and the long-term success of the implant.However, consistent and reliable access to healthcare professionals can be a challenge.Here, ChatGPT's role becomes pivotal, offering an innovative solution to bridge the gap in patient education and support [3,4].ChatGPT, with its advanced language processing capabilities, can provide immediate, accurate, and comprehensible responses to patient queries.This aspect of AI is particularly beneficial for demystifying medical jargon and making postoperative instructions more accessible to patients.In the realm of telehealth, ChatGPT can serve as a first line of information, supplementing the efforts of healthcare professionals by addressing common patient concerns and questions.This enhances patient understanding and compliance and reduces the burden on healthcare systems [4][5][6]. This manuscript delves into the role of ChatGPT in enhancing postoperative care for cochlear implant patients within the realm of otolaryngology.It critically examines the efficacy of ChatGPT as an adjunctive resource in post-surgical patient management, exploring its capabilities in augmenting patient care and support amidst the growing influence of telehealth services.The manuscript examines several critical aspects of ChatGPT's application in otolaryngology.Firstly, it highlights ChatGPT's capability to provide immediate and accessible information, effectively bridging the knowledge gap for patients following surgery.Secondly, it explores the AI tool's function in simplifying complex medical directives and translating technical jargon into language that is easily comprehensible to patients.Furthermore, ChatGPT's potential to offer emotional support and address a range of non-medical patient concerns is discussed, contributing to a more comprehensive approach to postoperative care.The overall objective of this manuscript is to present a detailed understanding of ChatGPT's role and its growing importance in the contemporary healthcare context. Study design This study evaluated the effectiveness of ChatGPT-4, an advanced AI language model, in providing accurate and helpful information to cochlear implant patients in the postoperative phase.The primary aim was to assess whether ChatGPT-4 could be a reliable source of information for patients, particularly in scenarios where access to healthcare professionals is limited, such as when patients cannot physically reach a hospital.In this research, we employed a laptop computer's older GPT-3.5 version of ChatGPT (dated May 24, from OpenAI, San Francisco, CA). Data collection The methodology involved asking each question sequentially without resetting the ChatGPT session.Responses provided by ChatGPT were systematically recorded.Experienced otolaryngologists gathered questions frequently asked by patients and their families over a three-month period.From this collection, the most commonly posed questions were selected for analysis.Prior to initiating the ChatGPT simulation, an assessment was conducted to ensure the readability of these questions, focusing on their clarity and understandability.Consequently, the five main postoperative questions, typical of those asked by cochlear implant patients following surgery, were posed to ChatGPT-4 for evaluation (Table 1). Number Questions 1 What are the signs of infection or complications to watch for after a cochlear implant surgery? 2 How should the implant site be cared for, and what are the best practices for hygiene? 
3 When can a patient expect to start hearing, and how will the sound be different? Are there any activities or environments to avoid during recovery? 5 How can a patient manage feedback or discomfort from the implant? TABLE 1: Postoperative questions asked by cochlear implant patients These questions were formulated based on common concerns and information needs identified from clinical experience and existing cochlear implant postoperative care.Five specialists in otolaryngology assessed the answers provided by ChatGPT to the posed queries.To evaluate the accuracy, clarity, understandability, and relevance of ChatGPT-4's responses, a survey was conducted with the options 'yes' or 'no.' Analysis of ChatGPT-4 In the study, the responses provided by ChatGPT-4 were evaluated using several criteria.The first criterion was the accuracy of the information, where the responses were checked for medical accuracy and their alignment with current postoperative care guidelines for cochlear implant patients.The second criterion involved measuring the response time, highlighting ChatGPT-4's efficiency in providing timely information. The third criterion focused on the clarity and understandability of the information, ensuring it was easily comprehensible to patients without medical backgrounds.Lastly, the relevance of the responses was assessed to ascertain their alignment with the actual concerns and needs of postoperative cochlear implant patients. The study explored the feasibility of using ChatGPT-4 as a supplementary information resource for cochlear implant patients, especially when traditional medical consultation is not readily accessible.The objective was to understand if ChatGPT-4 could provide reliable and comprehensible answers quickly, potentially filling an informational gap for patients in remote or resource-limited settings. TABLE 2: Evaluation of ChatGPT-4's responses to common postoperative questions Medical professionals evaluated the responses on several criteria, including accuracy of information, response time, clarity and understandability, and relevance. Descriptive Statistics ChatGPT-4's responses to all five questions demonstrated 100% accuracy.This uniform accuracy strongly aligns with current medical guidelines for cochlear implant postoperative care. Descriptive Statistics Consistently rapid response times were noted for all questions, emphasizing ChatGPT-4's efficiency.Each question was answered within seconds, suggesting its utility in providing timely information. Descriptive Statistics The average clarity and understandability score was 98%, indicating that the evaluating doctors deemed most of the responses clear and easy to understand. Descriptive statistics The relevance of the responses averaged 92%, showing high pertinence to the patients' postoperative concerns. Binomial Proportion Confidence Interval This was used to calculate the confidence interval for the proportion of relevant responses.Assuming a hypothetical p-value of <0.05, the interval would suggest statistical significance in relevance. Content Analysis This qualitative approach supports the statistical findings, affirming that the responses were comprehensive and adequately addressed post-surgical care and patient guidance. 
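A minimal sketch of the binomial proportion confidence interval mentioned above, assuming the Python statsmodels package; the counts are illustrative (e.g., 23 of 25 "yes" ratings, roughly matching the reported 92%) and the Wilson method is an assumption, not necessarily what the authors used:

from statsmodels.stats.proportion import proportion_confint

yes_votes, total_votes = 23, 25   # hypothetical relevance ratings (~92%)
low, high = proportion_confint(yes_votes, total_votes, alpha=0.05, method="wilson")
print(f"relevance = {yes_votes / total_votes:.0%}, 95% CI = [{low:.2f}, {high:.2f}]")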
P-value Consideration For the relevance and clarity scores, where there was some variability in the responses (80% and 92% in some cases), a hypothetical p-value less than 0.05 would suggest that the differences in ratings are statistically significant.This would imply that while the majority of responses are clear and relevant, there may be room for improvement in certain areas. The comprehensive statistical analysis indicates that ChatGPT-4 is a highly reliable tool for providing accurate, timely, clear, and relevant information to cochlear implant patients in the postoperative phase. The uniformity in accuracy and response time, combined with high scores in clarity and relevance, reinforces the potential of ChatGPT-4 as a valuable supplement to traditional patient education, particularly when direct medical consultation is not accessible. Discussion Exploring ChatGPT as an assistive tool in the postoperative care of cochlear implant patients reveals promising prospects.This advanced AI language model demonstrates a significant potential to augment patient support and information dissemination in several ways [7][8][9]. ChatGPT offers immediate, round-the-clock access to information, which can be particularly valuable in addressing common concerns and questions that patients may have outside of regular healthcare provider hours.Its ability to provide instant responses can reduce anxiety and improve patient satisfaction by filling gaps between professional consultations [10,11].Our study supports the growing trend of integrating AI into healthcare, particularly in otolaryngology.ChatGPT's role in our research demonstrates its potential in patient care through effective information dissemination.This aligns with studies highlighting AI's role in enhancing patient education in telehealth settings, where direct human interaction is limited. As demonstrated in our study, ChatGPT's role in demystifying complex medical information is vital for enhancing patient comprehension and adherence to postoperative care instructions.This is particularly important for tasks like implant site care, recognizing complications, and adapting to life with a cochlear implant.The study underscores ChatGPT's effectiveness in providing accessible, accurate post-surgery information, which is crucial for bridging gaps in patient-physician communication, especially when direct consultation is impossible.This aligns with existing literature emphasizing the importance of effective communication in postoperative recovery [12,13]. While not a substitute for professional psychological support, ChatGPT can offer basic emotional reassurance and guidance, an essential aspect of recovery.The journey with a cochlear implant can be challenging and emotionally taxing, and having a readily available source of information and support can be comforting to many patients [14,15]. ChatGPT-4 demonstrated high accuracy, rapid response, clarity, and relevance in our study.These attributes are crucial in medical information dissemination.The AI chatbot's performance not only meets but also sets new benchmarks for patient information provision, showcasing AI's potential to augment traditional healthcare methods.ChatGPT offers immediate access to information and is valuable for addressing patient concerns outside of regular healthcare hours.Its capability to simplify complex medical information into more understandable terms can enhance patient comprehension and adherence to postoperative care instructions. 
Limitations and ethical considerations Our study also highlights the limitations of AI in healthcare.It underscores that AI should supplement, not replace, professional medical advice.Ethical concerns, including misinformation risks and the need for human oversight, are crucial considerations.This emphasizes the importance of ethical guidelines in AI deployment. Future directions and research Our findings suggest avenues for future research, including evaluating the long-term impact of AI tools in patient education and care.Continuous updates and validation of AI-provided information are necessary to maintain its accuracy and relevance to current medical guidelines. While ChatGPT cannot replace healthcare professionals, it is a valuable adjunct in postoperative management.By providing accessible information, clarifying medical guidance, and offering general support, ChatGPT enhances the overall patient experience and contributes positively to the recovery journey. Conclusions In conclusion, this study reveals the significant potential of ChatGPT-4 as a tool for enhancing postoperative care for cochlear implant patients.It effectively translates complex medical directives into patient-friendly language, ensuring clarity and comprehensibility.ChatGPT-4's accuracy, response time, and relevance to patient-specific concerns highlight its utility in bridging the communication gap between healthcare providers and patients.These findings suggest that AI can play a crucial role in postoperative patient education and support, pointing towards its broader applicability in the healthcare sector.However, it is important to consider the limitations of AI in personalized medical advice and the necessity for continual updates to align with evolving medical practices. 
Questions | ChatGPT-4's responses to common questions | Evaluation of ChatGPT's responses by medical professionals (Yes/No)

Response 1: "…monitoring for signs of infection or complications is important. These can include increased redness, swelling, or warmth at the implant site, fever, unusual pain or discomfort, and discharge or bleeding from the incision area. If you experience any of these symptoms, contacting your healthcare provider immediately is important. Persistent headaches or dizziness should also be reported, which might indicate…"

Response 2: "…the implant site is crucial for healing and preventing infection. Keep the area clean and dry. Follow your surgeon's instructions on how to clean the incision area, which typically involves gently washing with soap and water after a specified period. Avoid rubbing or applying pressure to the implant site. It's also important to keep the area dry, especially when showering, until your doctor says it's safe to get it wet."

Response 3: "…through a cochlear implant can be a different experience and may require adjustment. The implant is usually activated a few weeks after surgery, allowing time for healing. Initially, sounds may seem unusual or mechanical. Over time, and with auditory training, your brain will adapt, making sounds more natural. The timeline for this process varies among individuals, and it's important to have realistic expectations and regular follow-up appointments."

Response 4: "…phase, avoiding activities that might strain your healing process is advisable. These include strenuous physical activities, swimming, or exposing the implant site to dirty or dusty environments. Also, avoid using hair dryers or other electronic devices near the implant site until cleared by your doctor. Be cautious of environments with high noise levels to protect your…"

Response 5: "…feedback from the implant is normal initially. If you experience minor discomfort, over-the-counter pain relievers might be helpful, but consult your doctor before taking any medication. For feedback issues, such as whistling or static, it's important to have your implant settings adjusted by your audiologist. They can fine-tune the device to minimize these issues and ensure a comfortable listening experience."

Evaluation: 100% (5/0) 100%
v3-fos-license
2022-07-14T06:16:13.497Z
2022-07-12T00:00:00.000
250490018
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jgeb.springeropen.com/track/pdf/10.1186/s43141-022-00380-x", "pdf_hash": "3a419d4fb418763c4ceeb72564505285294199cc", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43579", "s2fieldsofstudy": [ "Biology" ], "sha1": "832393e1b79ad3dffe7d7b609506c035947b2ceb", "year": 2022 }
pes2o/s2orc
Comparative phylogeny and evolutionary analysis of Dicer-like protein family in two plant monophyletic lineages Background Small RNAs (sRNAs) that do not get untranslated into proteins exhibit a pivotal role in the expression regulation of their cognate gene(s) in almost all eukaryotic lineages, including plants. Hitherto, numerous protein families such as Dicer, a unique class of Ribonuclease III, have been reported to be involved in sRNAs processing pathways and silencing. In this study, we aimed to investigate the phylogenetic relationship and evolutionary history of the DCL protein family. Results Our results illustrated the DCL family of proteins grouped into four main subfamilies (DCLs 1–4) presented in either Eudicotyledons or Liliopsids. The accurate observation of the phylogenetic trees supports the independent expansion of DCL proteins among the Eudicotyledons and Liliopsids species. They share the common origin, and the main duplication events for the formation of the DCL subfamilies occurred before the Eudicotyledons/Liliopsids split from their ancestral DCL. In addition, shreds of evidence revealed that the divergence happened when multicellularization started and since the need for complex gene regulation considered being a necessity by organisms. At that time, they have evolved independently among the monophyletic lineages. The other finding was that the combination of DCL protein subfamilies bears several highly conserved functional domains in plant species that originated from their ancestor architecture. The conservation of these domains happens to be both lineage-specific and inter lineage-specific. Conclusions DCL subfamilies (i.e., DCL1-DCL4) distribute in their single clades after diverging from their common ancestor and before emerging into higher plants. Therefore, it seems that the main duplication events for the formation of the DCL subfamilies occurred before the Eudicotyledons/Liliopsida split and before the appearance of moss, and after the single-cell green algae. We also observed the same trends among the main DCL subfamilies from functional unit composition and architecture. Despite the long evolutionary course from the divergence of Liliopsida lineage from the Eudicotyledons, a significant diversifying force to domain composition and orientation was absent. The results of this study provide a deeper insight into DCL protein evolutionary history and possible sequence and structural relationships between DCL protein subfamilies in the main higher plant monophyletic lineages; i.e., Eudicotyledons and Liliopsida. Supplementary Information The online version contains supplementary material available at 10.1186/s43141-022-00380-x. role in the regulation of their cognate gene(s) expression in a locus-specific manner at both transcriptional level by DNA methylation and posttranscriptional level via mRNA cleavage and, or translational inhibition [14,40]. sRNAs and their processing pathways are conserved and have distributed in almost all eukaryotes [15,23,46]. Such regulatory mechanisms are essential for fine-tuning the expression of the corresponding genes [25]. The relatively processing of small RNAs and their transcript silencing process is a complex system [6,10,14], which prompts comprehensive understanding of its components an absolute necessity. Dicer (DCL), a unique class of Ribonuclease III (RNase III) family of enzymes that is one such component, interacts with several associated proteins in the processing of small RNA precursors [12,43]. 
It exhibits a key role in processing long doublestranded RNA substrates into uniformly sized small RNA(s) with 2-nucleotide overhangs at the 3′-ends [27,42]. Plant dicer protein is a large multi-domain (six domains: DExH Helicase, DUF283, PAZ, RNase IIIa, RNase IIIb, and dsRNA binding (dsRB) domain) protein as delineated by its crystal structure [12,28]. One or more occupations may be eliminated or absent from the final folding [28]. The double-stranded RNA-binding (dsRBD) domain recognizes and binds to dsRNA in a non-specific manner [31]. The C-terminus of dsRBDs can interact with protein rather than dsRNA to pair with DCL proteins [5]. The PAZ domain is directly connected to the RNaseIIIa domain by a long α-helix. It can recognize and bind to two overhang bases at the 3′-end of the dsRNA precursor. It is also interesting to consider that the PAZ domain can bind single-stranded RNAs [21]. The two RNase III domains provide the main catalytic activity, cut dsRNA precursor to release short RNA duplexes with 2-nucleotide overhangs at the 3'-end and phosphorylated 5′-ends [5,21]. It argues that the distance between the PAZ and RNaseIII domain determines the length of the cleaved sRNA, and it is considered the source of mature sRNA length variants [48]. Plants evolutionary have expanded the number of their DCLs: four in Arabidopsis (DCL1, DCL2, DCL3, and DCL4) and six in Medicago truncatula [29,34]. The homologs enzymes produce mature small RNAs with distinct sizes and regulatory speciation [30]. In A. thaliana, DCL1 and DCL4 yield 21 nt, DCL2 generate 22 nt, and DCL3 creates 24 nt [37]. Apart from the length, miRNA genes are formed by DCL1 [2,45]. They are involved in producing functional small RNAs from endogenous inverted repeats. However, DCL2 has a significant role in generating small RNAs from natural cis-acting antisense transcripts. DCL3 performs a direct role in creating 24 nt-long small RNAs related to site-specific DNA methylation and chromatin modification. DCL4 remains a critical component in the formation of ta-siRNA and performing post-transcriptional silencing [26]. DCL proteins are essential either in eukaryotic growth or in development. They are responsible for defending the cell against invading gene creatures, including but not limited to viruses and active transposable elements [16,41]. For the latter, DCL2 and DCL4 are the essential players in viral genome duplication and systemic infiltration in plants [35]. A convincing piece of evidence suggests that the DCL gene family originated early in the Eukaryote evolution right at the time of multicellularization and then expanded in the corresponding kingdoms [33]. In plants, DCL homologs diverged before the appearance of moss Physcomitrella patens and after the single-cell green algae Chlamydomonas reinhardtii [26]. However, many reports involved in the functional role of the Dicer proteins in the processing of non-coding RNAs, their evolution is still in its infancy. In this study, we investigated the pattern of plant Dicer evolutionary history and possible relationships between DCL protein families in the main plant monophyletic lineages via protein sequence analyses and conserved motifs composition, phylogenetic tree reconstruction, evolutionary history inference, and functional domain identification and architecture. 
Data collection Sequences stored as DCL protein for Liliopsida and Eudicotyledons plant species were isolated using a keyword search, "Dicer-like protein (DCL)" in a non-redundant protein database (http:// www. ncbi. nlm. nih. gov). Sequences were retrieved and stored in FASTA format. We removed duplicated and partial sequences in each plant species using Clustal Omega and CodonCode v.8.0.2 aligner tools. Additionally, we checked the structure, protein domain families, and function of all proteins using Uniprot, Pfam, and SMART databases and removed redundant sequences. We employed 274 fulllength Dicer-like proteins from Liliopsids and Eudicotyledons families for further analysis. Multiple sequence alignment (MSA) We constructed multiple protein sequence alignments (MSAs) using MAFT [20], MUSCLE [9], Kalign [22], T-Coffee [8], and Clustal Omega [39] with their default parameters. To measure the quality of the alignments and gauge the performance of the algorithms in aligning the data sets, we computed the sum-of-pair score (SP-score), the column score (C-score), and transitive consistency score (TCS-score) of the produced alignment. Then we evaluated the relative reliability of constructed MSAs for each data set using finding the best amino acid substitution model and calculating the maximum log-likelihood. MSA with the lowest Bayesian Information Criterion (BIC) score and maximum log-likelihood nearest to zero taken as the best structurally correct sequences alignment for further analysis. The alignment file was visualized and analyzed using the BioEdit sequence alignment editor [17]. Protein primary sequence features The protein primary sequence features were determined using the Molecular Evolutionary Genetics Analysis software (MEGA) 11.0.10 [44]. The amino acids frequencies were predicted in the DCL sequences set. We specified the best amino acid substitution pattern for the specific sequences and selected the successful model with the most negative BIC scores (Bayesian Information Criterion) for the amino acid substitution matrix. The evolutionary divergence estimated between each possible pair of sequences typically uses the best-fitting amino acid substitution model. Additionally, we used the selected substitution model to compute the number of amino acid substitutions per site from each pair of sequences and overall sequences. Motif and domain prediction We used the Multiple EM for Motif Elicitation (MEME; http:// memes uite. org/ [1]) to identify protein sequences containing motifs. The following parameters were set: (1) each motif site assigned zero or one occurrence per sequence; (2) optimum motif widths were between 6 and 50; ( Proteins are composed of multiple functional units of common descent, and comparing domain composition and architecture is a beneficial method for the evolutionary analysis of homologous proteins. In this sense, the arrangement and the order of the DCL protein domains on its primary sequence, queried from Pfam, were determined via the prediction of functional units at the Hmmscan search tool (https:// www. ebi. ac. uk/ Tools/ hmmer/ search/ hmmsc an [11]). Phylogenetic analysis For phylogenetic analysis of the DCL protein family in plant species, we constructed the unrooted tree based on maximum likelihood (ML) heuristic methods in MEGAX 11.0.10 [44]. We employed aligned sequences of the plant DCLs under the selected model for the substitution-rate matrix. Bootstrapping was performed with 500 replicates. 
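For reference, the BIC used above to rank candidate substitution models is BIC = k·ln(n) − 2·ln(L), with k free parameters, n alignment sites, and ln(L) the maximum log-likelihood; a minimal Python sketch with made-up numbers (the parameter counts, likelihoods, and site count are purely illustrative):

import math

def bic(log_likelihood, n_params, n_sites):
    # Bayesian Information Criterion: lower values indicate a better-fitting model
    return n_params * math.log(n_sites) - 2.0 * log_likelihood

# (model name, maximum log-likelihood, number of free parameters) -- illustrative values
candidates = [
    ("JTT+G",     -31500.0,  2),
    ("JTT+G+I+F", -31230.0, 22),
    ("WAG+G",     -31900.0,  2),
]
n_sites = 131   # number of alignment positions (illustrative)

best = min(candidates, key=lambda m: bic(m[1], m[2], n_sites))
print("best-fitting model by BIC:", best[0])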
Then we rooted the tree using an outgroup, DCL from Auxenochlorella protothecoides. The rooted tree represents the last common ancestor of all groups in the tree by directing evolutionary time. The trees were displayed using the iTOL v5 online tool [24]. Comparative analyses of protein structure HHpred server (https:// toolk it. tuebi ngen. mpg. de/ tools/ hhpred [50]) accessible in Toolkit (https:// toolk it. tuebi ngen. mpg. de/ [13]) was used to search a significant match with a protein of known structure in the PDB database. We employed the MODELLER to build the atomic coordinates of the proteins and create a structural file in PDB [47]. In addition, we used the DALI server (http:// ekhid na2. bioce nter. helsi nki. fi/ dali/ [18]) for structural comparison and visualization superimposition of the predicted models. Dali scores (Dali Z-scores) are used to establish structural similarity and relationships between proteins resulting from the dendrogram constructed by an average linkage clustering of the structural similarity matrix. Root mean square deviation (RMSD), which measures the deviation between two superimposed atomic coordinates, were compared among the encoded DCL subfamily structures of A. thaliana. The Ramachandran plot (https:// zlab. umass med. edu/ bu/ rama/) was employed to compare the allowed regions of conformational space available to the protein chains by uploading the PDB-predicted file. MSA and DCL sequence characteristics After discarding the redundant protein sequences obtained from the non-redundant protein database at NCBI, we collected 31 and 242 DCL candidate protein sequences from 13 and 60 Liliopsida and Eudicotyledons species, respectively (Supplementary file 1). From the constructed MSAs (Table 1), Muscle-based MSA resulted in the highest maximum log-likelihood trees and was considered the most reliable algorithm (Supplementary file 2) for phylogenetic and evolutionary analysis. We computed the Jones-Taylor-Thornton (JTT) +G+I+F model for the Liliopsids protein set and JTT + G for the Eudicotyledons proteins. These representations obtain the most negative BIC scores (61542.366184322 and 31230.133303795) for the series of the aligned sequence, Table 1 The quality of constructed multiple sequence alignment (MSA) with the algorithms based on the sum-of-pair score (SP-score), the column score (C-score), and the Transitive Consistency Score ( respectively. Therefore, they considered the best in describing the substitution pattern in these sets (Table 1). In addition, the discrete gamma distribution was estimated under these models to be 0.9976 and 1.2493 separately. Phylogenetic tree reconstruction and evolutionary analysis DCL proteins formed an expanding family across different plant lineages. To deepen our understanding of how the DCL protein family evolved and know the evolutionary relatedness of the DCL proteins in the plant lineages, we constructed their unrooted phylogenetic tree using the full-length aligned DCL protein sequences by the maximum likelihood method. Our results showed that the unrooted phylogenetic tree from all the plant DCL protein sequences (273 sequences belonging to 73 species) supported via the bootstrap values most probably due to the short divergence ( Fig. 1A). In this phylogenetic tree, the DCL family of proteins clustered into four main classes (DCL1, DCL2, DCL3, and DCL4 subgroups [33,49]). Our phylogenetic result was in agreement with the previous classification of the plant DCLs subfamilies in terms of the tree topology. 
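The rooting step described above (using the A. protothecoides DCL as outgroup) can also be scripted; a sketch assuming Biopython and a hypothetical Newick export of the MEGA tree, with a hypothetical taxon label:

from Bio import Phylo

# Hypothetical files and labels; the ML tree itself was inferred in MEGA11
tree = Phylo.read("plant_dcl_ml.nwk", "newick")
tree.root_with_outgroup("Auxenochlorella_protothecoides_DCL")
Phylo.write(tree, "plant_dcl_ml_rooted.nwk", "newick")
Phylo.draw_ascii(tree)   # quick text rendering of the rooted topology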
Rooting the global DCL tree using outgroup assigned polarity to the unrooted tree, which proposed the most likely evolutionary events happen after the divergence from their common ancestor. Based on the topology of the rooted tree, DCL proteins divide into two distinct main clades after diverging from the common ancestor (Fig. 1B). The first clade comprises the DCL1 proteins separating from the other DCL clades. However, the second clade comprised of two DCL homologous sequences sets (DCL2 and DCL3/DCL4 subgroups), which one of them further subdivided into another two main subfamilies (DCL3 and DCL4 subgroups). All four-plant DCL type (DCL1-4) clades presented in either Eudicotyledons or Liliopsida. In addition, the tree revealed the evolutionary relationship within each clade. As expected, DCL subfamilies are distributed globally in a single clade, which could be due to the evolution of each subfamily from their common ancestor separately. Therefore, our phylogenetic analysis illustrates that each DCL subfamilies evolved independently in the monophyletic, followed in previous studies [49]. Eudicotyledons/Liliopsids DCL proteins' separation does not observe in the main tree. The DCL proteins from Liliopsida grouped in the DCL subfamilies did not separate from the others, and its homologs in a specific sub-branch. This finding suggests that they conserved across the lineage. There were also strongly supported bootstrap values for some interior nodes of the tree due to high sequence similarities indicating a relatively wellsupported phylogenetic tree reconstruction. The close observation on the phylogenetic tree strongly supports the independent expansion of Eudicotyledons and Liliopsida DCL proteins. Therefore, to generate a clear picture of how the independent development has occurred, we reconstructed the two monophyletic lineages as separate phylogenetic trees. The constructed phylogenetic tree from Eudicotyledons species revealed four distinct subfamilies, similar to the branching structure of the phylogenetic tree reconstructed from the complete set of the higher plant species (Supplementary file 3). The evolutionary relationship within every subgroup follows the same pattern of the entire plant DCL protein sequences tree. Gene duplications are identified by searching for all branching points in the topology with at least one species present in both subtrees of the branching point (Supplementary file 4). Some sub-clades clearly illustrate orthologues relationships (derived by speciation) based on their branch distance agreement with the species tree as in the cases of Prunus avium, P. persica, and P. mume DCL1 proteins. Some clades reflect a recent gene duplication event. For example, we detected the DCL protein duplicated copy in Camelina sativa, Medicago truncatula, Populous eupheratica, Nicotiana tomentoformis. Sometimes, the gene duplication precedes the speciation (e.g., Citrus clementina from Citrus sinensis or divergence of Solanum pennelii and S. tuberosum from their common ancestor). Liliopsida DCL protein phylogenetic analysis revealed the same trend (Supplementary file 5). Also, close observation revealed that the Liliopsida DCL proteins tree protothecoides DCL protein). The Phylogenetic relationship was inferred from full-length polypeptide sequences of the plant DCL proteins using the Maximum Likelihood method and JTT model [19] with log likelihood of − 18013.10. 
The percentage of trees in which the associated taxa clustered together in the bootstrap test (500 replicates) is shown as a symbol displayed on each branch (Felsenstein, 1985). Initial tree(s) for the heuristic search were obtained automatically by applying Neighbor-Joining and BioNJ algorithms to a matrix of pairwise distances estimated using the JTT model. Topology with superior log likelihood value was selected. A discrete gamma distribution was used to model evolutionary rate differences among sites (2 categories (+G, parameter = 1.6882)). The rate variation model allowed some sites to be evolutionarily invariable ([+I], 1.53% sites). The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. This analysis involved 275 polypeptide sequences from Eudicotyledons (242) and Liliopsida (31) and Klebsormidium nitens and A. protothecoides DCL polypeptide sequences used as outlier. All positions containing gaps and missing data were eliminated (complete deletion option). 131 positions in the final dataset was seen. Evolutionary analyses were conducted in MEGA11 and visualized by iTOL v5 online tool [24] (See figure on next page.) topology is highly similar to the tree presented for Eudicotyledons. Our results suggest that the divergence of the DCL proteins in four main subgroups formed right before the split between the Eudicotyledons/Liliopsida lineages. The aggregation pattern of DCL proteins in the reconstructed trees showed that the DCL protein family in higher plants seem to share a common origin, and the main duplication events for the formation of subfamilies occurred before the Eudicotyledons/Liliopsida split. Therefore, the emergence of the four DCL subgroups can date back to before the derivation of these lineages, and they may have evolved independently from their ancestral DCL. Our results agree with Mukherjee et al. [33], who described that DCL proteins give rise to four distinct subgroups before or around the divergence of moss from higher plants. Our data revealed that after the Eudicotyledons/Liliopsida derivation, DCL proteins seem to undergo a similar evolutionary history before lineages separation. On the other hand, Mukherjee and colleagues identified the DCL1 and DCL3 moss and Selagaginella orthologues previously [33], which is the other evidence for the main plant DCLs origin. Moreover, MSA on DCL protein full-length sequences present a high similarity in the Eudicotyledons/Liliopsida monophyletic lineages, especially within the subfamilies (Data not shown but are available from the authors on request); inferring a high level of conservation. Analysis of conserved motifs and motif composition Analysis of protein conserved motifs and their motif composition provides additional clues about the evolutionary relationship of the protein family. In Eudicotyledons, the motifs 1, 2, 3, 6, 9, 10, 15, and 20 represented the RNASE_3_2 Ribonuclease III family, 1 and 2 helicase C-terminus, RNASE_3_2 Ribonuclease III, RNASE_3_2 Ribonuclease III, Dicer dsRNA-binding fold, PAZ, and Dicer dsRNA-binding fold domains, respectively. However, the other motifs have not yet been characterized ( Table 2). Similar trends resulted in Liliopsida species for DCL protein sequences. The motifs 1, 2, 3, 5, 7, 8, 9, and 11 maintain the specified domains. However, the others have not yet been determined ( Table 3). The DCL sequence motifs in Liliopsida and Eudicotyledons were compared and found highly conserved. 
The motifs 1 and 4 in Eudicotyledons DCL sequences were the same as the motifs 2 and 4 in Liliopsida DCL protein sequences. Additionally, the motifs 5, 6, 9, 14, 15, and 17 in the Eudicotyledons DCL proteins sequence set seemed similar to 7, 5, 8, 16, 11, and 13 motifs within Liliopsida, suggesting their biological importance. To obtain more insights into the diversity of motif compositions, the motifs identified from each DCLs were aligned and compared from species of Liliopsida and Eudicotyledons (Data not shown but are available from the authors on request). The results were evidence of the conservation of critical residues in lineage-specific and inter lineage motifs. However, the spacing between their completely conserved residues could vary considerably (as shown in the motif sequence logo in Tables 2 and 3). In each motif, the fully conserved residues from the same geometry in alignment might be a signature of specific domains. Such key residues may be critical for their function, and mutation of some of these residues probably can alter the protein function and even be deleterious. All the DCLs in Eudicotyledons lineages harbor the conserved motifs 1, 2, 3, 6, 9, 10, and 15 suggests the presence of these domains to be quintessential for the functionality of this family (Fig. 2). They were common among A. protothecoidesand the DCL proteins from Eudicotyledons/Liliopsida lineages (Fig. 3). We noticed that 1, 2, 3, 4, 5, 6, 8, 12, 13, 17, and 18 conserved motifs are common among A. protothecoides and Liliopsida DCLs. The motifs 1, 6, and 9 in Eudicotyledons and 2, 5, and 8 in Liliopsida were also detected among A. protothecoides rudimentary DCL form, indicating their deep conservation and importance; suggesting to have a common origin. The DCL1 clade members share the same conserved motifs. A similar motif compound offers the conserved role of the DCL1 proteins in the Eudicotyledons plant cells and their importance for their cellular function. DCL2 containing subgroups was predicted to lack the 13, 19, and 20 motifs. Our analysis indicated that the DCL2 clade members within the same subgroup exhibit similar motif composition revealed the relation to others. Analysis of motif conservation and motif composition in this clade indicated that they might derive from a common ancestor. Motif representation analysis within the DCL3/ DCL4 clade revealed the DCL4 subgroup a similar structure in terms of motif composition and orientation, and motifs 8 and 13 were not detected in all the subgroup sequences. It is possibly indicative of some degree of functional conservation. Domain identification and architecture Multi-domain proteins may exhibit more complex domain organization and architecture among the homologous sequences. Domain shuffling, intramolecular duplications, fusion and fission, novel domain acquisition, and its loss are the events that can cause some variations in the domain organization, i.e., both composition and orientation, creating independent domain combinations. In this sense, the protein sequences were searched against the Pfam to predict functional domains by Hmmscan. From N-to C-termini in Eudicotyledons DCL proteins, Ribonuclease III (PF00636.28), Dicer dimerization (PF03368.16), Ribonuclease 3-3 (PF14622.8), 16) domains. Domains were in the same combinations and order within the DCL1 subgroup, another reason for their conservation and evolution from their joint ancestor architecture (Fig. 2). Such localization seems significant for the DCL protein function. 
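The Pfam domain scan described above is typically run with HMMER's hmmscan against a local Pfam-A profile library; a minimal sketch wrapping the call in Python (file paths are assumptions):

import subprocess

cmd = [
    "hmmscan",
    "--domtblout", "dcl_domains.tbl",  # per-domain tabular output for parsing
    "--cut_ga",                        # apply Pfam gathering thresholds
    "Pfam-A.hmm",                      # local, hmmpress-ed Pfam-A profile database
    "dcl_proteins.fasta",              # DCL protein sequences
]
subprocess.run(cmd, check=True)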
Members of the same subfamily have the same domain organization. As shown in Figs. 2 and 3, the DCL proteins in some branches have one or more domain gain and loss that makes them vary from those of the others. Domain gain and loss are frequent events in plants' multi-domain proteins evolution [51]. In contrast to DCL1, the DCL2 subfamily lack dsRM (PF00035.28) and DND1-dsRM (PF14709.9) domains. However, in some DCL2 members, the dsRM domain (PF00035.28) was detected. In the DCL3 subfamily members, we identified a similar trend. A branch within the DCL3 subfamily lacks the Dicer dimerization (PF03368.16) domain, excepting other branches (Fig. 2). The DCL3 subfamily shares similar domain architectures with the DCL2 subfamily suggesting close biological function. Some branches of the plant DCL4 proteins have lost the PAZ (PF02170.24) domain. Such a situation may indicate diverse functions for the DCL4 lacking the PAZ domain (PF02170.24). Species with DCLs lacking the PAZ domain need to consider with caution as the data on their proteome might be incomplete. In the true cases, functional consequences of proteins lacking the PAZ domain remain to address. Ribonuclease III (PF00636.28), PAZ (PF02170.24), Dicer dimerization (PF03368.16), Ribonuclease 3-3 (PF14622.8), Helicase C (PF00271.33), ResIII (PF04851.17), and DEAD (PF00270.31) were found in the Liliopsida DCL proteins as well (Fig. 3). However, DND1-dsRM (PF14709.9) and dsRM (PF00035.28) domains were the unpredicted domains in Liliopsids DCL2 and DCL3 subfamilies. In the DCL of Klebsormidium nitens, the PAZ (PF02170.24) domain-containing protein was detected, but not in A. protothecoides. These results suggest that the PAZ domain probably emerged de novo before the divergence of plants and K. nitens from their common ancestors from which the plants evolved. In general, our results support the hypothesis that the DCL protein subfamilies originated from the same ancestor before the divergence of the plant's main monophyletic lineages. Therefore, these subfamilies of the DCLs existed before the split of monocot and dicot plants. Additionally, the deviation between different branches originated from Liliopsida and Eudicotyledons architecture seems to be due to the recombination and domain loss rather than de novo domain gain in their predecessor. These results may explain some functional overlaps among plant DCLs. Structural comparative analyses of DCL protein subfamilies-a case study Changes of single residues, insertions, deletions, and repetition due to the mutations are common in proteins. They accumulate over the evolution and likely produce unfunctional proteins or proteins with uncharacterized functions. The other conceivable situation would be mutations in the active zone(s) that may alter the molecular function [36]. Gene duplication, exon shuffling, or post-translational modifications may lead to circular permutations between the two homologous proteins, with or without the functional domain(s). Such rearrangements make a non-sequential sequence/structure alignment between the two homologous structures [3]. Therefore, between two homologous proteins, the structural similarity is an elaboration from a common ancestor rather than the result of the parallel evolution. If the proteins retain the same molecular function, they have resulted from a light structural deviation. In this study, we conducted comparative analyses of the DCL protein structure within and between A. thaliana and A. 
protothecoides species that are sister to each other with a common ancestor. In addition, we compared the protein structures of the DCL4 subfamilies containing PAZ domain with non-containing ones. HHpred webserver was employed to search a significant match with a protein of known folding in the PDB database. Protein structural homology was deduced from those of the most similar sequences. The HHpred allows the MODELLER software to build the atomic coordinates of the DCLs in PDB format from their string (Supplementary file 7; Fig. 4). We employed the distance matrix alignment (DALI) server for structural comparison and visualization superimposition of the predicted models. The structural similarity between protein structures and their structural relationship resulted from the dendrogram constructed by average linkage clustering of the structural similarity matrix based on the Dali score. The Dali structural similarity dendrogram showed that DCL protein homologs in A. thaliana belong to DCL subfamilies and are structurally related to others. In particular, the results indicated that the DCL structures diverged from a common structural ancestor with A. protothecoides. We also considered all-against-all structural comparisons of the encoded DCL subfamilies in A. thaliana. The results indicated that the subfamily members were structurally similar (Table 4). It is noteworthy that DCL4 (NP197532.3 and AED92830) have the same structure. The result illustrated that the topmost similar structures were between DCL3 (NP001154662. (NP001190348) is a more distant structural relative to the A. protothecoides DCL. A consensus result of the structural similarity between DCLs among subfamilies proposed that they may have overlapping functions towards their dsRNA targets; results have already been reported elsewhere [12,27]. The structural RMSD comparisons (the deviation between two superimposed atomic coordinates) of all-against-all of encoded DCL subfamilies showed high similarities. In the case of PAZ domain-containing (A. thaliana; NP 197532.3) and PAZ domain-lacking (A. lyrata; XP 002873991.1) structural comparison, RMSD of the Cα atomic coordinates were 4.3 Å with the estimated Z-score of 36.5. The sequence identity between these sequences was 84%, both structurally were similar. The structural similarity suggests that they diverged from a common structural ancestor. However, some deviance was evident in the PAZ domain because of the mutations occurring in the sequences during their evolution. Besides, it seems that such departure could not alter the protein and, or PAZ domain function. To clarify this point and the need for more evidence, we considered their Ramachandran plots were predicted by uploading their PDB-predicted file to the Ramachandran Plot server (https:// zlab. umass med. edu/ bu/ rama/; Fig. 5). These values for the respective selected DCLs were as follows: PAZ domain loss (95.673% in the favored region, 3.001% in the allowed area, and 1.326% in outlier region), PAZ domaincontaining (93.942% in favored territory, 3.621% in allowed region, and 2.437% in outlier region). The plots provide an additional piece of evidence supporting the above hypotheses. Conclusions Small RNAs are essential mediators of gene expression in almost all eukaryotic lineages. They are involved in many biological processes, including but not limited to the development, organogenesis, and defense against genomic-invasive materials such as viruses and transposons, and in response to biotic and abiotic stresses. 
Several players and mediators are involved in long dsRNA precursors processing into mature small RNAs. However, Dicer or Dicer-like proteins are the key components, playing a pivotal role in small RNA biochemical processing and generation. We aimed to study the plant Dicer evolutionary history, possible sequence, and structural relationships between DCL protein subfamilies in two plant monophyletic lineages. According to our finding, four distinct conserved DCL subfamilies are among the two plant monophyletic lines. Each DCL (i.e., DCL1-DCL4) distribute in their single clades after diverging from their common ancestor and before emerging into higher plants. Therefore, it seems that the main duplication events for the formation of the DCL subfamilies occurred before the Eudicotyledons/Liliopsida split and before the appearance of moss, and after the single-cell green algae. It seems that the expansion of the DCLs in Eudicotyledons and Liliopsida has happened, resulting in speciation possibilities rather than duplication. However, we found limited duplicating events for DCLs among the plant species. We also observed the same trends among the main DCL subfamilies from functional unit composition and architecture. Despite the long evolutionary course from the divergence of Liliopsida lineage from the Eudicotyledons, a significant diversifying force to domain composition and orientation was absent. Thus, huge functional variation is not expected. The results of this study provide a deeper insight into DCL protein evolutionary history and possible sequence and structural relationships between DCL protein subfamilies in the main higher plant monophyletic lineages; i.e., Eudicotyledons and Liliopsida. Additional file 1. List of the DCL protein sequences considered for this study. Additional file 2. Multiple sequence alignment of plant DCL protein data set using Muscle with its default parameters. Additional file 3. Evolutionary analysis of Eudicotyledons DCL proteins. The evolutionary history was inferred using the Maximum Likelihood method and JTT matrix-based model. The tree with the highest log likelihood (-10878.12) is shown. The percentage of trees in which the associated taxa clustered together is shown below the branches. Initial tree(s) for the heuristic search were obtained automatically by applying Neighbor-Joining and BioNJ algorithms to a matrix of pairwise distances estimated using the JTT model. The topology with superior log likelihood
Multi-Scale, Class-Generic, Privacy-Preserving Video: In recent years, high-performance video recording devices have become ubiquitous, posing an unprecedented challenge to preserving personal privacy. As a result, privacy-preserving video systems have been receiving increased attention. In this paper, we present a novel privacy-preserving video algorithm that uses semantic segmentation to identify regions of interest, which are then anonymized with an adaptive blurring algorithm. This algorithm addresses two of the most important shortcomings of existing solutions: it is multi-scale, meaning it can identify and uniformly anonymize objects of different scales in the same image, and it is class-generic, so it can be used to anonymize any class of objects of interest. We show experimentally that our algorithm achieves excellent anonymity while preserving meaning in the visual data processed. Introduction Video capture devices have become ubiquitous [1]. Modern cities are now densely covered by advanced surveillance camera networks [2], and mobile devices with video capture capabilities are inexpensive and readily available in almost every country in the world. Even entry-level smartphones have the ability to record videos in Full High Definition (FHD) resolution (1920 × 1080 pixels) at frame rates up to 30 frames per second (FPS). In addition, advances in machine learning for visual data understanding mean that large amounts of recorded video can be processed quickly and easily, and semantic information extracted automatically. The net result of these advances is that personal privacy is rapidly shrinking. Constructing a video anonymization system is a common solution to protect privacy in systems that deal with visual or audio data [3,4]. The most common approach is to process a raw video or a set of images by applying multiple privacy filters. These filters either obfuscate sensitive information or completely replace it with unidentifiable versions of that same data [2]. Two general types of algorithms have been developed. The first are global algorithms that apply a uniform transformation to the whole image, such as Gaussian blur, superpixelation, downsampling, or wavelet decomposition [5][6][7][8]. These methods are fast and simple to implement but have several downsides. First, because they are applied uniformly across an image, they do not provide the same level of anonymity to objects at different distances. For example, if a face is three feet from a camera, it will be much clearer than a face that is several yards away. In fact, to achieve sufficient anonymity for very near objects, it may be necessary to blur the image to the point that the most distant objects become indistinguishable from the background [8]. Second, because they transform the entire image, they may destroy information required for the task the video is recorded for. For example, blurring traffic camera data to anonymize faces may reduce license plate recognition rates. The second type of algorithm is machine learning based. These algorithms recognize certain features in images and apply local filters, masks, or transformations [1,9,10]. While these algorithms solve some of the problems with global algorithms, they also suffer from multiple shortcomings. The first is that they are generally specialized to detect and anonymize a particular aspect of the image, in almost all cases faces.
While faces are definitely an important privacy feature, other aspects of the image may also be sensitive: license plates, street signs, car make and model, etc. Unfortunately, most of these algorithms do not easily generalize to other classes of objects. For example, face detection and writing detection models are architecturally very different (see [11] for a good example of a state-of-the-art text detector). The other major drawback is that these types of algorithms have problems with multi-scale detection [12]. As a result, while faces in the foreground may be well recognized, smaller scale faces, such as those in the background, may be missed. Other related systems can be found in [13][14][15][16][17]. In this paper, we propose a different technique. Rather than developing a detector for a specific class of objects, we use semantic segmentation, which generates pixel-level class labels for the entire image, using the DeepLab algorithm [18,19]. This algorithm has several advantages. First, it can be trained on one or more classes, ranging from text to faces, allowing the use of a single model to anonymize a wide range of classes, or even multiple classes at the same time. Second, it is multi-scale, meaning it can correctly classify pixels belonging to objects at a wide variety of scales. Based on the output from the semantic segmentation stage, we perform a scale-dependent Gaussian blur on the pixels of interest. The resulting system gives us an extremely flexible method to effectively anonymize a wide range of object classes at a wide range of scales, without negatively affecting the performance in the task for which the video was recorded. To demonstrate the viability and flexibility of the system, we first show that we can train DeepLab to label pixels for a wide range of classes and scales. We then consider two tasks: human action recognition and license plate recognition. For human action recognition, we anonymize the human subject in the standard UCF101 dataset, and show that this has only a minimal effect on the action recognition rate. We repeat this at various scales. We then consider license plate recognition and show that our algorithm allows us to completely anonymize license plates in the Chinese City Parking Dataset (CCPD). Semantic Image Segmentation Semantic image segmentation is one of the fundamental topics in the field of computer vision [18]. The objective of semantic segmentation is to cluster all parts of an image that belong to the same object [20]. In pixel-level semantic image segmentation, every pixel in the target image should be classified as belonging to a certain object class and be labeled accordingly [19]. Generally, this results in an image "mask", with pixel classes indicated by the value of the corresponding pixel in the mask (see Figure 1). Different from object detection, semantic image segmentation does not distinguish between different instances of the same class of objects [21]. Up until five years ago, traditional image segmentation algorithms relying heavily on domain knowledge (i.e., algorithms that did not apply neural networks) were regarded as the mainstream approach to computer vision tasks by the scientific community [20]. In these traditional approaches, a fundamental part of the process was choosing the features. Pixel colors, histograms of oriented gradients (HOG), scale-invariant feature transformations (SIFT), bag-of-visual-words (BOV), poselets, and textons were among the most frequently chosen features [20].
Picking several features for each pixel in high-resolution images leads to high computational loads in the model training process. Therefore, pre-processing methods for dimensionality reduction, such as image down-sampling and principal component analysis (PCA), were often used prior to semantic image segmentation [22]. In recent years, researchers have made numerous attempts to use deep-learning techniques in the training of semantic image segmentation systems. The fundamental idea is to treat a trained neural network as a convolution and apply it to the input pixel data, thus efficiently implementing the sliding window process [20]. Published papers (e.g., [23,24]) show that the use of deep-learning techniques enhances many aspects of semantic image segmentation models. Moreover, these new deep-learning based semantic segmentation models have significant advantages in segmentation accuracy and efficiency over models trained with traditional approaches [18,24]. Semantic segmentation with deep neural networks is a well-studied topic. An excellent survey of these methods can be found in [25]. Some of the more recent methods include: MobileNetv3 [26], SVCNet [27], CFNet [28], and HFCNet [29]. DeepLab In this project, we utilize DeepLab to implement the analyzer component. DeepLab is a deep-learning based semantic image segmentation model developed by Google, delivering high performance on the most commonly used computer vision testing datasets, such as PASCAL VOC 2012 and Cityscapes [19]. DeepLab combines networks trained for image classification with the "atrous convolution", atrous spatial pyramid pooling (ASPP), Deep Convolutional Neural Networks (DCNN), and fully-connected Conditional Random Fields (CRF). Atrous convolutions enable this model to explicitly control the resolution at which feature responses are computed with DCNNs and allow the model to incorporate a larger context without an increase in computational requirements. It is also notable that the model has the capacity to provide robust segmentation features at multiple scales by making use of ASPP [30]. Incoming convolutional feature layers can be probed by ASPP with filters at multiple sampling rates and effective fields-of-view. Finally, DeepLab achieves high accuracy in localizing entities by combining methods from DCNNs and probabilistic graphical models, to which a fully-connected CRF is applied to eradicate any loss of localization accuracy [23]. Thanks to all these techniques, DeepLab can produce semantic predictions with pixel-level accuracy and detailed segmentation maps along objects' boundaries. An illustration of the DeepLab network is shown in Figure 2. As some of the components of DeepLab are complex compared to other DNNs, we review how these components work. • Atrous Convolutions are a type of convolution that introduces a new parameter called the "dilation rate". While normal convolutional filters map each filter coefficient onto adjacent pixels, atrous convolutions allow for spacing between kernel values. For example, a 3 × 3 kernel with a dilation rate of 2 will convolve each filter weight with every other pixel (in a checkerboard pattern), effectively turning it into a 5 × 5 filter while maintaining the 3 × 3 filter's computational cost. • Atrous Spatial Pyramid Pooling (ASPP) uses multiple atrous convolutions, each with a different dilation rate, to capture image information at different scales. • Fully Connected Conditional Random Fields (CRF) are used to smooth segmentation maps as a post-processing step.
These models have two terms. The first corresponds to the softmax probability of the class assigned to each pixel. The second is a "penalty term" that penalizes pixels that are close together but have different labels. Labels are assigned by finding the maximal probability label assignment under this model. Figure 2. A high-level illustration of DeepLabv3+. The general structure is an atrous convolution, followed by atrous pyramid pooling, with results from both layers concatenated and both used as inputs to the final layers. Several upgraded models of DeepLab have been developed and open-sourced by Google since its first release. The specific version we chose for this project is DeepLabv3+, released in February 2018, and the latest at the time of the experiments. DeepLabv3+'s new features include a new encoder-decoder structure, the Xception module, and atrous separable convolutions. By using the earlier versions of DeepLab for the encoder module and adding an effective decoder module to refine object boundaries [31], the model can achieve good performance in capturing sharp object boundaries. Additionally, the use of the Xception model, which has shown promising image classification and object detection results [24], allows the new model to be faster and more accurate. The effectiveness of DeepLabv3+ is demonstrated by its accuracy of 89.0% and 82.1% on the PASCAL VOC 2012 and Cityscapes datasets, respectively [18]. Gaussian Blur Algorithm Gaussian blur is a convolution filter that can provide anonymity to the images it is applied to [5]. Due to its simplicity and practicability, it is widely used in many image processing-related applications, such as Adobe Photoshop [3]. Convolutional filters are one of the most fundamental image processing techniques. Convolutional filters are usually applied separately to every single pixel in the target image. In each convolution, the feature values of a pixel and its neighboring pixels are captured by a fixed-size convolution kernel [5]. According to its position in the convolution kernel, each pixel is assigned a specific weight. Finally, a new feature value is calculated as the weighted average of the captured feature values and overwrites the original value. Various visual transformations, such as image sharpening, embossing, and image obfuscation, can be achieved by applying convolutional filters with different distributions of weights in the convolution kernel [32]. Gaussian blur is a convolutional filter whose kernel weights follow a normal (Gaussian) distribution [32]. Since the pixel matrix of a 2D image is two-dimensional, a 2D normal distribution is used in the Gaussian blurring algorithm [5]. Similar to a one-dimensional normal distribution, if a neighbor pixel is located close to the source pixel in the original image, the weight of that pixel will be higher than those of more distant pixels, which means it contributes more to the new feature value of the source pixel. This scheme of distributed weights gives the Gaussian blur algorithm the ability to provide smooth image obfuscation. The kernel weights are given by the two-dimensional Gaussian function of Equation (1): G(x, y) = [1 / (2π σx σy)] exp(−[(x − x0)² / (2σx²) + (y − y0)² / (2σy²)]). In this equation, (x, y) refers to a coordinate position in the convolution kernel, (x0, y0) is the coordinate of the kernel's center, and σx and σy refer to the standard deviations in the directions of the abscissa and ordinate, respectively [5].
In this case, the coordinates of the kernel's center are always (0, 0), while the standard deviations in the two directions are the same and are replaced by σ. Consequently, the previous function can be simplified to Equation (2): G(x, y) = [1 / (2πσ²)] exp(−(x² + y²) / (2σ²)). Because we are implementing this convolution filter for a discretized image, we need to discretize the Gaussian filter as well. This is done by approximating the continuous filter as an R × R matrix of coefficients, where R is odd. These coefficients are the values of the Gaussian kernel at discrete points around the center. This filter is convolved with the image, and the current pixel value is replaced by this weighted average of the surrounding pixels. Because our system handles anonymization at different scales, the value of R will vary, as well as the value of σ. System Design Our system has two stages. The first stage, which we call the analyzer, takes the original image and generates a semantic segmentation label mask. This mask, along with the original image, is fed into the anonymizer, which adaptively generates a Gaussian blurring filter based on the size of the region to be blurred. Figure 3 illustrates this basic architecture. Analyzer The analyzer component performs several tasks. The first is to convert the input data into the standard format (standard 24 bit RGB bitmap) for the semantic segmentation component. Because our system can handle either video or images, video input is decompressed and converted to individual frames, which are fed into the semantic segmenter. These frames will be recombined into the output video at the end of the anonymization process. The second task of the analyzer is to generate the pixel label mask image (see Figure 4). This is done using the semantic segmentation algorithm available in DeepLabv3+, the newest version of DeepLab developed and open-sourced by Google. The output mask image has the same dimensions as the input image, with each pixel set to the identified class value or zero if the pixel was not identified as belonging to any of the known classes. This mask, along with the original image, is then passed to the anonymizer. Because our implementation follows the guidance provided by Google's official documentation, the model training process strictly follows the training protocols used in [18,33]. In this section, only some fundamentally important methods and parameter settings are listed. The complete versions of the training protocol can be found in [18,33]. A "poly" learning rate policy was employed in the training. The initial learning rate is set to 0.007. More details of the "poly" learning rate policy can be found in [19,34]. The output stride was set to 16. As DeepLabv3+ uses large-rate atrous convolutions, we must choose a large crop size. If the chosen crop size is too small, the performance of DeepLabv3+ can be affected [18,33]. Therefore, a large crop size (513 × 513) is used by the model for training. To enrich the training dataset, we apply data augmentation by flipping and scaling the input images. The scaling factor is in the range of 0.5-2.0 and the flipping can be to the right or to the left. In addition, the choices of the scaling factor and the flipping direction are randomized [33]. Our implemented DeepLabv3+ system is trained with the augmented PASCAL VOC dataset. In the original PASCAL VOC dataset, made up of 1464 training samples, 1449 validation samples, and 1456 testing samples, images are annotated with their content at the pixel level.
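As an illustration of the scaling-and-flipping augmentation described above, a minimal Python sketch is given below. It is not the project's actual training code: the helper name, the use of OpenCV for resizing, and the interpolation choices are assumptions on our part.

```python
import random

import cv2
import numpy as np


def random_scale_and_flip(image, label, scale_range=(0.5, 2.0)):
    """Randomly scale an image/label pair and randomly flip it left or right.

    Mirrors the augmentation described in the text: a scaling factor drawn
    from [0.5, 2.0] and a randomized flip direction. Bilinear interpolation
    for the image and nearest-neighbour interpolation for the label mask
    (so class ids are not blended) are our own choices.
    """
    s = random.uniform(*scale_range)
    image = cv2.resize(image, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
    label = cv2.resize(label, None, fx=s, fy=s, interpolation=cv2.INTER_NEAREST)
    if random.random() < 0.5:
        image = np.fliplr(image).copy()
        label = np.fliplr(label).copy()
    return image, label
```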
For the training phase, extra annotations provided by Dr.Sleep [35] are used for augmentation. As a result, there are 10,582 augmented training images in the dataset used [18]. The trained DeepLabv3+ model in our proposed system has the ability to perform semantic image segmentation by classifying pixels into 21 different classes of objects (one of which is the background class). Each pixel of the output image contains a value that represents one class of objects [35]. For example, for each pixel classified into the class "Person" in a segmented image, the output image contains the RGB value (192, 128, 128). The mean of the intersection-over-union of pixels across the 21 classes (mIOU) is the performance measure. For this implementation, the trained model can achieve a 77.31% mIOU accuracy on the Pascal VOC 2012 validation dataset [18]. Anonymizer The anonymization algorithm used by the anonymizer is the Gaussian blur, which replaces the feature value of a source pixel with the weighted average (following a normal distribution) of its neighboring pixels [5]. We implemented the anonymizer with Python. The core idea behind our implementation is the same as that of the general Gaussian blur [5], and the pixel features we chose are the RGB values, which means that the Gaussian kernel needs to apply a convolution to the same pixel three times to get its new R, G, and B values. Different levels of object obfuscation can be achieved by choosing varying convolution kernels. These are defined by two modifiable parameters: the radius (r) and sigma (s) for the distribution of weights [5]. However, it is important to note that a large radius value generates a larger kernel, which requires more pixels when calculating the weighted averages. This means that the differences between the replacement values of adjacent pixels are narrowed, and the resulting visual effect is an image that looks more blurred. The value of the sigma parameter for the two-dimensional Gaussian can also be increased, resulting in a flatter peak and increasing the blurring effect [5]. Figure 5 shows how tweaking these two parameters affects the blurring effect. When applying the convolution filter, there are two issues that must be considered. The first is how to handle pixels on edges. Handling edges is important because if we simply apply the filter naively using pixels that are external to the object, the edges of the object become mixed with the background and are no longer clearly differentiated. This can have a negative impact on object detectors and action recognition classifiers. To solve this problem, we used a symmetry strategy to fill in the missing values. In the final implementation, for every kernel value not included in the object, a replacement value is taken from another pixel in the object. The position of the alternative pixel is chosen by symmetry on either the x or y axis, relative to the position of the source pixel. The second problem is selecting the correct filter radius and sigma. Since we can detect objects of the same class at different scales, there is no single radius that works for all filters. A filter radius suitable for small-scale objects will not adequately anonymize large-scale objects, while a filter radius for large-scale objects smooths small-scale objects too much and results in excessive artifacts when dealing with edge pixels. To solve this, we compute a bounding box for each object, and set the filter radius to 1/4 of the average length of the two sides, rounded to the nearest odd number.
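The kernel discretization and the bounding-box-based radius rule described above can be sketched as follows. This is a minimal NumPy illustration with our own function names; the rounding direction for the odd radius is an assumption, and σ is left as a free parameter here (the value actually used is given in the next sentence of the text).

```python
import numpy as np


def gaussian_kernel(radius, sigma):
    """Discrete, normalized 2-D Gaussian kernel of size (2*radius + 1)^2."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()


def adaptive_radius(object_mask):
    """Filter radius for one object: a quarter of the average bounding-box
    side length, forced to an odd value (rounding up is our assumption)."""
    ys, xs = np.nonzero(object_mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    r = int(round(0.25 * (height + width) / 2.0))
    return max(1, r if r % 2 == 1 else r + 1)
```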
Given this filter width, we set sigma equal to 10 times the radius, a value that we experimentally determined. Evaluation To evaluate our system, we look at several different features. First, to show that it can be used to anonymize very different classes of objects, we consider two different datasets: UCF101, a human action recognition dataset, and the Chinese City Parking Dataset (CCPD), a dataset of license plate photos. We then consider two different use cases. The first is the case where we want to anonymize objects in the scene without negatively impacting machine learning of other features of the video, a key capability for any anonymization system. For this case, we use the UCF101 dataset and demonstrate that we can anonymize the human figures in the dataset with minimal impact on action recognition classification rates. For the second use case, we want to completely anonymize an object so that it cannot be recognized by a machine learning algorithm. In this latter case, we show that we can anonymize the license plates in the CCPD dataset to the degree that they cannot be recognized even when the machine learning algorithm is trained with blurred data. Finally, we consider the performance on scaled objects by repeating the UCF101 experiments with multiple scaled versions of the original data. To compare the performance of our system against a standard benchmark, we ran identical experiments with a global Gaussian blur algorithm. To maintain an equivalent level of privacy with our adaptive algorithm, we chose to set the filter radius and σ for the global Gaussian blur to the maximum of all radii and σ values calculated by the adaptive algorithm on the dataset being anonymized. Datasets UCF101 is an action recognition dataset composed of 13,320 realistic human action videos, collected from YouTube and classified into 101 action categories. The UCF101 dataset features a wide range of actions, frequently present camera motion, and a variety of objects, object sizes, viewpoints, illumination conditions, etc. [36]. CCPD (Chinese City Parking Dataset) is an open-source dataset for license plate detection and recognition [37]. It includes over 200,000 images of parked cars in a variety of lighting and weather conditions, with bounding boxes around their license plates. For the purpose of testing the system, 20,000 images from CCPD were chosen as our test dataset. We refer to CCPD* as the subset of 20,000 samples chosen. The remaining images were used to train DeepLabv3+ to label license plates, a class that was not included in the original model. Examples of the CCPD dataset can be seen in Figure 6. UCF101 Action Recognition The first set of tests is designed to check whether the utility of the original video data is maintained after being processed by the anonymizer. These tests are conducted using the blurred UCF101 dataset. The 'utility' of visual data refers to the amount of useful information that can be extracted from it. Concretely, preserving utility in the anonymized videos from UCF101 means the blurred videos can still be used for some task, such as action recognition. For this test, we used a deep-learning based action recognition model called temporal segment network (TSN) to perform action recognition on the blurred UCF101 dataset. More details of its working principles can be found in [38].
In previously published experiments, TSN achieved a 93.5% action recognition accuracy on the original UCF101 dataset in the "RGB + Flow" mode (where "RGB" refers to the RGB video stream and "Flow" to the optical flow stream). In our testing, we trained the TSN model with the blurred UCF101 training dataset and measured the action recognition accuracy on the same dataset by following established guidelines [38]. The results for this experiment can be seen in Table 1. The base accuracy of TSN on this dataset was 93.5%. After training with anonymized training data, the recognition rate fell to 88.9%. While some accuracy was lost, the algorithm was still reasonably accurate. Algorithms designed specifically for anonymized data (e.g., [7]) could potentially perform identically to the original algorithm. To demonstrate the ability of our system to handle multi-scale data, we performed a second round of experiments with the UCF101 dataset. The test was performed by first downsizing the original UCF101 videos to 1/2, 1/4, and 1/8 of their original size. Each frame of these downsized videos was then placed in the center of a black image the same size as the original image. This created a set of videos the same dimensions as the original videos, but with human actors a fraction of their original size. We then performed the same anonymization and classification tasks from the previous experiment. The results, included in Table 1, show that the scale of the objects has no effect on the anonymization process. All human figures were detected and anonymized, and the recognition rate remained similar to the full-sized test, with only small, gradual deterioration, likely due to the loss of information from the down-scaling process. In comparison, the global Gaussian blur algorithm seriously deteriorated the performance of the classifier, with results ranging from 40% to 28.6%. This is primarily due to the need to maintain equivalent privacy, which results in selecting parameters that correspond to the worst (most highly blurred) case for the adaptive algorithm. The failure cases primarily occurred in instances where numerous objects of the same class overlapped, which resulted in a degenerate filter and a video that was too blurred to recognize the action taking place. Examples of this can be seen in Figure 7. The first of these examples is correctly labeled "Marching Band" and the second should be labeled "Military Parade". However, as can be seen from the masks, the labeled human figures overlap to such a degree that the entire image is treated as a single large instance of the human class. Figure 7. Examples of failure cases from the UCF101 dataset. The first example is labeled "Marching Band" and the second is labeled "Military Parade". In both cases, the clutter of same-label objects results in a degenerate blurring filter. CCPD* For the CCPD* dataset, we consider the case where the objects being anonymized are sensitive in nature, and we specifically want to prevent a machine learning algorithm from recognizing them. Different from the previous scenario, in this case, the successful outcome of the anonymization system is to be checked with a machine-learning license plate recognition system. We implemented this test with an open-source license plate detection and recognition model [39], which is used to detect the existence of a license plate in each of the images in CCPD* and to recognize the license plate number.
This code implements the algorithm discussed in [40], which has a reported recognition accuracy of 98.4%. For this experiment, DeepLabv3+ was retrained to label license plates using the remaining 180,000 license plate images from CCPD. The results of this experiment can be seen in Table 2. We split the results into two parts: detection and recognition. The detection and recognition model [40] used was able to detect 100% of the license plates in the CCPD* dataset and recognize the license plate number 97.8% of the time. After training, DeepLabv3+ was able to detect 98.3% of the license plates in CCPD*. After anonymization, the detection rate of the classifier from [40] dropped to 10.7%, with a recognition rate of 2.8%. The model used to recognize the license plates is a joint detection/recognition model, so blurring the text of the license plate reduces both detection and recognition of the license plate digits. Table 2. Detection and recognition rates on the 20,000 image CCPD* dataset. Base detection and recognition rates are the performance of the classifier from Zhang and Huang [40]. The DeepLabv3+ detection rate is the percent of the test set where a license plate was detected. Post-anonymization detection and recognition rates are the rates for the classifier from Zhang and Huang [40] on the test dataset after anonymization. The failure cases in the CCPD dataset primarily revolved around two cases: inability to detect the rectangular shape of the license plate and failures due to apparent changes in the color of the plate, both of which resulted in DeepLabv3+ failing to detect the plate. Examples of this can be seen in Figure 8. In the first example, the low light conditions rendered the outline of the plate indiscernible. In the second example, the lighting significantly modifies the color of the plate. This dataset was collected in mainland China, where license plates are uniformly dark blue. We theorize that the absence of this blue color resulted in this license plate not being detected. Figure 8. Examples of failure cases from the CCPD dataset. In the first example, the low light leaves no clear outline of the plate. In the second example, the plate can be seen, but the lighting conditions render the color unrecognizable. In both cases, DeepLabv3+ fails to detect the plate. Conclusions and Future Work In this paper, we describe a flexible anonymization algorithm based on semantic segmentation with DeepLabv3+ and adaptive Gaussian blurring. This system addresses several issues with existing video anonymization systems, namely the lack of flexibility in object class recognition and the inability to handle multi-scale objects. We have shown that this system works for several practical use cases, at a variety of scales. This flexibility and adaptability mean that our algorithm can be used in many practical situations where video anonymization is needed. While this system is extremely practical, there are several areas where future work can be done. One such area would be to explore different anonymization layers, which may be more suitable for some specific applications. We also feel it would be useful to consider different use cases, and particularly cases where the machine learning algorithm for the vision task could be modified in tandem with the anonymization algorithm to provide both anonymization and higher accuracy for the vision task.
Another issue that needs to be addressed is that the current algorithm estimates the size of objects simply by their bounding box. In cases where objects in the images are distorted by camera perspective, or take up significant depth in the image, the resulting filter may over-blur all or part of the object. While, with knowledge of the object class, we could attempt to estimate orientation or similar information, this is further complicated by occlusion. Additionally, depth-of-field effects can result in initial blurring, which will again result in over-blurring of the object. As we can see from the global Gaussian results, this can seriously decrease the accuracy rate of the machine learning algorithm. Additionally, further evaluation of this algorithm would be useful. While we show that it works well for anonymized action recognition and anonymizing license plates, there are many other privacy-critical cases that could be considered. We also believe that it would be interesting to explore different parameter and hyperparameter choices for the DeepLabv3+ model, to determine their effect on the final anonymization. Author Contributions: All authors designed the project and drafted the manuscript, collected the data, wrote the code and performed the analysis. All participated in finalizing and approved the manuscript. All authors have read and agreed to the published version of the manuscript.
Integrated Physiological, Biochemical, and Molecular Analysis Identifies Important Traits and Mechanisms Associated with Differential Response of Rice Genotypes to Elevated Temperature In changing climatic conditions, heat stress caused by high temperature poses a serious threat to rice cultivation. An analysis at multiple organizational levels (physiological, biochemical, and molecular) is required to fully understand the impact of elevated temperature in rice. This study was aimed at deciphering the elevated temperature response in 11 popular and mega rice cultivars widely grown in India. Physiological and biochemical traits, specifically membrane thermostability (MTS), antioxidants, and photosynthesis, were studied at vegetative and reproductive phases and used to establish a correlation with grain yield under stress. Several useful traits in different genotypes were identified, which will be an important resource for developing high temperature-tolerant rice cultivars. Interestingly, Nagina22 emerged as the best performer in terms of yield as well as the expression of physiological and biochemical traits at elevated temperature. It showed less relative injury, a smaller reduction in chlorophyll content, increased superoxide dismutase, catalase, and peroxidase activities, a smaller reduction in net photosynthetic rate (PN), a high transpiration rate (E), and favourable values of other photosynthetic/fluorescence parameters, contributing to the least reduction in spikelet fertility and grain yield at elevated temperature. Furthermore, the expression of 14 genes, including heat shock transcription factors and heat shock proteins, was analyzed in Nagina22 (tolerant) and Vandana (susceptible) at the flowering phase, confirming that N22 also performed better at the molecular level during elevated temperature. This study shows that the elevated temperature response is complex and involves multiple biological processes that need to be characterized to address the challenges of the extreme conditions of the future climate. Keywords: heat stress, Oryza sativa, Hsf, antioxidants, photosynthesis INTRODUCTION Rice production and productivity are seriously affected by several biotic (diseases and insects) and abiotic (drought, extreme temperature, salinity, submergence, and heavy metals) stresses. In changing climatic conditions, these stresses have become more challenging and have already shown severe negative consequences for rice cultivation (Nguyen, 2002; Wassmann and Dobermann, 2007). Heat stress is one of the most serious issues in climate change, affecting all phases of rice plant growth and metabolism (Prasad et al., 2006; Jagadish et al., 2008; Sailaja et al., 2014). An increase in daytime temperature to more than 34 °C decreased rice yield by up to 8% (Bahuguna et al., 2014; Shi et al., 2014). In 2003, about 5.18 million tons of paddy was lost due to a heat wave with temperatures above 38 °C for more than 20 days (Xia and Qi, 2004; Yang et al., 2004). The global mean temperature is rising every year, and it is predicted that the rise will be up to 3.7 °C by 2100 (IPCC, 2013). These circumstances propel breeders to develop heat-tolerant rice cultivars that can sustain high temperature without a significant yield penalty. A detailed analysis of the biochemical and physiological processes contributing to tolerance/susceptibility in rice is necessary to develop heat stress-tolerant rice cultivars (Krishnan et al., 2011). Several investigations have been carried out in rice to decipher the most sensitive phase and the physiological processes affected by high temperature (Jagadish et al., 2010, 2011; Shi et al., 2014). Although both vegetative and reproductive phases are affected by high temperature, the latter seems to be more crucial, thereby impacting the yield directly (Hall, 1992; Prasad et al., 2006; Jagadish et al., 2007, 2008, 2010; Shi et al., 2014). Several studies have been conducted to identify rice genotypes tolerant to high temperature (Ishimaru et al., 2010; Jagadish et al., 2010; Prasanth et al., 2012; Ye et al., 2012); however, very few studies have aimed at the high temperature response of popular and mega rice cultivars (Ziska et al., 1996; Prasad et al., 2006; Shi et al., 2014). In addition, most of the earlier heat stress studies were based on sudden exposure of plants to a fixed increased temperature for a few hours or days, which causes a shock to plant cells. Indeed, these studies have provided substantial information on the rice response to high temperature. However, such circumstances are unlikely to prevail in the natural environment. As the biological processes underlying rice responses to climate change are poorly understood, a comprehensive study comprising physiological, biochemical, and molecular analyses was performed using popular rice cultivars exposed to elevated temperature. In this study, rice genotypes were grown at control and elevated temperatures right from seedling to maturity.
Abbreviations: N22, Nagina22; ETS, elevated temperature stress; RI, relative injury; SOD, superoxide dismutase; CAT, catalase; POD, peroxidase; PN, net photosynthetic rate; gs, stomatal conductance; E, transpiration rate; Ci, internal CO2 concentration; iWUE, water use efficiency; Fv/Fm, quantum yield of PSII; Fv′/Fm′, efficiency of excitation capture by open PSII centers; ETR, electron transport rate; ΦPSII, in vivo quantum yield of PSII photochemistry; ΦCO2, quantum yield of CO2 assimilation; qP, coefficient of photochemical quenching; qN, coefficient of non-photochemical quenching. Different physiological and biochemical traits, such as membrane thermostability (MTS), chlorophyll and carotenoid contents, antioxidant enzymes, and photosynthetic and fluorescence parameters, were measured at vegetative and reproductive phases. Yield attributes under control and elevated temperatures were utilized for correlation analysis with physiological traits to identify the most reliable traits for phenotyping or breeding of rice genotypes for elevated temperature tolerance. Furthermore, expression of 14 genes was analyzed in representative susceptible and tolerant rice cultivars. MATERIALS AND METHODS The experiments were conducted to investigate the physiological and biochemical responses of selected popular rice cultivars (Table 1) at elevated temperature. Unlike other experiments where heat stress was applied by exposing plants to high temperature for a short duration (1-2 h), this experiment was designed to study the response of genotypes growing at a higher temperature, as 24-day-old seedlings were shifted to elevated temperature and maintained there until seed harvest. To simulate an elevated temperature treatment resembling the natural environment, stress was imposed by shifting plants into a custom-made polyhouse built using metal frames and covered with transparent polythene sheets. Temperature inside and outside the polyhouse was recorded regularly (Supplementary Figure 1). Importantly, the elevated temperature stress (inside the polyhouse) was always proportional to the control (outside the polyhouse) temperature. The plants were allowed to grow inside the polyhouse until physiological maturity. The mean maximum and minimum temperatures recorded from transplantation to the flowering phase were 5.6 and 1.5 °C higher, respectively, inside the polyhouse than outside. The mean maximum temperature from flowering to seed maturity was 5.5 °C higher inside the polyhouse. A methodological framework of the experiments conducted in this study is shown in Figure 1. The experiment was carried out in the Rabi season (January-May) of 2013 and 2014, which is considered the best cropping season at Hyderabad, India, for heat stress experiments. The plants of the 11 cultivars kept at control and elevated temperatures were used for physiological, biochemical, and yield studies at vegetative and reproductive phases. A fully matured leaf at the vegetative stage and the flag leaf after anthesis at the reproductive stage were used for physiological and biochemical assays. The details of the protocols and methods followed for estimation of MTS, chlorophyll, carotenoids, enzymes (SOD, CAT, and POD), gas exchange parameters, and yield attributes are given in Supplementary File 1. Statistical Analysis The data were analyzed by Analysis of Variance (ANOVA) using the statistical computer package Statistix Ver. 8.1, following a CRD (Completely Randomized Design).
The differences between treatments and cultivars were estimated using the HSD (Honest Significant Difference) test. Gene Expression Analysis To study gene expression in susceptible and tolerant rice genotypes, seeds of the N22 and Vandana cultivars were germinated in petri plates and transferred into earthen pots. One pot containing four plants of each cultivar was transferred to a growth chamber for heat stress treatment. Heat stress (42 °C) for 24 h was imposed during the flowering initiation stage. Three biological replications were kept for this experiment. Gene sequences were retrieved from NCBI (http://www.ncbi.nlm.nih.gov). Thirteen genes studied previously during a heat stress experiment in rice seedlings (Sailaja et al., 2014) were used here for expression analysis. These were heat shock transcription factors (OsHsfA2a, OsHsfA2e, OsHsfA7), heat shock proteins (HSP70 and HSP81.1), superoxide dismutase (SOD), sucrose-phosphate synthase 1 (SPS), cytochrome c oxidase assembly protein (Cyt-C-Oxi), squamosa promoter-binding-like protein 10 (SPL), cell wall integrity protein (CWIP), auxin response factor (ARF), nuclear transcription factor-Y (NF-Y) subunit A-3, and an unknown protein similar to ferredoxin (OsFd). In addition to these genes, expression of a fertility restorer homolog gene (FRH, AK101861; forward primer 5′-TTACGCCACGCTGATTGAGG-3′ and reverse primer 3′-CCGCTCCGCATTACACAACC-5′) was also analyzed in this study. Details of the gene sources, primers, and methods followed for RNA extraction and quantitative PCR (qPCR) were published in our previous study (Sailaja et al., 2014). Total RNA from the flag leaf of N22 and Vandana was isolated using the RNeasy Plant Mini Kit (Qiagen). cDNA synthesis of mRNAs was done using the Improm-II reverse transcription system (Promega), and qRT-PCR was performed using SYBR Premix Ex-Taq (Takara). Actin was chosen as an internal control, and all the reactions were run in triplicate. The qPCR conditions were 50 °C for 10 min (pre-holding stage), 95 °C for 10 min (holding stage), 40 cycles of denaturation at 95 °C for 15 s and annealing plus extension at 60 °C for 30 s, followed by a dissociation stage (melt curve analysis). In order to analyze the real-time PCR data, the comparative threshold cycle (CT) method was used. The CT-values are provided in Supplementary File 2. ΔCT was calculated as the CT of the target gene minus the CT of the reference gene. ΔΔCT values were calculated as the ΔCT of the treated sample minus the ΔCT of the control sample. The fold difference in gene expression was calculated as 2^-ΔΔCT. To calculate the ΔCT standard deviation, we followed http://www3.appliedbiosystems.com/cms/groups/mcb_support/documents/generaldocuments/cms_042380.pdf. A positive ΔΔCT value indicates down-regulation of the transcript. Here, if the test sample had a value of 0.25, it contained 1/4 the amount of target RNA relative to the calibrator, which was represented as a 4.0-fold down-regulation. Photosynthetic Pigments Photosynthetic pigments (Chl a and Chl b) and carotenoids were measured. Significant differences were observed in Chl a, Chl a/b, and total chlorophyll at ETS when compared with the control. However, differences were not significant in the case of Chl b and carotenoid content. A mean reduction of Chl a by 24 and 51% in the vegetative and reproductive phases, respectively, was observed at ETS. During the vegetative phase, an increase in Chl a content was observed in N22 and Sampada, whereas the maximum reduction was observed in BPT5204.
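As a brief aside on the expression analysis described above, the comparative CT calculation reduces to a few lines of Python; the CT values below are placeholders chosen only to reproduce the 0.25 example from the text, not measured data.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """2^-ddCT fold change (comparative CT method).

    dCT  = CT(target) - CT(reference), computed per sample.
    ddCT = dCT(treated) - dCT(control).
    Values below 1 indicate down-regulation of the transcript.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)


# Placeholder values: a ddCT of +2 gives 0.25, i.e. a 4.0-fold down-regulation.
print(fold_change(30.0, 20.0, 28.0, 20.0))  # -> 0.25
```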
At the reproductive phase, a reduction in Chl a was observed in almost all cultivars (Figures 2B,C). Furthermore, a reduction in total chlorophyll content was also observed in all cultivars during the vegetative and reproductive phases, by a mean of 14.9 and 42%, respectively (Supplementary Tables 1, 2). Antioxidants SOD, CAT, and POD activities were measured in control and ETS samples. Significant differences in SOD activity were observed during vegetative and reproductive phases under elevated temperature. Increased SOD activity was recorded in the BPT5204, IR64, Jaya, N22, Rasi, and Vandana cultivars, whereas decreased SOD activity was noticed in Krishna Hamsa, Sampada, and Swarna at both phases during elevated temperature (Figure 3A). Unlike SOD, significant differences in CAT activity were not observed at ETS, although N22 showed increased CAT activity at both vegetative and reproductive phases (Supplementary Table 3). POD activity was significantly affected under ETS during both phases. Cultivars such as N22, Sampada, and Vandana showed increased POD activity in the vegetative and reproductive phases (Figure 3B). Among the 11 cultivars chosen for the antioxidant enzyme activity assay, only N22 showed increased activity of each of the three enzymes (SOD, CAT, and POD) at both vegetative and reproductive stages under ETS. Photosynthesis To examine the effects of elevated temperature on photosynthesis, photosynthetic and fluorescence characters were measured in all cultivars at vegetative and reproductive phases. Different parameters such as net photosynthetic rate (PN), stomatal conductance (gs), transpiration rate (E), internal CO2 concentration (Ci), ratio of intercellular to ambient CO2 (Ci/Ca), and water use efficiency (iWUE) were analyzed. A reduction in PN was observed in all cultivars at both phases under ETS. The mean PN was significantly reduced by 27.8 and 23% at the vegetative and reproductive phases, respectively. Significant differences were noticed among the varieties. The interaction between treatment and variety (T × V) was also found to be statistically significant (P < 0.01). The maximum reduction of PN was observed in BPT5204, Vandana, and Varadhan, whereas the minimum reduction of PN was observed in Jaya, N22, Rasi, Krishna Hamsa, and IR64 at vegetative and reproductive phases (Figure 4A). The mean gs and E were more significantly affected during the vegetative phase than the reproductive phase under ETS. The reduction in mean gs was 21.7% (vegetative) and 11% (reproductive) (Figure 4B). Jaya and Rasi showed a significant increase in E at the reproductive phase under elevated temperature (Figure 5). Ci and Ci/Ca were not affected significantly at the reproductive phase when compared with the vegetative phase (Supplementary Table 4). Increased Ci was observed in IR64, Krishna Hamsa, N22, Rasi, Swarna, Vandana, and Varadhan, whereas an increased Ci/Ca ratio was observed in BPT5204, IR64, Jaya, Krishna Hamsa, N22, Rasi, Swarna, Vandana, and Varadhan under ETS at both phases, but it was statistically non-significant. Unlike other parameters, iWUE was significantly affected in the reproductive phase. Increased iWUE was observed in BPT5204, IR64, Jaya, MTU1010, N22, Rasi, Sampada, and Vandana at both phases during ETS, but it was statistically non-significant (Supplementary Table 5).
Fluorescence Parameters Along with the photosynthetic characteristics, different fluorescence parameters such as the maximum quantum yield of PSII (Fv/Fm), efficiency of excitation capture by open PSII centers (Fv′/Fm′), electron transport rate (ETR), in vivo quantum yield of PSII photochemistry (ΦPSII), quantum yield of CO2 assimilation (ΦCO2), coefficient of photochemical quenching (qP), and coefficient of non-photochemical quenching (qN) were measured. A marginal reduction of the Fv/Fm ratio was observed in all cultivars at both phases under ETS (Figure 6A). The maximum reduction of Fv/Fm was observed in Vandana and Rasi. A reduction of Fv′/Fm′ was also observed under ETS in all cultivars. Here, the reduction was more significant in the reproductive phase when compared with the vegetative stage. Four cultivars (IR64, Jaya, MTU1010, and N22) showed the minimum reduction of Fv′/Fm′ (Supplementary Table 6). The mean ETR of all the cultivars decreased by 16.7 and 19% at the vegetative and reproductive phases, respectively, under ETS (Figure 6B). A reduction in ΦPSII and ΦCO2 was observed in all the cultivars during ETS at vegetative and reproductive phases. Furthermore, a more significant reduction (21%) of ΦPSII was observed at the reproductive phase. A mean reduction in qP of 12 and 15% was observed at the vegetative and reproductive phases, respectively. A marginal increase in qP was observed in Varadhan, whereas in the other cultivars, qP was reduced under ETS. An increase in qN was observed in most cultivars during vegetative and reproductive phases under ETS. At the reproductive stage, a decrease in qN was observed in BPT5204 and Varadhan, whereas the other cultivars showed increased qN under ETS (Supplementary Table 7). Plant Height, Tiller Number, and Number of Panicles per Hill Plant height at maturity was measured at ETS and control. The maximum increase in plant height at elevated temperature was observed in N22, whereas the maximum decrease was observed in Varadhan. Significant differences in tiller number/panicle number (P < 0.05) were observed under ETS. Here, the maximum increase was observed in Krishna Hamsa, whereas the maximum decrease was observed in Jaya and BPT5204 (Supplementary Table 8). Yield Attributes The following yield-associated traits were recorded in rice cultivars grown at control and elevated temperatures. Days to 50% Flowering and Days to Maturity Observations on days to 50% flowering and days to maturity are presented in Table 2. There was a significant reduction in the number of days to 50% flowering in all cultivars under ETS. In addition, a significant decrease in the number of days to physiological maturity was observed in all cultivars under ETS. The grain-filling period (the difference between flowering and maturity) was also decreased under elevated temperature in almost all cultivars. The mean grain-filling period across all varieties was reduced by 2 days. A maximum reduction of 6 days was observed in Krishna Hamsa and Rasi, followed by IR64 and N22 (Table 2). Correlation Analysis To observe the correlation between yield attributes and the different biochemical/physiological traits studied under ETS, a multiple correlation analysis was performed (Supplementary Table 9). The correlation coefficient values indicated that spikelet fertility was positively and significantly associated with grain yield recorded under ETS. Furthermore, a strong negative association of relative injury % and maximum quantum yield of PSII (Fv/Fm) was observed with grain yield under ETS.
E (transpiration rate) at the reproductive stage also showed a positive association with grain yield under heat stress. Gene Expression To study the molecular response, expression of 14 genes was analyzed in representative susceptible and tolerant rice genotypes. N22 was selected as the tolerant and Vandana as the susceptible genotype based on the physiological, biochemical, and yield studies. These two genotypes were selected for the gene expression study, as they have similar flowering time and maturity duration. The expression analysis was done at the reproductive phase, considering it to be more sensitive to stress. In order to analyze gene expression, a heat stress treatment (42 °C for 24 h) was given at the flowering stage in a controlled environment (plant growth chamber). In our previous study (Sailaja et al., 2014), 13 genes were used to study the heat stress response in young seedlings of rice. Here, in addition to those 13 genes, FRH was also included for gene expression analysis at the reproductive phase. N22 showed very high expression of the heat shock transcription factors OsHsfA2a, OsHsfA2e, and OsHsfA7, to the tune of 49.0-, 6.1-, and 17.3-fold under heat stress with respect to the control. Vandana also showed upregulation of OsHsfA2e and OsHsfA7, although the degree of expression was much lower than in N22. OsHsfA2a was down-regulated in Vandana. The other highly upregulated genes in N22 during heat stress were Osfd (13.7-fold), Cyt-C-Oxi (14.2-fold), CWIP (12.5-fold), and FRH (80.0-fold). In Vandana also, Osfd, Cyt-C-Oxi, and CWIP showed upregulation under heat stress, but the expression was much lower than in N22. However, FRH was down-regulated in Vandana. The heat shock protein genes HSP81.1 and HSP70 showed increased expression in both cultivars under heat stress, although the expression was higher in Vandana. SPS, SPL, and ARF were upregulated, whereas SOD was down-regulated in both genotypes during stress (Figure 8). DISCUSSION Rice, being the most important crop for ensuring food security, faces the challenges of climate change. Global warming is one of the most serious concerns in changing climatic conditions and has a direct impact on agriculture. An increase in temperature leads to heat stress, which affects the yield and quality of agricultural crops. Although several studies have been performed to understand the effect of heat stress in crops and particularly in rice (Oh-e et al., 2007; Jagadish et al., 2008; Shi et al., 2014), most of these studies were based on exposing the plants to high temperature for a limited duration at either the vegetative or reproductive phase. Keeping in view that the global mean temperature is rising, it is important to understand the response of plants exposed to high temperature throughout their growth phase, particularly from the flowering stage till seed maturity. This will not only help in understanding the adaptive plasticity of different genotypes to high temperature but also provide information for identifying important traits for breeding genotypes suited to these environmental conditions. In this study, 11 rice varieties grown widely in India were selected. These genotypes were grown at control and elevated temperatures from the seedling to the maturity phase. Important physiological and biochemical processes were studied to understand the differential response of these rice varieties to elevated temperature at vegetative and reproductive phases.
In rice, reproductive stage is the most sensitive stage to heat (Yoshida et al., 1981) and anthesis/flowering is the most severely affected process (Satake and Yoshida, 1978;Nakagawa et al., 2002;Jagadish et al., 2008;Shi et al., 2014). In general, high temperature is unfavorable for flowering and grain filling by causing spikelet sterility and shortening the duration of grain-filling phase (Tian et al., 2007;Xie et al., 2009). High temperature during ripening stage generally reduces grain weight and grain-filling phase, and increases the percentage of white chalky rice (Osada et al., 1973;Yoshida et al., 1981). At elevated temperature, decrease in number of days to 50% flowering and days to grain maturity was observed in all cultivars. Similarly, grain-filling phase was also reduced in almost all cultivars. High temperature showed significant impact on yield through sharp reduction of filled grain number/hill and increased sterile spikelets. Maximum filled grain reduction was observed in BPT5204, Sampada, Swarna, and Vandana. On the other hand, minimum reduction in filled grain number was observed in N22, Jaya, MTU1010, and Rasi at elevated temperature. Similar to reduction in filled grain number/hill, spikelet sterility was observed more in BPT5204, followed by Swarna, Vandana, and Sampada at ETS. High temperature leads to spikelet sterility due to poor anther dehiscence and low pollen production (Matsui et al., 1997;Prasad et al., 2006). In addition, significant reduction of 1000 grain weight and grain yield per hill was also observed at ETS. Based on yield attributes at ETS, these rice cultivars were categorized as tolerant, moderately tolerant, and susceptible. In order to understand the important physiological and biochemical phenomena contributing to differential elevated temperature response, various parameters such as MTS, antioxidant enzymes, chlorophyll and carotenoid contents, and photosynthesis and fluorescence characteristics were measured in all the genotypes at vegetative and reproductive phases. Among the physiological and biochemical traits studied here, MTS was the most reliable trait that showed maximum correlation with yield attributes under ETS. When the difference of RI in ETS and control samples was observed, four cultivars showed <30% increase in RI, i.e., 13% increase in MTU1010, 20% in IR64, 23% in N22, and 17% in Rasi. These cultivars showed good performance in yield attributes also. Earlier studies showed that plants with higher electrolyte leakage/relative injury found to be more susceptible toward high temperature stress (Reynolds et al., 1994;Haque et al., 2009). Studies in rice also showed that heat-tolerant genotypes possess better membrane integrity than heat-sensitive ones (Mohammed and Tarpley, 2009;Kumar et al., 2012). Interestingly, Jaya showed high RI even though it showed less reduction in filled grain percentage, suggesting that it may have different mechanism to cope with high temperature stress. Jaya showed increased SOD activity at both the stages (vegetative and reproductive) and minimum reduction of P N and Fv ′ /Fm ′ under ETS. SOD and Fv ′ /Fm ′ play crucial role in stress response in plants (Kumar et al., 2012;Sharma et al., 2012). Increased antioxidant activity under heat is a general response shown by plants (Wahid et al., 2007). In this study also, increased SOD and POD activities were observed in susceptible as well as in tolerant genotypes. 
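Relative injury (RI) from electrolyte leakage, the membrane-thermostability measure referred to above, is commonly computed from paired conductivity readings of heat-treated and control tissue. The exact formula and instrument values used in the study are not shown in this section, so the sketch below uses one standard formulation with placeholder readings purely for illustration.

```python
# Relative injury (RI%) from electrolyte leakage; a commonly used membrane-thermostability
# formulation (the study's exact formula is not given here), with placeholder readings.
def relative_injury(t1, t2, c1, c2):
    """t1, t2: conductivity of heat-treated leaf discs before and after autoclaving;
    c1, c2: the same two readings for control (unstressed) leaf discs."""
    return (1.0 - (1.0 - t1 / t2) / (1.0 - c1 / c2)) * 100.0

# Hypothetical conductivity readings (microsiemens per cm)
print(f"RI = {relative_injury(t1=150.0, t2=420.0, c1=60.0, c2=400.0):.1f}%")
```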
Mohammed and Tarpley (2009) showed a negative association of Chla, total chlorophyll content, and Chl a/b with high temperature in rice, whereas Chl b was not affected, and the same pattern was observed in this study. Here, significant differences in Chla, Chl a/b, and total chlorophyll were observed between control and ETS at the reproductive phase, whereas Chlb and carotenoid content were not affected significantly. Retention of chlorophyll for a longer duration under high temperature has been reported in tolerant genotypes of rice and creeping bentgrass (Agrostis palustris Huds.; Sohn and Back, 2007). Xie et al. (2012) reported that high air temperature during the heading stage negatively influenced the SPAD value (relative chlorophyll content) in rice flag leaves. In plants, photosynthesis is one of the processes most susceptible to high temperature stress (Yin et al., 2010), and a considerable reduction of photosynthesis during high temperature stress in rice leaves has been reported (Taniyama et al., 1988). Cao et al. (2009) reported that high temperature during the maximum vegetative stage and early grain-filling phases reduced the photosynthetic rate of the flag leaf in different rice cultivars. In this study, photosynthetic parameters such as P N, g s, E, Ci, and iWUE were analyzed at the vegetative and reproductive phases of the 11 cultivars grown under control and ETS conditions. These parameters were significantly affected, indicating that photosynthesis is highly sensitive to high temperature in rice genotypes. For the gas exchange parameters g s, E, and Ci in particular, the effect of ETS was more pronounced during the vegetative phase. Egeh et al. (1992) reported that higher E, g s, and Ci contribute to high temperature tolerance: increased E and higher g s considerably lower leaf and canopy temperature, which reduces the harmful effect of high temperature. Although photosynthetic parameters were significantly influenced by high temperature in this study, they did not show a distinct correlation with the yield attributes of susceptible and tolerant rice cultivars, suggesting that their response may be a general physiological reaction of plants to high temperature.

In stress physiology, chlorophyll fluorescence is another important technique for evaluating damage to the leaf photosynthetic apparatus, in particular PSII activity (Maxwell and Johnson, 2000; Baker and Rosenqvist, 2004), and it has been used to assess genetic variability for heat stress tolerance in wheat (Sharma et al., 2012, 2014). In this study, elevated temperature caused only a marginal reduction in the maximum quantum yield of PSII (Fv/Fm) during the vegetative as well as the reproductive phases; the limited reduction relative to control may be due to gradual adaptation of the plants to elevated temperature. A significant reduction of Fv′/Fm′ (Φe, the efficiency of excitation capture by open PSII centers) was noticed in all genotypes under ETS at both phases. Furthermore, ΦPSII, qP, and ETR were also significantly reduced, whereas qN increased. Song et al. (2014) reported reductions in qP and ETR at high temperature (42 °C) in Populus. The reduction in ETR under high temperature stress is due to inactivation of the oxygen-evolving complex (OEC) (Luo et al., 2011) and reduced utilization of NADPH and ATP under lowered photosynthesis (Lu and Zhang, 1999; Subrahmanyam and Rathore, 2000).
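The fluorescence parameters discussed here are derived from a handful of raw fluorescence readings. The sketch below shows the standard textbook definitions (in the spirit of Maxwell and Johnson, 2000), assuming illustrative input values, a typical leaf absorptance of 0.84, and an even split of absorbed light between the photosystems; these settings may differ from the instrument defaults actually used in the study.

```python
# Standard chlorophyll-fluorescence parameter definitions (textbook forms); the readings,
# PPFD, leaf absorptance, and PSII fraction below are illustrative assumptions.
def fluorescence_params(F0, Fm, F0p, Fmp, Fs, ppfd=1200.0, leaf_abs=0.84, psii_frac=0.5):
    fv_fm = (Fm - F0) / Fm                         # maximum quantum yield of PSII
    fvp_fmp = (Fmp - F0p) / Fmp                    # excitation capture by open PSII centers
    phi_psii = (Fmp - Fs) / Fmp                    # effective quantum yield of PSII
    qp = (Fmp - Fs) / (Fmp - F0p)                  # photochemical quenching coefficient
    qn = 1.0 - (Fmp - F0p) / (Fm - F0)             # non-photochemical quenching coefficient
    etr = phi_psii * ppfd * leaf_abs * psii_frac   # electron transport rate (umol m-2 s-1)
    return {"Fv/Fm": fv_fm, "Fv'/Fm'": fvp_fmp, "PhiPSII": phi_psii,
            "qP": qp, "qN": qn, "ETR": etr}

# Illustrative dark- and light-adapted readings
print(fluorescence_params(F0=300.0, Fm=1500.0, F0p=280.0, Fmp=900.0, Fs=520.0))
```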
Correlation analysis suggests that MTS and E are the most useful parameters to phenotype for ETS tolerance. RI showed a strong negative association, whereas E at reproductive phase showed positive association with grain yield. IR64 (11.1 ± 0.6) and Rasi (10.7 ± 1.2) showed high E under elevated temperature. Furthermore, g s and E showed significant and positive association with filled grain number. High g s and E might be reducing canopy temperature that causes lower panicle temperature and facilitate increase in spikelet fertility under ETS. Reduction in panicle micro-climate temperature by surrounding leaves reduces the effect of high temperature (Shi et al., 2014). This study suggests that different genotypes may have evolved different mechanisms to develop ETS tolerance, which signifies the complexity of pathways associated with high temperature response. Important biochemical and physiological traits contributing to ETS tolerance need to be characterized in individual genotypes to facilitate the breeding of climate-resilient rice genotypes. From this study, we have listed the useful traits identified in 11 genotypes, which can be used to develop elevated temperature-tolerant rice cultivars (Table 3). Interestingly, N22 emerged as the most tolerant genotype to elevated temperature. It has been categorized as heat tolerant in earlier studies (Egeh et al., 1992;Ziska et al., 1996;Jagadish et al., 2008). Interestingly, this aus genotype shows tolerance to high temperature irrespective of the treatments, i.e., short duration exposure, long duration exposure, or continuous high temperature treatment (in this study), which is not shown by other genotypes studied so far. Furthermore, it shows tolerance character at all stages of crop growth, i.e., seedling, vegetative, and reproductive phases (Jagadish et al., 2008;Krishnan et al., 2011;Sailaja et al., 2014). N22 showed best performance in almost all the parameters studied here at vegetative and reproductive phases, e.g., lesser RI (23%), lesser reduction in chlorophyll content, increased SOD, CAT, and POD activities, lesser reduction in P N , and high transpiration rate causing minimum reduction in spikelet fertility and grain yield under ETS. This was further supported with the gene expression analysis of N22, showing very high expression of heat shock transcription factors (OsHsfA2a, OsHsfA2e, and OsHsfA7) during high temperature stress at flowering stage. Hsfs are important transcriptional regulatory proteins of plants playing key role in controlling the expression of several heatresponsive genes (Qiao et al., 2015). Overexpression of Hsf genes in transgenic plants resulted in upregulation of heat stress-associated genes and an enhancement of thermotolerance (Mishra et al., 2002;Charng et al., 2007;Yokotani et al., 2008). Induced expression of OsHsfA2a, OsHsfA2e, and OsHsfA7 during heat stress was reported in rice . Osfd, Cyt-C-Oxi, and CWIP also showed very high expression in heat stressed tissues of N22. Significant increase in expression of Osfd and CWIP in N22 was reported in our previous study where heat stress was applied in seedling stage (Sailaja et al., 2014). The annotation of Osfd suggests it as ironsulfur cluster-binding protein involved in electron transport activity and is located in chromosome1 (locus Os01g0730500). Another highly upregulated gene during heat stress in N22 was FRH, which is annotated as fertility restorer homolog A, a prenyltransferase domain containing protein. 
In comparison to N22, the susceptible cultivar Vandana showed very less expression of Hsfs, Osfd, Cyt-C-Oxi, and CWIP, whereas FRH was down-regulated. It would be interesting to further characterize their functional role in high temperature stress tolerance. A multiple organizational level analysis including physiological, biochemical, and transcriptional responses suggested N22 as the most efficient heat-tolerant genotype in this study. In summary, this is an important study where popular and widely grown rice genotypes were characterized for heat stress response by growing plants at continuous high temperature proportional to ambient temperature during the whole period of study. Several important physiological and biochemical traits were identified in different genotypes, which would be useful in phenotyping and breeding for heat stress tolerance. This study emphasizes that individual genotypes need to be characterized for specific heat stress treatments as it is a complex phenomenon where different genotypes have evolved different ways to respond to elevated temperature. Our study demonstrates that N22 is highly suitable to high temperature, showing best expression of useful genes and physiological and biochemical traits that can be utilized in breeding programs for high temperature tolerance. The identified physiological and biochemical traits imparting heat stress tolerance in different rice genotypes can be genetically mapped and introgressed into heat-susceptible, high-yielding rice genotypes through breeding. Alleles of differentially expressed genes and their promoters/regulators can be sequenced from susceptible and tolerant rice genotypes to identify polymorphic genetic loci linked with stress tolerance. Such genetic loci can be validated in heat stress tolerance in a set of susceptible and tolerant rice genotypes and in mapping populations. The molecular markers based on these genetic loci could be directly useful in selecting heat tolerant genotypes through marker-assisted breeding. Furthermore, resistant or tolerant alleles of such genes can be used for genetic transformation of high-yielding but heat-susceptible genotypes. AUTHOR CONTRIBUTIONS SM, DS, SN, SV, and VB designed the research, BS, TV, YR, and PV performed the research, BS, DS, and SM analyzed the data, SM and BS wrote the manuscript. All authors read and approved the manuscript. ACKNOWLEDGMENTS Authors are highly thankful to the Project Director, Indian Institute of Rice Research, for his kind support. Financial support received from Indian Council of Agricultural Research-NICRA (National Innovations in Climate Resilient Agriculture, 03 NICRA-030031 Grant in Aid) project is acknowledged.
LC_Glucose-Inhibited Division Protein Is Required for Motility, Biofilm Formation, and Stress Response in Lysobacter capsici X2-3 Glucose-inhibited division protein (GidA) plays a critical role in the growth, stress response, and virulence of bacteria. However, how gidA may affect plant growth-promoting bacteria (PGPB) is still not clear. Our study aimed to describe the regulatory function of the gidA gene in Lysobacter capsici, which produces a variety of lytic enzymes and novel antibiotics. Here, we generated an LC_GidA mutant, MT16, and an LC_GidA complemented strain, Com-16, by plasmid integration. The deletion of LC_GidA resulted in an attenuation of the bacterial growth rate, motility, and biofilm formation of L. capsici. Root colonization assays demonstrated that the LC_GidA mutant showed reduced colonization of wheat roots. In addition, disruption of LC_GidA showed a clear diminution of survival in the presence of high temperature, high salt, and different pH conditions. The downregulated expression of genes related to DNA replication, cell division, motility, and biofilm formation was further validated by real-time quantitative PCR (RT–qPCR). Together, understanding the regulatory function of GidA is helpful for improving the biocontrol of crop diseases and has strong potential for biological applications. INTRODUCTION Lysobacter spp. are bacteria natively present in the rhizosphere, water, and some extreme conditions (Park et al., 2008;Fang et al., 2020). In recent years, species, such as Lysobacter enzymogenes, Lysobacter antibioticus, and Lysobacter capsici, have attracted much interest for their antimicrobial activities, and they are regarded as effective biocontrol agents of plant diseases Afoshin et al., 2020). For example, heat stable antifungal factor (HSAF), isolated from L. enzymogenes C3, has been exhibited to be inhibitory activities against a wide range of fungal species (Yu et al., 2007). Compared to L. enzymogenes, much less is known about the biological features of L. capsici. The L. capsici AZ78 genome has a gene pool that allows it to successfully interact with plant pathogenic microorganisms and environmental factors, providing a genetic framework for detailed analysis of potential biocontrol mechanisms of plant pathogens . In addition, the effective antifungal effect of L. capsici AZ78 and L. capsici PG4 has been shown (Puopolo et al., 2010;Brescia et al., 2020). Twenty-two volatile organic compounds to be produced by L. capsici AZ78, that contribute to biological control of soilborne plant pathogens (Vlassi et al., 2020). Overall, the species of L. capsici has considerable potential for biocontrol of plant pathogenic microorganisms. tRNA modification ensures efficient and accurate protein synthesis and promotes cellular health and growth (Manickam et al., 2016). Glucose-inhibited division protein (GidA), which is highly conserved in prokaryotes, serves as a tRNA modification enzyme and catalyzes the addition of a carboxymethylaminomethyl (cmnm) group at the 5′ position of the wobble uridine (U34) of tRNAs (Yu et al., 2019;Gao et al., 2020). GidA modification is evolutionarily conserved in bacteria and Eukarya, which is essential for efficient and accurate protein translation (Fislage et al., 2014). The disruption of gidA causes pleiotropy and affects multiple phenotypic traits. Therefore, the GidA-mediated tRNA modification pathway is thought to be the main regulatory mechanism of pathogenicity (Shippy and Fadl, 2014). 
The gidA gene is recognized to function in the regulation of bacterial growth, stress response, and virulence (Shippy and Fadl, 2014). In Aeromonas hydrophila, disruption of gidA resulted in altered cell morphology, reduced growth, and decreased cytotoxic enterotoxin production (Sha et al., 2004). In other bacteria genera, such as Salmonella spp. and Streptococcus spp., gidA mutants had motility defects, reduced survival under stressful conditions, and decreased expression of virulence proteins (Rehl et al., 2013;Zhang et al., 2014;Gao et al., 2016). In Pseudomonas syringae, the causal agent of bean spot disease, the gidA mutant had altered cell morphology and could not produce toxin (Kinscherf and Willis, 2002). In reality, GidA can regulate the expression of a variety of proteins at the translational level through tRNA modification, and thus can regulate the survival of bacteria in response to environmental signals under stressful conditions (Gustilo et al., 2008). Taken together, these studies highlight the importance of this conserved tRNA modification pathway in cellular processes. However, little is known about GidA in L. capsici. Lysobacter capsici X2-3 was isolated from the wheat rhizosphere and showed marked antimicrobial activity against plant pathogenic fungi, oomycetes, and Gram-positive bacteria. Genes in the X2-3 genome were annotated using a combined analysis of the KEGG, COG, and GO databases, and several genes were predicted to be associated with antibiotic production (Yi et al., 2015). Although GidA family proteins play important roles in the regulation of bacterial growth, pathogenicity, and human diseases in pathogenic species, there are few studies on plant growthpromoting bacteria (PGPB). In this study, the biological function of LC_GidA was characterized by constructing an LC_GidA mutant. We demonstrated that the inactivation of LC_GidA significantly reduced bacterial growth, twitching motility, biofilm formation, root colonization, and stress response in L. capsici X2-3. Bacterial Strains, Growth Conditions, and Plasmids The bacterial strains and plasmids used in this study are listed in Table 1. Unless otherwise stated, L. capsici X2-3 and its derivative strains were grown at 28°C in nutrient broth (NB) medium or on NA (NB with 1.5% agar) medium. Transformants from the first crossover for the LC_GidA knockout were cultured on NBN (NB without 1% sucrose) or NAN (NBN with 1.5% agar) medium. Transformants bearing the second crossover were plated on NAS (NAN plus 10% sucrose) medium (Zou et al., 2011). All bacterial strains were incubated at 28°C. Escherichia coli strains were cultured in Luria-Bertani (LB) or LB plus 1.5% agar plates at 37°C. When necessary, the media were supplemented with the antibiotic ampicillin (Amp, 50 μg/ml), kanamycin (Km, 50 μg/ml), or gentamicin (Gm, 50 μg/ml), depending on the strains used. Construction of the LC_GidA Deletion Mutant and Its Complemented Strain The LC_GidA mutant was generated from the wild-type X2-3 strain by allelic homologous recombination. Briefly, two LC_GidA flanking regions were amplified by PCR using the primer pairs up F/R and down F/R ( Table 1). The upstream and downstream PCR products were digested with BamHI and HindIII, respectively. The digested fragments were ligated into the suicide vector pKMS1 ( Table 1) to obtain the recombinant plasmid pKMS1-AB (Zou et al., 2011). The plasmid was transformed into X2-3 by electroporation. 
The LC_GidA mutant MT16 was obtained after two recombination events and confirmed by PCR and sequencing of the PCR products. The fragment harboring the intact LC_GidA gene, which was amplified by PCR using the primers gidAF and gidAR ( Table 1), was cloned into the expression vector pBBR1-MCS5 (Table 1) at the EcoRI and BamHI site, resulting in the recombinant plasmid pBBR1-gidA, and then pBBR1-gidA was transformed into the mutant MT16 by electroporation (1.8 KV, 200 Ω, and 25 μF). The complemented mutant strain Com-16 was selected on NA plates with gentamycin (Kovach et al., 1994). Growth Curve Determination The X2-3, MT16, and Com-16 strains were grown for 24 h at 28°C in NA medium and then inoculated into NB medium to OD 600 = 1.0. The cultures were diluted 1:100 into NB medium. The strains were incubated at 28°C for 48 h with shaking at 180 rpm, and bacterial growth was examined every 4 h (Rehl et al., 2013). Motility Assay The motility assay was performed as previously described (Rashid and Kornberg, 2000;Tomada et al., 2016). To test twitching motility, bacteria were grown for 24 h in NA medium at 28°C, and 3 μl of the bacterial cultures at a normalized OD 600 were added to NYGB medium (0.6% agar) plates. The diameters of the areas occupied by the bacterial cells were measured after 3 days. Biofilm Formation Assay The crystal violet technique was used to analyze the attachment of the different strains to an abiotic surface. The X2-3, MT16, and Com-16 strains were cultured in NB medium and adjusted to OD 600 = 1. The cultures were diluted 1:100 into a glass tube containing 10 ml of NB medium supplemented with 1% sucrose or glucose. Then, the glass tubes were incubated at 28°C for 3 days with shaking at 180 rpm. The growth medium was removed, and the tubes were washed three times with sterile distilled water. Then, the glass tubes were stained with a 0.2% crystal violet solution for 10 min. The unbound crystal violet was removed, and the tubes were washed three times with sterile distilled water. Crystal violet was extracted with absolute ethanol, and the absorbance was measured at 575 nm (Zhang et al., 2018). Pellicle Formation All Lysobacter strains obtained throughout the study were tested for their ability to produce biofilms, which were visualized as floating pellicle at the air-broth interface that completely blocked the surface of the culture and could not be dispersed by shaking. The X2-3, MT16, and Com-16 strains were grown in glass test tubes containing NB medium (with 1% sucrose or 1% glucose) at 28°C for 5 days without shaking (Latasa et al., 2012). Root Colonization Assay Seven-day-old plants were collected, and the roots were cut into 1.5 cm segments. Fragments of uniform shape and size were placed into 96-well microtiter plate. Two hundred microliters of bacterial culture with an OD 600 = 1.0 was added to the wells, and the plates were incubated at 28°C for 3 days. After the incubation period, the roots were removed from the cultures, washed with sterile water, and then added to 1 ml sterile water. The bacteria on the root surface were removed and dispersed in sterile water by shaking. One hundred microliters of the dispersed preparation was plated on NA agar and counted after 5 days (Tariq et al., 2014). The plasmid pBBR1-gfp was transformed into the X2-3, MT16, and Com-16 strains by electroporation, and the transformants were selected on NA plates with gentamycin. The treatment was the same as above. To view the colonization of L. 
capsici X2-3-gfp, MT16-gfp, and Com-16-gfp on the root surfaces, the roots were observed using a confocal laser scanning microscope system (Zeiss LSM 800, Carl Zeiss AG, Jena, Germany) with an excitation wavelength of 488 nm. Images of at least 12 roots were obtained for each treatment . Stress Tolerance Assays The bacterial strains were diluted 1:100 into NB medium, and experiments were conducted to test the OD 600 under five environmental stresses. Stress treatments were applied as follows: for UV radiation, the cells were exposed to shortwave UV radiation (254 nm in a biological safety cabinet) at a distance of 60 cm for 45 min. For salt stress, NaCl was added to the bacterial cultures at final concentrations of 0.15, 0.25, and 0.35 mol/L (Li et al., 2014). For temperature stress, the cultures were incubated at 37 and 42°C with shaking at 180 rpm. Resistance against H 2 O 2 was determined as described previously Frontiers in Microbiology | www.frontiersin.org with slight modifications (Liu et al., 2019). H 2 O 2 at concentrations of 0.1, 0.01, and 0.001 mM was added to the bacterial cultures and, the samples were incubated at 28°C for 10 min with shaking. After serially diluting the bacteria five times (10 −1 -10 −5 ), 3 μl of each cell sample was dropped onto NA plates and incubated at 28°C for 3 days. The pH stress test was similar to the H 2 O 2 test. The bacterium was serially diluted five times (10 −1 -10 −5 ), and then 3 μl of each cell sample was dropped onto NA plates with pH values ranging from 5.0 to 9.0. RT-qPCR The wild-type strain X2-3 and the mutant strain MT16 were cultivated until they reached an OD 600 = 1. Total RNA was extracted using AG RNAex Pro Reagent [Accurate Biotechnology (Hunan) Co., Ltd.], and cDNA was synthesized by reverse transcription. Nineteen genes related to DNA replication, cell division, motility, and biofilm formation were chosen for RT-qPCR ( Table 2). RT-qPCR experiments were carried out as instructed by the manufacturer [Accurate Biotechnology (Hunan) Co., Ltd.]. The 16S rRNA gene was used as an internal control (Qian et al., 2013). The relative transcription levels were calculated using the 2 -ΔΔCT method (Livak and Schmittgen, 2001). Statistical Analysis All data were reported as mean standard at least triplicate experiments. The data were analyzed using the statistical SPSS software (version 18.0) by one-way ANOVA, and the mean was compared by Duncan's multiple range test (DMRT) at the 5% probability level. General Analysis of GidA in X2-3 Glucose-inhibited division protein as a tRNA modification enzyme is highly conserved in bacteria and plays an important role in bacterial growth, stress response, and virulence (Shippy and Fadl, 2014). We conducted a search of the L. capsici X2-3 genome annotation (GenBank accession No. LBMI00000000.1) and observed that a potential ORF of approximately 1,890 bp in size was predicted to encode GidA (Supplementary Figure S1), which was named LC_GidA in L. capsici. BLAST analyses showed that the LC_GidA gene shares 62.43% identity with the E. coli gidA gene (GenBank accession No. NC_011750.1). The putative LC_GidA protein showed 63.81% identity with the E. coli GidA protein (GenBank accession No. YP_002410220.1; Supplementary Figure S1), which is a tRNA modification enzyme responsible for the proper biosynthesis of 5-methylaminomethyl-2-thiouridine (mnm5s2U) at position 5 of the wobble uridine (U34) of tRNAs. 
Deletion of LC_GidA Attenuates the Growth and Motility of X2-3 To determine the function of the LC_GidA gene in L. capsici X2-3, a LC_GidA deletion mutant, termed MT16, was generated by integration of the pKMS1 plasmid (Supplementary Figure S2). The mutant was identified for the loss of 1,890 bp fragment coding region of the gidA gene by PCR with the primers gidAup-F and gidAdown-R (Supplementary Figure S3). Additionally, the complemented mutant Com-16 was generated by insertion of the full-length LC_GidA into pBBR1-MCS5 and transfer of the resultant plasmid into MT16. The growth of wild-type strain X2-3 and the LC_GidA gene deletion mutant MT16 was assayed by measuring OD 600 values from 4 to 48 h at 4 h intervals. As shown in Figure 1A, the cell density of MT16 was lower than that of X2-3 and Com-16, and the MT16 colony size was obviously smaller than that of X2-3 at the same timepoints. These results suggest that the loss of LC_GidA resulted in the attenuation of bacterial growth. The twitching motility of X2-3 and the mutant MT16 were tested on 0.6% agar plates. After 3 days of incubation at 28°C, the diameter of the Com-16 complemented strain was 2.30 cm LC_GidA Is Involved in Biofilm and Pellicle Formation To measure the difference in the biofilm biomass of the MT16 and X2-3 strains, they were cultured in NB medium supplemented with 1% sucrose or 1% glucose for 3 days. The samples were then stained with crystal violet, and the biofilm biomass was quantified by measuring their OD 575 . Staining of bacterial cells with CV-staining showed that X2-3 and Com-16 produced much more biofilms of cell mass adhered to the glass surface than those produced by MT16 strain (Figure 2A). The biofilm biomass of MT16 was 17 and 30% lower than that of X2-3 in 1% sucrose and 1% glucose media, respectively. By contrast, the biofilm biomass of Com-16 was similar to that of the wild-type strain ( Figure 2B). Furthermore, the pellicle, robust biofilm formed at the air-liquid interface of the culture, could be observed in 1% sucrose or 1% glucose NB medium after static culture for 5 days. The MT16 pellicle was much thinner than that of X2-3, both in 1% sucrose and 1% glucose NB medium, while pellicle formation was partially or fully restored in the Com-16 strain ( Figure 2C). From these results we also determined that the rate at which X2-3 utilized different C sources varied, for example, the utilization rate of sucrose was higher than that of glucose; the utilization rate of glucose by the LC_GidA deletion strain was relatively low. These results indicated that deletion of the LC_GidA gene A B FIGURE 1 | The growth and motility of X2-3, MT16, and Com-16. (A) X2-3, MT16, and Com-16 growth curves. The X2-3, MT16, and Com-16 strains were cultured in NB medium, adjusted to OD 600 = 1.0, and then subcultured in fresh NB for 48 h. The OD 600 values were tested every 4 h post-subculturing. All experiments were repeated at least three times. (B) Twitching motility of X2-3, MT16, and Com-16. The X2-3, MT16, and Com-16 strains were grown for 24 h in NB medium at 28°C and adjusted to OD 600 = 1.0. Three microliters of each cell sample was dropped onto 0.6% agar plates for the motility tests. The diameters of each colony were measured after 3 days of incubation, and the resulting values were taken to indicate the bacterial motility. Each experiment was performed at least three times. a, not significant compared to X2-3. b, significant difference compared to X2-3. 
Frontiers in Microbiology | www.frontiersin.org in MT16 decreased the biofilm biomass, while the Com-16 complemented strain recovered biofilm formation ability. Inactivation of LC_GidA Decreased the Colonization of Lysobacter capsici X2-3 on Wheat Roots Considering that the LC_GidA gene plays a role in biofilm formation, a quantitative measurement of root colonization was performed. Wheat roots were cultured in X2-3, MT16, or Com-16 for 3 days, and then 100 μl of the bacterial suspensions were plated on NA agar and cultured for 3 days. The results are shown in Figure 3B. The ability of the MT16 mutant to colonize wheat roots was significantly lower than that of the wild-type X2-3 strain; wheat root colonization was recovered in the Com-16 complemented strain. Green fluorescent proteinlabeled X2-3, MT16, and Com-16 (X2-3-gfp, MT16-gfp, and Com-16-gfp) were used to detect the root colonization of L. capsici X2-3 under a confocal laser scanning microscope (Zeiss LSM 800, Carl Zeiss AG, Jena, Germany). GFP fluorescence shows successful colonization of X2-3 in root tip cells of wheat, the difference of colonization was determined by observing the GFP fluorescence area. As can be seen from Figure 3A, that the fluorescence area of the wild type is significantly larger than that of the mutant. The images showed that more X2-3gfp cells were bound to the roots than MT16-gfp cells ( Figure 3A). These results indicated that the inactivation of LC_GidA may affect the colonization of wheat roots. The LC_GidA Mutation Impairs Bacterial Resistance to Temperature, Salt, pH, and H 2 O 2 but Has No Significant Effect on UV Radiation To assess the role of LC_GidA in stress tolerance, the growth yields of MT16, Com-16, and X2-3 were tested under different conditions, including temperature, salt, pH, and UV radiation. The growth of MT16 was significantly lower than that of X2-3 at 37 and 42°C, while Com-16 growth was basically restored to the level of the wild-type strain ( Figure 4A). As shown in Figure 4A, the mutant had decreased survival at high osmotic pressure. When treated with UV radiation, there were no significant differences between the MT16 and X2-3 strains ( Figure 4A). Compared with the wild-type strain, the growth of the mutant was inhibited at all concentrations of H 2 O 2, and the growth of Com-16 was also slightly affected under the high and low H 2 O 2 conditions ( Figure 4B). The pH resistance of L. capsici was significantly affected by the deletion of LC_GidA ( Figure 4C). The LC_GidA Gene Regulates the Expression of Different Genes To assess the role of LC_GidA as a global regulatory factor and further show that the deletion of LC_GidA leads to a decrease in growth, motility, and biofilm formation, 19 genes related to DNA replication, repair, cell division, motility, and biofilm formation in X2-3 were chosen for RT-qPCR. The results showed that the expression of genes related to motility, replication, cell division, and biofilm formation was significantly downregulated. The genes radC, gyrA, recN, n6amt, dnaA, rmuC, ftsQ, ftsI, and ftsB, which are related to DNA replication, repair, and cell division, were markedly downregulated in the LC_GidA mutant ( Figure 5A). Six genes related to motility, pilA, flgD, fliF, flhB, fliQ, and fliP, were significantly decreased in the mutant compared with wild-type X2-3 ( Figure 5B). Among the biofilm formation genes, four genes, pgaA, pgaB, pgaC, and surA, were significantly repressed in the LC_GidA mutant ( Figure 5C). 
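The relative transcription levels behind these RT-qPCR results were calculated with the 2^(-ΔΔCT) method using the 16S rRNA gene as the internal control, as stated in the methods. The sketch below is a minimal illustration of that calculation; the Ct values are hypothetical, not the measured ones.

```python
# Minimal 2^-ddCt relative-expression calculation (Livak and Schmittgen, 2001);
# Ct values below are hypothetical, with 16S rRNA as the internal reference.
def relative_expression(ct_target_test, ct_ref_test, ct_target_calib, ct_ref_calib):
    d_ct_test = ct_target_test - ct_ref_test      # normalize test sample (e.g. MT16) to 16S rRNA
    d_ct_calib = ct_target_calib - ct_ref_calib   # normalize calibrator (e.g. wild-type X2-3)
    dd_ct = d_ct_test - d_ct_calib
    return 2.0 ** (-dd_ct)

# Example: a motility gene amplifying later in the mutant (i.e. down-regulated)
fold = relative_expression(ct_target_test=27.5, ct_ref_test=14.9,
                           ct_target_calib=24.8, ct_ref_calib=15.0)
print(f"expression in mutant relative to wild type: {fold:.2f}-fold")
```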
DISCUSSION Glucose-inhibited division protein, as an evolutionarily conserved tRNA modifying enzyme, catalyzes the addition of a cmnm group at the wobble uridine of tRNAs and is essential for proper and efficient protein translation (Fislage et al., 2014). GidA has exhibited important roles in regulating multiple The results of the biofilm formation assays were quantified by measuring the absorbance of the crystal violet stain at 575 nm. Each experiment was performed at least three times. a, not significant compared to X2-3. b, significant difference compared to X2-3. (C) Pellicle formation by X2-3, MT16, and Com-16. All strains were analyzed after 5 days of incubation at 28°C, showing developed pellicles at the interface between the liquid and air in NB medium supplemented with 1% sucrose or glucose. Frontiers in Microbiology | www.frontiersin.org biological processes, such as growth, cell division, and virulence in pathogenic bacteria (Shippy et al., 2011). However, the function in different bacterial species is not always the same. L. capsici is an effective biocontrol agents of plant diseases, and the role of GidA in L. capsici is unclear. In this study, we demonstrated that gidA affects cell growth, twitching motility, biofilm formation, root colonization, and stress response in L. capsici X2-3. First, we obtained the gidA deletion mutant, we found that deletion of LC_GidA significantly reduced the growth and motility of L. capsici X2-3 (Figure 1), and this result is in agreement with previous reports on E. coli (Lies et al., 2015) and Salmonella enterica (Rehl et al., 2013). To further understand the regulatory effect of LC_GidA, nine genes related to growth, including six involved in DNA replication, recombination, and repair (radC, gyrA, recN, n6amt, dnaA, and rmuC), and three involved in cell division (ftsQ, ftsI, and ftsB), were analyzed in the LC_GidA mutant by RT-qPCR, and all of these genes were downregulated (Figure 5A). GyrA, n6amt, and dnaA are all related to DNA replication. GyrA is an essential gene that introduces negative supercoils into plasmid and chromosomal DNA (Rovinskiy et al., 2019); the n6amt gene encodes the main enzyme catalyzing the methylation of the adenine base ; and dnaA is the initiator of chromosomal DNA replication and has various activities in E. coli (Mizushima, 2000). RecN is a structural maintenance protein and is involved in RecA-mediated recombinational repair in Deinococcus radiodurans and E. coli (Uranga et al., 2017;Keyamura and Hishida, 2019). RmuC and radC function in recombination and repair via different mechanisms (Okaichi et al., 1995;Kosinski et al., 2005). Cell division is also essential in bacterial growth, and division regulated by the proteins FtsQ, FtsB, and FtsI is a key component in facilitating bacterial cell replication (Kureisaite-Ciziene et al., 2018). Taken together, these genes involved in DNA replication, recombination, repair, and cell division were all related to cell growth, and the downregulation of these genes in the LC_GidA mutant can explain the mechanism by which gidA disruption inhibits L. capsici X2-3 growth. Additionally, six genes related to motility, pilA, flgD, fliF, flhB, fliQ, and fliP, were downregulated in the LC_GidA mutant ( Figure 5B). These RT-qPCR data related to replication, repair, cell division, and motility in the LC_GidA mutant strongly supported the biological results of attenuated cell growth and motility. Deletion of gidA significantly reduced L. 
capsici biofilm formation and colonization of wheat roots. Biofilms attached to biological surfaces are indispensable for bacterial colonization and sessile growth (Kumara et al., 2017), and gidA is considered to play important roles in biofilm formation. In S. mutans, loss of gidA decreased the capacity for glucose-dependent biofilm formation by over 50% (Li et al., 2014). In our study, the deletion of LC_GidA attenuated biofilm formation in the LC_GidA mutant (Figure 2). This attenuation may be due to impaired growth of mutant MT16 or downregulation of genes associated with biofilm formation, or a dual function of impaired growth and downregulation of genes. Four genes, pgaA, pgaB, pgaC, and surA that were reported to be related to biofilm formation were tested by RT-qPCR. The results revealed that the genes pgaA, pgaB, pgaC, and surA were clearly downregulated in the LC_GidA mutant ( Figure 5C). SurA is a major factor in the biogenesis of β-barrel outer membrane proteins, and the disruption of SurA in S. enterica serovar Typhi affects motility and biofilm formation (Lu et al., 2019). PgaA, pgaB, and pgaC have a profound role in the synthesis and secretion of poly-β-linked N-acetylglucosamine (PNAG), which has been characterized as a component of the bacterial surface responsible for biofilm formation in E. coli (Chen et al., 2014). Deletion of pgaC or pgaB dramatically reduced biofilms in Klebsiella pneumoniae and Aggregatibacter actinomycetemcomitans (Chen et al., 2014;Hathroubi et al., 2015;Shanmugam et al., 2017). Our results showed decreased biofilm formation and downregulated biofilmrelated genes in the LC_GidA mutant, consistent with these . The results were quantified by measuring the absorbance at 600 nm. The data represent the means ± SDs of three independent experiments. a, not significant compared to X2-3. b, significant difference compared to X2-3. LC_GidA mutations impair resistance to (B) H 2 O 2 and (C) pH in Lysobacter capsici. The wild-type X2-3, the mutant MT16, and the Com-16 complemented strains were grown on 0.1, 0.01, and 0.001 mM H 2 O 2 (B) and at pH 6.0, pH 7.0, or pH 9.0 (C). The bacterium was serially diluted five times (10 −1 -10 −5 ). Three replicates for each treatment were used, and the experiment was repeated three times. Frontiers in Microbiology | www.frontiersin.org studies. And the attenuation of biofilm formation in mutant can be explained by the downregulation of these genes. Biofilm formation is a determinant of the root colonization process in PGPBs, such as Bacillus (Chen et al., 2013;Xu et al., 2018). In our study, the LC_GidA mutant displayed an 80% reduction in bacterial colonization compared with X2-3 (Figure 3), suggesting that the LC_GidA gene is important for X2-3 colonization of wheat roots. Similar phenomena were found in a previous study with B. velezensis FZB42 (Al-Ali et al., 2018). In summary, the deletion of LC_GidA decreased X2-3 biofilm formation and colonization of the wheat rhizosphere. In addition, biofilm formation is considered a generic mechanism for the survival of bacteria in stressful environments (Ansari and Ahmad, 2019;Gao et al., 2019;Masmoudia et al., 2019). As shown in Figure 4, the disruption of LC_GidA strongly reduced the growth of the mutant in high salt media, high temperature, different concentrations of H 2 O 2 , and different pH conditions. This result is in agreement with previous reports in S. mutans in which the gidA mutant showed a reduced ability to withstand stress conditions (Li et al., 2014). 
Moreover, in Xanthomonas oryzae, the PXO_ RS20535 mutant produced significantly less biofilm and had a clear diminution of growth and survival under stress conditions (Antar et al., 2020). These results indicated that biofilm formation may be involved in the growth of X2-3 in various stressful environments. Previous study proved that as a global regulatory factor, deletion of gidA significantly reduced the growth in most bacteria (Shippy and Fadl, 2014). In our study, growth curves showed that the LC_GidA mutant resulted in an attenuation of the bacterial growth rate compared with the wild type and entered the stationary phase at a slightly lower density. While the LC_GidA mutant grew more slowly, this relatively small difference is not sufficient to explain the dramatic biofilm formation and stress respond observed. In addition, despite the modest growth defect, the LC_GidA mutant did not show any deficiency in UV stress compared with the wild type. And RT-qPCR assays also eliminate the effect due to the growth deficiency of LC_GidA in regulating biofilm formation and stress response. Taken together, our study indicated that the LC_GidA mutant decreased biofilm formation and stress respond of X2-3. In conclusion, this study demonstrated that LC_GidA regulates the expression of a series of genes involved in cell growth, twitching motility, biofilm formation, rhizosphere colonization, and stress resistance in L. capsici X2-3. The antimicrobial activity of the LC_GidA mutant against Gram-positive bacteria was also markedly decreased (Supplementary Figure S7). However, no significant changes in the antimicrobial activity of the LC_GidA mutant against either fungi or oomycetes were observed (Supplementary Figure S6), although deletion of gidA in pathogenic bacteria resulted in reduced pathogenicity. The regulatory mechanisms of GidA in antibacterial activity remain to be investigated. These findings provide new insights to better understanding the regulatory function of gidA in PGPB. This is the first report on the regulation of LC_GidA in L. capsici, as well as in the genus Lysobacter. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material. AUTHOR CONTRIBUTIONS DZ and HW conceived this study. DZ performed the mainly experiments, and some experiments were performed with the A B C FIGURE 5 | RT-qPCR of 19 selected differentially expressed genes. The X2-3 and MT16 mutant strains were cultivated to an OD 600 = 1. RT-qPCR of 19 selected differentially expressed genes related to replication repair and cell division (A), bacterial motility and flagellar formation (B), and biofilm formation (C). Three replicates for each treatment were used, and the experiment was repeated three times. Vertical bars represent SEs. a, not significant compared to X2-3. b, significant difference compared to X2-3. assistance of ZL and SH. DZ analyzed the data. DZ, CH, and AL wrote the manuscript. All authors contributed to the article and approved the submitted version. FUNDING This work was supported by National Key R&D Program of China (grant number 2017YFD0201100) and Outstanding Youth Foundation of Shandong Province (grant number ZR2021YQ20). ACKNOWLEDGMENTS We are grateful to Weiwen Kong in Yangzhou University for his pKMS1 plasmids.
Melaleuca alternifolia Concentrate Inhibits in Vitro Entry of Influenza Virus into Host Cells Influenza virus causes high morbidity among the infected population annually and occasionally the spread of pandemics. Melaleuca alternifolia Concentrate (MAC) is an essential oil derived from a native Australian tea tree. Our aim was to investigate whether MAC has any in vitro inhibitory effect on influenza virus infection and what mechanism does the MAC use to fight the virus infection. In this study, the antiviral activity of MAC was examined by its inhibition of cytopathic effects. In silico prediction was performed to evaluate the interaction between MAC and the viral haemagglutinin. We found that when the influenza virus was incubated with 0.010% MAC for one hour, no cytopathic effect on MDCK cells was found after the virus infection and no immunofluorescence signal was detected in the host cells. Electron microscopy showed that the virus treated with MAC retained its structural integrity. By computational simulations, we found that terpinen-4-ol, which is the major bioactive component of MAC, could combine with the membrane fusion site of haemagglutinin. Thus, we proved that MAC could prevent influenza virus from entering the host cells by disturbing the normal viral membrane fusion procedure. Introduction Influenza is an infectious disease caused by the influenza virus which is a RNA virus of the family Orthomyxoviridae. Influenza spreads around the world in seasonal epidemics, with an estimated three to five million cases of severe illness and 250,000 to 500,000 deaths per annum [1]. Four major influenza pandemics occurred in the 20th century that caused more than 20-50 million deaths, and influenza virus infection remains one of the leading causes of mortality [2,3]. A new H1N1 influenza A virus, also called the 2009 H1N1 pandemic influenza virus (2009 H1N1 virus), had spread throughout the world and caused a serious influenza pandemic in 2009 [4,5]. Over 17,000 reported deaths have been caused by 2009 H1N1 virus infection since its identification in Mexico in April 2009, so drugs and vaccines against 2009 H1N1 virus infection are urgently needed [6]. However, 2009 H1N1 virus, like many other influenza virus stains, has developed resistance to commercially available anti-influenza drugs. Currently the neuraminidase (NA) inhibitor oseltamivir, which can interfere with the enzymatic activity of the NA of the influenza virus, is mainly used for the treatment of influenza patients, but the 2009 H1N1 virus has been reported to be resistant to it [7,8]. It has been recently reported that over 160 sporadic viral isolates of 2009 H1N1 virus show resistance to oseltamivir due to the NA H275Y genotype mutation [8,9]. On the other hand, though the vaccines against 2009 H1N1 virus infection have been developed and used in clinical practice, the safety of theses vaccines remains one of the major public concerns in most of countries [10][11][12][13], as deaths and serious side effects of vaccines against 2009 H1N1 virus have been reported [14]. The haemagglutinin (HA) on the surface of influenza virus particles is a major viral membrane glycoprotein molecule, which is synthesized in the infected cell as a single polypeptide chain precursor (HA0) with a length of approximately 560 amino acid residues and subsequently cleaved by an endoprotease into two subunits called HA1 and HA2 and then be covalently attached by the disulfide bond [15,16]. 
The crystallographic structure of the HA shows a long tightly intertwined fibrous stem domain at its membrane-proximal base, a globular head which contains the sialic acid receptor binding site (RBS) and five antigenic sites surrounding the RBS [17]. The mature HA on the viral surface is a trimeric rod-shaped molecule with the carboxy terminus inserted into the viral membrane and the hydrophilic end forming the spike of the viral surface [18][19][20]. Although the amino acid sequence identity of different virus strains can be less than 50%, the structure and functions of these HAs are highly conserved [16]. The major functions of the HA are as the receptor-binding ligand, leading to endocytosis of the virus into the host cell and subsequent membrane-fusion events in the infected cells [16,21]. Influenza virus initiates infection by binding to sialic acids on the surface of target cells. After endocytosis, the endosome acquires a lower pH value, mainly because of the activity of the Vacuolar-type H+-ATPase (V-ATPase) [22]. In the acid environment of the endosome, the HA molecule is cleaved into HA1 and HA2 subunits and then undergoes a conformational change which resulting in the exposure of the fusion peptide at the N-terminus of the HA2 subunit [23,24]. The fusion peptides insert into the endosomal membrane, while the transmembrane domains remain anchored in the viral membrane. Finally, the fusion peptide brings the endosomal membrane and the viral membrane into juxtaposition, leading to fusion. Subsequently, a pore is opened up by this structural change of more than one haemagglutinin molecule and then the contents of the virion are released into the cytoplasm of the cell. This completes the uncoating process [25]. Because of the conformational change of viral HA protein is indispensable for the membrane fusion process between influenza virus and the endosome of the host cell, this makes it a new target for anti-influenza virus drug development. Recently, some small compounds acting as HA conformational change inhibitors have been reported [26,27]. Herbal extracts have been reported to have an important role in controlling virus infections by serving as immuno-modulators during influenza virus infection [28] or blocking the interaction of virus with target cells or having virucidal activity through direct interaction with the virus [29,30]. Most importantly, accumulating evidence has suggested that treatment of herbal extracts might be able to reduce the risk of drug-resistant virus emergence [31]. Melaleuca alternifolia Concentrate (MAC), which is an essential oil derived from the leaves or terminal branches of the native Australian tea tree, Melaleuca alternifolia, is a heterogeneous mixture of approximately 100 chemically defined components that mainly contains terpinen-4-ol (56%-58%), γ-terpinene (20.65%), and α-terpinene (9.8%) [32]. The ability of MAC to induce anti-inflammatory effect [33,34] and inhibit infection of various microbial species, such as bacteria [35,36], viruses [37][38][39] and fungi [40,41] makes it a promising candidate for development of therapeutics against 2009 H1N1 virus infection. The purpose of this study was to determine the antiviral effect against 2009 H1N1 virus using an in vitro test of cytopathic effect (CPE) inhibition of MAC. 
As previously described, terpinen-4-ol was the main component of MAC, so here we also assessed the feasibility and sensitivity of interaction of terpinen-4-ol with the viral haemagglutinin protein through in silico prediction to confirm the drug target and the characterization of the protein changes after treatment with MAC. Cytotoxic Test of MAC As an initial step to determine the anti-virus effect of MAC, we first need to determine whether MAC has any effect on cellular viability. To address this, MDCK cells were co-cultured with MAC at various concentrations for about 72 h. The cellular viability of MAC was determined by a MTT assay, which is a colorimetric assay for assessing the viability of cells. Although MAC at concentration higher than 0.050% could induce significant cellular death, it did not have any cytotoxic effect on MDCK cells when the concentration was lower than 0.025%. In addition, 10% DMSO/DMEM control was set up because there was DMSO in the MAC solution. Interestingly, the absorbance value of the cell incubated in 10% DMSO/DMEM was similar to the cell control ( Figure 1). This observation also indicated that the cell death was produced by MAC at a high concentrations but not DMSO, because of the concentration of DMSO in the MAC working solution is far lower than 10%. These data suggest that MAC at proper concentrations does not have any cytotoxicity, and since MAC at concentrations lower than 0.025% did not have any cytotoxic effect on MDCK cells, we choose a concentration of 0.020% as a maximum study concentration to further determine the anti-viral effects of the MAC. MACs of different concentration were applied to the MDCK cell monolayer, the 10% DMSO/DMEM control and the cell control were set up. After 72 h incubation at 37 °C, 5% CO 2 , the viability of MDCK cells were determined by a standard MTT assay protocol, as described in details in Experimental section. The data were presented as means ± S.D. ***: p < 0.001. Anti-Viral Effect Assay of MAC in Vitro To determine the anti-viral effect of MAC, we first asked whether MAC could confer protection capability against influenza virus to the cells. To answer this question, MDCK cells were first treated with 0.020% MAC for 1, 2, and 4 h, respectively. MAC was then removed by careful sterile PBS wash. The MDCK cell monolayer was then inoculated with 2009 H1N1 virus in 100 TCID50 per well for 1 h and then the liquid was removed by sterile PBS wash and instead of the maintain media containing TPCK-trypsin 1μg/ml. The viability of MDCK cells were then determined by MTT method, when level 2−3 CPE was observed in the virus control and the cell control showed no CPE (about 48-72 h). As shown in Figure 2, no significant increase of cellular viability of MDCK cells was observed when MDCK cells were pretreated with MAC for one hour, two hours, and four hours (A1, A2 and A3) respectively, compared with the virus control and significant lower than the cell control and the ribavirin (ribo) control. It was worth noting that, the cell survival under A1, A2 and A3 condition remained the same. In other words, there was no tendency showing the cell survival was increased according to prolonging the time of treatment with MAC. These data, therefore, indicated that pretreatment of MDCK cells with MAC could not confer any cellular viability protection. 
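In MTT read-outs such as these, viability is usually expressed as a percentage of the untreated cell control after subtracting a medium blank. The sketch below shows that common normalization; the absorbance values and plate layout are placeholders, since the exact blanks used in the study are not specified here.

```python
# Common MTT viability normalization (illustrative; absorbance values are placeholders).
import numpy as np

def viability_percent(od_treated, od_cell_control, od_blank):
    """Viability relative to the untreated cell control, after subtracting the medium blank."""
    return 100.0 * (np.asarray(od_treated) - od_blank) / (od_cell_control - od_blank)

od_blank = 0.05                            # medium-only wells
od_cell_control = 1.20                     # untreated MDCK cells
od_mac_wells = [1.18, 1.15, 0.95, 0.40]    # e.g. increasing MAC concentrations
print(viability_percent(od_mac_wells, od_cell_control, od_blank).round(1))
```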
Because pretreatment with MAC could not make MDCK cell produce any changes for protecting against influenza virus infection, we then examined whether treatment of the virus but not MDCK cells with MAC could confer any cellular viability protection. To determine this, 2009 H1N1 virus were first treated with MAC at a concentration of 0.010% for 0.5 and 1 h, respectively. The mixtures were added to a MDCK cells monolayer and then washed away after initial 1 h incubation and replaced with maintenance media containing TPCK-trypsin 1 μg/mL. The cellular viability was tested as mentioned above. As shown in Figure 2 (B1 and B2), although the infectivity of the influenza virus treated with MAC for 0.5 h still remained, the virus treated with MAC for 1 h presented poor infection of the host cells. Therefore, these data indicated that the influenza virus treated with MAC would dramatically lose its infective ability towards the host cells. To observe the proliferation of influenza virus in the host cell intuitively, an immunofluorescence assay was performed. The influenza viruses were treated with MAC to a final concentration of 0.010% for 0.5 and 1 h, respectively and a virus control and a cell control were set up. The primary antibody and the fluorochrome labeled secondary antibody produced cytoplasm staining patterns in MDCK cells infected by the influenza virus treated with MAC for 0.5 h and the virus without treatment, whereas only robust nuclear staining was detected by DAPI in MDCK cells infected by the influenza virus treated with MAC for 1 h and the cell control ( Figure 3). In addition, integrity of the virus particle after incubation with MAC was visualized by electron microscopy. No matter whether the influenza virus was treated with MAC or not, numerous whole virus particles could easily be visualized in the images. No changes in the general structure of the virion could be observed (Figure 4). This result demonstrated that MAC could not lyse the virion. Molecular Modeling and Molecular Dynamics Simulation Studies Given the result that MAC could inhibit 2009 H1N1 virus infection when the MAC was applied before the virus entered MDCK cells, but could not prevent replication and biosynthesis of the virus in the host cell, MAC appears to inhibit entry of influenza virus into the host cell. This involves two key steps: the first is virus attachment to the cell-surface via the receptor site on the HA protein and then internalization within endosomes; the next step involves fusion between the viral envelope and endosomal membrane, mediated by the conformational change in the HA protein, triggering uncoating. Viral nucleocapsids are then released into the cellular cytoplasm for transcription and translation. Since the two steps are all mediated by HA, the activity noted here might be explained by the fact that MAC could prevent influenza virus or the viral genome from entering the host cells by interaction with the viral haemagglutinin protein. To ascertain whether the explanation was feasible, the interaction between MAC and the viral haemagglutinin was accessed by means of molecular dynamics (MD). MAC has been complete chemically defined and it was demonstrated that its antimicrobial activity could be principally attributed to terpinen-4-ol, the main active component [37,38,40,[42][43][44]. Actually, terpinen-4-ol is the main bioactive antimicrobial component of essential oils derived from several aromatic plants [45][46][47]. 
On this basis, the interaction of terpinen-4-ol with the influenza virus haemagglutinin protein was predicted in silico to confirm the exact target and its binding characteristics. Docking analyses suggested that terpinen-4-ol binds in a cavity near the fusion peptide (Figure 5). Figure 5A shows the initial structure of the terpinen-4-ol–HA complex in the MD simulations. It is important to obtain a stable MD trajectory for subsequent analysis; therefore, root-mean-square deviation (RMSD) values were used to measure the conformational stability of the terpinen-4-ol–HA complex during the MD simulations. The RMSD curves in Figure 5B suggest that the terpinen-4-ol–HA complexes obtained from the MD simulations are relatively stable. Three repeat RMSD traces of the complex were obtained during the MD simulations; all of them remain below 0.3 nm, and their variations are within 0.1 nm. As suggested by Russell [48], the fusion of the viral and cell membranes is one of the key steps in the initial stages of infection. Comparison of the neutral-pH and fusion-pH structures indicates that at fusion pH the membrane-distal domains of HA dissociate, and extensive structural reorganization occurs that involves extrusion of the "fusion peptide" from the interior of the neutral-pH structure. In its position in the fusion-pH structure [49], the fusion peptide is at the N terminus of a new 100-Å-long triple-helical coiled-coil, while the C-terminal membrane anchor is repositioned at the same end of the refolded molecule. Occupation of the membrane fusion site can stabilize the neutral-pH structure through inter- and intra-subunit interactions that presumably inhibit the conformational rearrangements required for membrane fusion. Therefore, we concluded that terpinen-4-ol can stabilize the neutral-pH conformation of HA. The MD simulations show that terpinen-4-ol forms two strong hydrogen bonds with HA (Figure 6A). The time dependence of the distances for these hydrogen bonds is shown in Figure 6B. It can be seen from Figure 6 that there are two stable hydrogen bonds between terpinen-4-ol and residues isoleucine (I)-56 and asparagine (N)-60 of HA2, with distances averaging about 2 Å, and these play an important role in the stability of the complex. A hydrogen-bond interaction was considered to form if the distance between the hydrogen donor and acceptor was less than 3.5 Å. We found that the hydrogen bonds between terpinen-4-ol and residues I-56 and N-60 of HA2 make significant contributions to the binding affinity. Therefore, we believe that the H-bond interactions between the hydroxyl moiety of terpinen-4-ol and I-56 and N-60 of HA2 stabilize the terpinen-4-ol–HA complex in the MD simulations. Figure 6. (A) Hydrogen bonds formed between terpinen-4-ol and residues in the binding pocket. (B) The time-dependent distances of terpinen-4-ol–I-56 (red) and terpinen-4-ol–N-60 (black). To explore the inhibition mechanism of terpinen-4-ol with respect to its interaction with HA at the atomic level, the binding free energies were computed by means of the MM_GBSA method, which combines molecular mechanics and continuum solvent models to estimate ligand binding affinities. The MM_GBSA calculation was based on a total of 250 snapshots taken from the 15 ns to 20 ns portion of the trajectory. Importantly, the calculated binding free energy of the complex was −11.3647 kcal mol−1, indicating that terpinen-4-ol binds strongly to the HA protein. The results are listed in Table 1.
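To make the hydrogen-bond criterion used above concrete, the following is a minimal post-processing sketch (not the authors' analysis script) showing how per-frame donor–acceptor distances exported from an MD trajectory can be turned into hydrogen-bond occupancies under the 3.5 Å cutoff; the distance traces below are simulated placeholders standing in for the Figure 6B data.

```r
# Minimal sketch (not the authors' script): apply the 3.5-angstrom donor-acceptor
# distance criterion to per-frame distances from the MD trajectory to estimate how
# often each hydrogen bond is formed. Distances are simulated placeholders.
set.seed(1)
frames <- data.frame(
  time_ps = seq(1, 20000, by = 1),                 # 20-ns trajectory, 1-ps saves
  d_I56   = rnorm(20000, mean = 2.0, sd = 0.25),   # terpinen-4-ol ... I-56 of HA2 (angstrom)
  d_N60   = rnorm(20000, mean = 2.1, sd = 0.30)    # terpinen-4-ol ... N-60 of HA2 (angstrom)
)

hbond_cutoff <- 3.5  # angstrom, as stated in the text

occupancy <- colMeans(frames[, c("d_I56", "d_N60")] < hbond_cutoff)
mean_dist <- colMeans(frames[, c("d_I56", "d_N60")])

round(100 * occupancy, 1)  # percentage of frames in which each H-bond is present
round(mean_dist, 2)        # average donor-acceptor distance (about 2 angstrom here)
```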
The MD simulation based on the same starting structure was repeated three times. For the complex, the electrostatic energy and the van der Waals energy contributed favorably to the binding free energy. The free energy of terpinen-4-ol binding to HA calculated by the MM_GBSA method showed that the binding process is thermodynamically favorable. Therefore, we conclude that terpinen-4-ol binds to the membrane fusion site of HA and stabilizes the conformation of the fusion peptide through this interaction. In summary, MD simulation was applied to clarify the three-dimensional structure of terpinen-4-ol bound to the active site. The simulations revealed an optimal conformation of the terpinen-4-ol–HA complex, in which the inhibitor forms two stable H-bonds with residues I-56 and N-60 of HA2 in the binding pocket. Moreover, the simulations showed that this binding mode can stabilize the neutral-pH conformation of HA. We believe that this property is important for the antiviral activity of terpinen-4-ol. Understanding how terpinen-4-ol stabilizes HA could provide a clue for the development of new influenza fusion inhibitors. The structural and mechanistic insights from the present study provide a valuable foundation for the structure-based design of more potent influenza fusion inhibitors. Bio-Safety All experiments involving pathogenic influenza A viruses were performed in a bio-safety level 2 (BSL2) laboratory of Zhongshan School of Medicine of Sun Yat-sen University, Guangzhou, China. Cells and Virus Madin-Darby Canine Kidney (MDCK) cells maintained by our laboratory were grown in Dulbecco's modified Eagle's medium (DMEM, Invitrogen Corporation, New York, NY, USA) supplemented with 10% heat-inactivated fetal bovine serum (FBS, Thermo Scientific HyClone product line, Logan, UT, USA) at 37 °C and 5% CO2 (Heracell 150i, Thermo Scientific, Langenselbold, Germany). No antibiotics or anti-mycotic agents were used in cell or virus culture. The 2009 H1N1 pandemic influenza virus strain A/GuangzhouSB/01/2009(H1N1) (GZ01/09 for short) was a gift from the Guangdong Centers for Disease Control and Prevention; it was propagated from clinical isolates and maintained in our laboratory. The virus strain was propagated in MDCK cells cultured with 0.02% TPCK-trypsin (Amresco Inc., Solon, OH, USA) at 37 °C and 5% CO2. The supernatant containing virus particles was collected from the MDCK cell culture when 75%–100% CPE was observed. The virus was stored in aliquots at −80 °C until use. Melaleuca Alternifolia Concentrate (MAC) One hundred percent MAC (batch 270409) was provided by NeuMedix Biotechnology Pty Ltd, North Sydney, Australia. Preliminary experiments established its optimal solubility in dimethyl sulfoxide (DMSO) (Beijing Dingguo Changsheng Biotechnology Co. Ltd., Beijing, China), and the concentration of the stock solution was 10% (v/v). For testing, the MAC stock solution was diluted in serum-free DMEM to prepare working solutions at various concentrations. Virus Titrations The virus strain was titrated by a standard Tissue Culture Infectious Dose 50 (TCID50) assay in MDCK cells. Briefly, MDCK cells were seeded in 96-well culture plates (about 5 × 10^4 cells/well) in DMEM with 10% fetal bovine serum (FBS) for 12–24 h at 37 °C with 5% CO2. After cell propagation, the growth medium was removed and 10-fold serial dilutions of the GZ01/09 virus suspension in DMEM with 1 μg/mL TPCK-trypsin were added to the wells.
The plate was incubated at 37 °C with 5% CO2, and morphological changes in the MDCK cells were observed microscopically every 12 h. The final CPE was recorded after 72 h. All wells with level 1–4 CPE were counted as positive, and the TCID50 was calculated by the Reed–Muench method [50]. MTT Assay to Determine the Cellular Viability of MDCK Cells The cellular viability of MDCK cells was measured quantitatively by the reduction of MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; Beijing Dingguo Changsheng Biotechnology Co. Ltd., Beijing, China) to formazan dye. Briefly, confluent MDCK cell monolayers in 96-well culture plates were washed with sterile PBS, 40 μL/well of MTT solution (5 mg/mL) was added, and the plates were incubated for 3 h at 37 °C. When a purple precipitate was clearly visible, the liquid was carefully withdrawn without touching the sediment or the cells. DMSO (100 μL/well) was added to dissolve the purple formazan, and the absorbance at 490 nm was read with an Absorbance Microplate Reader (Gene Co. Ltd., Hong Kong, China). Bioimaging in 96-Well Plates The effect of MAC treatment on the entry of influenza virus into host cells was determined by an immunofluorescence assay on MDCK cells in a 96-well plate. Briefly, MDCK cells were plated in a sterile 96-well plate at about 10,000 cells/well. Influenza virus suspensions treated with MAC at a final concentration of 0.010% for 0.5 or 1 h at room temperature, an untreated virus suspension, and maintenance media (cell control) were inoculated onto the cell monolayers for 5 h to allow sufficient viral protein synthesis in the host cells. The cells were then incubated at room temperature in 3.7% formaldehyde for 10 min (fixation), 0.1% Triton X-100 for 5 min (permeabilization), and 3% fetal bovine serum for 30 min (blocking). The influenza virus was stained with an influenza A M1 (matrix protein 1) antibody (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) followed by Alexa Fluor® 488 Goat Anti-Mouse IgG (H + L) (Molecular Probes, Invitrogen, Carlsbad, CA, USA). Finally, 50 µL per well of Fluoroshield™ with 4′,6-diamidino-2-phenylindole (DAPI, Sigma-Aldrich, Inc., St. Louis, MO, USA) was added, and the cells were analyzed using an imaging instrument (Leica DMI4000B, Meyer Instruments, Inc., Houston, TX, USA). Electron Microscopy Observation of the Influenza Virus Morphology Influenza virus suspensions with or without MAC treatment were observed under an electron microscope; the MAC concentration and treatment time are indicated in the figure legends. Ten microlitres of each MAC-treated and untreated virus suspension was placed on a clean slide. Using fine, clean forceps, copper grids were floated on the drops of virus suspension for 2 min. The bulk of the fluid was removed by touching the edge of the copper grid vertically to a strip of filter paper, and the grid was air-dried for 1 min. The grids were then floated on a drop of 2% potassium phosphotungstate for 1 min, the bulk of the fluid was again removed with filter paper, and the grid was air-dried and examined in the electron microscope. Statistical Analysis The cell survival in each group was expressed as the mean ± S.D., and the data were compared with the relevant control group using one-way analysis of variance (ANOVA) in SPSS 17.0 for Windows. p < 0.05 was considered statistically significant.
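Since the titre feeds directly into the 100 TCID50 inocula used in the infection assays, the following is a minimal sketch (not the authors' code) of the Reed–Muench interpolation named above; the dilution series and CPE-positive well counts are hypothetical, and eight replicate wells per 10-fold dilution are assumed.

```r
# Minimal sketch (not the authors' code) of the Reed-Muench TCID50 calculation.
# Hypothetical example: eight wells per 10-fold dilution, scored for CPE.
titration <- data.frame(
  log10_dilution = -(1:8),                    # 10^-1 ... 10^-8
  positive       = c(8, 8, 8, 7, 5, 2, 0, 0), # wells showing CPE
  total          = rep(8, 8)
)

# Reed-Muench pooling: cumulative positives are summed from the highest dilution
# upward, cumulative negatives from the lowest dilution downward.
titration$cum_pos <- rev(cumsum(rev(titration$positive)))
titration$cum_neg <- cumsum(titration$total - titration$positive)
titration$pct_pos <- 100 * titration$cum_pos / (titration$cum_pos + titration$cum_neg)

# Proportionate distance between the two dilutions bracketing 50% infectivity
above <- max(which(titration$pct_pos >= 50))
below <- above + 1
pd <- (titration$pct_pos[above] - 50) /
      (titration$pct_pos[above] - titration$pct_pos[below])

log10_endpoint <- titration$log10_dilution[above] - pd   # 50% endpoint dilution
cat(sprintf("50%% endpoint dilution: 10^%.2f\n", log10_endpoint))
cat(sprintf("Titre: 10^%.2f TCID50 per inoculated volume\n", -log10_endpoint))
```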
Molecular Docking The structure of HA (PDB: 3AL4) [51] was used in the docking calculations. The program Autodock 4.0 [52] with a Lamarckian genetic algorithm was used to carry out the molecular docking. To evaluate the binding energies between the ligand and receptor, the AutoGrid program was used to generate a grid map of 80 × 80 × 80 points spaced equally at 0.375 Å. The number of GA runs was 200 and the number of energy evaluations was 25,000,000; other docking parameters were set to default values. At the end of the run, all docked conformations were clustered using a tolerance of 2 Å for root-mean-square deviations (RMSDs) and ranked based on docking energies. Molecular Dynamics Simulations The Amber 11.0 simulation suite [53] was used for the molecular dynamics (MD) simulations and data analysis. An all-atom model of HA was generated using the tleap module on the basis of the initial model. To relieve conflicting contacts among residues, energy minimization was performed with the steepest descent method for 500 steps, followed by the conjugate gradient method for 500 steps. The protein was then solvated with water in a truncated tetrahedral periodic box (76.096 × 76.096 × 76.096 nm). The TIP3P [54] water model was used, and five Na+ counterions were added to neutralize the system. Prior to the production phase, the following equilibration protocol was applied. First, the solvent was relaxed by energy minimization while restraining the protein atomic positions with a harmonic potential. The system was then energy-minimized without restraints for 2,500 steps using a combination of steepest descent and conjugate gradient methods. The system was gradually heated from 0 to 300 K over 20 ps in the NVT ensemble. Finally, a 20-ns MD simulation was conducted at 1 atm and 300 K in the NPT ensemble. During the simulation, the SHAKE [55] algorithm was applied to constrain covalent bonds involving hydrogen atoms. A time step of 2 fs and a non-bonded interaction cutoff radius of 12.0 Å were used. Coordinates were saved every 1 ps throughout. The all-atom Amber ff03 force field developed by Duan et al. [56], which shows a good balance between helix and sheet propensities, was used for the protein, and the AMBER GAFF force field [57] was used for the ligand. The parameters for terpinen-4-ol were developed as follows: the electrostatic potential of terpinen-4-ol was obtained with the HF/6-31G basis set in GAUSSIAN 2003 [58] after a geometry optimization at the same level. The partial charges were derived by fitting the gas-phase electrostatic potential using the restrained electrostatic potential (RESP) method [59]. The missing ligand interaction parameters were generated using the antechamber tools in Amber. Long-range electrostatics were calculated by the particle-mesh Ewald (PME) method [60]. The molecular mechanics generalized Born surface area (MM-GBSA) method was then used to estimate the binding energies; 192 AMD Opteron™ processor CPUs (2.0 GHz) were used for the simulations. Binding Free Energy Calculation The binding free energies (ΔG_bind) were calculated using the MM-GBSA approach [61] within the AMBER program. The first step of the MM-GBSA method was the generation of multiple snapshots from an MD trajectory of the protein–ligand complex; a total of 50 snapshots were taken from the last 5 ns of the trajectory at intervals of 100 ps.
For each snapshot, the free energy was calculated for each molecular species (complex, receptor, and ligand) using the following equations [62]:

ΔG_bind = G_com − (G_rec + G_lig) (1)

ΔG_bind = ΔE_mm + ΔG_solv − TΔS (2)

ΔE_mm = ΔE_elec + ΔE_vdw + ΔE_int (3)

ΔG_solv = ΔG_GB + ΔG_np (4)

ΔG_np = γ ΔSASA + β (5)

where G_com, G_rec, and G_lig are the free energies of the complex, receptor, and ligand, respectively. ΔE_mm is the molecular mechanics energy of the molecule, expressed as the sum of its internal energy plus the electrostatic and van der Waals interactions; ΔG_solv is the solvation free energy of the molecule; T is the absolute temperature; and ΔS is the entropy of the molecule. ΔE_elec is the Coulomb interaction, ΔE_vdw is the van der Waals interaction, and ΔE_int is the sum of the bond, angle, and dihedral energies; in this case, ΔE_int = 0. ΔG_GB is the polar solvation contribution, calculated by solving the generalized Born (GB) equation [63] in the MM_GBSA method. ΔG_np is the nonpolar solvation term; γ is the surface tension, set to 0.0072 kcal/(mol Å^2); ΔSASA is the solvent-accessible surface area (Å^2), estimated using the MOLSURF algorithm; and β is a constant set to 0. The solvent probe radius was set to 1.4 Å to define the dielectric boundary around the molecular surface. The vibrational entropy contributions were estimated by NMODE analysis [64], using 50 snapshots. To obtain the contribution of each residue to the binding energy, MM_GBSA was also used to decompose the interaction energies onto the individual residues involved in the interaction, considering only the molecular mechanics and solvation energies without the entropy contribution. Conclusions In conclusion, we have shown that an herbal extract has a significant effect against influenza virus. The effect is probably caused by terpinen-4-ol, the main bioactive component, binding to the fusion peptide of the haemagglutinin protein on the surface of the influenza virus. Because the fusion peptide is highly conserved, the herbal extract could act as a haemagglutinin conformational-change inhibitor; in vivo studies are essential to confirm these in vitro data.
A Retrospective Case Study into the Effect of Hoof Lesions on the Lying Behaviour of Holstein–Friesian in a Loose-Housed System Simple Summary Lameness is a substantial welfare and economic problem in production animals. It can alter indicators of welfare such as lying time. Lying down is very important for cows, and they are highly motivated to perform this behaviour for 12 h or more per day. Conversely, cows that lie down too much or are uncomfortable standing may miss an opportunity to feed or drink if there is competition from sound (non-lame) cows. This study monitored different lesions that cause lameness in cattle through the use of accelerometers. The lesions included sole ulcers, sole haemorrhage, white line disease, interdigital hyperplasia and phelgmon, and digital dermatitis. Leg-based activity monitors that track the cows’ lying behaviour and mobility were used. From these data, it was found that cows with lesions on the foot spent longer lying down than those with no lesions, and cows with lesions in the soft tissue spent less time lying down than those with foot lesions. Trimming the cows’ feet altered the lying times of the cows with foot lesions and returned them closer to those of cows with no lesions. Abstract The association between hoof lesions and lying behaviour was assessed on a Holstein–Friesian dairy farm in England. Twenty-nine cows were included in the study. Cows with claw horn disruption lesions (CHDL, n = 8), soft tissue lesions (STL, n = 6), and no lesions (NL, n = 15) were assessed. Data were collected on parity, days in milk (DIM), and mobility scores. Cows were trimmed and treated, and lesions were recorded by a professional foot trimmer. Lying behaviour was assessed before and after claw trimming. The milking herd (n = 96) prevalence of lameness was 32.3%. Mobility was scored using the Agriculture and Horticulture Development Board (AHDB) Mobility Scoring system. Mobility scores were significantly different across lesions groups (p = 0.022). CHDL cows had a mean mobility score of 2.0 ± 0.9 (mean ± SD), STL were scored 1.2 ± 1.3, and NL cows were 0.9 ± 0.7. CHDL were associated with longer lying times (15.00 ± 1.04 h/d; p = 0.0006) and shorter standing times (9.68 ± 2.38 h/d; p = 0.0351) compared with NL lying times (11.77 ± 1.67 h/d) and standing times (12.21 ± 1.67 h/d). STL cows spent significantly less time lying (11.30 ± 2.44; p = 0.0013) than CHDL but not NL cows. No significant differences were found with any of the other lying behaviours. After trimming, CHDL cows spent significantly less time lying down than before trimming (13.66 ± 0.98; p = 0.0125). Cows with NL spent significantly more time lying down (12.57 ± 1.90; p = 0.0398) and had a shorter minimum lying bout duration (0.17 ± 0.09; p = 0.0236) after trimming. In conclusion, lying behaviour in dairy cattle was impacted by type of hoof lesions and hoof trimming. Introduction Lameness in dairy herds poses significant economic and welfare problems [1,2]. Furthermore, a consumer concern has led to lameness detection being included in various farm assurance schemes. Estimates of lameness prevalence vary with a range of 20.6-36.8% [1,[3][4][5][6][7]; however, this varies globally due to different management systems. Recent publications suggest UK dairy cow lameness prevalence to be 30.1% to 31.6% [3,6], indicating that lameness is a significant issue in UK dairy farming. Lameness in dairy cows is most commonly associated with the hindlimb [4,8], with over 90% of lesions found on the foot [5]. 
These can be divided into two broad categories: soft tissue lesions (STL), which may have an infectious component, and claw horn disruption lesions (CHDL) caused by trauma or increased pressure. Soft tissue lesions include digital dermatitis (DD), interdigital hyperplasia (IH), and interdigital phlegmon. Claw horn disruption lesions include sole ulcers (SU), sole hemorrhage (SH), and white line disease (WLD). Sole ulcers, WLD, and DD are the most common lesions affecting UK and Irish dairy herds [5,9]. Foot lesions are painful; however, as a prey species, cows are stoic in nature, and so, they often do not show signs of pain until lesions are advanced [10,11]. Foot lesions and lameness as well as being painful are related to decreased reproductive performance [12][13][14], reduced milk yield [15,16], and an increased likelihood of culling [17,18]. A drop in milk yield can be seen up to 4 months before diagnoses/treatment and up to 5 months after, resulting in an average of 350 kg of milk lost/cow/lactation [19]. Similarly, a significant drop in milk yield has been reported up to 3 months before treatment, suggesting that pathogenesis begins far before lameness is seen [20]. Furthermore, lesion-specific decreases in yield have been found with estimates of 570 kg and 370 kg loss due to SU and DD, respectively [20]. In addition to productivity, lameness affects the behaviour of dairy cattle, particularly feeding and lying behaviour. Several confounding factors influence these behaviours including different management and housing styles [21][22][23][24] and environmental variables [25][26][27]. Behavioural changes are associated with cow-level factors such as days in milk (DIM), parity, and body condition score (BCS) [16,28,29]. Lame animals feed less frequently [30] and often after non-lame cows [31]. They also have altered lying behaviour; higher locomotion score cows have longer lying times with fewer longer lying bouts [22,29,[32][33][34]. Specific hoof lesions, such as claw horn disruption lesions (CHDL), have been repeatedly reported to have shorter standing times [16] and increased lying times [35,36]. After claw trimming, SU and DD cows lie less than healthy control groups [37]. The literature is unclear about the effects of lesions on lying bout frequency; CHDL are reported to have both numerically fewer bouts [36,38] and more bouts [16] than non-lame cows. The importance of lying time in dairy cows is related to welfare, health, and reproductive status [39,40]. Cows are highly motivated to lie down, spending 9-14 h/d resting, prioritising lying over feeding, and other social behaviours [41]. Cows may have a behavioural requirement to lie down for 12-13 h/d [42]. The prompt detection and treatment of lame animals is essential, as animals rarely self cure, and treatment delays are inevitably associated with increased severity [43]. Several methods of lameness detection have been described including ad hoc observation, locomotion scoring, and routine hoof trimming. Ad hoc observation is ineffective with mild/moderate cases and translates poorly when recording herd statistics [1]. Farmers in the UK have underestimated lameness prevalence, failing to identify three out of four cases [44]. Serial locomotion scoring is highly recommended [45], although scoring is subjective. Locomotion scores have been associated with foot lesions [46][47][48]; however, not all severe lesions result in obvious lameness [11]. 
Forty percent of severe foot lesions were locomotion scored as 2 or 3 out of 5, which were described as imperfect and mildly abnormal locomotion, respectively [11]. Ideally, most cows should be trimmed 2-3 times per year [48]. Foot trimming alone may see lesions going unnoticed for prolonged periods as sole lesions can take anywhere from 6 to 8-10 weeks to appear at the sole surface. Automated lying time measurements may be a useful adjunct to lameness detection for farmers [49]; they may aid in early detection and treatment, thus improving welfare [43] and limiting production losses [15]. It is suggested that one accelerometer per cow is most useful in a cost-benefit analysis [50]. Investments in these systems are usually justifiable, with 84% of cost-benefit scenarios breaking even within the system's 10-year lifespan [51]. Accelerometers do not influence dairy cow lying behaviour; thus, they can give accurate herd statistics [52,53]. While the effects of lameness on lying behaviour are well documented, few papers assess specific hoof lesions and their effects on behaviour. This study aims to explore the effects of specific hoof lesions on the lying times of Holstein-dairy cows using the CowAlert system within a loose housed system. Ethical Approval This study was approved by the Clinical Research Ethical Review Board (CRERB) at the Royal Veterinary College, London; reference number CR2020-052-2. Animals and Management The study was performed on a 112-cow dairy farm in Hertfordshire, UK. This number represents milking animals and cows that have been dried off. Milking Holstein-Friesian cows (n = 96) were identified for the study, and 80 were randomly fitted with activity monitors by the herdsperson in the milking parlour. Data were collected from January to March 2020. There were 13 primiparous cows and 55 multiparous cows who had activity monitor data available (parity = 3.61 ± 1.99; mean ± SD (standard deviation)). Cows were loose-housed indoors on a majority woodchip bedding mixed with a recycled gypsum plasterboard product. The flooring in the yard was grooved concrete. The collecting yard, parlour, and raceway had rubber matting. The floors were scraped twice daily. High yielders were fed a mixed ration consisting of grass and maize silage, straw, brewers' grains, and a mineral blend in addition to concentrates in the parlour (fed according to yield). The ration was presented at 6:00 every morning and pushed up every hour by a robotic feed pusher. Minimal concentrates were given in the parlour to low yielders. High yielders were milked twice daily at 5:45 am and 3:00 pm while low yielders were milked once at 8:00 am. Data Collection Leg-based activity monitors (Cow Alert; IceQube, Ice Robotics LTD., Edinburgh, UK) were fitted with a Velcro-strap above the hindlimb fetlock in mid-February 2020. The activity monitors were left on after the study was completed to be included as a part of the management system. The cows recruited for this study had a minimum of 72 h to adapt to the activity monitors, which is within the described ranges previously used for habituation [52,54,55]. The monitors collected data on lying behaviour, activity with 4 Hz 3-dimensional accelerometers, several times per second. The average data for lying behaviour over 7 days was used 1 week before claw trimming (BCT) and 2 weeks after claw trimming (ACT). Claw trimming was performed in early March. 
The time frame was selected based on the literature, which suggests that lying behaviour may be altered 1 week before [56] and 2-3 weeks after claw trimming [37]. A timeline of key events is shown in Figure 1. Mobility scores were assessed for the milking herd (n = 96) using the Agriculture and Horticulture Development Board (AHDB) dairy mobility scoring system [57] (Table 1). This was completed at the end of January 2020, 5 weeks BCT. This timeline was chosen as SU can take up to 6 weeks to appear at the sole surface. The assessors were trained in mobility scoring. This was undertaken following afternoon milking as the cows were walking out of the milking parlour on concrete. Cows with a MS ≥ 2 were considered lame. Table 1. Agriculture and Horticulture Development Board (AHDB) dairy mobility scoring system [57]. Score Description of Behaviour Good mobility 0 Walks with even weight bearing and rhythm on all four feet, with a flat back. Long, fluid strides possible. Imperfect mobility 1 Steps uneven (rhythm or weight bearing) or strides shortened; affected limb or limbs not immediately identifiable. Impaired mobility 2 Uneven weight bearing on a limb that is immediately identifiable and/or obviously shortened strides (usually with an arch to the centre of the back). Severely impaired mobility 3 Unable to walk as fast as a brisk human pace (cannot keep up with the healthy herd). Lame leg easy to identify-limping; may barely stand on lame leg/s; back arched when standing and walking. Very lame. The farm trims every cow at least once a year prior to dry off and also treats lame cows as they are picked up by ad hoc observation. A professional foot trimmer visited the herd in March. Cows with SU were trimmed, treated with a block applied to the unaffected claw, and given Ketoprofen. DD cases were treated twice daily for three days with oxytetracycline spray in the milking parlour. The Ketoprofen and oxytetracycline treatments were performed by the herdsperson as per standard foot care protocol. In addition, all cows were regularly footbathed (up to 6 times/week) with 4% formalin as a preventative measure. Foot trimming records for 63 cows were collected. Following trimming, the data from the cows were grouped into 3 categories. The first included those with soft tissue and/or infectious lesions (STL), which included DD and interdigital hyperplasia (IH). The second group consisted of those with CHDL, which encompassed SU, sole hemorrhage (SH), sole separation, laminitis and WLD, and cows with no lesions (NL). The last group included cows with uncommon lesions (n = 6); which included forelimb lameness (n = 4) and cull cows (due to lameness). Those with simultaneous STL/CHDL (n = 2) were excluded. For the purposes of this study, IH and DD were grouped together as STL, based on the strong associations for these conditions in the literature [32,58,59]. Twenty-two cows did not have complete data from the sensors and so were excluded. This left 29 cows in the final analyses. Statistical Analysis The 7-day average (mean) for lying behaviour (Table 2) from the IceQubes was analysed before claw trimming (BCT) and after trimming (ACT). The 7-day average was precalculated by the activity monitors. Lying behaviour was recorded in hours and minutes and converted into hours/day for statistical analysis. Prism 8 (GraphPad) software was used for most of the analyses; R 4.0.0 (The R Foundation; Vienna, Austria) was used to analyse mobility scores, as this calculation could not be completed with Prism 8. 
For each lying behaviour measurement analysed, normality was assessed with a D'Agostino and Pearson skewness test (CHDL and NL) and a Shapiro-Wilk test (STL). Results with normal distributions, BCT (lying time, min and max lying bout, lying bouts/day), and ACT (standing time, lying bouts/day) were analysed with a one-way ANOVA. If significant, a Tukey's multiple comparison was performed. All other datasets, BCT (standing time), and ACT (lying time, min and max lying bout) were analysed with a non-parametric Kruskal-Wallis test. The prevalence of mobility scores was analysed with a Fisher's exact test. The effects of hoof lesions on lying times BCT and ACT were analysed using a paired t-test for normally distributed data, and skewed data were analysed with a Wilcoxon matched pairs test. The effect of hoof lesions on minimum lying bout duration BCT and ACT were analysed with a Wilcoxon matched pairs test, as all data were skewed. A p-value of <0.05 was considered statistically significant. Lameness and Lesion Prevalence The milking herd prevalence of lameness was 32.3% as determined by the MS of 96 cows. Foot trimming records were available for 63 cows, of which 33 had one or more foot lesions present. Of these, 28.6% (18/63) of cows had one affected foot, 20.6% (13/63) had two affected, and 4.7% (2/63) had ≥3 affected. The most prevalent foot lesions were DD, IH, SU, and SH, from most to least common, respectively (Table 3). Twelve sensors failed prior to data analysis. After grouping the applicable cows (cows with foot trimming records and activity monitor data), the data from a total of 29 cows were analysed; 8 CHDL, 6 STL, and 15 NL cows. Before Claw Trimming (BCT) Lying behaviour did vary between lesion groups (Table 4). CHDL spent significantly more time lying than the NL group. There was no significant difference in lying time between cows with STL and NL. CHDL cows spent an additional 3.2 h and 3.7 h lying compared to NL and STL cows, respectively. CHDL spent significantly less time standing than NL cows. While minimum and maximum lying bout duration did not vary significantly, CHDL cows did tend to have longer maximum lying bouts (p = 0.085) and numerically more lying bouts on average. Table 4. Effect of hoof lesions on lying behaviour (lying and standing time, lying bout duration and frequency) before (BCT) and after claw trimming (ACT) for 29 Holstein-Friesian cows. Datasets analysed by one-way ANOVA. a,b within a row indicates significant differences found by a Tukey's multiple comparison between lesion groups. * analysed with non-parametric Kruskal-Wallis. c,d within a column indicates significant differences found by a paired t-test. x,y within a column indicates significant differences found by a Wilcoxon matched pairs test. Mean ± standard deviation (SD) represented in each lesion group column. After Claw Trimming (ACT) Lying and standing time did not vary significantly between lesion types (Table 4). Number and duration of lying bouts did not differ significantly between groups. Comparison between BCT and ACT Lying time decreased significantly from BCT to ACT for CHDL cows (15.00 vs. 13.66 h/d; p = 0.0125) (Figure 2). STL and NL groups increased their lying time, although the change was significant only for NL cows (11.77 vs. 12.57 h/d; p = 0.0125). Cows with NL also had significantly shorter minimum lying bout duration ACT (Figure 3). No other lying behaviours were statistically significant when comparing BCT and ACT. 
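As a schematic companion to the Statistical Analysis section above, the following sketch (not the authors' Prism or R scripts) shows one way the group comparisons and the paired before/after-trimming tests could be run in R; the data frame, column names, and simulated lying times are hypothetical, and Shapiro–Wilk is used for all groups here as a simplification of the normality checks described above.

```r
# Minimal sketch (not the authors' analysis): normality check per lesion group,
# across-group test of lying time (ANOVA + Tukey, or Kruskal-Wallis), and a paired
# BCT vs. ACT comparison within one group. All data below are simulated.
set.seed(2)
cows <- data.frame(
  cow       = factor(1:29),
  lesion    = factor(c(rep("CHDL", 8), rep("STL", 6), rep("NL", 15))),
  lying_bct = c(rnorm(8, 15.0, 1.0), rnorm(6, 11.3, 2.4), rnorm(15, 11.8, 1.7)),
  lying_act = c(rnorm(8, 13.7, 1.0), rnorm(6, 11.8, 2.0), rnorm(15, 12.6, 1.9))
)

# 1) Normality within each lesion group (Shapiro-Wilk used throughout for brevity)
normal_by_group <- tapply(cows$lying_bct, cows$lesion,
                          function(x) shapiro.test(x)$p.value > 0.05)

# 2) Across-group comparison of lying time before claw trimming
if (all(normal_by_group)) {
  fit <- aov(lying_bct ~ lesion, data = cows)
  print(summary(fit))
  print(TukeyHSD(fit))                       # pairwise lesion-group differences
} else {
  print(kruskal.test(lying_bct ~ lesion, data = cows))
}

# 3) Paired BCT vs. ACT comparison within the CHDL group
chdl <- subset(cows, lesion == "CHDL")
t.test(chdl$lying_bct, chdl$lying_act, paired = TRUE)       # parametric option
wilcox.test(chdl$lying_bct, chdl$lying_act, paired = TRUE)  # non-parametric option
```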
Discussion The herd prevalence of lameness was 32.3%, which is similar to other studies into lameness prevalence in the UK [3,6,7]. The majority of lesions (92%) were found on the hindlimbs, which is in line with reports from other studies [5,60]. In general, lying times in the present study complement other studies. Lying times for cows with NL or STL ranged from 11.3 to 12.6 h/d, others have quoted cows lying down for 12 h/d in freestall systems [61] and 10.5-13.6 h/d in loose-housed systems as per the current study housing [62][63][64]. The cows in the present study, especially post-trimming, fulfill the 12-13 h/d lying behavioural need [42]. Before claw trimming, CHDL cows were found to lie significantly longer than both STL and NL; however, STL do not lie significantly less than NL cows. It has been repeatedly shown that lame cows lie longer than non-lame cows, coupled with shorter standing times, due to their inverse association [33,65,66]. In contrast, shorter lying times have also been associated with lame animals [37,49,61], and increased standing times are often implicated as a risk factor for lesion development [61,67]. Before trimming, cows with CHDL laid down for an additional 3.3 h/d versus NL and 3.7 h/d versus STL cows. This is longer than other papers [33], and the variance could be due to a number of factors such as of housing design [22], lesion severity, or cow level factors such as BCS, parity, and stage of lactation [29]. When assessing cow-level factors in this study, it was found that neither parity nor DIM varied significantly between lesion groups. Individual cows show great variation in lying times within a given farm [68]. Chapinal et al. found that SU cows spent 1.1 h/d longer lying than non-lame cows [36]; this was not true for SH or DD cows in their study. CHDL cows in the present study spent an additional 3.23 h/d lying versus NL cows. Theoretically, CHDL share a common aetiology where contusions within the claw horn capsule cause sole lesions [69]. Cows may be trying to relieve pressure on CHDL to alleviate the associated pain while, anatomically, DD lesions are not directly impacted by weight bearing when standing. Lying behaviour was only an indicator of STL, mainly DD, in another study [70]. Navarro et al. found lame cows, with sole damage and infectious lesions, spent less time standing (13.5 h/d) than non-lame cows (15.2 h/d) [16]. The effects of IH alone may be limited but are variable based on lesion size and severity and whether there are any concurrent infectious/traumatic lesions associated. Lying bout duration seems to increase in higher MS cows [22,33]. In particular, cows with sole damage have longer lying bouts [16,71]. The literature has also found that longer lying bout durations were associated with STL, but not CHDL, specifically [70]. Lying bouts/day can vary; generally, studies have found lame cows have fewer, longer lying bouts [22,32,33], with more variation in lying bout length [21]. In this study, CHDL may have numerically more lying bouts/day, which is unusual, although they do demonstrate a wider range of lying bout length. Lame cows have been reported to have more lying bouts/day; however, this may be attributed to differences in automatic milking systems [72]. In this study, all three groups did not vary significantly in lying bout length or bouts/day. The difference in lying behaviour BCT and ACT was significant for CHDL and NL cows. After trimming, CHDL cows spent less time lying and NL cows spent more time lying. 
This may indicate increased comfort in CHDL cows post-treatment. Other studies have found similar effects where cows with SU and DD spent more time standing 2-3 weeks ACT [37]. While no significant changes were seen with STL cows, this may be attributed to the mild effects of IH lesions on the cows in the group. Cows have been seen to increase daily lying time in the period after trimming, this was found in lame and nonlame cows [56,71]. Thus, cows with NL may be more tender after trimming. Conversely, in another study, cows with a foot-block laid down longer than non-lame cows ACT, but no other treatment group saw any change in lying behaviour [73]. This included those given a block and nonsteroidal anti-inflammatory drugs (NSAID) [73]. Pain associated with the lesions is the likely cause; investigations on the effects of the blocks themselves have been shown to increase MS, they do not seem to alter lying times when applied to non-lame cows [74]. A decrease in the minimum lying-bout duration of NL cows was significant post-trimming (p = 0.0236). This may indicate greater comfort in walking and transitioning from a lying position. Mobility score was significantly different across the lesion groups in this study. CHDL had the highest mean MS, while STL and NL cows were often not classified as lame. The indication that MS may be more useful for identifying CHDL than STL has been documented previously [70,75]. Sole ulcer [47,48], double sole, and interdigital purulent inflammation [46] are associated with increased locomotion scores. Sole hemorrhage [47], WLD, and DD [46] have not been noted to change MS. When WLD is assessed with SU as CHDL, there seems to be an association with MS [76], as seen in this study. DD is a painful condition; however, its association with lameness is more variable [36,46]. It may be related to chronicity [11] or severity [77] wherein acute lesions would be expected to cause pain. In addition, due to the location of DD, the cow should not apply direct pressure with the lesion when weight bearing. It has also been found that DD cows did not appear lame unless concurrent CHDL was present [78]. Likewise, IH has a lesser association with lameness [79]. In general, the current study agrees with the literature in that CHDL, especially SU, are associated with visible lameness. STL do seem to be associated with a slightly higher MS than for cows with NL; however, the location, stage, and severity of these lesions was not noted in this study. Further categorisation of these lesions with a bigger sample size may have yielded different results. Study limitations included sample size and the inability to check mobility scores at trimming and after trimming. Although parity and DIM were not statistically significant between groups, analysing cows with a particular stage of lactation and parity may yield more specific results. BCS was not assessed, although it is associated with lameness [80,81], CHDL [69,82], and lying times [29]. Cows were assessed in broad lesion groups as opposed to specific lesions categorised according to location and severity. In summary, lying times may be a useful adjunct for lameness detection to mobility scoring and regular foot trimming. Cows with CHDL lie for significantly longer periods than other cows. This extreme behaviour may be used to identify cows that require further examination, although more work is needed to determine what changes deem further investigation necessary. 
Activity monitors may also be useful for assessing the efficacy of treatment: after trimming, CHDL cows spent less time lying, making them more comparable to cows with NL than before trimming, possibly indicating greater comfort. The use of activity monitors is already widespread due to their benefits for heat detection, so implementing their use for lameness detection is quite feasible. Future work examining the effect of specific foot lesions on lying behaviour would be interesting and may prove more useful for detecting lameness. Conclusions Mobility score and increased lying times or decreased standing times can be used as indicators of CHDL in dairy cows. While not a perfect means of identifying lesions, they can be used as tools for farmers to identify cows that may require attention. The benefits of hoof trimming can also be seen up to two weeks ACT. Cows with CHDL and NL showed beneficial changes after treatment. Informed Consent Statement: Not applicable. Data Availability Statement: The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
Pregnancy-Induced Alterations in NK Cell Phenotype and Function Pregnant women are particularly susceptible to complications of influenza A virus infection, which may result from pregnancy-induced changes in the function of immune cells, including natural killer (NK) cells. To better understand NK cell function during pregnancy, we assessed the ability of the two main subsets of NK cells, CD56dim, and CD56bright NK cells, to respond to influenza-virus infected cells and tumor cells. During pregnancy, CD56dim and CD56bright NK cells displayed enhanced functional responses to both infected and tumor cells, with increased expression of degranulation markers and elevated frequency of NK cells producing IFN-γ. To better understand the mechanisms driving this enhanced function, we profiled CD56dim and CD56bright NK cells from pregnant and non-pregnant women using mass cytometry. NK cells from pregnant women displayed significantly increased expression of several functional and activation markers such as CD38 on both subsets and NKp46 on CD56dim NK cells. NK cells also displayed diminished expression of the chemokine receptor CXCR3 during pregnancy. Overall, these data demonstrate that functional and phenotypic shifts occur in NK cells during pregnancy that can influence the magnitude of the immune response to both infections and tumors. INTRODUCTION During pregnancy, the immune system has to finely balance its activity in order to tolerate the semiallogeneic fetus, while maintaining the ability to fight microbial challenges (1)(2)(3)(4). These immune alterations may be at least partially responsible for the increased susceptibility of pregnant women to complications from influenza virus infection (5)(6)(7)(8)(9). Recent studies have demonstrated enhanced responses to influenza virus by several innate immune cell subsets during pregnancy, including monocytes, plasmacytoid dendritic cells and natural killer (NK) cells (2,(10)(11)(12)(13)(14). It remains unclear whether such changes could contribute to the enhanced pathogenesis of influenza virus during pregnancy because the role of NK cells in the pathogenesis of influenza virus remains controversial. Several mouse studies have shown that NK cell depletion or the use of mice deficient in NK cells improved the outcome of influenza infection (15,16), suggesting that NK cell activity may be pathogenic in the setting of influenza infection. On the contrary, NK cells reduced influenza virus burden and promoted clearance of the virus in mice deficient in NKp46, a major NK cell receptor thought to play a role in influenza recognition (17), suggesting that NK cells may contribute to protection from influenza. Controversy remains as another mouse strain deficient in NKp46 expression is resistant to viral infection (18). In humans, NK cells were found in abundance in the lungs of fatally infected patients with the 2009 H1N1 pandemic strain of influenza virus (19). This NK cell recruitment correlated with severity of lung inflammation and poor patient outcome, but the causality in the relation between infiltration of NK cells and viral clearance and pathogenesis is unproven. NK cells mediate their response to influenza and other pathogens using an array of germline receptors. Inhibitory receptors serve to protect healthy cells from NK cells and include the killer-cell immunoglobulin-like receptors (KIRs) and the heterodimer NKG2A-CD94. NK cell activating receptors signal 'altered self ' and include NKp46, NKp30, NKp44, NKG2C, and NKG2D, among others. 
Together, the activating and inhibitory receptors define the degree of NK cell maturation and responsiveness to stimuli (20,21). In response to virusinfected or cancerous cells, NK cells can kill cells via release of cytolytic molecules or through engagement of death receptors. They can also produce cytokines, such as IFN-γ, which limit viral replication and tumor proliferation (21). CD56 dim and CD56 bright NK cells are two major NK cell subsets identified in the peripheral blood that tend to differ in their responsiveness. CD56 dim NK cells are more cytotoxic and CD56 bright are better at secreting cytokines (22,23). Due to their robust cytotoxic capabilities and immune regulatory potential, NK cell activation is tightly regulated to limit tissue damage at the site of infection. Here, we sought to better understand how NK cell activity is regulated during pregnancy and gain insight into the unusual susceptibility of pregnant women to complications from influenza virus and other infections. We used mass cytometry and ex vivo influenza infection to profile the expression of NK cell activating and inhibitory receptors during this critical period of development. Study Design Pregnant women in their second and third trimester and control non-pregnant women were enrolled in two cohorts in separate years. In the discovery cohort, twenty-one healthy pregnant women were recruited between October 2013 and March 2014 from the Obstetrics Clinic at Lucile Packard Children's Hospital at Stanford University. Twenty-one nonpregnant (control) women were recruited for Stanford influenza vaccine studies (NCT numbers: NCT03020537, NCT03022422, and NCT02141581). In the validation cohort, 32 non-pregnant (control) women were recruited for Stanford vaccine studies (NCT numbers: NCT01827462 and NCT03022422) and 21 healthy pregnant women were recruited between October 2012 and March 2013 from the Obstetrics Clinic at Lucile Packard Children's Hospital at Stanford. Venous blood was collected from all participants at baseline; pregnant women also provided a sample at 6 weeks post-partum. Exclusion criteria included concomitant illnesses, immunosuppressive medications, or receipt of blood products within the previous year. Pregnant women were also excluded for known fetal abnormalities and morbid obesity (pre-pregnancy body mass index >40). This study was performed in accordance with the Declaration of Helsinki and approved by the Stanford University Institutional Review Board (IRB-25182); written informed consent was obtained from all participants. Blood from anonymous healthy donors at the Stanford blood bank center was obtained for confirmatory functional assays. NK Cell: Infected Monocyte Co-culture A/California/7/2009 influenza (pH1N1) wild-type influenza A virus obtained from Kanta Subbarao at the National Institutes of Health was propagated in embryonated chicken eggs. Monocytes were washed and re-suspended in serum-free RPMI media at 1 × 10 5 per 100 µL and infected at a multiplicity of infection (MOI) of 3 for 1 h at 37 • C with 5% carbon dioxide. Onehour post-infection, viral inoculum was removed and cells were resuspended in 100 µL of complete RP10. Autologous NK cells were then exposed to pH1N1-infected monocytes at a effector:target (E:T) ratio 1:1. After a further 2-h incubation, 2 µM monensin, 3 µg/mL brefeldin A (eBiosciences), and anti-CD107a-allophycocyanin-H7 (BD Pharmingen) were added to the co-culture for 4 h, followed by cell staining for flow cytometry analysis. 
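As a worked illustration of the inoculum arithmetic implied by infecting 1 × 10^5 monocytes at MOI 3, the following is a minimal sketch (not the authors' protocol); the stock titre is hypothetical, and the conversion from TCID50 to infectious units uses the common Poisson-based approximation of roughly 0.7 infectious units per TCID50, which is an assumption rather than a value from this study.

```r
# Minimal sketch (not the authors' protocol): volume of virus stock needed to
# infect monocytes at MOI 3, as in the co-culture described above.
cells_per_well      <- 1e5    # monocytes per well (1 x 10^5 per 100 uL)
target_moi          <- 3      # infectious units per cell
stock_tcid50_per_ml <- 1e7    # hypothetical titre from a TCID50 assay

# Approximate infectious units; ~0.7 IU per TCID50 is a rule of thumb (assumption)
stock_iu_per_ml <- 0.7 * stock_tcid50_per_ml
iu_needed       <- target_moi * cells_per_well
volume_ul       <- 1e3 * iu_needed / stock_iu_per_ml   # volume of stock, in microlitres

cat(sprintf("Add about %.1f uL of stock per well to reach MOI %.0f\n", volume_ul, target_moi))
```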
K562 Cell Assay Following purification, NK cells were exposed to K562 tumor cells (ATCC) at an effector:target (E:T) ratio of 1:1. Immediately following co-incubation, 2 µM monensin, 3 µg/mL brefeldin A, and anti-CD107a-allophycocyanin-H7 were added to the co-culture for 4 h, followed by cell staining for flow cytometry analysis. Antibody Labeling for CyTOF Purified antibodies (lacking carrier proteins) were labeled 100 µg at a time according to instructions provided by DVS Sciences with heavy metal-preloaded maleimide-coupled MAXPAR chelating polymers and as previously described (24,25). Qdot antibodies purchased from Invitrogen were used for Cd112 and were not conjugated. In115, Gd155, and Gd157 were ordered from Trace Sciences and conjugated with exactly as with metals purchased from DVS Sciences. Following labeling, antibodies were diluted in PBS to a concentration between 0.1 and 0.3 mg/mL. Each antibody clone and lot was titrated to optimal staining concentrations using cell lines and primary human samples. The gating shown in Figures S2, S4, S6, S8 displays one individual as an example. Gates were set based on both positive and negative controls known to express markers, and all stains were validated by comparison to conventional flow cytometry, as described in our prior studies (20,26). Cell subsets known to not express markers were used as negative controls in many cases (for instance, B cells do not express many NK cell markers). For some stains such as NKG2C, as new antibody conjugations and panels were used for the second cohort, the gating strategy modified if better ability to distinguish populations was possible. Gating was not used as part of the GLM analysis. PBMC Staining for CyTOF Acquisition Cryopreserved PBMCs from non-pregnant and pregnant women in discovery and validation cohort were thawed and cells were transferred to 96-well deep-well-plates, resuspended in 25 µM cisplatin (Enzo Life Sciences) for 1 min and quenched with 100% serum. Cells were stained for 30 min, fixed (BD FACS Lyse), permeabilized (BD FACS Perm II), and stained with intracellular antibodies for 45 min on ice. Staining panels are described in Tables S3, S4. All antibodies were conjugated using MaxPar X8 labeling kits (DVS Sciences). Cells were suspended overnight in iridium intercalator (DVS Sciences) in 2% paraformaldehyde in phosphate-buffered saline (PBS) and washed 1× in PBS and 2× in H 2 O immediately before acquisition on a CyTOF-1 (Fluidigm). Modeling of Predictors of Pregnancy in Mass Cytometry Data To identify markers that were consistently changed during pregnancy, we used a generalized linear model (GLM) with bootstrap resampling to account for the donor-specific heterogeneity. We implemented the GLM approach and other regression models in an open source R package CytoGLMM (27) available here: https://christofseiler.github.io/CytoGLMM/. Statistical Analysis Linear discriminant analyses were implemented in R using the package MASS (28,29). Statistical analyses for functional experiments were performed using GraphPad Prism, version 6.0d (GraphPad Software). A Mann-Whitney U-test was used to compare control to pregnant women and a Wilcoxon signedrank was used to compare the paired data in women between pregnancy and the post-partum period. Data Availability Mass cytometry data supporting this publication is available at ImmPort (https://www.immport.org) under study accession SDY1537. 
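Before turning to the results, the following is a highly simplified stand-in (not the CytoGLMM implementation or its API) for the modelling strategy just described: a pooled per-cell logistic GLM with bootstrap resampling over donors, followed by a linear discriminant analysis with MASS::lda; the per-cell marker matrix, donor labels, and effect sizes are all simulated placeholders.

```r
# Minimal sketch (not CytoGLMM): which markers predict pregnancy from per-cell
# expression, with donor-level bootstrap resampling, plus an LDA as in the paper.
library(MASS)
set.seed(3)

n_donors  <- 20
cells_per <- 200
donor     <- rep(seq_len(n_donors), each = cells_per)
pregnant  <- rep(rep(c(0, 1), each = n_donors / 2), each = cells_per)

# Simulated arcsinh-like marker expression; CD38/NKp46 shifted up and NKp30/CXCR3
# shifted down in "pregnant" donors to mimic the reported direction of effects.
dat <- data.frame(
  CD38     = rnorm(n_donors * cells_per, mean = 2 + 0.6 * pregnant),
  NKp46    = rnorm(n_donors * cells_per, mean = 2 + 0.4 * pregnant),
  NKp30    = rnorm(n_donors * cells_per, mean = 2 - 0.3 * pregnant),
  CXCR3    = rnorm(n_donors * cells_per, mean = 2 - 0.2 * pregnant),
  pregnant = pregnant,
  donor    = donor
)

# Bootstrap over donors: refit the logistic GLM on resampled donors, keep coefficients
boot_coefs <- replicate(200, {
  sampled  <- sample(unique(dat$donor), replace = TRUE)
  boot_dat <- do.call(rbind, lapply(sampled, function(d) dat[dat$donor == d, ]))
  coef(glm(pregnant ~ CD38 + NKp46 + NKp30 + CXCR3,
           family = binomial(), data = boot_dat))[-1]
})
t(apply(boot_coefs, 1, quantile, probs = c(0.025, 0.5, 0.975)))  # bootstrap CIs per marker

# Linear discriminant analysis separating pregnant from non-pregnant cells
lda_fit <- lda(factor(pregnant) ~ CD38 + NKp46 + NKp30 + CXCR3, data = dat)
lda_fit$scaling   # marker loadings on the discriminant axis
```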
NK Cell Immune Response to Influenza Virus During Pregnancy To investigate how pregnancy alters NK cell phenotype and function, we recruited two cohorts of pregnant and non-pregnant (control) women in subsequent years (Tables S1, S2). We assessed NK cell antiviral function during pregnancy by flow cytometry after exposing sorted NK cells to autologous infected monocytes [Figure 1A; (12, 30)]. We observed that the frequency of CD56 dim NK cells expressing CD107a, a marker of cytolytic activity, and IFN-γ was significantly greater in pregnant women than in controls or in post-partum women (Figures 1B,C and Figure S1A). Similarly, the frequency of CD56 bright NK cells expressing CD107a and IFN-γ was also significantly greater during pregnancy than in controls and post-partum (Figures 1D,E and Figure S1A). Bulk NK cells from pregnant women displayed enhanced killing of influenza-infected monocytes (Figure 1F and Figure S1B). These data demonstrate that the two major NK cell subsets have enhanced responses to influenza-virus infected cells during pregnancy. NK Cell Immune Response to Cancer Cells During Pregnancy During pregnancy, monocytes respond more robustly to influenza virus (11), which could activate NK cells through inflammatory cytokine production, potentially explaining enhanced NK cell responses. We hypothesized that if NK cell function was intrinsically elevated during pregnancy, we should observe enhanced anti-tumor responses as well. We therefore exposed sorted total NK cells from controls and pregnant women to the K562 tumor cell line (Figure 1A), which represents a homogeneous, identical target for NK cells from controls and pregnant women. CD56 dim NK cells from pregnant women had 1.6-fold greater expression of CD107a than CD56 dim NK cells from non-pregnant women in response to K562 cells (Figure 1G and Figure S1C), though IFN-γ responses were not significantly different (Figure 1H and Figure S1C). CD56 bright NK cells also displayed enhanced degranulation (Figure 1I and Figure S1C). This increased degranulation by both NK cell subsets from pregnant women resulted in enhanced killing of K562 cells by bulk NK cells (Figure 1K and Figure S1D). These data indicate that NK cells have an intrinsically enhanced ability to kill both infected and tumor targets during pregnancy.

FIGURE 1 | … were infected with the H1N1 influenza virus strain. NK cells were either exposed to H1N1-infected monocytes or to K562 tumor cells for 7 or 4 h, respectively. (B-I) CD56 dim and CD56 bright NK cell immune responses were then determined by flow cytometry. The frequency of (B) CD107a- and (C) IFN-γ-expressing CD56 dim NK cells in response to influenza-infected monocytes is represented. The frequency of (D) CD107a- and (E) IFN-γ-expressing CD56 bright NK cells in response to influenza-infected monocytes is represented. (F) The frequency of dead or dying monocytes based on staining with viability dye in NK cell co-culture. The frequency of CD107a (G) and IFN-γ production (H) by CD56 dim NK cells in response to K562 cells is represented. The frequency of CD107a (I) and IFN-γ production (J) by CD56 bright NK cells in response to K562 cells. (K) The frequency of dead or dying K562 tumor cells based on staining with viability dye in NK cell co-culture. *P < 0.05, **P < 0.01, and ***P < 0.001 (Mann-Whitney U-tests to compare controls vs. pregnant; Wilcoxon matched-pairs test to compare pregnant vs. post-partum).
Deep Profiling of CD56 dim and CD56 bright NK Cells During Pregnancy in the Discovery Cohort To understand potential drivers of this enhanced NK cell function during pregnancy, we profiled the expression patterns of inhibitory and activating surface receptors on NK cells in control non-pregnant women, pregnant women, and post-partum women (including the 10 individuals per group tested in Figure 1). PBMCs in both cohorts were evaluated by mass cytometry as outlined in Figure 2A and Tables S3, S4. NK cells were identified as Figure S2A). The frequency of NK cells did not significantly differ between pregnant and control women, nor in pregnant vs. post-partum women in either cohort (Figures S2B,C). To identify NK cell markers predictive of pregnancy, we used a Generalized Linear Model (GLM) with bootstrap resampling to account for correlations between cells and inter-individual variability [ Figure 2A; (27)]. Expression of several markers such as CD38, NKp46, PD-1, and CD27 were predictive of pregnancy on CD56 dim NK cells, while NKp30 was more likely to predict control ( Figure 2B). When comparing the same women during pregnancy and postpartum, CD38, NKp46, NKG2C, NKG2D, and NKp44 were predictive of pregnancy on CD56 dim NK cells ( Figure 2C). Manual gating confirmed elevated expression of CD38 and NKp46 on CD56 dim NK cells during pregnancy (Figures S3, S4). To further define the markers that distinguish pregnancy, a linear discriminant analysis (LDA) was performed, revealing that CD38 and NKp46 best separate the CD56 dim NK cell population of pregnant women from that of control and post-partum women ( Figure 2D). Together, our data indicate that there are differences in NK receptor expression patterns during pregnancy, and that CD38 and NKp46 expression are major drivers of these pregnancy-related changes. As CD56 bright NK cells differ from CD56 dim NK cells in their maturation and receptor expression patterns, we analyzed them separately. CD38 and NKp46 expression levels are also predictive of pregnancy on CD56 bright NK cells, as is the inhibitory receptor NKG2A, which is highly expressed on CD56 bright NK cells (Figure 2E and Figures S5, S6). Expression of the chemokine receptor, CXCR3, and activating receptor, NKp44, were associated with non-pregnant state. Similar differences were seen when comparing pregnant and post-partum samples ( Figure 2F and Figures S5, S6). LDA reveals that CD38, NKp46, NKG2A, and NKG2D best separate CD56 bright NK cells of pregnant women from that of control and post-partum women ( Figure 2G). Together, these data suggest that during pregnancy, both CD56 dim and CD56 bright NK cell subsets have the potential for greater activation through an increased expression of CD38 and NKp46. Deep Profiling of CD56 dim and CD56 bright NK Cells in the Validation Cohort We performed a deeper profiling of NK cells in the validation cohort, using an antibody panel including an increased number of specific NK cell receptors such as KIRs (Figure 2A and Table S4). Similar to the discovery cohort, CD38 and NKp46 are predictive of pregnancy on CD56 dim NK cells compared to controls ( Figure 3A). CD56 dim NK cells from pregnant women also display an increased expression of NKG2C, LILRB1, and KIR2DL3 compared to control, while NKG2D and CD11b expression predicted control. CD38, NKG2A, and CD244 expression are also predictive of pregnancy when compared with post-partum conditions, while several markers including KIRs predicted the post-partum state ( Figure 3B). 
NKp46 predicted the post-partum state among CD56dim NK cells in the validation cohort (Figure 3B). Manual gating confirmed the results of the GLM for this cohort (Figures S7, S8). LDA performed on these data showed that CD38 and NKG2A best explained the separation between CD56dim NK cells from pregnant women and those from control and post-partum women in the validation cohort (Figure 3C). For CD56bright NK cells, CD11b, CD38, LILRB1, CD25, KIR2DL3, NKG2A, and NKG2C are predictive of pregnancy, while several markers predict the post-partum state (Figures 3D,E). These data were confirmed by manual gating (Figures S9, S10). LDA showed that CD38, NKp30, CD94, and CD244 contribute most to the separation of CD56bright NK cells from pregnant women compared to controls and post-partum women (Figure 3F). Several markers differed in their predictions between the discovery and validation cohorts. For instance, NKG2C was predictive of pregnancy in comparison to control among both CD56dim and CD56bright NK cells in the validation cohort, but not in the discovery cohort. This raises the possibility that there are differences in CMV status between cohorts driving the effect. Unfortunately, CMV serologies were not available; however, there were no significant differences in the frequency of "adaptive" NKG2C+CD57+ NK cells between the control, pregnant, or post-partum women in either cohort, making it less likely that differences in CMV status were driving the differences in NK cell phenotype (Figure S11). Overall, the most consistent finding in pregnancy is the increased expression of CD38 on both CD56dim and CD56bright NK cells. There is significant variation in the expression patterns of activating and inhibitory NK cell receptors during pregnancy, but pregnancy is associated with a higher activation status and enhanced CD38 expression.

Co-expression of CD38 and NKp46

As the most consistently observed difference was enhanced expression of CD38 and NKp46 on CD56dim NK cells during pregnancy, we examined the frequency of CD56dim NK cells co-expressing these markers (Figure 4). CD38 and NKp46 were co-expressed on a greater frequency of NK cells in both the discovery (Figures 4A,B) and validation cohorts during pregnancy (Figures 4C,D). In the discovery cohort, the frequency of CD38high NKp46+ NK cells returned to levels found in controls during the post-partum period, but in the validation cohort, the frequency of CD38high NKp46+ NK cells remained high in the post-partum period. There was no significant association between the frequency of CD38high NKp46+ NK cells and "adaptive" NKG2C+CD57+ NK cells (Figure S11).

DISCUSSION

During pregnancy, the maternal immune system is engaged in a fine balance: tolerance is required to preserve the fetus, while defenses must be maintained to protect mother and baby from microbial challenges. NK cells play a critical role in this balance, as their job is to patrol the body for "altered self" (31). NK cell activity had been thought to be suppressed during pregnancy to protect the fetus, but recent studies have suggested a more nuanced view (2). NK cells from pregnant women display diminished responses to stimulation with cytokines and phorbol myristate acetate and ionomycin, yet NK cell responses to influenza-infected cells are enhanced (12-14, 32). Here we show that both CD56dim and CD56bright NK cell subsets have enhanced responses to both the influenza virus and to cancer cells, indicating a cell-intrinsic enhancement in their response to threats.
Profiling CD56dim and CD56bright NK cells from pregnant and non-pregnant women showed that during pregnancy, both subsets are characterized by increased expression of the activation marker CD38. CD38 is expressed on a large proportion of NK cells even in non-pregnant individuals and is significantly increased in cell surface density during pregnancy. CD56dim NK cells also demonstrate increased expression of the activating receptor NKp46 during pregnancy (though it is even higher in the post-partum period in one cohort); this receptor may play a role in recognition of influenza-infected cells (33, 34). These observations indicate that NK cells have an enhanced expression of receptors that mark NK cell activation and contribute to the response to influenza virus and cancer cells.

Pregnant women are significantly more likely to suffer adverse consequences from influenza infection than is the general population. During the 1918 influenza pandemic, the case fatality rate for influenza infection was estimated to be 27-75% among pregnant women but only 2-3% among the general population (35). Even with improved supportive care, the case fatality rate among pregnant women was twice that of the general population during the 2009 pandemic (36). Thus, an understanding of the mechanisms driving this enhanced susceptibility to influenza infection during pregnancy represents an important challenge for the scientific community. The recruitment of peripheral NK cells into the lungs represents one of the first lines of defense following influenza infection (37). Though isolated NK cells stimulated with cytokines or chemicals have suppressed responses during pregnancy, our data here confirm earlier findings that NK cell responses to autologous influenza-infected cells are enhanced during pregnancy (12). This enhanced responsiveness could be deleterious to lung integrity and drive pathogenesis. Consistent with this idea, Kim et al. demonstrated that pregnant mice infected by influenza virus have increased lung inflammation and damage compared to non-pregnant mice (38). Further, Littauer et al. suggested that innate immune responses play a role in the initiation of pregnancy complications such as preterm birth and stillbirth following influenza virus infection (5). Finally, the idea that enhanced NK cell responses could be detrimental in pregnant women is consistent with observations that hyperinflammatory responses are a driving force behind severe influenza disease in humans (39-41).

To deepen our understanding of the effect of pregnancy on NK cell responses, we turned to mass cytometry to profile the expression of NK cell surface receptors. We were surprised to discover that both CD56dim and CD56bright NK cell subsets had a consistent and significant increase in CD38 expression during pregnancy compared to non-pregnant and post-partum samples. While CD38 is commonly viewed as an activation marker on T cells, it is more highly expressed on NK cells and has several important functions. First, CD38 confers lymphocytes with the ability to adhere to endothelial cells through its binding to CD31, a necessary step in extravasation. CD38 also functions as an ectoenzyme, converting extracellular NAD+ to cADPR through its cyclase activity or cADPR to adenosine diphosphate ribose (ADPR) through its hydrolase activity (42).
These molecules in turn can diffuse into the cell and promote its activation by driving intracellular calcium increase, phosphorylation of signaling molecules, production of cytokines, and vesicular transport (43). CD38 crosslinking can enhance the cytotoxic activity of cytokine-activated NK cells (44-46) and plays a role in immune synapse formation in T cells (47) and NK cells (Le Gars et al., unpublished data). Thus, this increased CD38 expression during pregnancy might explain the enhanced responses of NK cells to influenza and tumor cells. Interestingly, decidual NK cells express high levels of CD38 compared to peripheral NK cells, yet their origin is still unclear (48). It has been proposed that subsets of NK cells can migrate from the maternal blood to the decidua and acquire the unique features of decidual NK cells upon exposure to the decidual environment (49, 50). Our data suggest that the overall environment during pregnancy could enhance CD38 expression. Several studies suggest that KIR2DL4 could play a significant role in regulating IFN-γ production by decidual NK cells (51-53). Further, an NK cell population found in repeated pregnancies, which has a unique transcriptome and epigenetic signature, is characterized by high expression of the receptors NKG2C and LILRB1 (54). This NK cell population has open chromatin around the enhancers of the IFNG and VEGF genes, which leads to an increased production of IFN-γ and VEGF upon activation. This is consistent with our finding that NKG2C and LILRB1 expression is increased in our validation cohort, and could explain the increased activation of peripheral NK cells upon encounter with infected or tumor cells during pregnancy.

Another interesting finding is the consistent increased expression of NKp46 on CD56dim NK cells during pregnancy. Intriguingly, in the validation cohort, NKp46 levels were even higher on CD56dim NK cells during the post-partum period. NKp46 has been shown to contribute to NK cell influenza virus responses through binding of influenza hemagglutinin (34). Signaling mediated by NKp46 following influenza sensing leads to the production of IFN-γ (33, 55). Therefore, an increased expression of NKp46 during pregnancy could make NK cells more responsive to influenza virus. Further, elevated expression of NKp46 facilitates the control of lung cancer in mice (56), and NKp46 alteration is associated with tumor progression in human gastric cancer (57). Thus, the increased expression of NKp46 on CD56dim NK cells, together with CD38, could explain the enhanced response to cancer cells during pregnancy.

Two factors limited our ability to directly link the enhanced expression of CD38 and NKp46 to NK cell hyperresponsiveness during pregnancy. First, we did not have sufficient PBMC samples from pregnant women to perform blocking experiments. Second, even with enough material, CD38 and NKp46 are expressed on NK cells from non-pregnant women as well, albeit at lower levels; thus blocking would be expected to diminish responses in both pregnant and non-pregnant women. Several markers differed in their expression pattern during pregnancy in only one cohort, and there was significant variation in the expression patterns of some markers between cohorts. For instance, NKG2D was predictive of pregnancy in the discovery cohort and predictive of post-partum/control in the validation cohort. This may reflect the substantial differences in NK cell phenotype between individuals.
In earlier work we noted that NK cell receptor expression profiles, particularly those of activating receptors, differed dramatically even between identical twins and varied with maturation status, suggesting that these expression patterns are influenced by the environment (20, 26). This high variation between individuals may also explain our failure to observe consistent pregnancy-related changes in NKp30 or NKp44 expression. Other changes associated with pregnancy, including expression patterns of LILRB1, KIR3DL2, and KIR2DL5, were only evaluated in the validation cohort and warrant follow-up in future studies. Changes in NKG2C expression observed on CD56dim NK cells between the pregnant and control subjects could reflect differences in CMV status between the cohorts in the cross-sectional analyses; unfortunately, CMV status is not known for the cohorts. An additional feature that was observed in the discovery cohort, but unfortunately not evaluated in the validation cohort, was decreased expression of CXCR3 on CD56bright NK cells during pregnancy. CXCR3, through binding of its ligand IP-10, is an important receptor responsible for the recruitment of NK cells to the site of infection or inflammation. The CXCR3/IP-10 axis has been shown to exacerbate acute respiratory distress syndrome (ARDS) through an increased systemic presence of IP-10 (58). Thus, a decreased level of CXCR3 on NK cells fails to explain their enhanced responses during pregnancy, but could represent a protective mechanism to avoid excessive recruitment of CD56bright NK cells to the lungs of influenza-infected pregnant women and to restrain lung damage.

Why does NK cell phenotype undergo such changes during pregnancy? The answer remains unclear. The presence of fetal antigens in maternal blood could explain the increased activation state of NK cells. Monocytes and dendritic cells exhibit a proinflammatory phenotype during pregnancy (2, 11, 14), and this could be in part due to parental antigens present in the fetus. In turn, monocytes and plasmacytoid dendritic cells (pDCs) could produce several cytokines, such as IL-15, IL-18, or type I IFN, to promote increased NK cell receptor expression and activate NK cells (59). Another possibility to explain the observed phenotypic changes of NK cells during pregnancy is hormonal variation. These fluctuations could promote transcriptomic and epigenetic modifications driving alteration of NK cell phenotype and response to influenza virus and tumor cells. However, several studies suggest that progesterone and estrogen dampen NK cell cytotoxic activities (60, 61). A deep analysis of the transcriptomic and epigenetic landscape of NK cells during pregnancy could lead to a better understanding of these NK cell changes.

There are several limitations of our study, including the fact that our mass cytometry panels differed between the two cohorts and remain limited to ∼40 markers. Thus, we may have excluded other molecules involved in NK cell immune responses during pregnancy, including critical NK cell surface molecules such as DNAM-1, TIGIT, and Siglec-7. We also did not follow up on other differences that were seen in only one cohort. Further, here we studied peripheral blood NK cells and were not able to sample lung-resident NK cells or uterine NK cells. Finally, we had limited data reflecting the history of the pregnant and control women in terms of their prior vaccination status, prior influenza infection status, cigarette and drug use, and other factors.
We cannot exclude that unmeasured factors could influence the NK cell phenotype and the quality of the NK cell responses to influenza and cancer cells. Here, our goal was to refine current understanding of NK cell biology and activity in the context of pregnancy and influenza virus infection. Our work reveals enhanced activity of both CD56dim and CD56bright NK cell subsets against influenza-infected cells and tumor cells during pregnancy. These enhanced responses are associated with a more robust expression of CD38, a receptor that plays a role in activation and cytotoxicity, and NKp46, a receptor associated with a better response to influenza virus and certain cancers. Together, our data provide a more complete view of the immune changes mediated by pregnancy and enhance our understanding of the susceptibility of pregnant women to influenza virus.

DATA AVAILABILITY STATEMENT

Mass cytometry data supporting this publication are available at ImmPort (https://www.immport.org) under study accession SDY1537.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Stanford University Institutional Review Board. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

AUTHOR CONTRIBUTIONS

ML, AK, NB, and CB designed experiments. ML, AK, NB, and CS analyzed the data. LM, SS-O, and PK collaborated and provided advice in the analysis of the data. MD, CD, GS, and NA coordinated and provided human samples. ML and CB wrote the manuscript. All authors contributed revisions and edits.
Theta variation and spatiotemporal scaling along the septotemporal axis of the hippocampus

Hippocampal theta has been related to locomotor speed, attention, anxiety, sensorimotor integration and memory among other emergent phenomena. One difficulty in understanding the function of theta is that the hippocampus (HPC) modulates voluntary behavior at the same time that it processes sensory input. Both functions are correlated with characteristic changes in theta indices. The current review highlights a series of studies examining theta local field potential (LFP) signals across the septotemporal or longitudinal axis of the HPC. While the theta signal is coherent throughout the entirety of the HPC, the amplitude, but not the frequency, of theta varies significantly across its three-dimensional expanse. We suggest that the theta signal offers a rich vein of information about how distributed neuronal ensembles support emergent function. Further, we speculate that emergent function across the long axis varies with respect to spatiotemporal scale. Thus, septal HPC processes details of the proximal spatiotemporal environment while more temporal aspects process larger spaces and wider time-scales. The degree to which emergent functions are supported by the synchronization of theta across the septotemporal axis is an open question. Our working model is that theta synchrony serves to bind ensembles representing varying resolutions of spatiotemporal information at interdependent septotemporal areas of the HPC. Such synchrony and cooperative interactions along the septotemporal axis likely support memory formation and subsequent consolidation and retrieval.

Introduction

You remember the last moment of experience, the last few moments and a variable stream of experience that can extend minutes and hours into the past. Mammals, whether finding food or avoiding becoming food, can remember paths and events that vary in spatiotemporal scale. Thus, you are reading this manuscript on-line, coffee in hand, having sat down at your office computer ten to twenty minutes ago, after having a tense discussion with a colleague in the hallway. Experimental analyses of neurobiological correlates tend to focus on the instantaneous response of the nervous system to sensory and motor events. On the other hand, memory for events occurring over time periods extending minutes, hours or days is dependent upon hippocampal neurobiology. We speculate that emergent function across the long axis varies with respect to spatiotemporal scale. Thus, the septal hippocampus (HPC) represents details of the immediate sensory environment (e.g., right here, right now and the sequence of words in the last sentence), while the temporal HPC represents larger spatial features and longer temporal contexts (e.g., accessing the manuscript, sitting down in your chair and the tense hallway discussion). The reader is also referred to Komorowski et al. (2013); Evensmoen et al. (2015), as well as Wolbers and Wiener (2014) for related discussions of variation across the long axis and variation in spatiotemporal scaling. Historically, significant emphasis has been placed on examining the functionality of distinct hippocampal (HPC) subregions within the tri-synaptic circuit (dentate gyrus (DG) > CA3 > CA1) rather than functional differences across the areal or longitudinal expanse of the HPC.
A variety of behavioral studies based on lesion data in rodents and, more recently, neuroimaging data in humans support functional differentiation of hippocampal circuits along the long axis (Hughes, 1965; Moser et al., 1995; Strange et al., 1999; de Hoz and Martin, 2014; for reviews see Bannerman et al., 2004; Ta et al., 2012). How segregated are the circuits and functions of different portions of the long axis? The septotemporal axis of the HPC can be subdivided into septal (dorsal), intermediate and temporal (ventral) portions based on variation in entorhinal inputs (Dolorfo and Amaral, 1998a,b), subcortical projections (Risold and Swanson, 1996) and gene expression (Dong et al., 2009; Fanselow and Dong, 2010; see also Strange et al., 2014 for review). Note the septal HPC in rodents corresponds to the posterior HPC in humans and other primates, while the temporal HPC corresponds to the anterior HPC. The septal portion is generally thought to play a dominant role in spatial information processing, while the temporal portion plays a greater role in emotion/motivation (see Bannerman et al., 2004 or Fanselow and Dong, 2010 for reviews). Based on subcortical projections to the lateral septum and relays to hypothalamic nuclei, Risold and Swanson (1996) suggested that the septal, intermediate and temporal HPC were differentially involved in guiding different aspects of motivated behavior: ongoing spatial navigation, social and reproductive, and ingestive behavior, respectively. While there is clear anatomical and functional differentiation across the long axis, the details of these differences, and under what conditions, if ever, there are cooperative interactions across the long axis, remain open questions.

The theta rhythm is a 6--12 Hz oscillation in the local field potential (LFP) signal that is generated by synchronous synaptic inputs bombarding the somatodendritic field of HPC and entorhinal cortical neurons. The elegant laminar organization of somatodendritic fields within the HPC and the laminar organization of axonal inputs provides a high degree of spatial discrimination to HPC LFP signals. This unique window has in fact helped define temporal structure (e.g., theta, gamma, sharp wave and high-frequency rhythms) in ensemble organization within the brain (Buzsáki, 2006). At the micro-level, the theta rhythm allows for the integration and segregation of individual neuronal elements into distributed cell assemblies (Buzsáki and Chrobak, 1995). At a more macro-level, analysis of areal and laminar variation in the theta LFP signal reveals functional connectivity and emergent function in a manner similar to analyses of electroencephalographic (EEG) and blood-oxygen-level-dependent (BOLD) signals. Thus, synchrony in theta phase across regions likely links brain networks into ensemble interactions supporting emergent function (e.g., Remondes and Wilson, 2013). Our laboratory has been examining septotemporal variation in the theta, as well as the gamma, signal with respect to self-motion, novelty and experience (Sabolek et al., 2009; Hinman et al., 2011, 2013; Penley et al., 2012, 2013; Long et al., 2014a,b). These studies illustrate septotemporal variation in theta amplitude and frequency in relation to sensorimotor action and experience (e.g., locomotor speed/acceleration), the current sensory environment as well as past experience.
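For readers unfamiliar with how a theta-band signal is typically isolated for the analyses discussed in the remainder of this review, the sketch below shows one conventional approach: band-pass filtering the LFP in the theta range and taking the Hilbert envelope as a moment-to-moment amplitude estimate. The sampling rate, band edges and synthetic trace are illustrative placeholders, not parameters from the studies reviewed here.

```python
# Minimal sketch: extract a theta-band (6-12 Hz) amplitude envelope from an LFP trace.
# The synthetic signal and parameters below are illustrative placeholders only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                          # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)         # 20 s of data
# Synthetic "LFP": 8 Hz theta with drifting amplitude plus broadband noise
lfp = (1.0 + 0.5 * np.sin(2 * np.pi * 0.1 * t)) * np.sin(2 * np.pi * 8 * t) \
      + 0.5 * np.random.randn(t.size)

# Zero-phase band-pass filter in the theta band
b, a = butter(3, [6 / (fs / 2), 12 / (fs / 2)], btype="band")
theta = filtfilt(b, a, lfp)

# Instantaneous amplitude (envelope) and frequency via the analytic signal
analytic = hilbert(theta)
theta_amp = np.abs(analytic)
theta_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

print(f"mean theta amplitude: {theta_amp.mean():.3f}")
print(f"mean instantaneous frequency: {theta_freq.mean():.2f} Hz")
```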
Hippocampal-Entorhinal Anatomy and Interactions

The basic anatomy and physiology of the HPC is highly conserved across mammals and a number of excellent reviews describe the details of this organization (Amaral and Witter, 1989; Lavenex and Amaral, 2000; Strange et al., 2014). We emphasize three features of this anatomy. First, as noted above, the HPC has a highly laminar organization that allows for insight into the temporal organization of synchronous synaptic input. Second, there is a topographic organization of entorhinal cortex (EC) inputs to the HPC that maps rostrocaudal-oriented bands or strips of EC neurons to different septotemporal levels of the HPC (Dolorfo and Amaral, 1998a; Chrobak and Amaral, 2007). Third, intrinsic entorhinal associational connections within the bands have the potential to integrate multimodal associative inputs (e.g., visuospatial, auditory, olfactory, self-motion) distributed across the rostrocaudal (anterior-posterior) extent of the EC bands (see Figure 1; Insausti et al., 1987; Witter et al., 1989; Suzuki and Amaral, 1994a; Burwell and Amaral, 1998; Dolorfo and Amaral, 1998a,b; Burwell, 2000; Lavenex and Amaral, 2000; Chrobak and Amaral, 2007; Kerr et al., 2007).

The HPC has a highly laminar organization that allows for insight into the temporal organization of synchronous synaptic input. The principal cell fields, including the CA1 and CA3 pyramidal neurons and dentate granule cells as well as associated GABAergic basket cells, are densely packed in soldier-like fashion, creating the distinctive curvilinear cell layers of regions CA1 and CA3 and the sharp-V shape of the granule cell layer (DG). The dendritic fields of these neurons are arranged in relatively narrow tangents oriented roughly ninety degrees from the cell layers. The intrinsic intra-hippocampal connections (e.g., mossy cell input to the granule cells, granule cell input to CA3 and CA3 input to CA1) synapse in ordered fashion along the length of the dendritic field of their targets. Similarly, EC layer 2 input to the DG and CA3 and EC layer 3 input to CA1 synapse in an ordered fashion at different somatodendritic locations from the intrahippocampal inputs (see Amaral and Witter, 1989 for detailed description). This ordered architecture allows for spatially unique current flow profiles and isolation of laminar-specific changes in LFP signals (see Bragin et al., 1995; Csicsvari et al., 2003; Montgomery et al., 2009).

FIGURE 1 | The topography of EC to HPC projections. (A) Distinct areas of the EC provide afferents to the septal 50% (red), the midseptotemporal 25% (blue) and the temporal 25% (yellow) of the DG (left) and CA1 (right). EC projections from layer 2 to the DG and CA3, as well as layer 3 to CA1, exhibit a similar topography. Bands or zones of neurons across the entire rostrocaudal extent of the EC innervate progressively more temporal DG, CA3 and CA1 neurons, starting from the caudolateral extreme of the EC toward the more medial aspects of the EC.

As noted, the septotemporal axis of the HPC can be subdivided into septal (dorsal), intermediate and temporal (ventral) portions based minimally on variation in entorhinal inputs (Dolorfo and Amaral, 1998a). Neurons within a rostrocaudal band along the dorsolateral and caudal edge of the EC, subjacent to the rhinal sulcus except at the most caudal extreme, innervate the septal HPC.
Neurons within rostrocaudal bands located more medially innervate progressively more temporal aspects of the HPC (Steward and Scoville, 1976; Wyss, 1981; Ruth et al., 1988; Witter et al., 1989; Dolorfo and Amaral, 1998a; see Canto et al., 2008 for review). This is true for EC layer 2 projections to the DG and CA3 as well as layer 3 projections to CA1 (see Figure 1A). It is important to note that the rostrocaudal bands or strips are arranged from the most lateral EC to the medial aspect of the EC, but that this mediolateral orientation is not equivalent to the cytoarchitectonic distinctions between the medial (MEC) and lateral (LEC) EC. The rostrocaudal strips are roughly orthogonal to the MEC-LEC boundaries and thus both MEC and LEC neurons contribute significant input to all septotemporal levels of the HPC. Figure 1B illustrates two retrograde-labeling cases where relatively large tracer injections in septal DG (Figure 1B, left) and septal CA1 (Figure 1B, right) label a relatively narrow sliver of EC neurons that extends across the entire caudolateral boundary of the EC, including both MEC and LEC neurons. Dolorfo and Amaral (1998a) indicated that the bands, while ''not entirely segregate'', exhibited ''relatively little overlap'' and suggested that ''different portions of the entorhinal-hippocampal circuit are capable of semiautonomous information processing''. In addition to describing the topographic organization of rostrocaudally-projecting EC bands to the HPC, Dolorfo and Amaral (1998a) described a rich network of intrinsic associational connections that could link the neurons within each band. Thus, retrograde and anterograde tracers injected anywhere along the rostrocaudal extent of each EC band indicated horizontal associational connections across the entire rostrocaudal extent of each band, originating from both superficial (layer 2--3) and deep (V-VI) neurons. The reader is referred to the elegant illustrations in the original publications (Dolorfo and Amaral, 1998a,b) as well as Chrobak and Amaral (2007) for a description of the bands in the macaque. Currently, there is limited additional information about the details of connectivity and physiological interaction across the rostrocaudal extent of each band. Thus, how, when and if grid cell regions within the caudal extent of the lateral band in the MEA interact with functional modules located in the more rostral aspects of each band has not been addressed. It is possible that the long-range horizontal connections within the EC serve to temporally orchestrate the discharge of neurons within discrete functional modules, rather than integrate associative information across different functional modules. In contrast to the limited information about integration across the EC, a larger number of studies have highlighted interlaminar and intralaminar interactions within focal regions of the EC, including detailed descriptions of the dorsoventral aspect of the caudal MEC. Neurophysiological analyses have highlighted MEA and LEA differences (Hargreaves et al., 2005; Deshmukh et al., 2010; Knierim et al., 2013), interlaminar (e.g., deep layer 5 influences on superficial layer 3 and layer 2 neurons; Kloosterman et al., 2003; Ma et al., 2008) and focal intralaminar (e.g., layer 2 to layer 2) interactions, most prominently in the MEA (Beed et al., 2010; see Burgalossi and Brecht, 2014 for review).
A general finding is that there is greater connectivity among layer 3 and layer 5 neurons than among layer 2 pyramidal or stellate cells (see Dhillon and Jones, 2000; Kumar et al., 2007; Ma et al., 2008; Couey et al., 2013; Pastoll et al., 2013). It is important to appreciate that there are two types of principal cells in layer 2, stellate cells and pyramidal neurons, and these distinct types may differentially contribute to local and distant patterns of horizontal, inter-entorhinal connections. Specifically, stellate neurons appear to lack monosynaptic connections with other stellate cells, at least locally, although they may contribute to long-range horizontal interactions via disynaptic connections to local and distant GABAergic neurons; the reader is referred to anatomical and physiological descriptions by Klink and Alonso (1997) and Buckmaster (2014), and more conceptually to Sasaki et al. (2014) for additional discussion. Burgalossi and Brecht (2014) have recently provided a fairly complete and engaging review highlighting the focal modularity and interconnectivity of the dorsoventral aspect of the caudal MEC. Currently, knowledge of the neurophysiological interactions and a detailed anatomical description of inter-entorhinal interactions across the rostrocaudal associational connections is lacking. Nonetheless, we speculate that there is integration of information across the inter-entorhinal bands and that further study is necessary to appreciate these long-range horizontal interactions. On this note, recent findings emphasize direct long-range mono-synaptic interactions of CA1 neurons across the septotemporal axis (Yang et al., 2014) despite the lack of direct focal interactions among CA1 neurons (Deuchars and Thomson, 1996). The rostrocaudal associational connections are largely orthogonal to the distribution of entorhinal inputs from neocortical associative cortices, including the prominent perirhinal cortex, parahippocampal cortex and amygdalar input to the EC (Suzuki and Amaral, 1994; Insausti et al., 1997; Lavenex and Amaral, 2000; Canto et al., 2008; Mohedano-Moriano et al., 2008; Agster and Burwell, 2013). It is thus likely that these associational connections can integrate across several functional domains defined by both neocortical associative inputs to EC as well as amygdalar inputs. We suggest that inter-entorhinal associative connections within the bands integrate superordinate features of neocortical input, such as the spatial features (e.g., proximal-distal) or the time-scale of temporal integration (Giocomo et al., 2007; Hasselmo et al., 2010). In short, the HPC receives distinct sets of EC input that integrate information arriving to different functional domains organized across the rostrocaudal and mediolateral areal axes of the EC.

The Hippocampal Theta Signal

Hippocampal neurons, entorhinal neurons and multiple subcortical afferents discharge action potentials in phase relation to theta and theta-related gamma in concert. Characteristics of theta and gamma and their relations to emergent function have been elegantly reviewed by multiple authors (Buzsáki et al., 1992; Buzsáki, 2002, 2005; Vertes et al., 2004; Buzsáki and Moser, 2013; Hasselmo and Stern, 2014). Briefly, theta is the relatively slow orchestration of neurons into coordinated ''sentences or paragraphs'' of information on the time-scale of ∼80--200 ms (∼5--12 Hz).
In contrast, more local gamma rhythmicity is the faster orchestration of neurons into focal ensembles or ''letters or words'' within theta sentences or paragraphs. Theta and other brain rhythms allow individual neurons to discharge in slow (e.g., theta) and fast (e.g., gamma) temporal relation to other neurons within well-defined as well as rapidly changing ensembles (Buzsáki and Watson, 2012; Dupret et al., 2013). Mechanistically, theta LFP signals are generated by the summation of relatively synchronous excitatory potentials, rhythmically constrained by inhibitory synaptic potentials, impinging on relatively local but ill-defined regions of somatodendritic space (Green and Arduini, 1954; Petsche et al., 1962; Leung, 1985; Brankack et al., 1993; Bragin et al., 1995; Buzsáki, 2002). The LFP waveform characteristics, such as the frequency and amplitude of the signal, depend on the proportional contribution of multiple afferent sources as well as the intrinsic properties of different neurons. Subtle variation in any one of the inputs, including changes in the timing, can alter synaptic integration and the subsequent current flow contributing to the LFP signal. Similarly, Ang et al. (2005) carefully illustrate how subtle temporal variation in the timing of CA3 input to the distal dendrites of CA1 pyramidal cells and dendritic-targeting GABAergic neurons can amplify or suppress the synaptic currents elicited by EC input. Specifically, input from dendritic-targeting GABAergic neurons driven by CA3 input can maximize or suppress intracellular current flow to subsequent EC input within very narrow (20 ms) time windows. The summation of these currents is the origin of extracellular theta LFPs. Given that virtually all HPC, EC and subcortical afferents discharge in phase relation to the theta dynamic, the LFP signal is relatively coherent throughout the HPC. However, the theta signal varies considerably in shape and amplitude at varying laminar, regional and areal sites in the HPC (see Sabolek et al., 2009). Depending upon the specific location and features of the recording electrodes, researchers can ''listen'' to signals being generated not by one neuron, but by all the neurons within a three-dimensional range. The elegant anatomical organization of the HPC allows distinctive LFP signals to be ''heard'' in the same manner in which microphones distributed across an auditorium or stadium could eavesdrop on and isolate the focal generation of auditory signals from orderly arranged sound sources (e.g., rows of ''speakers''). Current source density calculations can further isolate these signals (see Csicsvari et al., 2003 for an excellent exemplar) within the HPC, but such analyses are typically limited to two-dimensional analyses and often yield relatively similar results to analyses of the LFP signal.

Variation in Theta Inputs

The theta signal varies considerably in the three-dimensional expanse of the HPC. The proximodistal and septotemporal topography of the CA3/mossy fiber input to the DG and CA1, as well as the EC input to the DG, CA3 and CA1, provide the anatomical substrate for rich variation in the amplitude and synchrony of the synaptic inputs that create the theta LFP signal. The ordered dissonance of inputs from these multiple sources provides variation to the rhythmic drumming determined by the theta frequency, which varies from ∼4--12 Hz.
The topographic contribution of subcortical inputs, including the prominent medial septal projection of GABAergic and cholinergic neurons, contributes to the orchestration of the theta signal with regards to both variation in the current (amplitude) and frequency (see Freund and Antal, 1988; Tóth et al., 1993; Lee et al., 1994; Brazhnik and Fox, 1999; Borhegyi et al., 2004; Colom et al., 2005; Manseau et al., 2008). Despite the common understanding that the theta signal is coherent across its laminar and areal axes, the degree of coherence in the amplitude and phase of the signal varies on a moment-to-moment basis, reflecting the transient variation in the synchrony of theta-generating inputs impinging on the dendritic field structure of distinct populations of neurons. Thus, the signal can vary considerably within a focal region of the septal HPC, as illustrated in Figure 2, which shows differences in theta amplitude and speed modulation of that amplitude for simultaneously recorded CA1 and DG electrodes.

FIGURE 2 | The theta LFP signal, and its relationship to locomotor speed, varies across regions within the septal HPC. While fairly coherent within the same septotemporal area, the theta signal varies across lamina within a region (e.g., CA1 stratum radiatum vs. stratum oriens; not shown) and across regions (concurrent CA1 (A) vs. DG (B) recordings illustrated). The relationship of theta to speed is typically strongest with rats running on a linear track in a highly stereotyped manner and diminishes with multiple aspects of sensorimotor experience (e.g., turns, sensory events, task and memory demands; see text for details). (A) Illustration evidences theta variation in relation to speed at concurrently recorded CA1 and (B) DG sites for a single 20 s sweep. Theta amplitude to speed traces (left) illustrate the relationship over a concurrent 5 min recording session while the rat navigated on a linear track (see Hinman et al., 2011; Long et al., 2014a). As illustrated, CA1 electrodes typically exhibit a much stronger relation to speed than concurrently recorded DG sites.

Septotemporal Variation in Speed-Theta Indices

Moment-to-moment variation in the power and frequency of the theta signal has been linked to the locomotor speed of the rodent (Vanderwolf, 1969; Teitelbaum and McFarland, 1971; Feder and Ranck, 1973; Whishaw and Vanderwolf, 1973; McFarland et al., 1975). Several laboratories have more recently demonstrated this phenomenon and illustrated that the gamma rhythm also varies as a function of locomotor speed (Rivas et al., 1996; Bouwman et al., 2005; Ahmed and Mehta, 2012; Kemere et al., 2013). While theta varies as a function of locomotor speed, there is a systematic decline in this relationship with distance from the septal pole of the HPC (see Figure 3; Maurer et al., 2005; Hinman et al., 2011; Patel et al., 2012; Long et al., 2014a). This finding, coupled with observations that there is an increase in the size of hippocampal place fields across the long axis (Jung et al., 1994; Kjelstrup et al., 2008), supports the notion that the septal HPC may be encoding the fine details of spatial position while the temporal HPC is tracking larger regions of space (but see Keinath et al., 2014). We speculate that the significance of the speed-theta relationship represents the flow of sensory input across hippocampal circuitry and it appears that speed ''synchronizes'' HPC circuits.
The faster the animal moves, the faster the transitions that need to be made, suggesting that as the animal increases its speed, it has to process incoming information on shorter time-scales in order to organize this information into something meaningful. Alterations in the strength and ''scaling'' of the speed-theta relationship across the septotemporal axis of the HPC could be reflective of HPC processing, with high speed-theta relationships indicating efficient processing on shorter time-scales (septal HPC) and low speed-theta relationships indicating processing on longer time-scales (temporal HPC). In both cases, the speed-theta dynamic can provide clues with regards to network uncertainty and the subsequent stability of the system as it relates to the predictability of future events (Lisman and Redish, 2009). In support of these ideas, Terrazas et al. (2005) attenuated self-motion signals and suggested that the speed signal is crucial for determining the scaling of place representation. When motion signals are attenuated, the HPC responds as if the animal were moving slower through space, traveling across a smaller distance and subsequently making place fields larger. These outcomes were consistent with reductions in neuronal firing rates as well as in theta amplitude gain in response to locomotor speed. Consequently, these results may suggest that alterations in spatial scale across the long axis of the HPC could potentially be described by systematic variation in the gain of a motion signal (Maurer et al., 2005; Terrazas et al., 2005; McNaughton et al., 2006). In contrast to theta amplitude, the frequency of theta is a relatively fixed phenomenon across the length of the septotemporal axis (Hinman et al., 2011; Patel et al., 2012). The frequency across the entire HPC as well as the EC is likely a consequence of the phase-related firing of subcortical inputs, including those from the supramammillary nucleus and the medial septum (MS; Freund and Antal, 1988; King et al., 1998; see Mattis et al., 2014 for recent overview). While cholinergic medial septal cells are thought to contribute to alterations in theta amplitude (Lee et al., 1994; Buzsáki, 2002), they lack the temporal resolution to contribute to rapid changes in theta power (Zhang et al., 2010; Vandecasteele et al., 2014); whereas cells that participate in theta current generation have the ability to produce theta amplitude changes on a finer temporal scale. Further, with segregate MS neurons projecting to different septotemporal extents of the HPC, the MS is well suited for synchronizing or desynchronizing the theta rhythm across the longitudinal axis.

FIGURE 3 | Theta amplitude varies across the septotemporal axis and the relationship to locomotor speed systematically diminishes with distance from the septal pole of the HPC. (A) Septal-most CA1 sites exhibit the strongest relationship to variation in locomotor speed. (B) Sites at the more temporal extremes often exhibit no significant variation in relationship to speed. Notably, the relationship of theta to speed is best observed in rats traversing linear tracks (back and forth) and this relationship typically diminishes with turns, task demands, as well as the presentation of sensory events. Illustration indicates an example where variation in relation to speed is evident at a more septal CA1 stratum lacunosum-moleculare (slm) site as compared to a concurrently recorded midseptotemporal site roughly 5 mm from the septal pole, for a single 20 s sweep. Theta amplitude to speed traces (left) illustrate the relationship over a concurrent 5 min recording session while the rat navigated on a linear track (see Hinman et al., 2011; Long et al., 2014a).
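The speed-theta relationship summarized in Figures 2 and 3 amounts to a per-electrode regression of theta amplitude on running speed, characterized by its slope and r-square. A minimal sketch of such a fit is shown below; the binned speeds and amplitudes are hypothetical placeholders rather than data from the studies reviewed here.

```python
# Minimal sketch: regress theta amplitude on running speed for one electrode.
# Arrays below are hypothetical placeholders (e.g., values averaged in 1 s bins).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
speed = rng.uniform(2, 40, size=300)                 # cm/s, assumed bin-averaged speeds
# Hypothetical septal CA1 site: amplitude increases with speed, plus noise
theta_amp = 0.015 * speed + 1.0 + 0.1 * rng.standard_normal(300)

fit = stats.linregress(speed, theta_amp)
print(f"slope = {fit.slope:.4f} amplitude units per cm/s")
print(f"r^2   = {fit.rvalue**2:.3f}, p = {fit.pvalue:.2e}")
```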
Our observations indicate that the strongest relationship between locomotor speed and theta amplitude is observed in the most septal CA1 electrodes in rats running in a highly stereotyped manner across a linear track. While it has not necessarily been systematically examined, multiple findings, along with our own unpublished observations, evidence that locomotor speed does not always account for a significant portion of the variability in theta amplitude (e.g., Montgomery et al., 2009; Gupta et al., 2012; Jeewajee et al., 2013; Long et al., 2014b; see Figures 4, 5). One may suppose that the strength of the relationship increases with experience or only on linear tracks where running behavior becomes more stereotyped (see Jeewajee et al., 2013). On the other hand, we have observed a systematic decrease in theta power and the speed-theta relationship over repeated sessions within a day (Hinman et al., 2011). These findings certainly illustrate that the relationship between running speed and the amplitude of the theta signal is not fixed and can vary depending upon a number of key variables.

Spatial Novelty and Experience Variation

A wealth of evidence links the HPC to novelty detection, and neurophysiological signals within the HPC typically habituate or decrease with repeated experience (see Vinogradova, 2001; Nyberg, 2005; Kumaran and Maguire, 2009 for reviews; see also Kemere et al., 2013). Our recent findings reveal that rats navigating across a runway in a novel space, as compared to a familiar environment, exhibit an increase in theta power across electrode sites throughout the entire septotemporal extent of the HPC, including sites in DG and CA1 (see Figures 4A,B). Further, there was an increase in theta coherence across septotemporally distant CA1 electrodes, although not across DG electrodes (Figure 4C). These findings suggest that environmental novelty synchronizes and engages the entirety of the septotemporal axis to encode novel spatial experience. We suggest that, within limits, greater power and coherence reflect a numerically larger, and temporally more precise, network engagement in a common process; in this case, the CA1 network engaged in encoding the features of a novel spatial experience.

FIGURE 4 | (C) β-values for coherence across all electrode pairs within septal sites, across septal and mid-septotemporal sites, and across septal and temporal sites, comparing changes in the modified path (red) and novel space (blue) from the familiar condition for DG pairs (right column) and CA1 pairs (left column). For categorical variables (familiar vs. novel space), β-values indicate changes in theta power and coherence independent of alterations in locomotor speed (see Hinman et al., 2011; Penley et al., 2013; see also Long et al., 2014a,b for additional information with regards to β-values for continuous variables).

In contrast to changes in response to novelty, we have also reported that theta amplitude decreases as a function of experience across repeated trials on a linear track. This phenomenon is prominent at more temporal levels of the HPC, with no habituation observed at septal electrodes (see Figure 6 in Hinman et al., 2011).
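The β-values referred to in the Figure 4 legend come from regression models in which condition (e.g., familiar vs. novel space) is entered alongside locomotor speed, so that condition effects on theta power or coherence are estimated independently of speed. The sketch below illustrates that idea for theta-band coherence using a plain ordinary-least-squares fit on hypothetical, synthetic data; it is not the authors' actual bootstrap-based procedure, and all signal parameters are placeholders.

```python
# Minimal sketch: (1) estimate theta-band coherence between two LFP channels for each
# trial, then (2) regress coherence on condition (novel vs. familiar) with locomotor
# speed as a covariate, yielding a speed-adjusted "beta" for the condition effect.
# All signals and values are synthetic placeholders; a plain OLS fit stands in for
# the published bootstrap-based GLM procedure.
import numpy as np
import statsmodels.api as sm
from scipy.signal import coherence

fs = 1000.0
t = np.arange(0, 10, 1 / fs)            # 10 s "trials"
rng = np.random.default_rng(0)

def theta_coherence(novel):
    """Theta-band (6-12 Hz) coherence between two synthetic channels; coupling is
    made slightly stronger in the 'novel' condition purely for illustration."""
    shared = (0.8 if novel else 0.5) * np.sin(2 * np.pi * 8 * t)
    ch1 = shared + rng.standard_normal(t.size)
    ch2 = shared + rng.standard_normal(t.size)
    f, coh = coherence(ch1, ch2, fs=fs, nperseg=2048)
    band = (f >= 6) & (f <= 12)
    return coh[band].mean()

n_trials = 80
novel = rng.integers(0, 2, size=n_trials)             # 0 = familiar, 1 = novel space
speed = rng.uniform(5, 35, size=n_trials)             # mean running speed (cm/s)
coh_vals = np.array([theta_coherence(nv) for nv in novel])

# Speed-adjusted condition effect (beta) on coherence
X = sm.add_constant(np.column_stack([novel, speed]))
fit = sm.OLS(coh_vals, X).fit()
print(f"beta (novel vs. familiar, speed-adjusted): {fit.params[1]:.3f}")
print(f"beta (speed): {fit.params[2]:.5f}")
```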
These habituation data may indicate that the septal HPC is continually engaged by the details of current experience, while more temporal levels encoding larger spatial features and longer time spans may habituate when meaningful information about such spatiotemporal features is not relevant or has no necessary significance to ongoing behavior or cognitive demands. It has been observed that theta in the most temporal aspects of the HPC is minimal in amplitude and intermittent in occurrence (Royer et al., 2010). The latter may reflect the significance (or lack thereof) of information concerning larger spatiotemporal phenomena. Thus, HPC circuits may, or may not, need to maintain information about the experience and spatial context of events that occurred in the past five minutes or hour, if that information is no longer relevant to ongoing cognitive performance or future behavior. In contrast, when navigating for the first time in a new city, novel tourist destination or foraging environment, the spatiotemporal details of both experience and navigation are more relevant to ongoing and future behavior.

Recent data from Patel et al. (2012) investigated a hypothesis first put forth by Lubenov and Siapas (2009) suggesting there exists a 360° phase shift in the theta wave between septal and temporal HPC sites. Data from Patel et al. (2012) indicate not a 360°, but a 180° phase shift between septotemporal sites. These data are important to consider with regards to experience- and behavior-dependent alterations in septotemporal theta indices. Overall, these data suggest that distributed groups of neurons can assimilate or segregate how multimodal neocortical sensory features are perceptually integrated or consolidated into memories across the septotemporal axis of the HPC. Alterations in network interactions may provide a clue into the segregation of information among areal regions of the HPC, where changes in septotemporal synchrony may offer insights into when distributed networks interact. We hypothesize that this phase shift is not a simple product of fairly fixed anatomical constraints, but that the degree of environmental familiarity and task demands may modify the aforementioned phenomenon, suggesting that environmental requirements may serve to segregate or enhance HPC networks along the long axis.

FIGURE 5 | Novel sound presentation decreases the relationship between theta amplitude and locomotor speed in a location-specific manner. Rats were trained to run on a rectangular maze for a food reward. (A) and (B) illustrate a significant decrease in the relationship between speed and theta for septal (A) and temporal (B) electrodes on Arm 2, which was the arm in closest proximity to the sound source (see Long et al., 2014b for additional details). Surprisingly, we observed a unique reduction in the speed to theta relationship only on the arm nearest the sound source, with a habituation of this decreased slope across repeated sound exposures (right).

Further, we propose that theta coordination across the long axis reflects a shifting dynamic between CA3 and EC afferents. Such coordination allows subsets of CA3 neurons to discharge at earlier phases of theta relative to EC neurons. This shift in phase may reflect the efficacy of EC synaptic inputs. Slight alterations in theta frequency, representing the timing of inputs, can bias the response of CA1 neurons to either CA3 or EC input (Ang et al., 2005).
Similarly, Hasselmo et al. (2002) and Hasselmo (2005) describe theta as providing bias to different synaptic inputs at different phases of each theta cycle. Such a biasing mechanism could allow for the preferential encoding of new representations (e.g., novelty; EC input dominating) or, under conditions of familiarity, largely ignoring EC input and responding to CA3 inputs. Although speculative, more likely than not environmental variables will bias synaptic inputs, ultimately resulting in differential degrees of septotemporal theta wave phase shifts.

Sensory Novelty

Hippocampal neurophysiological indices typically increase in relation to any novel stimulus within various stimulus modalities including: auditory cues (Redding, 1967; Parmeggiani and Rapisarda, 1969; Parmeggiani et al., 1982), textures (Itskov et al., 2011), odors (Stäubli, 1999, 2001; Wood et al., 1999; Martin et al., 2007; Komorowski et al., 2009; Gourévitch et al., 2010) and gustatory cues (Ho et al., 2011). Auditory stimuli present an easily modifiable signal that affords considerable novelty and temporal control. In a recent study, we were interested in whether the presentation of a novel sensory stimulus could alter theta indices in a manner similar to navigation in a novel spatial environment. Our findings were different from what might be expected. First, the presentation of a novel acoustic stimulus in a familiar environment modified the speed to theta amplitude relationship in a location-specific manner. Second, the novel sound decreased the slope and r-square of the speed-theta relationship, which habituated (returned to baseline) across repeated sound exposures (Figure 5; see also Long et al., 2014b). A few details on these results are noteworthy as they may offer insight into the dynamics and variation of the hippocampal theta signal. Briefly, rats were trained to run on a rectangular maze for a food reward. Once baseline recordings (no sound) were obtained from well-trained rats in a highly familiar spatial environment, multiple recording sessions were collected in the presence of a chronic sound stimulus presented nearest to one arm of the rectangular maze (see Long et al., 2014b for additional details). The only observed change was a decrease in the slope and r-square of the locomotor speed to theta amplitude relationship during a single ten minute run session (Figure 5), which subsequently habituated across repeated sound exposures. The effects of a novel acoustic stimulus were strikingly different from exposure to a novel spatial environment. Novel space dramatically increased theta power, which may result from an overall novelty-related increase in one or more modulatory inputs (e.g., cholinergic, noradrenergic). The novel acoustic stimuli exerted a fundamentally distinct effect, inducing a sharp reduction in the slope and r-square of the speed to theta amplitude relationship in a location-specific manner, despite the omnipresence of the acoustic stimulus. Ongoing studies are currently exploring the location-specific changes in the theta amplitude to locomotor speed relationship and how this phenomenon varies across the septotemporal axis, as recent data have suggested that the LFP can encode spatial information as robustly as single units (Agarwal et al., 2014). Our review illustrates that there is significant variation in the theta LFP signal across the septotemporal axis and that characteristics of that signal vary differentially with respect to locomotor speed and experience.
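One way to formalize the location-specific change in the speed-theta relationship described above is to test whether the regression slope differs on the arm nearest the sound source relative to the rest of the maze, for example via a speed-by-location interaction term. The sketch below illustrates that idea on hypothetical data; the arm labels, slopes, and noise levels are placeholders, not values from the experiments reviewed here.

```python
# Minimal sketch: test whether the speed-theta slope differs on the arm nearest the
# sound source vs. the other arms, via a speed x location interaction term.
# Data are hypothetical placeholders, not values from the studies reviewed here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
speed = rng.uniform(2, 40, size=n)
near_sound = rng.integers(0, 2, size=n)     # 1 = time bin on the arm nearest the sound
# Hypothetical theta amplitude: the speed slope is shallower on the sound-side arm
slope = np.where(near_sound == 1, 0.005, 0.015)
theta_amp = slope * speed + 1.0 + 0.1 * rng.standard_normal(n)

X = sm.add_constant(np.column_stack([speed, near_sound, speed * near_sound]))
fit = sm.OLS(theta_amp, X).fit()
print(f"baseline speed slope:            {fit.params[1]:.4f}")
print(f"slope change on sound-side arm:  {fit.params[3]:.4f} (interaction term)")
print(f"interaction p-value:             {fit.pvalues[3]:.3g}")
```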
Importantly, habituation or repeated exposure to the same task in the same environment decreases the theta signal most prominently at progressively more temporal HPC sites (Hinman et al., 2011), with no significant habituation at the most septal HPC sites. The latter is consistent with the noted intermittency in hippocampal theta reported by Royer et al. (2010). It appears the mechanisms that generate theta in the more temporal aspects of the HPC diminish, and/or the window for synaptic integration is larger (Marcelin et al., 2012), upon repeated exposure to the same sensory environment or repetition of voluntary motor activity. In addition to changes in theta power, changes in theta synchrony (coherence) can be observed across the septotemporal axis. Novel spatial environments increase theta coherence across the long axis of CA1. Thus, changes in theta synchrony vary predictably with environmental spatial conditions (Penley et al., 2012) as well as with alterations in the pattern of voluntary motor activity (Hinman et al., 2011; Long et al., 2014a). While these findings highlight alterations in the theta LFP with respect to novel spatial phenomena, theta also reflects aspects of motor performance. Thus, changes in theta amplitude can precede locomotor activity by hundreds of milliseconds, perhaps indicating a role for HPC circuits in anticipating or contributing to the selection of future movements (Wyble et al., 2004; Long et al., 2014a). Additionally, Vanderwolf (1971) and Whishaw (1972) demonstrated that cessation of theta activity initiated by termination of locomotion is associated with an onset of small-amplitude irregular activity (see also Gray and Ball, 1970; Kimsey et al., 1974). We have also reported a sharp reduction in theta amplitude during the deceleration that generally occurs at the termination of locomotion (Long et al., 2014a). This observation is consistent with that presented by Wyble et al. (2004), where a sharp decrease in theta power (240--400 ms) precedes the cessation of locomotor activity. Further, Bland and Oddie (2001) suggest that theta, as manifested by hippocampal and associated structures, functions to provide ''voluntary motor systems with continually updated feedback on their performance relative to changing environmental (sensory) conditions''. This general theoretical framework is supported by the underlying anatomy of hippocampal circuits that link multimodal associative cortices to ventral basal ganglia circuits (Mogenson et al., 1980; Sesack and Grace, 2010; Aggleton, 2012). These data suggest that theta amplitude could be more related to future behavioral performance. A number of studies have investigated the relation of hippocampal unit spiking to past and/or future behavioral states. The spatial path represented during spiking activity witnessed in each theta cycle has been the focus of most of these studies (Dragoi and Buzsáki, 2006; Foster and Wilson, 2007; Johnson and Redish, 2007; Maurer et al., 2012) and suggests that hippocampal place cell activity is, at times, more reflective of future behaviors (Frank et al., 2000; Wood et al., 2000; Ferbinteanu and Shapiro, 2003; Ji and Wilson, 2008). In this vein, Schmidt-Hieber and Häusser (2013) demonstrate that theta membrane potential oscillations in the medial EC preceded the onset of running, in some cases by more than one second. Similarly, Gupta et al. (2012) discuss acceleration and deceleration with regards to theta sequences.
They demonstrate that as rats accelerate, the paths represented are shifted forward in space, whereas during deceleration, paths are shifted backward in space. The authors relate this phenomenon to anticipation of reaching desired locations, where paths shifted backward in space may serve to review current experience (Ji and Wilson, 2008). Thus, the suggestion that hippocampal processing represents only "the here and now" is highly doubtful (Yartsev, 2008). Conceivably, this distinction could be a consequence of variation in the spatiotemporal scale of neuronal representations and differences in that scaling across the septotemporal axis. Notably, septal HPC "place" fields are narrowly tuned in the spatial domain and relatively insensitive to motivational variables (hunger, anxiety). In contrast, more temporal neurons have progressively larger place fields and are sensitive to emotional state (Jung et al., 1994; Kjelstrup et al., 2008; Royer et al., 2010). Recent findings also demonstrate "time cells" in the septal HPC (Pastalkova et al., 2008; Kraus et al., 2013), which increase their firing rates during specific seconds across the delay of a delayed conditional discrimination (MacDonald et al., 2011). Thus, the latter may represent the importance of spatiotemporal phenomena across multiple timescales. Conceptual Framework The hippocampal formation (HF; includes HPC) supports episodic memory formation in the mammalian brain. A specific characteristic of episodic memory is that it requires the recruitment of numerous different sensory modalities across multiple spatial and temporal scales. Data indicate functional differentiation across the septotemporal axis of the HPC, with septal HPC supporting "spatial memory" and temporal HPC "emotional memory". Does the septotemporal axis of the HPC act as a unitary structure for successful information processing, or can areal domains segregate depending upon cognitive demands? This distinction could arise because of variations in spatiotemporal scale (e.g., "time" cells; MacDonald et al., 2011; Kraus et al., 2013) across the long axis of the HPC, as mediated by differences in sub-cortical modulation or variations in specific receptor-activated membrane conductances (Moser and Moser, 1998). These intrinsic septotemporal differences, along with anatomical data, support a role for oscillations in the binding of relevant, multi-modal information as it relates to episodic events. With these ideas in mind, high-frequency oscillations (e.g., gamma) reflect "local" network processing and feature binding across brain regions (e.g., primary auditory cortex), while low-frequency oscillations (e.g., the cortical beta rhythm) are dynamically entrained across distributed brain areas (e.g., communication of sensory information to HPC). Synchronization of oscillations may serve as a mechanism to transfer and bind information from large-scale network events operating at behavioral timescales to fast, local events operating at smaller timescales, which are needed for synaptic adaptation (Buzsáki, 2006). The consequence of such activity is the integration and combination of events across multiple spatiotemporal scales (Canolty et al., 2010). Each sensory system (e.g., auditory, visual) generates oscillations at particular frequencies (e.g., beta oscillations) in response to relevant input and stimuli (Haenschel et al., 2000; Kay and Beshel, 2010; Cervenka et al., 2013).
Given the aforementioned suggestions, it is likely that the HPC encodes and retrieves such sensory information via its own, internally generated oscillations (e.g., theta and gamma) and thus provides a framework for how the HPC gains access to episodic information spanning multiple modalities and spatiotemporal scales. In the current review, we emphasize the role of septotemporal theta indices with respect to recent familiar and novel spatial experience, as well as sensory experience. We indicate that septotemporal variation in theta dynamics may arise as a consequence of differences in the representation of spatiotemporal scale. These implications have far-reaching consequences with regard to hippocampal processing and computation. Summary Here, we indicate that HPC theta oscillations can be related to locomotion, to sensory and spatial novelty, and to recent experience. Data from our lab and others support a role for the septotemporal axis in the processing of spatial and sensory novelty, each exerting fundamentally different effects on theta dynamics. Further, novelty-induced alterations in theta indices habituated across repeated exposure to a novel environment and auditory stimulus. These results are broadly consistent with reports of habituation to novel auditory stimuli in auditory cortex (Haenschel et al., 2000). We speculate that factors such as time on the maze and the familiarity of the experience differentially engage septotemporal circuits, supporting the idea that spatiotemporal scale is differentially represented across the long axis. Thus, septal HPC may be continually engaged by the details of the experience (as indicated by constantly high theta power), while more temporal aspects of the hippocampus become "bored" and disengage more readily when meaningful information about such spatiotemporal features is not relevant to current task demands (as indicated by a decrement in theta amplitude). These data support the integration of events (e.g., episodic memory) across modalities, as represented by HPC afferents arising from the EC, which funnels multi-modal sensory information to the HPC, and across spatiotemporal trajectories, as indicated by alterations in the representation of space and time across the long axis.
v3-fos-license
2018-12-07T02:52:42.520Z
2018-01-14T00:00:00.000
55539185
{ "extfieldsofstudy": [ "Business" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.ccsenet.org/journal/index.php/ijbm/article/download/70383/40054", "pdf_hash": "80eb24b22b140ab6a86baab2643fc9cf5b16f912", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43594", "s2fieldsofstudy": [ "Business" ], "sha1": "80eb24b22b140ab6a86baab2643fc9cf5b16f912", "year": 2018 }
pes2o/s2orc
The Influence of Alliance Innovation Network Structure upon Enterprise Innovation: A Case Study of China's Energy-Saving and Environment-Protection Industry The energy-saving and environment-protection industry, an important strategic emerging industry in China, is expected to develop into a pillar industry. In view of global climate change, environmental pollution, resource depletion and the defects and deficiencies of traditional technology, technology and product innovation constitute the lifeline of the energy-saving and environment-protection industry. The alliance networks that enterprises form greatly influence, stimulate and regulate enterprise innovation. A comprehensive analysis of alliance data for China's energy-saving and environment-protection industry from 2000 to 2013 using Ucinet software reveals network structure parameters such as degree, clique number, average path length, clustering coefficient and betweenness centrality, which reflect different types of enterprise networks and the different positions of enterprises within them. A negative binomial regression analysis of enterprise patent data and network structure parameters using Stata software yields several conclusions: the influence of network characteristics on enterprise innovation reaches its maximum in the second year after the end of the window period; innovation accumulation, clustering coefficient and betweenness centrality are related to enterprise innovation; clique number and network density are negatively related to enterprise innovation; and there is an inverted-U relationship between average path length and enterprise innovation. It is suggested that enterprises increase their level of accumulated innovation, appropriately control network density, reduce average path length, and improve betweenness centrality and clustering coefficient, so as to raise the overall level of innovation. Research Background Innovation is the main way for enterprises and countries to gain competitiveness, and the main means for enterprises to obtain excess profits. Many enterprises lack innovation and core technology, and in these respects remain dependent on other countries. Many Chinese enterprises with foreign business are not positioned at the key links of the global industry chain; their product pricing power is weak and the added value of their production is low. As a result, they are subject to the control of other enterprises. In recent years, scholars have begun to explore the influencing factors of innovation from various angles, among which the enterprise alliance cooperation mechanism has also attracted attention.
Innovation Costs and Risks Require Enterprise Alliance Cooperation With the continuous progress of technology, the difficulty and risk of innovation further increase. For enterprises, innovation based solely on internal resources has been unable to meet the needs of market competition. Enterprises gain competitive advantage through technology and innovation, which require large amounts of money, manpower and time; at the same time, enterprises bear huge market risks. These innovation costs and risks drive enterprises to cooperate through alliances. With the deepening of postwar economic globalization, more and more enterprises have begun to form strategic alliances to share resources and increase their mutual innovation advantages. Enterprise strategic alliances help to enhance core competitiveness, realise economies of scale and scope, reduce operational risks and prevent excessive competition (Zhang, 2001). Strategic alliances can improve the competitiveness of enterprises in institutional and organizational terms (Zhou, 2000). Knowledge alliances help enterprises not only to acquire explicit knowledge, but also to learn tacit knowledge and create new capabilities (Shen Zuzhi, 2003), and an enterprise can update or create its core competence through strategic management. The Innovation Behavior of Enterprises Is Stimulated, Influenced and Restricted by the Alliance Innovation Network The various kinds of alliance and cooperation relationships that enterprises form in the process of innovation are collectively called the alliance innovation network. Viewed as a whole, a certain number of enterprise innovation alliances form a sparse alliance innovation network, which stimulates, influences and constrains the innovative behavior of enterprises. More and more scholars have shown that network organization is beneficial to enterprise technological innovation. Network practice, management orientation, external knowledge and network embeddedness influence innovation performance (Chen, 2016, 2017). The dual local and supra-local embeddedness of cluster enterprises has a main effect on the promotion of innovation capability, and network strength, persistence and network diversity have a significant impact on the innovation capability of cluster enterprises (Wei, 2014, 2016). The strength of alliance relations has an inverted-U effect on corporate innovation performance, and the quality of alliance relationships has a positive impact on enterprise innovation performance (Xie, 2016, 2017). In the process of technological innovation within industry alliances, relational embeddedness brings benefits, in which weak ties have a positive impact on technological innovation, while strong ties have a U-shaped effect on technological innovation (Wang, 2017). The alliance network characteristics of small and micro technology enterprises play a significant role in promoting innovation performance (Zhang, 2016). R&D cooperation can significantly promote enterprise innovation, and close alliance network relationships are an important means of promoting radical innovation (Gao, 2016).
Research Methods: Social Network Analysis From the earliest interpersonal network studies to later social network research, multidisciplinary integration has produced a complete body of social network theory, methods and techniques. Granovetter (1984) points out that the purely economic relations assumed by traditional economics do not exist in real life and that economic activities cannot bypass social relations; he opened up a new field of economic sociology and established network analysis methods. In recent years, with the development of computer technology and network analysis software, the analysis of social networks with highly complex structures has gradually become an important research object and method. At the same time, social network and whole-network analysis are gradually permeating the field of economic management, a trend that has been under way worldwide for more than ten years. In recent years, scholars have begun to construct alliance networks, extending the level of analysis from isolated individuals to interrelated networks. Zhao Yan et al. find that the network centrality of enterprises has a lagged positive effect on enterprise innovation (Zhao, 2017); that the small-world properties of strategic alliance networks positively influence innovation performance (Zhao, 2013); that the non-redundant links and aggregation of enterprise alliance network nodes have a potential impact on enterprise innovation (Zhao, 2013); that the structural holes of the alliance network can significantly promote the innovation performance of enterprises (Zhao, 2012); and that factions and knowledge flow have a positive influence on alliance network innovation performance (Zhao, 2016). Research Purpose and Significance The energy-saving and environment-protection industry has become one of the strategic emerging industries whose cultivation and development China is accelerating. The development plan for national strategic emerging industries from 2016 to 2020 proposed focusing on the construction of ecological civilization and the response to climate change, comprehensively promoting the construction of an energy-efficient, advanced environment-protection and resource-recycling industry system, and promoting the energy-saving and environment-protection industry to become a pillar industry. The high-tech equipment researched and developed by energy-saving and environment-protection enterprises should be in accordance with the specific conditions of environmental pollution and national requirements for environmental protection, energy consumption and resource consumption (Note 1), which requires a large amount of innovative activity. At the same time, many strategic alliances are formed between enterprises in order to promote technological cooperation and accelerate innovation. At present, no literature has been published on the relationship between alliance structure and the innovation behavior of enterprises in China's energy-saving and environment-protection industry, so it is necessary to conduct pioneering research on it. Through this study, we aim to determine whether a connection exists between the two and, if so, its specific form (mathematical expression). On this basis, we can draw useful inferences and suggestions for government, industry and enterprises.
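The network structure parameters used in this study (degree, clique number, average path length, clustering coefficient, betweenness centrality, network density) were computed with Ucinet. Purely as an illustration, the same quantities can be obtained from an alliance edge list with Python's networkx; the edge list below is hypothetical and not the study's data:

    import networkx as nx

    # Hypothetical alliance edge list: each pair is one alliance between two firms.
    alliances = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
    G = nx.Graph(alliances)

    degree = dict(G.degree())                       # number of alliances per firm
    clustering = nx.clustering(G)                   # local clustering coefficient per firm
    betweenness = nx.betweenness_centrality(G)      # brokerage position in the network
    density = nx.density(G)                         # realised ties divided by possible ties
    apl = nx.average_shortest_path_length(G)        # average path length (connected graph)
    n_cliques = sum(1 for _ in nx.find_cliques(G))  # number of maximal cliques

    print(degree, clustering, betweenness, density, apl, n_cliques)

The firm-level quantities (degree, clustering, betweenness) and the whole-network quantities (density, average path length, clique number) correspond to the two levels of analysis discussed in the following sections.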
Enterprise Innovation Function An enterprise's innovation ability can be restricted by many factors. We simply divide these factors into two groups: network factors (Song, 2014) and other factors. The enterprise innovation function can then be put forward as a function of these factors, where Y represents the total amount of innovation and i represents the i-th company. The network factors include the following indexes: degree, betweenness centrality, local efficiency, network density, clustering coefficient, average path length, clique number, core value, etc. Other factors include individual factors, such as enterprise scale, enterprise nature, industry characteristics, enterprise culture, enterprise history, R&D investment intensity, management characteristics and enterprise strategy choice, and macro factors, such as international politics, the economy, national law, social development, government policy and so on. This paper focuses on the influence of the alliance network on enterprise innovation, and we use the "accumulation of innovation" variable to measure all other factors as a whole. In the later calculations, we find that "accumulation of innovation" is an important explanatory variable for enterprise innovation; at the same time, a number of network factors also provide partial explanations for enterprise innovation. When the individual factors of the enterprise and the network factors affect innovation linearly, the function can be written as a sum of single-factor terms; cross effects between factors can also be considered. The Nature of the Single-Factor Innovation Function This paper next analyzes the nature of f_j(X_ji) (the influence function of a network factor on enterprise innovation) when j ≠ 0. Network factors affect enterprise innovation mainly through two channels: the availability and convenience of innovation resource acquisition (AC) and the management cost of alliance relationships (Cost) (Zhong Shuhua, 1998). Next, we analyze the properties of this function using the simplest network structure parameter, degree.
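The display equations referred to in this section did not survive extraction. A plausible reconstruction, consistent with the surrounding definitions but not taken verbatim from the source, is:

    Y_i = F(X_{0i}, X_{1i}, \ldots, X_{ni})
        % X_{0i}: accumulation of innovation (all non-network factors); X_{1i}..X_{ni}: network factors
    Y_i = \sum_{j=0}^{n} f_j(X_{ji})
        % linear (additively separable) case
    Y_i = \sum_{j=0}^{n} f_j(X_{ji}) + \sum_{j<k} g_{jk}(X_{ji}, X_{ki})
        % case with cross effects between factors
    f_j(D_i) = AC(D_i) - Cost(D_i)
        % single-factor form for degree D, presumably the "Equation (4)" discussed below

Here AC captures the benefit of easier innovation resource acquisition and Cost the management cost of alliance relationships, so the net single-factor effect is their difference.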
The first part of Equation (4) plays a positive role in promoting enterprise innovation. As alliance relationships increase, the status, importance and centrality of the enterprise in the network increase, and the availability and convenience of innovation resources also increase. The AC function therefore passes through the origin and rises monotonically. When an enterprise has only a few alliances, it sits on the edge of the alliance network, its information is limited by the core enterprises, and it lacks a voice; the benefits of the alliance relationships are not easy to realise. As alliances increase, the enterprise gradually approaches the central position of the network, where it can obtain more information and has a greater say. According to economic theory, when resource inputs gradually increase from zero, the result initially manifests as economies of scale (increasing marginal returns); after reaching a certain critical value, it manifests as diseconomies of scale (decreasing marginal returns). It is therefore assumed that the slope of the AC function increases first and then decreases, eventually approaching zero. That is to say, with the increase in the number of alliances, the unit benefit first rises gradually, then decreases gradually until it disappears. The second part of Equation (4) has a negative influence on enterprise innovation, and its absolute value increases with the number of alliance relationships. The Cost function is also monotonically increasing through the origin, and its slope has no obvious tendency to change; we assume that the slope is fixed (Jin, 2005; Fan, 2003; Gu, 2001). Obviously, when degree reaches its maximum theoretical value Dmax (the enterprise sets up alliances with all other enterprises), the value of the Cost function should be greater than that of the AC function, and no enterprise would maintain the maximum number of alliances. [Several sentences, a figure and its note on the shapes of the AC and Cost functions are garbled in the source and not recoverable.] Data In the data, ND was highly correlated with CN, and CV, D and E were highly correlated with each other. In the later model estimation process, the variables E, ND and CV are therefore excluded. Build Models and Estimate After the establishment of an enterprise alliance, it takes a certain period of time to produce an obvious effect on innovation. In order to investigate the length of the lag period, we establish negative binomial models with different lag stages and use Stata to estimate them. The results are shown in Table 2. Model 6 Is Relatively Optimal On the basis of Model 6, this paper tries adding cross terms, but no significant terms appear. Although the R-squared is small, the F statistic of the equation and the t statistics of the individual coefficients pass the significance tests. We can therefore conclude that the estimated relationship holds. [The estimated equation, Figures 2, 3 and 5 and their notes (alliance network diagrams, clustering and the diffusion of innovation, and an extension taking into account other network structure parameters, enterprise characteristic variables and macro factors) are garbled in the source and not reproduced.]
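The lagged negative binomial models described above were estimated in Stata. Purely as an illustration, an equivalent specification could be written in Python with statsmodels; the file name and column names below are hypothetical, not those of the study's dataset:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical firm-year panel: patent counts plus network structure parameters.
    df = pd.read_csv("alliance_panel.csv")  # columns: firm, year, patents, acc, cc, bc, cn, apl

    # Lag the covariates by two years, mirroring the finding that network effects
    # on innovation peak in the second year after the end of the window period.
    lagged = (df.sort_values(["firm", "year"])
                .groupby("firm")[["acc", "cc", "bc", "cn", "apl"]]
                .shift(2))
    data = pd.concat([df[["patents"]], lagged], axis=1).dropna()

    # Negative binomial regression of patent counts on the lagged network parameters,
    # including a squared average-path-length term for the inverted-U relationship.
    model = smf.glm(
        "patents ~ acc + cc + bc + cn + apl + I(apl ** 2)",
        data=data,
        family=sm.families.NegativeBinomial(),
    ).fit()
    print(model.summary())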
Table 3. Stata negative binomial regression models with cross terms (obs = 706). Note: * p < 0.1, ** p < 0.05, *** p < 0.01; the coefficient estimates themselves are not reproduced in the source. 3.4 We Introduce Square Terms One by One into Model 6, and Find That Only the Square Term of APL Is Significant If the cross terms CC*CN and A*CN are added to the model, the result cannot be calculated.
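The significant square term of APL corresponds to the inverted-U relationship between average path length and enterprise innovation reported in the abstract. A sketch of the implied specification (the coefficient signs are assumed from the inverted-U finding, not read from the table) is:

    \log E[Y_i] = \beta_0 + \beta_1 \,\mathrm{APL}_i + \beta_2 \,\mathrm{APL}_i^{2} + \gamma' Z_i,
        \qquad \beta_1 > 0, \ \beta_2 < 0
    \mathrm{APL}^{*} = -\beta_1 / (2\beta_2)
        % turning point: predicted innovation peaks at this average path length

where Z_i collects the remaining network and control variables. Below APL*, a longer average path length is associated with more innovation; above it, the association turns negative.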
v3-fos-license
2023-01-16T14:52:41.442Z
2022-03-09T00:00:00.000
255840300
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1186/s12903-022-02095-4", "pdf_hash": "ce4cfe64b2a6403dd3d5a7ce7b7aa82a43105c48", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43595", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "sha1": "ce4cfe64b2a6403dd3d5a7ce7b7aa82a43105c48", "year": 2022 }
pes2o/s2orc
Exploring variation of coverage and access to dental care for adults in 11 European countries: a vignette approach Oral health has received increased attention over the past few years, coupled with rising awareness of the impact that limited dental care coverage has on oral health and on general health and well-being. The purpose of the study was to compare the statutory coverage of and access to dental care services for adults in 11 European countries using a vignette approach. We used three patient vignettes to highlight differences in the dimensions of coverage and access to dental care (coverage, cost-sharing and accessibility). The three vignettes describe typical care pathways for patients with the most common oral health conditions (caries, periodontal disease, edentulism). The vignettes were completed by health services researchers knowledgeable on dental care, dentists, or teams consisting of a health systems expert working together with dental specialists. Completed vignettes were received from 11 countries: Bulgaria, Estonia, France, Germany, Republic of Ireland (Ireland), Lithuania, the Netherlands, Poland, Portugal, Slovakia and Sweden. While emergency dental care, tooth extraction and restorative care for acute pain due to carious lesions are covered in most responding countries, root canal treatment, periodontal care and prosthetic restoration often require cost-sharing or are entirely excluded from the benefit basket. Regular dental visits are also limited to one visit per year in many countries. Beyond financial barriers due to out-of-pocket payments, patients may experience very different physical barriers to accessing dental care. The limited availability of contracted dentists (especially in rural areas) and the unequal distribution and lack of specialised dentists are major access barriers to public dental care. According to the results, statutory coverage of dental care varies across European countries, while access barriers are largely similar. Many dental services require substantial cost-sharing in most countries, leading to high out-of-pocket spending. Socioeconomic status is thus a main determinant for access to dental care, but other factors such as geography, age and comorbidities can also inhibit access and affect outcomes. Moreover, coverage in most oral health systems is targeted at treatment and less at preventative oral health care. Introduction Oral diseases, such as dental caries (tooth decay), periodontal disease (gum disease) and edentulism (tooth loss), are persistently among the most prevalent conditions globally, despite being largely preventable [1]. They can have significant consequences, including unremitting pain, sepsis, reduced quality of life, lost school days, disruption to family life, and decreased work productivity. As such, they pose a substantial health and economic burden for individuals, families and society as a whole [2,3]. Routine access to primary oral health care allows for early detection and management of oral diseases and can mitigate their negative impacts [4]. The importance of oral health has received increased attention over the past few years. Both the 74th World Health Assembly Resolution (2021) and The Lancet Issue on Oral Health (2019) have highlighted the need to shift away from the traditional curative approach towards prevention, while also integrating oral health within primary health care systems and universal health coverage programmes [2][3][4][5].
Despite the significant impact of oral health on general health and well-being, many countries restrict dental benefits covered by the statutory health system to specific treatments or age groups [6,7]. Several dental care services either require cost-sharing or are paid fully out-of-pocket. There are large disparities in levels of cost-sharing and types of treatments excluded from the benefit basket across national and even regional jurisdictions. At the same time, there is increasing evidence that limited coverage reduces both financial protection and people's capacity to obtain dental care if they cannot pay for treatment or disposables [8,9]. This leads to inequalities in access to dental health services within and across countries and eventually to inequities in oral health [5,7,10,11]. A 2019 survey on areas of care where access might be a problem in European countries identified oral health as one area with major gaps in coverage and access [12]. Modifications to the benefit basket and how related services are financed and delivered will inevitably be needed in most countries to achieve better coverage and integration of dental care. For such efforts to be successful, it is equally important to identify and understand barriers to accessing dental care services beyond coverage, such as the physical availability and accessibility of the necessary care providers or potential differential experiences due to social determinants of health. The variation in coverage and other access barriers to dental care services across countries, however, remains under-investigated [7,8,12]. Studies focusing on the coverage of dental care for older adults in high-income countries [7,12] have shown that while most countries include some coverage for oral health services in their benefit baskets, important barriers to access exist. To the best of our knowledge, a comprehensive attempt to describe dental care coverage and capture potential access barriers for the general adult population in European countries using a qualitative approach was lacking. Against this backdrop, the aim of this paper is to compare differences in dental care coverage and access for adults in 11 European countries using a vignette approach. The three most frequent oral diseases (dental caries, periodontal disease and edentulism [3]) were chosen as the basis for the vignettes. Together, they amounted to approximately 0.75% of total disability-adjusted life years (DALYs) and 2.2% of years lived with disability (YLDs) globally in 2019 [14]. On the basis of patient pathways for each of these three conditions, we first examine which dental care services are covered under the statutory benefit package, under which conditions and to what extent (i.e. the scale of user charges in the form of cost-sharing or private payments) across those countries. We then compare further barriers to realised access, such as the physical availability of dental care services. This research was carried out as part of the work for the Expert Group on Health System Performance Assessment (HSPA) of the European Commission, aiming to explore the usefulness of the patient vignette approach as a complementary tool for identifying gaps and challenges in access to health care in the context of HSPA [15]. Conceptual framework A vignette is a short description of a person or situation designed to simulate key features of a real-world scenario [16][17][18][19]. A vignette case generally specifies a hypothetical patient's age, gender, medical complaint, and health history.
As a research tool, vignettes are usually presented to relevant professionals to solicit their hypothetical response or behaviour. In the medical literature, vignettes are mostly used to study variations in decision-making processes, including clinical judgments made by health professionals [20,21]. Recently, vignettes have, for example, also been used to investigate the availability and nature of certain types of care such as outpatient mental care [22] and community dementia care [23]. This study focuses on gaps in access during an episode of care that can be compared across countries. Therefore, the vignettes also include a delineation of the recommended care pathway and a list of services that could then be used to benchmark and compare access across countries. To compare coverage and access to the services included in each vignette, we use the framework of the Gaps in Coverage and Access survey [13,24], which explores the three traditional dimensions of coverage (population coverage, service coverage (which benefits are covered) and cost coverage (what proportion of costs is covered)) as well as a fourth dimension, labelled service access. Population coverage was not listed separately for this work, as gaps in statutory health coverage would be picked up under the service coverage dimension. In terms of service access, gaps could result from (i) a lack of physical availability of services, due to long distances to the provider, lack of sufficient statutory/contracted providers, poor quality of services, limited opening hours, waiting times and waiting lists; (ii) a person's inability to obtain necessary care, due to their incapacity to formulate a care request, obtain the care or apply for coverage (and fulfil the necessary requirements) because of their condition or situation (e.g. people with cognitive impairment, mentally ill, homeless), and (lack of) ability to navigate the system (such as being referred from one provider to another); and (iii) the attitude of the provider due to discrimination (on age, gender, race, religious beliefs, sexual orientation, etc.), for instance, leading to denial of care or the inability to accommodate care to the patient's preferences [13]. Furthermore, a list of determinants that could improve or worsen access, including patient characteristics (e.g. age, sex, socioeconomic status, insurance status, legal status, place of residence) as well as other factors (night vs. day treatment protocols), was added to the conceptual framework, and respondents were also asked to provide any other determinants they thought could affect access for the vignette. Participant selection Experts in 11 countries, including Bulgaria, Estonia, France, Germany, Republic of Ireland (Ireland), Lithuania, the Netherlands, Poland, Portugal, Slovakia and Sweden, were invited to participate in the vignette survey. The countries were selected to capture a variation of health systems (i.e., social health insurance vs. tax-financed, multi- vs. single-payer, centralised vs. decentralised) and to ensure geographical distribution. Depending on the country, vignettes were completed either by health services researchers knowledgeable on dental care, dentists, or by teams consisting of a health systems expert working together with dental specialists. Data collection: design of dental care vignettes and survey The vignettes were designed in collaboration with the Department of Oral Diagnostics, Digital Health and Health Services Research at the Charité Medical University in Berlin (Germany).
Each vignette and the corresponding care pathway represent a common, realistic dental problem, with potential treatment options based as much as possible on common practice and international guidelines or recommendations. To shape each vignette, recommendations found in systematic reviews or developed by national, European or international organisations in the field of dentistry were used. Three dental care vignettes were designed that illustrate typical care pathways for adult patients with the most common oral health conditions (caries, periodontal disease, edentulism). Vignette 1 explores coverage and potential access barriers in the treatment of dental caries that can be addressed by both non-restorative and restorative treatment using different materials (e.g. non-restorative: regular application of fluoride, gels, varnishes or sealants, or a combination thereof, resin infiltration; restorative: fillings using dental amalgams or composite resins, crowns) [25][26][27][28]. Vignette 2 focuses on periodontal conditions caused by plaque-induced inflammation of the gingivae and characterised by red swollen tissues and bleeding (gingivitis), with periodontitis resulting in further loss of supporting bone and attachment. Recommended treatment includes patient instruction on daily plaque removal as well as the removal of supra-gingival plaque, calculus and stain (dental cleaning) and sub-gingival deposits (root planing), and control of local plaque-retentive factors [29,30]. The removal of dental calculus, which is part of the scaling and root planing treatment, also presents a very effective (primary and secondary) preventive intervention for periodontal disease. Vignette 3 considers coverage and access challenges for edentulous patients. Edentulous patients have a choice among different rehabilitation options: while complete dentures are widely used, the use of implant-borne replacements is increasing and there is evidence supporting their benefit in minimizing bone resorption. Prosthetic dental work is costly, but different modalities may be more or less affordable to patients [31,32]. Table 1 presents the three vignettes, including relevant services. Each vignette describes the patient, their symptoms and potential care decisions for their clinical situation. The sequence of services corresponds to the usual care pathway, which might not necessarily be the same for all countries and settings. It was expected that the chosen services might not reflect standard practice in some participating countries, and respondents were invited to describe these differences. To collect the data, a survey was constructed which presented each vignette in a separate table outlining all individual services per vignette (Table 1). In addition, for each service, experts were asked to indicate statutory service coverage (which benefits are covered) and cost coverage (what proportion of costs is covered). Moreover, for the access dimension, they were asked to indicate the physical availability of services, a person's ability to obtain care, providers' attitudes and any additional determinants they thought could affect access for each service of the vignettes. The full survey tool is available online (Additional file 1: Table S4).
Data analysis and reporting The information provided by individual country experts in the survey was extracted and summarised in one table per vignette (Additional file 1: Tables S1-S3), exploring each service of the patient pathway by the three dimensions (coverage, cost-sharing, and physical availability/determinants of access) per country. We further synthesised responses (Figs. 1, 2, 3 below) using a traffic light system (green-yellow-red) to visually compare results across countries. These comparative tables build the foundation for the cross-country analysis of coverage. Results on physical availability and determinants of access are broken down in more detail in Table 2 and analysed separately, as access barriers were often similar across the three vignettes. Results Completed vignettes were received from the 11 countries named above between October and December 2020. If answers were unclear, country experts were contacted to provide clarifications. Overall, responses varied in the level of detail provided. Some responses (in particular in Ireland and Sweden) showed the complexity of the coverage system for dental care, indicating a need for further explanation. Dental services in Ireland are delivered through three publicly funded schemes: (i) the Public Dental Service (PDS), which provides emergency and some routine oral healthcare for children under the age of 16 and certain vulnerable groups, (ii) the Dental Treatment Services Scheme (DTSS), which entitles certain adults to some services free of charge, and (iii) discounted dental treatment under the Dental Treatment Benefit Scheme (DTBS) for those who have paid three years of social insurance contributions [33][34][35]. In addition, private dental care is available for patients who pay fully out-of-pocket and claim back fees of up to 20% of the treatment cost for certain non-routine procedures through tax relief [36]. In Sweden, dental care is free up to the age of 23 and all others receive an annual general dental care allowance of between EUR 30 and EUR 60 to encourage dental check-ups and preventive care. People with certain illnesses or conditions (e.g. difficult-to-treat diabetes) receive a special dental care subsidy of EUR 60 every six months. In addition, most dental care in Sweden is subject to a high-cost protection scheme, which aims to protect patients from very high dental care costs. Treatment costs above certain thresholds during a twelve-month period are covered at 50% (for costs between EUR 295 and 1 470) or 85% (above EUR 1 470) of the reference prices. Table 1 Dental care vignettes: patient description and services in the patient pathway. Vignette 1 (Urgent care with root canal and prosthodontic treatment): A 35-year-old patient has not been able to sleep for two nights due to a strong, beating pain in the right-lower jaw. The patient requests an urgent dental appointment. The dentist determines that the patient needs a root-canal treatment to preserve the first lower molar and treat the pain. The patient decides for the root canal treatment and against the alternative of tooth extraction. Following the root canal treatment, reconstruction with composite (filling) material is used until a fixed prosthodontic treatment (crown/onlay) can be placed. Services: emergency consultation with dentist; radiography ((bitewing) X-rays); root canal treatment OR tooth extraction; (interim) reconstruction with white filling material; fixed prosthodontic treatment (crown/onlay). Vignette 2 (Periodontal treatment): A 66-year-old patient with co-morbidities (obesity, diabetes) has frequent discomfort in the upper jaw. After a consultation, chronic periodontitis with generalized level 2 mobility is diagnosed, requiring scaling and root planing, involving periodontal probing and elimination of dental calculus, and frequent follow-up visits to stop disease progression and stabilize bone loss. Services: scheduled visit with the dentist; scaling and root planing (performed by a dentist); periodontal probing and elimination of dental calculus (performed by a dental assistant or hygienist). Vignette 3 (Implant-borne restoration and prosthetic rehabilitation): An edentulous 75-year-old patient received full upper and lower dentures 5 years ago. She feels she has lost significant capacity to chew as the lower prosthesis is poorly retained and gets displaced when speaking or eating. She seeks counseling from her dentist, who recommends two implants on the lower anterior jaw and an overdenture to improve retention. She agrees with this course of treatment and against more sophisticated fixed alternatives. Services: consultation and surgical planning; surgical implantation; prosthetic rehabilitation (new prosthesis or adjustment of the old prosthesis using the implants) OR (partially) fixed dentures.
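As a purely illustrative reading of the Swedish high-cost protection scheme described before Table 1 (assuming that the 50% and 85% rates apply band-wise to the reference price of the treatment), the patient share for a given reference cost could be computed as follows:

    def high_cost_subsidy(reference_cost_eur: float) -> float:
        """Subsidy under the Swedish high-cost protection scheme (illustrative band-wise reading)."""
        subsidy = 0.0
        if reference_cost_eur > 295:
            # 50% of the reference costs between EUR 295 and EUR 1 470
            subsidy += 0.50 * (min(reference_cost_eur, 1470) - 295)
        if reference_cost_eur > 1470:
            # 85% of the reference costs above EUR 1 470
            subsidy += 0.85 * (reference_cost_eur - 1470)
        return subsidy

    # Example: a treatment with a reference price of EUR 2 000
    cost = 2000.0
    subsidy = high_cost_subsidy(cost)
    print(f"subsidy = {subsidy:.2f} EUR, patient pays about {cost - subsidy:.2f} EUR")

Under this reading, a EUR 2 000 treatment would attract a subsidy of roughly EUR 1 038, leaving around EUR 962 to be paid by the patient; the actual scheme rules may differ in detail.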
The Netherlands stands out in the coverage of dental care by complementary voluntary health insurance (VHI). Most dental care services are not publicly covered but are reimbursed in part by VHI plans, which are used by 84% of the population. In France, private insurance also plays an important role in the reimbursement of non-routine dental care services not publicly covered. The following sections summarise results on coverage per vignette, followed by results on service access across vignettes. Vignette 1: Urgent care with root canal and prosthodontic treatment The first vignette explores treatment for acute pain due to caries. Related dental care services are in general covered in most responding countries, except for the Netherlands and Portugal (Fig. 1). Emergency services and radiography are covered in most countries, often with standard cost-sharing, such as in France and Sweden (sometimes covered by complementary VHI), or with restrictions regarding the number of emergency visits and radiographs covered, such as in Ireland, where patients are eligible for one emergency consultation per year only. In Bulgaria, Ireland and Slovakia, emergency consultations [the remainder of this sentence is lost in the source]. [Fig. 1 legend (traffic-light categories): service covered, sometimes limited (e.g. one visit per year), requiring user charges and/or covered without user charges only for some population groups; service not covered and/or almost always paid out-of-pocket.] There is a lot of variation regarding coverage of the treatment alternatives of tooth extraction and root canals. Limited service and cost coverage for tooth extractions can be found in Estonia, where extraction is only covered in case of emergency, and also in France, Lithuania and Sweden, where cost-sharing is required. In Ireland, only DTSS beneficiaries are entitled to tooth extraction. Tooth extractions are overall covered more comprehensively than root canal treatments. Root canal treatment can be excluded from coverage, such as in Bulgaria and Ireland, or be limited to certain parts of the mouth (usually covered for visible teeth, i.e. molar to molar), as in Poland.
In many countries, molar root canal treatment requires substantial cost-sharing, and it can be fully excluded from public coverage for the majority of the population, as in Ireland. Restoration with composite material and prosthodontic treatment are less comprehensively covered overall. In Germany, there is a fixed subsidy of 60% for standard treatment of crowns or onlays, which can be increased if patients are demonstrably consistent about preventive visits. The remaining costs, as well as any difference in costs due to patients choosing superior materials [the remainder of this sentence is lost in the source]. [Traffic-light figure legend as in Fig. 1.] In all other countries, only a fraction of the costs for fixed prosthodontic treatment is covered by the statutory health insurance. In several countries, complementary VHI seems to play an important role in the reimbursement of dental treatments that are not or only partially covered, including prosthodontic treatment. Vignette 2: Chronic periodontal condition The second vignette describes a multimorbid patient with chronic periodontitis who requires scaling and root planing and regular follow-up visits. Regular check-ups with the dentist seem to be less comprehensively covered across countries than the acute visit in Vignette 1. In some countries, the number of dental check-ups is capped at one per year (Bulgaria, Ireland, Slovakia, Poland) or subject to cost-sharing, such as in Estonia and France (Fig. 2). Scaling and root planing are also only partially covered in many countries or limited to a share of teeth (e.g. in Poland). Moreover, the number of planned follow-up visits to stop disease progression and stabilise bone loss is restricted in some countries (Ireland, Poland and Slovakia). Interestingly, there are large variations in coverage of periodontal probing and elimination of dental calculus (which is part of periodontal treatment to prevent disease progression). The latter treatment is usually performed by a dental assistant or dental hygienist. In Germany, with comparatively comprehensive coverage for dental care overall, dental cleanings are not covered by the statutory health insurance, while in Slovakia (which has more limited coverage) the social health insurance covers periodontal probing and elimination of dental calculus. Basic dental hygiene in Slovakia is partly covered by SHI in case patients regularly attend preventive check-ups twice a year. In Ireland, one scale and polish per year is covered up to EUR 42 for those who contributed to social insurance in the last three years (Dental Treatment Benefit Scheme (DTBS)), corresponding to almost half of the population. Some cost-sharing applies in Estonia and Lithuania, while patients in the remaining countries (as in Germany) have to pay fully out-of-pocket for these services. Vignette 3: Coverage of implant-borne restoration and prosthetic rehabilitation across countries The third vignette describes prosthetic treatment for an older, edentulous patient who received full upper and lower dentures five years ago. Overall, the required interventions of prosthetic restoration are less comprehensively covered than the services in Vignettes 1 and 2.
[Fig. 3 note and legend: * The "100% Santé dentaire" reform in France in 2019 increased coverage for removable and fixed prostheses, which are fully reimbursed by the compulsory health insurance by 2021 up to a defined price level. Traffic-light categories: service covered, no or almost no user charges apply; service covered, sometimes limited (e.g. one visit per year), requiring user charges and/or covered without user charges only for some population groups; service not covered and/or almost always paid out-of-pocket.] Coverage gaps exist regarding the requirement for cost-sharing (Fig. 3). While some countries employ financial protection measures to help lower-income individuals procure dentures (e.g. Germany, Ireland, the Netherlands), the OOP costs to be borne by patients can still be substantial. In many countries, coverage of prosthetic rehabilitation or dentures is time-bound, with coverage intervals ranging between three and five years. In Lithuania and Estonia, for example, costs for new prosthetic rehabilitation are covered up to a ceiling of EUR 561 (Lithuania, for pensioners, disabled and cancer patients) and EUR 260 (Estonia) every three years and only if provided by contracted dentists (the exact amount covered can vary by level of bone retention). France expanded coverage of dental prostheses (including bridges, crowns and movable prosthetics) as of 2021. In Germany, surgical implantation is only covered for patients with exceptional medical indications (e.g. jaw deformities). For prosthetic rehabilitation or fixed dentures, the fixed subsidy for dentures applies, covering 60-75% of costs. Overall, implants are not covered by statutory insurance and are fully OOP in most countries. An exception in coverage for prosthetic treatment is the Netherlands, where general dental care is usually excluded from the broad benefit package for adults. The Dutch statutory basic tariff, however, covers the cost of full dentures at a reimbursement rate of 75% for new prostheses and of 90% for the repair of full dentures, with an annual deductible of EUR 385 (this deductible also applies to other health services and has to be paid by adults before the insurer reimburses). An additional fee of EUR 250 per jaw applies, though lower-jaw implants are covered under certain conditions. Service access: physical availability and other determinants The results reported in the three vignettes also show that patients may experience very different kinds of physical barriers in accessing dental care (Table 2). The most important barriers reported in all three vignettes across countries relate to the availability of dental care providers, be that due to a general shortage of professionals contracting with public payers or to regional variation. In Estonia, for example, the number of contracted dentists per capita is very low and represents the major limitation for access. In Ireland, the number of dentists contracted to operate in the public dental scheme is rapidly declining. Almost all countries reported a shortage of dentists, particularly in rural and remote areas as well as in deprived areas, with impacts on waiting times, opening hours (shorter in rural areas) and travel distances. As dentists are primarily located in urban areas, physical access to dental care for patients in rural areas is often
As dentists are primarily located in urban areas, physical access to dental care for patients in rural areas is often Germany Physical availability Lower density in rural areas, potenƟal lack of equipment in older clinics Lower availability of specialists in some areas, regional variaƟon of dental assistants and hygienists Specialist in implantology scarce in rural areas Ireland Physical availability VariaƟon of denƟsts by region and area deprivaƟon: specialist pracƟces generally confined to more urban areas, while only two dental specialƟes are recognised in Ireland (oral surgery and orthodonƟcs); Declining numbers of contracted denƟsts parƟcipaƟng in the DTSS scheme which largely provides care for lower socioeconomic groups. Determinants of access Socioeconomic status; area of residence; access difficulƟes for older adults in rural areas (especially those with mobility issues) and vulnerable groups parƟcularly children and adults in residenƟal care, refugees, asylum seekers, homeless people, and other socially excluded groups resulƟng in long waiƟng lists for general anaestheƟc and other referral services Lithuania Physical Sweden Physical availability WaiƟng Ɵmes and variable opening hours in rural seƫngs, Accessibility issues for paƟents with physical impairments, very low denƟst-to-populaƟon raƟo in remote areas more difficult. This compounds for interventions requiring multiple visits, making waiting times a major access barrier. In Poland, for example, the average waiting time in 2020 was 16 days, but varied from six days to 41 days across regions. Moreover, appropriate technical equipment (e.g. X-ray units) is not equally available across dental practices, necessitating referrals to other providers or laboratories, as reported in Bulgaria. Accessibility issues for people with reduced mobility in smaller and older dental clinics were reported as another access barrier in France, Lithuania and Sweden, with an example of this being dental care facilities lacking ramps or having narrow doors and thus not accessible for wheelchair users. While the majority of physical access barriers were similar across the three vignettes, emergency care (Vignette 1) and more specialised treatment pathways (Vignettes 2 and 3) highlight access barriers specific to specialised services and providers. Emergency dental services and out-of-office hour dental care in general are often only available in large cities in some countries (Vignette 1). The unequal distribution and/or lack of specialised dentists as well as dental hygienists constitute major barriers in many countries. In Ireland, dentists with a special interest in endodontics are generally confined to more urban areas. In Slovakia, the lack of specialists on periodontal conditions results in a low quality of care for these patients (Vignette 2). Lithuania experiences a lack of dental assistants in facilities contracted by the statutory health system. As a result, patients incur OOP costs, as the services of dental assistants are only covered if they are employed in a contracted facility. Moreover, the lack of specialists in rural areas has become a main barrier for access (Vignette 2). For Slovakia, respondents highlighted that stomatology centres are confined to larger cities, creating access barriers for patients requiring implant-based treatments and also in Bulgaria, where very few dentists are experienced in dental implantology as it is a relatively new specialty (Vignette 3). 
The socioeconomic status of patients was reported as the main determinant of access to dental care in nearly all countries. This is particularly pronounced when patients have to pay upfront for services that are reimbursed retrospectively by health insurance or have to cover very high OOP costs. In Lithuania, for example, the high cost of dentures (Vignette 3) implies that the intervention remains unaffordable for low-income groups. Several countries have recognised that, in theory, those with cognitive impairment or mental health conditions might be less able to formulate a care request or understand the different benefits and treatment processes of alternatives, such as getting a root canal vs. an extraction. In some countries, providers might deny care due to financial reasons (related to insurance status or income level). Across all vignettes, most respondents highlighted that patient age can inhibit access and affect outcomes, for instance through the need to travel long distances. Access barriers due to difficulties with formulating the care request may be similarly exacerbated in this patient group, particularly for the third vignette, with patients potentially finding it difficult to understand the benefits of different options and/or to navigate complicated administrative processes that can help with claiming support to cover OOP costs. Other determinants may also impact access. Evidence from Sweden, for example, identified female gender, higher educational levels and native status as drivers for seeking care for chronic conditions: men, less educated people and foreigners are less likely to seek care. Foreigners and the less educated are also less likely to take advantage of cost-sharing mechanisms. The question on the role of provider attitudes was the one most frequently left without adequate responses due to a lack of relevant evidence. However, several countries reported indicative reasoning for motivating factors. Most frequently, care denial was driven by insufficient coverage (either because public coverage tariffs are too low or because patients are deemed unable to cover OOP costs) or insufficient skill on the side of the practitioner (e.g. in working with children, cognitively impaired patients or individuals living with a mental disorder). One country also mentioned dentists refusing care to patients with chronic infectious diseases like hepatitis C and HIV due to the associated precautions. Discussion This vignette study has demonstrated the limited public coverage of several common dental services in many settings. The three vignettes exemplified the considerable variation of service and cost coverage for dental care across the 11 countries. Basic dental care, such as emergency consultations, tooth extraction and X-rays, is covered in most countries without co-payments. In general, tooth extraction might be considered the most affordable choice and is therefore more broadly covered by statutory insurance. However, this largely depends on the location of the tooth. In most cases, tooth loss not only leads to jawbone deterioration, gum disease, poor eating habits or difficulty speaking, but also reduces overall quality of life [37] and requires more expensive treatments to replace the removed teeth. Cost-sharing applies as a rule for most services in the vignettes and is structured very differently across countries.
Cost-sharing may come in the form of co-insurance (such as in France), fixed subsidies (Estonia, Germany and Sweden) or a deductible (the Netherlands, where co-payments also apply for total prostheses). The most significant cost-sharing applies to fixed prosthodontic treatment, where only a fraction of costs is covered by the statutory system, and these options therefore remain unaffordable for many people. In many countries, the number of dental services covered is limited per annum (e.g. dental examinations) or over a defined period of several years (for dental prostheses). The specific teeth covered for some treatments can also be restricted. In most countries, statutory coverage is limited to standard materials; above-standard materials, which ensure high-quality dental care and thus better health outcomes, have to be paid out-of-pocket by the patient. This showcases the generally limited service coverage of dental care compared to other health services. Overall, dental care seems to be subject to more cost-sharing and restrictions than other areas of care. This results in limited financial protection for the costs of oral health care in many countries (see also [13]) and in financial hardship for households, which also affects the use of dental care. When comparing unmet needs for different types of care (medical care or prescribed medicines), dental care is the most frequent type that people forego due to financial reasons. On average, 14% of adults report unmet needs for dental care due to costs in EU countries [38]. Financial protection measures often address the needs of specific population groups like low-income earners or other vulnerable groups (pregnant women, children, patients with serious illness or mental or physical disabilities) [7]. Some financial protection mechanisms also exist for older people, as reported in Estonia and Lithuania, where pensioners receive higher reimbursement for prosthodontic treatments than younger adults. In Sweden, people above the age of 65 as well as individuals 24-29 years old are eligible for a general dental care grant, which is higher than for all other adults [7]. However, even mitigating measures, such as the high-cost protection scheme in Sweden, do not necessarily fully alleviate OOP burdens. For services only provided in the private sector without public coverage, prices are often unregulated (e.g. Poland), and the resulting OOP costs are substantial. In many countries, VHI is common for dental care (e.g. Germany, France, the Netherlands and Portugal), for (full) coverage of services or coverage of cost-sharing obligations. In the Netherlands, VHI reimbursement is capped depending on the insurance policy, incurring additional OOP costs for more expensive treatments. Older patients are particularly threatened with high(er) OOP costs, as teeth increasingly retained into older age are often heavily restored and/or have some degree of advanced periodontal disease [3,25]. There is a large variation of incentives created by service coverage across countries, which can be contradictory. While dental extraction seems to be better covered than tooth-retaining procedures (root canal treatment) in many countries, there are different schemes to incentivise preventive care, such as in Germany or Slovakia. In Slovakia, patients only receive a dental allowance (EUR 100 to 150 per year) towards cost-sharing requirements if they had a dental examination in the previous year.
In Sweden, the general dental care grant intends to encourage adults to regularly visit their dentist for check-ups and preventative care. However, the current potential of preventive therapies in dentistry to improve oral health and contain costs is still underutilised throughout Europe. Countries need to step back from the current treatment-focused approach and create new ways of oral disease prevention and oral health promotion by strengthening the integration of oral health into primary health care [4,5,39,40]. Overall, there is potential for mutual learning from existing incentive schemes that focus on preventative care as well as benefit schemes that cover dental care more comprehensively. In all countries, statutory coverage of dental care does not necessarily imply that people have unrestricted access to dental care services. Many similar barriers limit access to dental care across countries, which relates to the physical availability of care (due to long distance, poor quality, reduced opening hours, waiting times) as well as a person's ability to obtain necessary care or the attitude of the provider. In particular, the limited availability of contracted dentists creates a major access barrier to public dental care in many countries. This is especially detrimental for patients residing in rural areas or less wealthy regions that may not profit from the same density of professionals, specialised clinics or modern equipment as those residing in urban centres. The impact of geographical imbalances of dental care providers highlights the need for a more diversified skill mix among oral health care professionals and improved workforce planning. Another interesting element is the lack of consideration of physical accessibility for people with disabilities in older, more remote facilities (e.g. wheelchair access) and the potential difficulties of patients with cognitive impairment or other types of dependency to understand the benefits and disadvantages of different care options, adhere to treatment plans or navigate the complicated reimbursement system. New policies to improve oral health should take these factors into account in workforce education and capacity planning. Barriers to high-quality care in some countries are also attributable to the lagging establishment of "best practices". In Vignette 3, newer prosthetic treatments involving surgical implants were not widely reported as available in all countries. In Slovakia, for example, implants are still not a standard procedure for some dentists and thus the physical availability of the service is worse in some parts of the country. A lack of a respective dental guideline may be the major reason for these nonharmonised treatment pathways. At the European level, there is currently no detailed, common guidance concerning management and treatment of patients with oral health problems, complicating the comparison of coverage and access to oral health services. This vignette study on coverage and access to dental care has several strengths and limitations. On the one hand, it demonstrated the potential of the vignette approach to pick up access barriers usually not demonstrated by performance assessment indicators and exemplified the variations and complexities of dental care coverage. It confirmed previous knowledge about the limited coverage of dental services, which automatically pre-disposes patients from lower socioeconomic strata to experiencing further barriers along the path to realised access, widening health inequalities. 
The study also showed the impact of a limited or unbalanced supply of dental care providers on access to care, even among eligible individuals and for covered services. At the same time, the study has several limitations. A clear limitation of vignettes is that they may not accurately reflect the real world, both with regard to the textual descriptions of used case examples and the elicited hypothetical behaviour [21,41]. The comprehensiveness and accuracy of information relied on the knowledge and experience of respondents. Participating experts may not always have comprehensive knowledge on each dental procedure covered in the vignettes, the relevant regulations of coverage, or the effective access to these services. There was also substantial variation in the detail level of responses. Moreover, due to lack of harmonised dental guidelines, the treatment pathways described in the vignettes did not necessarily correspond to the usual treatment options in some countries. Thus, it became clear that responses could have been skewed by the initial focus of the vignette template on coverage, as categories further to right of the table related to realised access were not always tackled in detail. This was probably also compounded by the background of respondents (see methods section). For this exercise on dental care, it is conceivable that the three chosen vignettes were too many in terms of services included to be answered at once, as a certain level of respondent fatigue was obvious for the third vignette on edentulism (less granularity, more skipped fields in the template). Based on the results of our work, future studies should investigate the association of (limited) coverage and access with the burden of oral diseases more closely. This might be hampered by the limited availability of comparable data on oral health measures within and across European countries, which in itself constitutes a call for additional funding for data collection. The role of different incentive models for preventative oral health services and the extent to which evidence on (cost-)effectiveness guides decisions on dental benefit baskets should also be further explored to guide the formulation of future policies. Conclusion The results of the vignettes reveal that statutory coverage of dental care varies across 11 European countries, but access barriers are largely similar. Statutory coverage of many dental services is limited, and substantial costsharing applies in most countries, leading to high OOP spending. Socioeconomic status is thus a main determinant for access to dental care, though other factors such as geography, age and comorbidities can inhibit access and affect outcomes. Additionally, different incentive structures have implications on how patients are treated regarding state-of-the-art dental care. Furthermore, our findings showed that coverage in most oral health systems is targeted at treatment and less at preventative oral health care. Policies are needed that exploit the potential of preventive oral care and favour its integration into existing strategies for the prevention and control of NCDs, which have major risk factors and social determinants in common. Enhanced integration of oral health care with medical care is also needed to better meet the needs of the growing population of older adults with multiple health conditions. 
The study showed that the vignette approach revealed important gaps in access that would have stayed under the radar if only the services available in the benefit basket had been examined, and it therefore remains a promising tool for further research. Finally, our approach revealed the lack of common guidelines in the field of dentistry at national and European levels. Developing common guidelines and promoting best-practice rules that dentists across the EU adhere to is important. A major prerequisite for this is an established and internationally agreed evidence base for dental guidelines.
v3-fos-license
2016-05-12T22:15:10.714Z
2015-07-01T00:00:00.000
8554874
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://drc.bmj.com/content/bmjdrc/3/1/e000100.full.pdf", "pdf_hash": "20b3c3b7f0865d757c0d1ee8e543f875f6f954fc", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43597", "s2fieldsofstudy": [ "Medicine" ], "sha1": "20b3c3b7f0865d757c0d1ee8e543f875f6f954fc", "year": 2015 }
pes2o/s2orc
Effect of a 1-week, eucaloric, moderately high-fat diet on peripheral insulin sensitivity in healthy premenopausal women Objectives To determine whether a weight-maintaining, moderate (50%) high-fat diet is deleterious to insulin sensitivity in healthy premenopausal women. Design/setting/participants 23 African-American and non-Hispanic white, healthy, overweight, and obese premenopausal women recruited in New York City, USA, fed either a eucaloric, 1-week long high-fat (50% of total Kcal from fat) diet or a eucaloric, 1-week long low-fat (30% of total Kcal from fat) diet, assigned in a randomized crossover design. Main outcome measures Peripheral insulin sensitivity and metabolic flexibility during a euglycemic hyperinsulinemic (80 mU/m²/min) clamp measured during the follicular phase of the menstrual cycle, at the end of each diet period. Results Peripheral insulin sensitivity (mg/kg fat-free mass/min per µU/mL, ×10⁻¹) was not decreased after the high-fat diet vs the low-fat diet (0.09±0.01 vs 0.08±0.01, p=0.09, respectively) in the combined group of African-American and white women, with no significant diet by race interaction (p=0.6). Metabolic flexibility (change in substrate utilization, ΔNPRQ, in response to insulin during the clamp) was similarly unaltered by the diet (0.12±0.01 vs 0.11, p=0.48, for the high-fat diet vs the low-fat diet, respectively) in the combined group of women, with no significant diet by race interaction (p=0.9). African-American women had a lower insulin clearance compared with the white women, regardless of the diet (p<0.05). Conclusions We conclude that a short-term (1 week), moderate (50%), eucaloric high-fat diet does not lower peripheral insulin sensitivity in healthy, overweight and obese premenopausal women.

Key messages
▪ There is controversy over whether a eucaloric, moderately high-fat (50%) diet vs a lower fat (30%) diet induces insulin resistance in overweight and obese women; substituting fat for carbohydrates to a moderate degree (50% vs 30%) in a weight-maintaining diet is not deleterious for peripheral insulin action in healthy overweight and obese women, at least in the short term (1 week).
▪ Similarly, metabolic flexibility (the ability to suppress fat oxidation by insulin during a hyperinsulinemic clamp) is not affected by a higher (50%) vs a lower fat (30%) eucaloric diet in healthy overweight and obese women, at least in the short term (1 week).
▪ African-American women are more insulin resistant and have lower rates of postabsorptive fat oxidation than similar white women, as we have previously reported, but we did not find that a moderately higher fat diet (50%) compared to a lower fat diet (30%) adversely affects their peripheral insulin action or ability to suppress fat oxidation during a high-dose insulin clamp.

INTRODUCTION The role of the macronutrient composition of the diet with regard to the carbohydrate-to-fat ratio in the treatment of obesity and diabetes prevention has been only partially elucidated. While a low-fat (LF) diet was the mainstay for the diabetes prevention program 1 and is the basis for the 2010 Dietary Guidelines for Americans, 2 hypocaloric diets of both high-fat (HF) and LF compositions have been effective for weight loss. 3 Epidemiologically, higher total fat intake was associated with higher rates of progression to type 2 diabetes in the San Luis Valley Diabetes study 4 ; however, two other large population-based studies in women (Iowa Women's and Nurses' Health studies) did not replicate these findings. 5 6 Whether increasing the fat-to-carbohydrate ratio of a eucaloric, weight-maintaining diet decreases insulin sensitivity is controversial, particularly in women. [7][8][9][10][11][12] One study in women has shown a decrease in insulin sensitivity, measured by a frequently sampled intravenous glucose tolerance test (FSIVGTT), after 3 weeks of an HF diet compared to a LF diet in healthy premenopausal African-American and non-Hispanic (NH) white participants. 13 However, other work has demonstrated that peripheral insulin sensitivity, measured by the euglycemic hyperinsulinemic clamp, does not decrease after eucaloric HF diets of various durations (6 days and up to 3 weeks) in lean or obese men 7-10 or combined groups of lean men and women. 11 12 Metabolic flexibility (the ability to suppress fat oxidation during the euglycemic hyperinsulinemic clamp) has been closely associated with insulin sensitivity 14 15 and decreased in response to a HF diet in men, 8 9 yet this has never been studied in women. Therefore, we aimed to determine whether insulin sensitivity measured during a euglycemic hyperinsulinemic clamp would be deleteriously affected by a 1-week, eucaloric HF (50% of total Kcal from fat) diet in African-American and non-Hispanic white, healthy, premenopausal, overweight and obese women. In addition, we determined the effect of the diets on metabolic flexibility during the clamps. We and others have previously reported lower peripheral insulin sensitivity, [16][17][18][19][20] differences in muscle adipose tissue distribution 19 and lower systemic rates of fat oxidation in African-American vs non-Hispanic white women. 15 21 22 Therefore, we also examined any race differences in substrate utilization during the clamps.

RESEARCH DESIGN AND METHODS Subjects Twenty-three healthy premenopausal (25-45 years) overweight and obese (body mass index, BMI 25-40 kg/m²) women (11 African-American and 12 non-Hispanic white) participated in the study. Participants were included if they reported all four grandparents to be of African or Caucasian descent, had regular menstrual cycles, and were without diabetes according to an oral glucose tolerance test (75 g glucose load). Self-reported use of any medications (including contraceptive pills), smoking within the past 6 months, and consumption of >2 oz. ethanol/day were exclusionary. All participants signed consent forms approved by the St. Luke's-Roosevelt Institute for Health Sciences Institutional Review Board. Study design In a randomized crossover design, participants consumed a LF (30% fat, 50% carbohydrate and 20% protein) or a HF (50% fat, 30% carbohydrate and 20% protein) weight-maintaining diet for seven consecutive days as per the protocol we had previously published. 15 On the morning of day 8, after an overnight admission to the Clinical Research Center at St. Luke's-Roosevelt Hospital Center, insulin sensitivity and substrate utilization were measured before and during a euglycemic hyperinsulinemic clamp. There was a minimum 2-week washout period between diets. All measurements were conducted during the follicular phase of the menstrual cycle. Dietary protocol All study participants completed dietary surveys indicating foods they liked and disliked.
Eucaloric, weight-maintaining diets were constructed from food items available commercially with known macronutrient and caloric composition. Food item caloric content and macronutrient composition were verified using Nutritionist IV (V.2.0, N-squared Computing Co, Salem, Oregon, USA). Total daily calories for weight maintenance were calculated based on resting metabolic rate measured by indirect calorimetry in a fasting state (Horizon Metabolic Cart or V-Max29; Sensor Medics, Yorba Linda, California, USA) and multiplied by an activity factor (1.5). Diets were matched in distribution of fat calories with equal parts of saturated fat, monounsaturated fat and polyunsaturated fat. Participants were provided with a 7-day food supply to consume at home. Dietary compliance was assured through weight stability measurements, and adjustments were planned for a weight change of more than 1 kg. Insulin sensitivity Following an overnight fast, a three-hour euglycemic hyperinsulinemic clamp (80 mU/m²/min) was performed. We used a high-dose insulin clamp to measure the effect of the diet on peripheral insulin sensitivity in African-American vs non-Hispanic white women, as we sought differences between races as well. We, as others, have previously reported lower peripheral insulin sensitivity [16][17][18][19][20] in African-Americans vs non-Hispanic whites. Blood samples were collected at 10 min intervals during the postabsorptive state and the steady state of the hyperinsulinemic euglycemic clamp, immediately centrifuged, aliquoted and frozen at −70°C. Insulin was measured by RIA (Linco Research, St. Charles, Missouri, USA), glucose was measured by the Beckman glucose analyzer (Beckman, Fullerton, California, USA) and nonesterified fatty acids (NEFA) were measured by an enzymatic colorimetric method (Wako Chemicals USA, Richmond, Virginia, USA). NEFA suppression was calculated as the difference between the NEFA levels at steady state and the postabsorptive NEFA levels, divided by the postabsorptive NEFA levels, times 100 (percentage). Insulin clearance was calculated according to DeFronzo 23 as the rate of insulin infusion during the clamp study (assumed to be 80 mU/m²/min for all participants) divided by the difference in insulin concentration between the steady and postabsorptive states. Insulin sensitivity was calculated as M/I using the glucose disposal rate M (mg/kg fat-free mass (FFM)/min) and the insulin concentration in the hyperinsulinemic steady state I (µU/mL). Indirect calorimetry Oxygen consumption (VO2) and carbon dioxide production (VCO2) were measured using a ventilated hood in the postabsorptive and hyperinsulinemic steady states of the euglycemic clamp. In both states, the participants were supine and awake. Substrate oxidation rates were calculated using Frayn's equations, 24 and the non-protein respiratory quotient (NPRQ) was calculated as the ratio of VCO2 to VO2. Metabolic flexibility was estimated as the change in NPRQ (ΔNPRQ) between the postabsorptive and hyperinsulinemic steady states. Statistics All data are reported as mean±SEM as noted. All variables were checked for normality of distribution; only fasting triglycerides were log-transformed for analyses (log10). Statistical comparison of participant characteristics by race was performed using the independent t test (table 1). Selection and confounding biases were controlled for by symmetrical case-crossover methodology with identical length of exposure to the LF and HF diets.
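Since the derived indices described above are simple arithmetic on the clamp and calorimetry measurements, a short sketch may help make them concrete. The values and variable names below are purely illustrative and are not data from the study; the clearance helper assumes the infusion-rate-over-insulin-increment form described above.

```python
def insulin_sensitivity(m_rate, insulin_ss):
    """M/I: glucose disposal rate (mg/kg FFM/min) per unit steady-state insulin (µU/mL)."""
    return m_rate / insulin_ss

def insulin_clearance(infusion_rate, insulin_ss, insulin_basal):
    """Clearance (mL/m²/min): infusion rate (mU/m²/min) divided by the insulin
    increment above basal, converting the increment from µU/mL to mU/mL."""
    increment = (insulin_ss - insulin_basal) / 1000.0
    return infusion_rate / increment

def nefa_suppression(nefa_basal, nefa_ss):
    """Percentage change in NEFA from the postabsorptive to the steady state."""
    return (nefa_ss - nefa_basal) / nefa_basal * 100.0

def nprq(vco2, vo2):
    """Non-protein respiratory quotient: ratio of CO2 production to O2 consumption."""
    return vco2 / vo2

# Hypothetical example values, for illustration only
m_rate = 8.0          # glucose disposal, mg/kg FFM/min
insulin_ss = 200.0    # µU/mL at clamp steady state
insulin_basal = 10.0  # µU/mL postabsorptive

print(insulin_sensitivity(m_rate, insulin_ss))              # 0.04
print(insulin_clearance(80.0, insulin_ss, insulin_basal))   # ~421 mL/m²/min
print(nefa_suppression(0.50, 0.05))                         # -90.0 (i.e. 90% suppression)

# Metabolic flexibility: clamp NPRQ minus postabsorptive NPRQ
delta_nprq = nprq(0.24, 0.26) - nprq(0.20, 0.25)
print(round(delta_nprq, 2))                                 # ~0.12
```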
While the participants were unaware of the diet composition, there was no allocation concealment from the investigators. Analysis of variance and multivariate analysis of variance (ANOVA/MANOVA) were used to determine the effects of diet (LF vs HF, repeated measures, within effect) and to compute diet by race interactions (African-Americans vs non-Hispanic whites, between effect) from measures in the postabsorptive state and during the steady state of the clamp (glucose, NEFA and insulin levels, substrate utilization, NEFA suppression, insulin clearance, insulin sensitivity and metabolic flexibility), as shown in table 2 and figures 1A, B and 2A, B. Only data from women who completed either one of the dietary interventions (LF or HF diet) were used for analysis. A power analysis was performed for the effect of diet on peripheral insulin sensitivity, using initial pilot data (first seven participants of the study), which yielded a large effect size, Cohen's d=0.81 (M/I change between diets mean±SD, 0.014262±0.017618) from which the required sample size for 2 tailed α=0.05, power=0.80 was calculated to be n=15. A general linear model was used to determine whether there were any race differences in response to insulin during the euglycemic hyperinsulinemic clamps, for the LF and HF diets separately ( figure 2A and B). Both differences by race and any interactions by race in the effect of insulin during the clamps were computed for the LF and HF diets separately. The difference by race in steady-state insulin levels was also determined after adjusting for the postabsorptive insulin level (as a covariate). No other covariates were included in the analyses. A p value less than 0.05 was considered statistically significant. Statistical analysis was performed using Statistica (V.10.0, Tulsa, Oklahoma, USA). RESULTS Participant characteristics are shown in table 1. Twenty-three premenopausal (age 33.61±1.18 years) overweight (BMI 29.65±0.90 kg/m 2 ) women participated in the study. Eight of 11 African-American and 11 out of 12 white women completed insulin sensitivity measurements after both the LF and HF diet periods (repeated measures). Additionally, one African-American woman completed the studies only after the LF diet condition and three women (2 African-Americans and 1 white) completed the studies only after the HF diet condition. For personal reasons, they did not participate in the second dietary period. There were no statistically significant differences in age, BMI, body composition measurements, fasting triglycerides and high-density lipoprotein (HDL)-cholesterol levels between the two races (table 1) or in the subgroups which had repeated measures (not shown). For the 19 participants who had repeated measures, the effect of diet on insulin sensitivity and metabolic flexibility (ΔNPRQ during the clamp) are shown in table 2. There were no significant diet by race interactions on any of the variables ( p range 0.31 to 1.0); thus, the main effects of diet are presented here. Insulin sensitivity computed as the glucose disposal rate per kg of FFM and divided by the steady-state insulin level (M/I) was not significantly decreased by the diet in the African-American (0.06±0.01 vs 0.07±0.01, for LF vs HF diet, respectively, p=0.40) or in the white women (0.09±0.01 vs 0.10±0.01, for LF vs HF diet, respectively, p=0.09). In most women, insulin sensitivity either remained unchanged or was higher after the HF compared to the LF diet ( figure 1A and B). 
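The sample-size calculation reported in the Statistics section above can be reproduced approximately. The sketch below assumes the power analysis treated the within-subject M/I change as a paired (one-sample) t-test; the exact n may differ slightly from the published value of 15 depending on the software and rounding the authors used.

```python
from statsmodels.stats.power import TTestPower

# Pilot data reported above: mean ± SD of the M/I change between diets
mean_diff, sd_diff = 0.014262, 0.017618
effect_size = mean_diff / sd_diff          # Cohen's d, ~0.81

# Required sample size for a two-sided paired t-test, alpha = 0.05, power = 0.80
n_required = TTestPower().solve_power(
    effect_size=effect_size,
    nobs=None,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(round(effect_size, 2), n_required)   # ~0.81 and roughly 14, close to the reported n=15
```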
Similarly, metabolic flexibility (ΔNPRQ during the clamp) was not significantly altered by the diet type (table 2). Using data from all participants, the steady-state insulin levels during the clamp were higher in the African-American vs non-Hispanic white women, after adjustment for the postabsorptive values, after the LF diet (1449.17±40.34 pmol/L vs 1247.48±80.68 pmol/L, p=0.02) or after the HF diet (1490.98±59.59 pmol/L vs 1286.55±103.75 pmol/L, p=0.05). Thus, the calculated insulin clearance was lower in African-American vs white women, after the LF diet (407.55±12.26 mL/m²/min vs 489.60±30.19 mL/m²/min, p=0.03) or after the HF diet (397.02±14.27 mL/m²/min vs 486.86±34.65 mL/m²/min, p=0.04). There were no other significant differences by race, after the LF diet (p range 0.91-1.0) or after the HF diet (p range 0.14-0.9). Fat oxidation was significantly suppressed by insulin during the euglycemic clamp, for both African-American and white women, after both the LF (p<0.001) and HF (p<0.001) diets (figure 2A), with no significant insulin by race interaction (figure 2B) on either diet (p=0.27 and p=0.28, respectively). ΔNPRQ, that is, metabolic flexibility during the clamp, was not significantly different in African-American vs white women after the LF diet (0.10±0.02 vs 0.12±0.02, respectively, p=0.59) or after the HF diet (0.12±0.02 vs 0.13±0.02, p=0.58). CONCLUSIONS Our study did not show a decrease in peripheral insulin sensitivity in response to a short-term (1 week) eucaloric 50% HF diet compared to a 30% LF diet in healthy, overweight, and obese premenopausal African-American and non-Hispanic white women. Metabolic flexibility (ΔNPRQ) was similarly unaffected. The only significant race difference we found was the lower insulin clearance in African-American vs white women, regardless of the diet. Our results highlight the controversy surrounding the effect of a eucaloric increase in the fat content of a weight-maintaining diet on insulin sensitivity and metabolic flexibility, a precursor of insulin sensitivity. One other study, utilizing FSIVGTT to measure insulin sensitivity in premenopausal obese women, showed a deterioration of insulin sensitivity after 3 weeks of a eucaloric HF diet vs a eucaloric LF diet, 13 whereas other studies, in agreement with our results, have used a euglycemic hyperinsulinemic clamp to assess insulin sensitivity, which is the 'gold standard' for this outcome. A eucaloric HF diet consumed over a period of ∼3 weeks did not alter insulin sensitivity in mixed groups of lean men and women, 11 12 and similar results were demonstrated in lean men after just 6 days, 7 and in lean and overweight men after 3 weeks, 8 10 of a eucaloric HF diet. Thus, diet duration does not seem to account for the discrepancy between our results and other work in a similar population. 13 Hepatic insulin sensitivity remained unchanged in two studies with an HF diet similar to ours, 7 10 but was shown to decrease in lean men after 11 days of an 83% HF diet. 9 FSIVGTT does not differentiate between hepatic and peripheral insulin sensitivity. Different effects of a HF diet on hepatic vs peripheral insulin sensitivity may to some extent account for the difference in results noted by us. 13 Other factors playing a role may be whether the menstrual cycle stage at which insulin sensitivity was measured was accounted for, 13 25 and the differences in the amounts of saturated fat employed.
13 We also found that the metabolic flexibility, measured as a suppression of fat oxidation during the hyperinsulinemic (80 mU/m 2 /min) euglycemic clamp (ΔNPRQ), 14 was not affected by the 1 week of a eucaloric 50% HF diet in our women. The effect of a eucaloric HF diet on ΔNPRQ has been studied in men, yet the results are inconclusive. In lean men, ΔNPRQ was not decreased in response to an HF (75%) diet compared to a similar LF (35%) diet, after 6 days, 7 or 3 weeks, 10 but was decreased after 11 days of a HF (83%) diet. 9 In overweight men, ΔNPRQ decreased after 3 weeks of an HF (55%) diet. 8 No similar studies are available in women. A certain threshold in the fat/carbohydrate ratio of the diet and the effect on hepatic insulin sensitivity 26 may modulate the degree of fat oxidation suppression by insulin after a eucaloric HF diet. Hepatic insulin sensitivity and its relationship to metabolic flexibility was not evaluated in our study and needs to be investigated further. Some of the findings in the present study, specifically a lack of differential effect by race, may be due to a lack of power secondary to a small sample size. Furthermore, 1 week of a eucaloric 50% HF diet may have different effects in other populations, with different genetic susceptibility. 27 28 Finally, we previously reported lower rates of postabsorptive fat oxidation in response to an HF diet and lack of fat oxidation suppression by insulin during a pancreatic clamp in African-American vs white women. 15 In this study, we observed similar trends for the postabsorptive fat oxidation values, but the higher dose of insulin during the clamp similarly suppressed fat oxidation in the two races, in agreement with a recent report. 29 We also found lower insulin clearance in the African-American women compared to the white women, in contrast to one, 30 but in agreement with another study in adult women. 31 The lower insulin clearance could be contributing to unmeasured postprandial hyperinsulinemia, which may partly explain the numerous reports of lower fat oxidation rates in African-Americans without diabetes compared to other white populations. 21 22 32 33 In conclusion, peripheral insulin sensitivity was not deleteriously affected by 1 week of a eucaloric HF diet (50% of total Kcal from fat), compared to a LF (30% of total Kcal from fat) diet, in healthy, premenopausal, overweight and obese African-American and non-Hispanic white women. Our findings need to be verified with regard to the effect on hepatic insulin response and more importantly in other susceptible populations. Contributors NMB analyzed the data, and wrote and prepared the manuscript for publication. ME analyzed the data and reviewed the manuscript. RWW collected the data, and reviewed and critiqued the manuscript. ESB designed the study, collected and analyzed the data, and reviewed and critiqued the manuscript. JBA designed the study, collected and analyzed the data, and wrote and prepared the manuscript for publication. She is also the guarantor of this work. Funding This work was supported by the following grants: NIH R21DK71171, New York Obesity Research Center Grant P30DK26687, CTSA M01RR00645, DERC P30DK63608 and American Diabetes Association Grant 1-10-CT-01. Competing interests JBA is an Associate Editor for Open BMJ DRC. She is also a reviewer of grants, abstracts and papers for the American Diabetes Association and its journals. 
JBA reports research grants funding from Weight Watchers, Eli Lilly, Roche, Takeda, Merck and Novo Nordisk, outside the submitted work. ESB is a current employee of GlaxoSmithKline. No other potential duality or conflicts of interest were reported relevant to this article. Ethics approval St. Luke's-Roosevelt Institute for Health Sciences Institutional Review Board. Provenance and peer review Not commissioned; externally peer reviewed. Data sharing statement Methodology details information from this study is available through consultation with the corresponding author. Open Access This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http:// creativecommons.org/licenses/by/4.0/
v3-fos-license
2017-04-20T05:58:10.696Z
2012-03-12T00:00:00.000
7647770
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0032635&type=printable", "pdf_hash": "c64788cb3fe75170165ea3e56a0c2711a253ba26", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43603", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "c64788cb3fe75170165ea3e56a0c2711a253ba26", "year": 2012 }
pes2o/s2orc
CRIM1 Complexes with ß-catenin and Cadherins, Stabilizes Cell-Cell Junctions and Is Critical for Neural Morphogenesis In multicellular organisms, morphogenesis is a highly coordinated process that requires dynamically regulated adhesion between cells. An excellent example of cellular morphogenesis is the formation of the neural tube from the flattened epithelium of the neural plate. Cysteine-rich motor neuron protein 1 (CRIM1) is a single-pass (type 1) transmembrane protein that is expressed in neural structures beginning at the neural plate stage. In the frog Xenopus laevis, loss of function studies using CRIM1 antisense morpholino oligonucleotides resulted in a failure of neural development. The CRIM1 knockdown phenotype was, in some cases, mild and resulted in perturbed neural fold morphogenesis. In severely affected embryos there was a dramatic failure of cell adhesion in the neural plate and complete absence of neural structures subsequently. Investigation of the mechanism of CRIM1 function revealed that it can form complexes with ß-catenin and cadherins, albeit indirectly, via the cytosolic domain. Consistent with this, CRIM1 knockdown resulted in diminished levels of cadherins and ß-catenin in junctional complexes in the neural plate. We conclude that CRIM1 is critical for cell-cell adhesion during neural development because it is required for the function of cadherin-dependent junctions. Introduction Development of multicellular organisms requires the coordinated movement of cells in a process generally referred to as morphogenesis. Morphogenesis at the organismal scale can be dramatic -for example, the closure of the neural tube -and complex because it requires synchronizing distinct activities in multiple tissue layers. Component activities of morphogenesis include cell migration, cell elongation, process formation, coordinated shape change during epithelial bending as well as regionally increasing and decreasing tissue volumes driven by cell proliferation and cell death. The regulation of adhesive interactions is a key factor in control of morphogenesis. Among adhesion molecules, cadherins have a critical role [1,2,3]. The classical cadherins exist in a complex with catenins. The catenins regulate association of cadherins with the actin cytoskeleton, though binding may not be direct [3,4]. Association of cadherins with actin is likely mediated by the bridging molecule eplin [5]. Adhesive activity of cadherins can be regulated in a variety of ways [3] and this is clearly important in permitting and mediating the cellular movements of morphogenesis. In some settings, catenins are essential for cell-cell adhesion. For example, p120 catenin loss-of-function in the salivary gland results in severe defects in adhesion accompanied by the downregulation of E-cadherin [6]. ß-catenin loss-of-function in the presumptive lens results in a reduction of the F-actin cytoskeleton and loss of cell adhesion [7]. Antisense oligonucleotide depletion of both a-catenin and EP-cadherin in Xenopus embryos causes a failure of cellular adhesion at blastula stages [8,9]. A two-tiered regulation of E-cadherin has recently been reported in embryonic epithelia of Drosophila whereby a stable cell-cell homophillic Ecadherin complex pool and a more diffusible monomeric Ecadherin pool co-exist at cell junctions [10]. These pools of Ecadherin have different connections to the intracellular actin network and must require different mechanisms for turnover and regulation during embryonic morphogenesis. 
Cysteine-rich motor neuron 1 (CRIM1) was originally identified as a partial cDNA in an interaction screen [11] and in a screen for secreted proteins (C. Tabin, personal communication). Assembly of the full sequence representing the CRIM1 cDNA [11] revealed that it was a type 1 trans-membrane protein with N-terminal homology to insulin-like growth factor binding domains (IGFBP; [11,12]) and a set of six cysteine-rich von Willebrand factor C (vWC) repeats occupying the remaining extracellular domain. The cysteine-rich repeats of CRIM1 are similar to those of chordin [13] and its Drosophila homolog, short gastrulation [14] that can bind bone morphogenetic proteins (BMPs) [15,16]. Another protein that contains an IGFBP and single cysteine-rich domain is Cyr61, a secreted heparin binding, extracellular matrix associated protein that is required for normal gastrulation movements [17]. CRIM1 is expressed in a variety of tissues and cell types that include the vertebrate CNS [11] urogenital tract [18] eye [19,20] and vascular system [21]. CRIM1 protein has been localized to the endoplasmic reticulum [21,22] or to junctional complexes upon stimulation of vascular endothelial cells [21]. Analysis of CRIM1 function suggested it has a role in vascular tube formation both in culture [21] and in vivo in the fish [23]. Consistent with expression of CRIM1 in the neural tube [11], over-expression of the CRIM1 ectodomain in the chick neural tube reduces the numbers of certain spinal cord neurons [20]. CRIM1 was also been proposed to be an antagonist for bone morphogenetic proteins (BMPs) through suppression of BMP maturation and sequestration in the Golgi or at the cell surface [22]. This activity is dependent upon the extracellular vWC repeats [22]. Expression of CRIM1 in the chick neural tube was, however, insufficient to modulate ventral patterning [20] where BMP activity is critical [24]. An assessment of the function of crm-1, a C. elegans homologue of CRIM1, has suggested a role in enhancing BMP signaling [25]. Identification of a CRIM1 hypomorphic mutant in the mouse (CRIM1 KST264 , [26]) that was generated by lacz insertional mutagenesis has revealed that CRIM1 is involved in the development of multiple organ systems including the limbs, eye and kidney vascular system [26,27]. In the current study we have focused on understanding the activity of the CRIM1 cytoplasmic domain, a region of 82 amino acids that is highly conserved. Antisense oligonucleotide mediated loss of function studies in Xenopus laevis revealed an essential role for CRIM1 in neural plate cell adhesion. In these experiments there was a loss of junctional cadherin labeling intensity, reduced epithelial polarity and organization and ultimately, the sloughing of neural plate cells. Based on this result we screened CRIM1 containing complexes for the presence of known adhesion mediators. We found that the cytoplasmic domain of CRIM1 can form complexes with ß-catenin and cadherins, though this interaction is probably indirect. Combined, these data suggest that CRIM1 is essential for cadherin mediated cell-cell adhesion in the developing nervous system. Ethics Statements All experiments were performed in accordance with institutional guidelines under Institutional Animal Care and Use Committee (IACUC) approval at Cincinnati Children's Hospital Research Foundation (CCHRF). IACUC at CCHRF approved the study described in this manuscript with Animal Use Protocol number 0B12097. 
Plasmid constructs Plasmid constructs were generated by conventional methods from the full-length Xenopus laevis CRIM1 cDNA, EST 5537401 (Invitrogen). Cell lines and transfection HEK 293T cells (ATCC, CRL-11268) were cultured in a conventional manner. Cell lines were transfected with DNA constructs using Fu-Gene (Roche) or Trans-IT (Mirus) reagents. Morpholino experiments and in situ hybridization Translation-blocking morpholino oligonucleotides (MOs) were designed against xCRIM1a (XLCA) and xCRIM1b (XLCB) (Gene Tools, LLC). The MOs were prepared at a concentration of 30 mg/mL in sterile water. We used a 10,000 MW fluorescent dextran (Molecular Probes) or a GFP-encoding mRNA as lineage tracers. X. laevis eggs were fertilized in vitro and grown in 0.1X modified Barth saline (MBS) [28], staged according to [29] and transferred to 1X MBS, 4% Ficoll for microinjection. Embryos were injected at the 4- to 16-cell stage in individual blastomeres and cultured in 0.1X MBS, 2% Ficoll at 18°C or in 0.1X MBS for longer incubations. Embryos were fixed in 1X MEMFA at various stages for analysis. Antisense RNA probe synthesis and in situ hybridization on whole embryos were performed as previously described [30]. cDNAs from staged X. laevis embryos (stages 8 to 32) were a generous gift from C. Wylie. We used PCR primers specific to the 5′-UTR of xCRIM to amplify sequences from isolated cDNAs. Results Knockdown of CRIM1 in Xenopus embryos causes defects in the development of neural structures Using the available chick CRIM1 sequence (accession #NM_204425) to design primer sets, we PCR amplified cDNA products from a stage 28 Xenopus laevis cDNA library and identified two distinct sequences that had extensive homology to chick CRIM1. Based on the high degree of homology, these clone families represented the Xenopus laevis A and B genes. We used this sequence information (Fig. 1A, accession number pending) to design PCR primers, antisense morpholino oligonucleotides (MO, Fig. 1A, Table 1) and in situ hybridization probes for Xenopus laevis CRIM1. RT-PCR analysis for CRIM1 on a staged series of embryos (Fig. 1B) showed that CRIM1 mRNA is detected in the early neurula at stage 12. For comparison, n-tubulin mRNA was detected in the late neurula at stage 22. By in situ hybridization, CRIM1 was detected in the neural plate of stage 12.5 embryos (Fig. 1C). Expression of CRIM1 in neural structures continued and at stage 18 was detected, albeit faintly, in the posterior neural tube (Fig. 1D) as well as in anterior neural structures including the optic vesicles (Fig. 1E). CRIM1 expression at stage 22 was detected in the early somites and weakly in neural structures (Fig. 1F). The hindbrain, cement gland and somites were all locations of CRIM1 expression at stage 35 (Fig. 1G). CRIM1 loss-of-function experiments in Xenopus laevis were performed using antisense MO-mediated translation and splicing blocking [32,33]. Sequence differences in the 5′ untranslated region of the CRIM1 A and B genes (Fig. 1A) required that we use a mixture of MOs (XLCAB, Table 1) for translation blocking. To design splicing-blocking MOs, we first identified Xenopus tropicalis genomic CRIM1 sequences in the available database (JGI Genome Browser) and used that sequence to PCR amplify and sequence Xenopus laevis genomic clones. The CRIM1 A and B genes also had sequence changes in the exon 2 splice donor region (Fig. 1A) that necessitated a mix of MOs (XLCSDAB, Table 1) to target both mRNAs.
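Translation-blocking MOs of this kind are, in essence, short antisense sequences: the reverse complement of a stretch of the target mRNA around or upstream of the start codon. The sketch below shows only that reverse-complement step; the target sequence is hypothetical and is not the actual xCRIM1a/b 5′-UTR, which is not reproduced here.

```python
# Illustration of the reverse-complement step used when designing an antisense
# oligo against a target mRNA. The target sequence below is hypothetical.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (uppercase A/C/G/T)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

# Hypothetical 25-nt window around a start codon (ATG shown in context)
target = "GCTTCACCGGAAATGGCTTCAGCTG"
antisense = reverse_complement(target)
print(antisense)   # CAGCTGAAGCCATTTCCGGTGAAGC
```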
Using two sets of PCR primers that detected either unspliced or spliced mRNA ( Fig. 1H) we confirmed that the MOs targeted to the splice donor of CRIM1 exon 2 suppressed splicing. Translation and splicing blocking CRIM1 MOs injected into a dorsal blastomere at the 4-cell stage produced dramatic effects on the development of neural structures. In a typical experiment where 15 ng each of XLCA and B were injected, more than 70% of embryos had major defects including a small or missing eye on the injected side ( Table 2 and Fig. 1I, J, K, L). Tracing of MO distribution with coinjected Dextran Alexa488 confirmed that the affected region of the embryo received MO but that any remaining neural tube was tracer negative (Fig. 1L). Histological assessment of affected Xenopus embryos at stage 42 confirmed the neural tube and eye were both missing on the injected side (data not shown). In embryos injected bi-laterally at the 2-cell stage with 30 ng each XLCA and B MOs, a loss of anterior neural and head structures resulted but ventral and posterior structures were retained (Fig. 1M). This phenotype induced by loss of CRIM1 in the whole embryo by administering the MOs at this stage correlates well given the expression pattern of CRIM1 in the developing neural plate. Since the absence of an eye served as a simple read-out for phenotype severity, we assessed changes from MOs injection in different amounts. There was a dose response for both translation and splicing blocking MOs and that each produced the same phenotype ( Table 2). Injection of ventral blastomeres with the translation blocking XLCAB combination had a minimal effect ( Table 2, vent). MOs (Table 1) in which 5 of the nucleotides were mismatched had a greatly reduced effect though this was not zero ( Table 2). Since 5 nucleotide mismatch MOs are known to retain some activity at the concentrations used here [33] we also used the standard control MO (GeneTools) that has no measurable activity as a control and observed no obvious phenotype (Table 2). We also determined whether co-injection of a MO-resistant CRIM1 mRNA with the XLCAB MOs resulted in phenotypic rescue. Though we did not observe a complete reversal of the effects of the XLCAB MOs, the MO-resistant xCRIM1 mRNA reduced the percentage of embryos showing small or missing eyes (Table 2). Together, the activity of both MO types in producing the same phenotype, the correlation of that phenotype with the expression domain of CRIM1, suppression of CRIM1 mRNA splicing with XLCSDAB MOs and a degree of phenotype rescue with xCRIM1 expression suggest the antisense oligonucleotides are specific. The absence of neural structures in tailbud stage embryos was consistent with the loss of neural plate integrity at earlier stages. Examination of XLCAB MOs injected pigmented embryos at stage 15 when neural plate morphogenesis is occurring revealed that the injected side had defects in neural plate formation. Specifically, failure of the neural plate boundary (the neural folds) to move toward the midline produced embryos with a pronounced asymmetry (Fig. 2). In many XLCAB MOs injected embryos, cells were seen sloughing from the surface of the injected side (Fig. 2, red arrowheads). In a typical experiment using 15 ng each XLCA and XLCB (Table 2), 20/70 embryos (28.6%) show severe cell sloughing. 
Time-lapse video microscopy of embryos bi-laterally injected with MOs (dorsal blastomeres, XLCAB at the 4-cell stage) in some cases showed a mild phenotype of delayed neural fold morphogenesis with a failure of anterior neural tube closure (Video S1) and in others a severe failure of cell adhesion across the entire neural plate (Video S2). This suggested that CRIM1 might have an essential role in promoting cell adhesion or suppressing cell death within the neural plate. Reduced cadherin junctional complexes is a primary consequence of CRIM1 loss-of-function To distinguish between these two possibilities, we first determined whether the level or localization of cadherins that are critical adhesion molecules in Xenopus neural plate [34] might be affected in CRIM1 knockdown embryos. We coinjected XLCAB MOs with a tracer mRNA encoding GFP at the 4-cell stage and then performed whole-mount immunolabeling for cadherins at stage 13 (early neurula stage). In these preparations, an apical cadherin junctional complex is identified revealing patterns of cell packing and cell size at the surface (Fig. 3). In this case we controlled the experiment by injecting the GFP tracer mRNA alone. In other experiments co-injecting control MOs with dextran tracer gave identical results (Figs. 4,5,6). In control embryos we see slight junction-to-junction variation in labeling intensity for both E-cadherin (Fig. 3A) and C-cadherin (Fig. 3B), but this did not correlate with GFP expression. By contrast, when the GFP mRNA and the CRIM1 MO were co-injected there were dramatic changes in cadherin labeling in GFP expressing cells. At low magnification (Fig. 3C, D, E, F) tracer positive regions have reduced immunoreactivity for both Ecadherin and C-cadherin. Higher magnification (Fig. 3G, H, I, J) shows the precise correlation between GFP expression and reduced junctional labeling intensity. In addition, a junction between two tracer positive cells generally has a low level of cadherin immunoreactivity compared with junctions between a tracer positive and a tracer negative pair or between two tracer negative cell junctions (Fig. 3G, H, I, J). To quantify the E-and Ccadherin labeling, we measured pixel intensities over a curved line interval superimposed along junctional labeling between two cells. When normalized to the value of junctions between pairs of tracernegative cells, a tracer positive-tracer negative pair showed no reduction in labeling intensity whereas tracer-positive pairs showed significantly reduced labeling intensity for E-cadherin (Fig. 3K) and C-cadherin (Fig. 3L). At higher magnification, some tracer positive cells have a rounded shape and a greater apical surface area than their tracer-negative neighbors (Fig. 3G, H, I, J) disrupting the pattern of cell packing. While these changes in junctional cadherin levels and cell shape were consistent with a role for CRIM1 in adhesion, it remained possible that the cells with low cadherin levels were undergoing apoptosis as a primary response to CRIM1 loss-of-function. To determine whether this occurs, we performed two different assays for cell death. First, we injected embryos with either the fluorescent dextran tracer alone or with tracer plus 15 ng each XLCA and B MOs into a dorsal blastomere at the 4-cell stage. We harvested embryos at stage 13, permeablized and performed whole-mount TUNEL labeling (Fig. S1). 
As a positive control, we used the same combination of control and MO-injected embryos but treated them with DNase I to nick genomic DNA and enhance TUNEL labeling (Fig. S1). DNase I-treated embryos were TUNEL labeled; control or XLCAB-injected embryos without DNase-I treatment were not. Embryos were injected with the same amount of MOs that reliably caused reduced junctional cadherin labeling at the same analyzed stage (Fig. 3). Since it can be argued that TUNEL labeling monitors a late event in the activation of cell death pathways, we also performed labeling for activated Caspase 3, an early marker of cell death pathway activation combined with labeling for C-cadherin ( Fig. 4A and B). In this set of experiments, we analyzed CRIM1 knockdown embryos that showed a patch of de-adhering cells judged morphologically (Fig. 4B). We performed quantification of pixel intensity for the dextran tracer, C-cadherin and activated Caspase 3 along 450 pixel line intervals extending through tracernegative to tracer-positive regions ( Fig. 4A and B). These data are graphically represented in pixel intensity histograms (Fig. 4C and D). Regions of the micrograph containing the line interval are reproduced at higher magnification below the histogram (Fig. 4E and F). We analyzed 14 examples each of control MO and XLCAB-injected embryos and found consistent results. In embryos co-injected with the tracer and the standard control MO, lineage tracer-positive cells retained strong C-cadherin junctional staining (Fig. 4A, middle panel and 4C, red). Activated caspase-3 levels, with the exception of the occasional positive cells (Fig. 4A, blue arrowhead), were consistently low across the whole (Fig. 4A, right panel) and along the line interval used for analysis (Fig. 4C, blue). By contrast, in embryos co-injected with the tracer and 15 ng each XLCAB MO, C-cadherin labeling was consistently lower in tracer-positive regions as seen in the micrographs (Fig. 4B, middle panel, 4F, Ccad) and also when comparing red channel pixel intensities in tracer-negative and positive regions on the histogram (Fig. 4D). We performed quantification of these signals by measuring pixel intensities over 150 pixel line intervals located exclusively in tracer negative (control MO and XLCAB injected embryos), tracer positive, adherent (control MO and XLCAB injected embryos), or tracer positive, non-adherent regions (XLCAB injected embryos only) in 8 different embryos. In XLCAB injected embryos, Ccadherin labeling was significantly reduced in both adherent and non-adherent tracer-positive regions compared with tracernegative regions (Fig. 4G, red bars. A number lower than 1 indicates reduction of C-cadherin expression in MO injected regions). Importantly, adherent, MOs injected regions with reduced C-cadherin levels show no change in the level of activated Caspase 3 (Fig. 4G, blue bars). In addition, activated caspase 3 levels only increase dramatically when cells show non-adherent morphology (Fig. 4G, blue bars). These data argue that the primary consequence of CRIM1 loss-of-function is a diminished level of cadherin junctional complex and that cell de-adhesion followed by activation of cell death pathways is a secondary consequence. CRIM1 is required for ß-catenin localization to junctional complexes The cadherin junction defects apparent in CRIM1 knockdown experiments prompted us to determine whether CRIM1 might regulate the level or distribution of other major adhesion complex proteins. 
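The line-interval measurements described in this and the preceding paragraphs (pixel intensities sampled along junctions or fixed-length intervals, then normalized to tracer-negative regions) can be sketched with standard image-analysis tools. The array, coordinates and values below are illustrative assumptions, not the study's data; profile_line from scikit-image is used here as one possible way to sample along an interval.

```python
import numpy as np
from skimage.measure import profile_line

def mean_junction_intensity(channel, src, dst, linewidth=3):
    """Average pixel intensity along a line interval drawn over a cell-cell junction.

    channel  : 2D array for one fluorescence channel (e.g. C-cadherin)
    src, dst : (row, col) endpoints of the sampled interval
    """
    profile = profile_line(channel, src, dst, linewidth=linewidth, mode="reflect")
    return float(profile.mean())

# Hypothetical image and coordinates, for illustration only
rng = np.random.default_rng(0)
ccad = rng.uniform(0.0, 1.0, size=(512, 512))   # stand-in for a confocal channel

neg_pair = mean_junction_intensity(ccad, (100, 100), (100, 250))   # tracer-negative junction
pos_pair = mean_junction_intensity(ccad, (300, 100), (300, 250))   # tracer-positive junction

# Normalize to the tracer-negative value; a ratio below 1 indicates reduced junctional labeling
print(pos_pair / neg_pair)
```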
To assess this, we generated embryos co-injected with the dextran tracer and control or XLCAB MOs and labeled for both C-cadherin and ß-catenin. As described above, we chose to analyze experimental embryos that had regions of non-adherent cells as judged morphologically (Fig. 5B). This analysis is illustrated and quantified as described for Fig. 4. Control MO injected embryos showed levels of C-cadherin and ß-catenin signal that were consistent across tracer-negative and tracer-positive regions of the embryo (Fig. 5A, C, E, G). By contrast, tracer-positive regions in XLCAB-injected embryos showed reduced levels of both C-cadherin and ß-catenin regardless of whether these regions were adherent or non-adherent (Fig. 5B, D, F, H). To quantify the level of C-cadherin and ßcatenin, we generated pixel intensities over 150 pixel intervals on control MO-positive, and XLCAB-positive adherent and nonadherent regions. We then quantified the changes in average pixel intensities in MO-positive (tracer-positive) regions compare to those in MO-negative regions for both C-cadherin and ß-catenin labeling (Fig. 5I). Compared with control MO regions, the XLCAB MO resulted in a mild but statistically significant reduction in C-cadherin signal and a more pronounced reduction in ß-catenin signal (Fig. 5I). Interestingly, the level of C-cadherin signal reduced dramatically when cells become non-adherent while ß-catenin signal showed no further reduction (Fig. 5G, H, I). This suggested that a primary consequence of CRIM1 loss-offunction is the failure of ß-catenin to stably associate with cadherin junctional complexes. Compromise of the cadherin junctional complex leads to defects in apical-basal epithelial polarity [5]. To determine if this feature of neural plate epithelial cells might be changed with CRIM1 loss-of-function, we performed similar experiments by co-injecting XLCAB with dextran tracer and performed immunofluorescent labeling on cross sections of the neural epithelium. Embryos displaying a mild phenotype were analyzed midway through neurulation at stage 16. The tracer was generally (Fig. 5K, green) but not always (Fig. 5J, green) distributed in a region that abutted the midline as would be expected for injection of a dorsal blastomere at the 4-cell stage. Tracer positive regions had a markedly different labeling pattern for ß-catenin. In unaffected neural epithelium (tracer negative, Fig. 5J, K) the neural epithelium has intense ß-catenin labeling at cell junctions and the columnar cell shape of the outermost epithelial layers is distinct (tracer negative, Fig. 5 J, K, grayscale panels). In all regions receiving the XLCAB MOs (Fig. 5 J, K, green region with dashed white line boundary) junctional ßcatenin labeling level is lower, the cells show a more rounded shape and the epithelium is disorganized. Out of 24 embryos each of experimental and control, we found polarity defects that were restricted to the tracer-positive regions in 7 experimental embryos. We then determined whether restoration of CRIM1 expression would rescue the abnormal distribution of ß-catenin in CRIM1 knockdown cells. To this end, a MO-resistant, FLAG-tagged fulllength CRIM1 mRNA (CRIM1-FL) was co-injected with XLCAB MOs into a dorsal blastomere at the 4-cell stage. The expression level of the tagged protein was measured by comparing average pixel intensities over a 150 pixel line interval placed in tracer positive and tracer negative areas (Fig. 6 A, B, C, white lines). 
Injection of the mRNA resulted in robust expression of tagged full-length CRIM1 with or without co-injection of XLCAB MOs (Fig. 6B, C right panels, Fig. 6D blue bars). Whole-mount ß-catenin labeling was performed on embryos injected with different combinations of MOs and mRNA. We found that while injecting CRIM1 mRNA alone did not change the expression of ß-catenin (Fig. 6C, middle panel; Fig. 6E), co-injecting CRIM1 mRNA with XLCAB MOs restored the ß-catenin intensity (Fig. 6A, B middle panels) to the normal level of ß-catenin as in embryos injected with control MO (Fig. 6E). Combined, these data suggest that CRIM1 has an essential role in stabilizing the cadherin junctions. CRIM1 complexes with ß-catenin and N-cadherin via its cytoplasmic domain As a first step in understanding the mechanism of action of CRIM1, we determined whether multiple CRIM1 molecules could associate in a complex. We co-expressed a FLAG-tagged ectodomain form (Fig. 7A, top line) with a series of deletion mutants carrying C-terminal V5 tags (Fig. 7A) and determined whether this would coimmunoprecipitate (co-IP) from HEK293 cells.

Figure 3. CRIM1 is required for junctional localization of E- and C-cadherin in the neural plate. (A-J) Immunofluorescence labeling of whole-mount Xenopus embryos after injection of translation-blocking XLCAB MOs. Embryos were co-injected with mRNA encoding GFP at the 4-cell stage and were fixed and labeled at stage 13 (early neurula) with antibodies to GFP (green), E-cadherin (A, C, D, G, H, red) or C-cadherin (B, E, F, I, J, red). Cadherin junctional complexes were visualized by combining multiple optical sections generated by confocal microscopy. In lower magnification images (C, D, E, F) it is apparent that tracer-positive regions have lower levels of cadherin immunoreactivity and are irregularly shaped. In the magnified regions (G, H, I, J) indicated by white corner marks in (C, D, E, F) the loss of cadherin immunoreactivity in tracer-positive cells is more obvious. The gray line between panels indicates separated color channels of the same image. (K-L) Graphs show the measured average E-cadherin (K) and C-cadherin (L) junctional staining intensity between two tracer-negative, one tracer-negative and one tracer-positive, or two tracer-positive cells (n = 20 pairs for each category). doi:10.1371/journal.pone.0032635.g003

According to immunoblots with appropriate antibodies, all proteins expressed well (Fig. 7B, left panels) and the V5-tagged proteins could also be efficiently IPd (Fig. 7B, far right). Anti-V5 IP followed by immunoblot with anti-FLAG showed that all deletion mutants of V5-tagged CRIM1 could form complexes with CRIM1-FL-ED (Fig. 7B, center left). These data indicate that CRIM1 can form complexes in which multiple CRIM1 molecules are present. These data also show that an N-terminal region containing the IGFBP-like domain is sufficient for formation of this complex. The apparent role of CRIM1 in stabilizing cadherin junctions shown by knockdown and rescue experiments prompted us to determine whether CRIM1 might directly interact with major adhesion complex proteins. To this end we over-expressed epitope-tagged CRIM1 in HEK293 cells, and determined whether CRIM1 could be IPd in these complexes (data not shown). When anti-ß-catenin antibodies were used for IP, CRIM1 was readily detected by immunoblot (data not shown). We then generated mutant forms of CRIM1 that lacked the cytoplasmic domain (Fig. 7C).
We also used two different locations for epitope tagging given the possibility that a C-terminal epitope tag might prevent a cytoplasmic domain interaction (Fig. 7C). All four modified CRIM1 proteins expressed well in HEK293 cells (Fig. 7D, left panel) and could be IPd effectively with the antibody to the appropriate tag (Fig. 7D, right panel). Only CRIM1 with an intact cytoplasmic domain would form a complex with ß-catenin (Fig. 7D, center left) through IP using anti-ß-catenin antibodies. To determine whether the cytoplasmic domain of CRIM1 was sufficient for ß-catenin complex formation, we expressed CRIM1-cyt (consisting of the secretory leader, transmembrane and cytoplasmic domains, Fig. 7E) in 293 cells and performed ß-catenin IPs. Both CRIM1-cyt and the full-length CRIM1 expressed well as indicated by an anti-V5 immunoblot of cell lysates (Fig. 7F, left panel -tracks 3 and 4 are duplicates). Using anti-ß-catenin antibodies, both fulllength CRIM1 and CRIM1-cyt co-IPd (Fig. 7F, right panel). We used antibodies to the FLAG epitope in CRIM1-FL and CRIM1-FLDcyt (Fig. 7G) in reciprocal IPs and detected ß-catenin (Fig. 7H) in immunoblots. In lysates from CRIM1-FL expressing cells, total ßcatenin levels appeared unchanged where a CRIM1-ß-catenin complex was demonstrated via co-IP (Fig. 7I). These data provide strong evidence that CRIM1 and ß-catenin exist in the same complex. We could not convincingly demonstrate a direct interaction between a variety of recombinant forms of the CRIM1 cytoplasmic domain and ß-catenin in vitro (data not shown). The CRIM1 knockdown adhesion defect, together with coexistence of CRIM1 and ß-catenin in a protein complex raised the possibility of CRIM1 association with cadherins. N-cadherin is expressed in HEK293 cells whereas E-cadherin is not (data not shown). When CRIM1 was over-expressed, anti-N-cadherin antibodies IPd CRIM1 (Fig. 7D). Formation of a CRIM1-Ncadherin complex was also dependent upon the presence of an intact CRIM1 cytoplasmic domain (Fig. 7C, D). Combined, these data indicate that CRIM1 can form complexes with ß-catenin and N-cadherin via its cytoplasmic domain. This, with reduced junctional cadherin levels in Xenopus CRIM1 knockdown expreriments, suggested that the adhesion defect resulted from disruption of cadherin-dependent junctional complexes. Discussion In this report we assessed the function and mechanism of action of the unique transmembrane molecule cysteine-rich motor neuron 1 (CRIM1). Using antisense oligonucleotide knockdown experiments in Xenopus laevis, we showed that CRIM1 is essential for formation of the nervous system. Since the expression of early neural markers is unaffected, CRIM1 clearly did not regulate the inductive phases of neural development when BMP signaling is involved. Rather, we provide evidence at both the cellular and protein levels that CRIM1 is required for formation of cadherindependent adhesion junctions. Specifically we show that CRIM1 can form complexes with ß-catenin and cadherins and that these proteins are reduced in junctional complexes of CRIM1 knockdown Xenopus embryos. Combined, these data suggest normally, CRIM1 is critical for the formation of cadherin junctions in the developing neural plate. These findings raise several questions. 
CRIM1 function in cadherin-mediated morphogenesis The classical cadherins have diverse roles in development and homeostasis including mechanical cell-cell adhesion, coordination of cell movements during morphogenesis, establishment and maintenance of epithelial polarity as well as cell-to-cell signaling and recognition [35]. There are different ways in which these various cadherin activities are regulated, some are post-transcriptional and therefore mediated by the interaction of cadherins with other proteins. The association of ß-catenin with cadherins is regulated by different phosphorylation states that have either positive (serine phosphorylation of E-cadherin or ß-catenin) or negative (tyrosine phosphorylation of ß-catenin) effects on complex formation [36]. Other levels of cadherin negative regulation include cleavage of the extracellular domain by ADAM (a disintegrin and metalloprotease domain) 10 [37] and cleavage of the intracellular domain by proteases such as c-secretase/ presenilin-1 [38] thus promoting disassembly of the cadherin complex. Cadherin endocytosis into clathrin-coated vesicles [39] may also negatively regulate cell-cell junctional adhesiveness perhaps as a consequence of the loss of p120 catenin association [40]. In this study, we show that CRIM1 has an essential role in cellcell adhesion during development of the central nervous system. CRIM1 appears to lack any intrinsic capacity to mediate cell-cell adhesion (unpublished results) yet it seems essential for the formation or stabilization of cadherin-dependent adhesion complexes. A comparison of the expression patterns of CRIM1 and cadherins within epithelia reveals that CRIM1 is expressed in sub-regions within larger cadherin-positive domains. An example is the presumptive lens in the mouse where CRIM1 is first expressed in a patch of ectoderm that will invaginate to form the lens pit [19]. This small region of presumptive lens ectoderm is part of the larger embryonic head ectoderm that expresses Ecadherin [41]. Similarly, the region of the Xenopus neural plate that expresses CRIM1 is part of a larger surface ectoderm that expresses cadherins [42,43,44]. The CRIM1 expressing neural plate will, like the presumptive lens, undergo dramatic morphogenesis at the time CRIM1 is expressed [45]. The mild morphogenesis phenotype observed in the CRIM1 knockdown experiments is similar to the failure of hinge point formation in neural tube bending induced by knockdown of the actin associated protein, Shroom [46]. Connectivitity to the cytoskeleton is important for stabilization of cadherin junctional complexes. CRIM1 is not obligatorily expressed in cadherinpositive regions suggesting that it is not universally required for the formation or stabilization of adhesion complexes. CRIM1 may become essential for the function of cadherin where it is expressed, perhaps displacing another cadherin complex stabilization mechanism thus regulating adhesive activity perhaps during morphogenesis. Cell-cell adhesion between animal cells undergoing normal morphogenetic movements, as in the bending of epithelial sheets, must be dynamic without losing cell-cell contact. Kametani and Takeichi demonstrated basal-to-apical cadherin flow occurs at cell junctions between moving transformed cells in culture [47]. They visualized junctional instability and cadherin-catenin-actin protein rearrangements at sites of cellular morphogenesis while maintaining cell contact. CRIM1 may play role in regulating cadherincatenin junctional stability. 
We show CRIM1 interaction with these proteins and expression in sites where epithelial sheet bending and dynamic cellular rearrangement occurs. CRIM1 mechanism of action Beyond the demonstration that the 82 residue intracellular domain of CRIM1 is required for association with ß-catenin and cadherins, the mechanism of complex formation is unclear. The cytoplasmic domain of CRIM1 is highly conserved but does not have obvious interaction motifs. In particular, there are no primary sequence features of typical ß-catenin ligands. Proteins that bind ß-catenin in an extended conformation along the armadillo repeat (ARM) domain (such as the cadherins, ICAT, TCFs, APC) are characterized by a DXHHXWX 2-7 E motif where H is an aliphatic residue, and W an aromatic residue [48]; there is no such motif in CRIM1. Furthermore, ligands that bind in the positively charged groove of the ß-catenin ARM domain are typically acidic (calculated pI (isoelectric point) for the cadherins is 3.3, for APC, 4.1, and for the Tcf family, 4.4). The calculated pI of the CRIM1-cytoplasmic domain is 9.8. Thus, it may not be surprising that we were unable to demonstrate a direct interaction of recombinant forms of the CRIM1 cytoplasmic domain and ß-catenin or between the CRIM1 cytoplasmic domain and the N-cadherin cytoplasmic domain (data not shown). This suggests that the formation of a complex between CRIM1, ß-catenin and cadherin may depend on additional proteins that might have a bridging activity or perhaps on post-translational modifications. Association of ß-catenin with cadherins in the endoplasmic reticulum (ER) is important for efficient transit of the complex to the plasma membrane and formation of adhesion complexes [49]. Some characteristics of CRIM1 are consistent with participation in this pathway. For example, in vascular endothelial cells, CRIM1 moves to the membrane from the ER upon activation with an inflammatory stimulus [21]. It has also been shown that that CRIM1 can interact with bone morphogenetic proteins via its extracellular domain and can retain them in the ER as a way of suppressing their activity [22]. Combined with data presented in this report, these findings might suggest that a critical cellular location for CRIM1 is the ER and furthermore, that CRIM1 might associate with ß-catenin and cadherins in this location. Further investigation of this proposal is required.
On the approximation of interaction effect models by Hadamard powers of the additive genomic relationship
Whole genome epistasis models with interactions between different loci can be approximated by genomic relationship models based on Hadamard powers of the additive genomic relationship. We illustrate that the quality of this approximation reduces when the degree of interaction d increases. Moreover, considering relationship models defined as weighted sums of interactions of different degrees, we investigate the impact of this decreasing quality of approximation of the summands on the approximation of the weighted sum. Our results indicate that these approximations remain on a reliable level, but their quality reduces when the weights of interactions of higher degrees do not decrease quickly. © 2020 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). Introduction With the broad availability of genomic data of individual animals or plant lines, genomic prediction (Meuwissen et al., 2001) has been widely implemented in modern breeding programs (Hayes et al., 2009; Jannink et al., 2010; Meuwissen et al., 2016; Crossa et al., 2017). The standard method, genomic best linear unbiased prediction (GBLUP), is based on the commonly used additive effect model (Falconer and Mackay, 1996; Gianola et al., 2009). Given the n × p matrix M describing the marker states of the n individuals at p loci, the additive effect model is defined by

y = 1_n µ + Mβ + ε.  (1)

Here, y is the n × 1 vector of phenotypic data, 1_n an n × 1 vector with each entry equal to 1, µ a fixed effect, β a p × 1 vector of marker effects and ε an n × 1 vector of errors. Moreover, usually the additional assumptions β ∼ N(0, σ²_β I_p) and ε ∼ N(0, σ²_ε I_n) are made, where I_p and I_n denote the identity matrix of the respective dimension. (Note that the term GBLUP usually refers to a reformulated version of Eq. (1) using g := Mβ, but this distinction will not be used here since both models are statistically equivalent.) Having estimated or predicted the relevant parameters as μ̂ and β̂, the predicted effect of a change at an arbitrary locus k is independent of the state of other markers. This characteristic seems contrary to the function of biological systems, which rely on interaction and in which the effect of a change at locus k is therefore assumed to depend on the genetic background. The discrepancy between the intrinsic logic of the statistical additive model and the mechanistics of biological processes may provide a motivation to consider "non-additive" relationships for the prediction of (non-additive) total genetic values or phenotypes (de los Campos et al., 2009; Ober et al., 2011). An epistasis model extending the additive setup of Eq. (1) with products of markers as additional predictors (Ober et al., 2015; Jiang and Reif, 2015; Martini et al., 2016) is called extended genomic best linear unbiased prediction (EGBLUP) (Jiang and Reif, 2015). In more detail, this pairwise epistasis model is defined by

y_i = µ + M_{i,•} β + Σ_{k=1,...,p−1; l>k} M_{i,k} M_{i,l} h_{k,l} + ε_i.  (2)

Here, µ and β are as previously defined and M_{i,•} denotes the ith row of M, that is, the genomic data of individual i. Moreover, h_{k,l} is the interaction effect of loci k and l, with h_{k,l} i.i.d. ∼ N(0, σ²_h) and all random effects being stochastically independent of each other.
It has been demonstrated that model (2) is equivalent (Martini et al., 2016) to the model

y = 1_n µ + g_1 + g_2 + ε,  (3)

in which the covariance matrix of the pairwise epistatic effect g_2 is proportional to the relationship matrix

H^(2) := 0.5 (G ∘ G − (M ∘ M)(M ∘ M)′),  (4)

with G := MM′. The operator ∘ denotes here the Hadamard, that is, the entry-wise product. This model and some variations have been shown to be able to increase predictive ability in some instances compared to the additive GBLUP model (Su et al., 2012; Ober et al., 2015; Jiang and Reif, 2015; Martini et al., 2016). These types of relationship matrices have also been used to control genetic background effects in association studies (Xu, 2013). In Eq. (2), the interactions are modeled pairwise and only between different loci (l > k). Some variations of this model have been used in the literature (Jiang and Reif, 2015; Martini et al., 2016), defined by allowing interactions of loci with themselves (l ≥ k; Eq. (5)) or by modeling p² interactions, counting the interaction between different loci twice (k, l = 1, . . . , p; Eq. (6)). It has been shown that the interaction terms of Eqs. (5)-(6) translate to covariance matrices of g_2 of Eq. (3) proportional to 0.5 (G ∘ G + (M ∘ M)(M ∘ M)′) and to G ∘ G, respectively (Eqs. (7)-(8); Martini et al., 2016). Moreover, it has also been demonstrated that, for higher degrees of interaction d, the sum of all p^d d-wise interaction terms translates to Hadamard powers of G (Martini et al., 2016); for instance, for d = 4, the term

Σ_{k,l,m,o=1,...,p} M_{i,k} M_{i,l} M_{i,m} M_{i,o} h_{k,l,m,o}  (9)

has a covariance matrix proportional to G^∘4, and analogously for any degree d. We are not aware of a general concise formula that generalizes Eq. (4) to a general higher degree d (the cases d = 3 and d = 4 are treated in the Appendix). Note here that Eq. (9) also includes terms of the form M³_{i,k}, that is, a three-way interaction within a locus. Terms of this type of intra-locus interaction of higher degree d may be difficult to interpret from a quantitative genetics point of view. It has been argued, in the context of a specific marker coding (VanRaden, 2008), that for an increasing number of markers p the quality of the approximation of Eq. (4), which models only the interactions between different loci, by Eq. (8), which models p² interactions, improves (Jiang and Reif, 2015). The reason for this improving quality of approximation is, roughly spoken, a result of (M ∘ M)(M ∘ M)′ getting relatively small compared to (G ∘ G), and of the factor 0.5 being compensated by an adapted estimate of the variance component σ²_h. For the case of d = 2, Eq. (4) allows model (2) to be used without explicitly constructing the interaction covariates, whose number grows very fast. We show that the argumentation which illustrates that, for d = 2 and increasing p, the quality of the approximation of Eq. (4) by Eq. (8) improves can analogously be adapted to any fixed degree d and increasing p. This means that for any fixed degree of interaction d and increasing p, the quality of the approximation of a model based on interactions between different loci by a Hadamard power of the additive genomic relationship improves. A different situation, which is important for limit considerations of models with increasing degree of interaction, is increasing d for fixed p. The incorporation of higher-degree interactions can lead to new relationship models that aim at reflecting biological complexity better. We show that the quality of the approximation of a model with interactions between different loci by Hadamard powers of G reduces when p is fixed and d increases. Moreover, we investigate the limit behavior of weighted sums of interactions of increasing degree, in particular their reliability when substituting interactions between different loci by Hadamard powers of the additive relationship.
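The identity behind Eq. (4) can be checked numerically. The sketch below is a minimal illustration with random toy markers rather than any of the data sets discussed in this paper: it builds the relationship matrix of the pairwise model (2) from explicit interaction covariates and compares it with 0.5 (G ∘ G − (M ∘ M)(M ∘ M)′).

```python
# Minimal numerical check of the identity behind Eq. (4): the relationship matrix
# built from explicit pairwise interaction covariates (k < l) equals
# 0.5 * (G∘G - (M∘M)(M∘M)'), with G = MM'. Marker matrix here is random toy data.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p = 8, 6
M = rng.choice([-1.0, 1.0], size=(n, p))      # toy {±1}-coded markers

# Explicit pairwise interaction covariates M_ik * M_il for k < l.
Z = np.column_stack([M[:, k] * M[:, l] for k, l in combinations(range(p), 2)])
H2_explicit = Z @ Z.T

G = M @ M.T
MM = (M * M) @ (M * M).T                      # (M∘M)(M∘M)'
H2_identity = 0.5 * (G * G - MM)              # '*' is the Hadamard product here

print(np.allclose(H2_explicit, H2_identity))  # True
```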
We show that the approximation may be less reliable if the weights of higher-degree interactions do not decrease fast enough. As a remark, please be aware of the problem that the coding of the markers has an impact on epistasis models which use the products of marker values as additional predictor variables (He and Parida, 2016; Martini et al., 2017, 2019). However, this topic of how to code markers will be ignored in this manuscript. The presented results are independent of the coding of markers. Some words on an improvement of the approximation Let us first make some theoretical considerations on what "the quality of an approximation of Eq. (4) by Eq. (8) improves" shall mean. Let G_p denote the additive genomic relationship matrix based on p markers, and let H^(2)_p denote the corresponding relationship matrix built only from interactions between different loci. Considering the fact that a factor c ∈ R_+ can be compensated by estimating a different variance component, a first idea to make the statement on improving the quality of the approximation more precise could be to require that, entry-wise, G_p ∘ G_p and c · H^(2)_p approach each other for p → ∞ (Eq. (11)). This expression has earlier been used (Jiang and Reif, 2015) with c = 2 and for the specific situation of an allele-frequency-centered and scaled additive genomic relationship according to VanRaden (2008). (Note again that, even though subtracting the mean from each column will not have an effect on the prediction of additive effects (Strandén and Christensen, 2011; Martini et al., 2017), it has been shown that this transformation has an impact if the centered values are multiplied to model interactions (He and Parida, 2016; Martini et al., 2017, 2019).) In the current setup, with G := MM′, expression (11) raises some questions and would require case distinctions or restrictions. In addition to the formal question of which metric to use to define the limit, there are more conceptual questions. For instance, it is not clear how it should be decided which marker pattern a new column has when another marker column is added. Without a restriction on how to add a new column, one can find examples for which the limits in Eq. (11) are not defined, or for which a convergence of the entries to 0 leads to a situation in which Eq. (11) is satisfied but the approximation of the two matrices does not improve. Some examples can be found in the Appendix. A simple criterion for the quality of the approximation To avoid these complications, we choose a simple way to characterize how good the approximation is for fixed p and d = 2. This criterion will not solve the problem of how to add new marker columns when p increases, but gives a simple and well-defined equation. The model described in Eq. (8) models p² interactions, consisting of the interactions which we want to model (each of them twice) and additionally the p interactions of markers with themselves, which are not included in Eq. (4). Since model (4) is a submodel of G ∘ G, and since each interaction has the same influence on the relationship matrix due to assuming their effects to be independent and identically distributed, we can define a measure for the "error" of the approximation as the proportion of interactions which we model in G ∘ G but which are not included in Eq. (4). In the case of d = 2, this is given by the p² interactions which we model, minus twice the (p choose 2) interactions between different loci, relative to the total:

E_2(p) := (p² − 2 (p choose 2)) / p² = 1/p.  (12)

E_2 describes the error as the portion of interactions which we model in our approximation but which are not included in Eq. (4). We see that E_2(p) → 0 for p → ∞, which confirms relatively easily that G ∘ G is a good approximation for Eq. (4) if p is large. Note here that the difference between Eqs.
(2) and (6), which here (for degree 2) amounts to (not) including interactions of a marker with itself, can also be considered as the difference of the possible events when drawing from {1, . . . , p} with or without replacement. We have to subtract, from the set of events of drawing with replacement, those that are not possible when drawing without replacement. This view may facilitate comprehending the relation between the two models when considering higher degrees d afterwards. Expression E_2(p) is well-defined, but it only guarantees a relative statement: one can construct a two-individual example in which H^(2)_p is constantly the 0_{2×2} matrix. Thus, the approximation will not improve. The reason is here that the only interaction which is not zero is the interaction of the first marker with itself for individual 1. An improving approximation will only be given if the values of the entries do not degenerate in this way. General degree d Analogously, for general d ≤ p, Eq. (12) generalizes to

E_d(p) := 1 − (d! · (p choose d)) / p^d.  (13)

The term is built by subtracting from 1 the portion of required interactions (and their d! permutations). Also here, we see that E_d(p) → 0 for fixed d and p → ∞. However, for fixed p and increasing degree, the quality of the approximation reduces, and the error E_d(p) even reaches 1 when d is larger than p (recall that the binomial coefficient is defined as being zero when d > p). This means that the error tends to 0 when d is fixed and p is increasing, but it approaches 1 when p is fixed and d is increasing. In parts, this is an obvious result, because, dealing with a model with p markers, we can calculate the (p + 1)th Hadamard power G^∘(p+1), but there is no interaction between p + 1 different loci. Also note that E_d(p) is a strictly monotonically increasing function for fixed p and increasing d ≤ p; a proof of this statement can be found in the Appendix. To illustrate this observation, let us consider a small example. Example 1. Let us consider the case of five markers (p = 5) and the approximation of degree five (d = 5). Then E_5(5) = 1 − 5!/5^5 = 1 − 120/3125 ≈ 0.96. Example 1 illustrates that the quality of the approximation can decrease quickly when the number of markers is very small. Approximations for real genotypic data Let us consider a data set of real genotypes. We use the marker data of a wheat data set published by Crossa et al. (2010) and also provided by the R package BGLR (Pérez and de Los Campos, 2014). For more information on the data set, see Crossa et al. (2010). We use a {±1} coding of the marker data, start with p = 1 and take the first marker of the data set to calculate G_1 = M_1 M′_1 and the corresponding H^(1)_1. Note that for d = 1, H^(1)_p = G_p for any p, meaning that the additive matrices are the same; in other words, when drawing only one-element sets, it does not matter whether we draw with or without replacement. We then subsequently increase the number of markers by adding the following columns of the marker matrix and calculate the matrices G^∘d_p and H^(d)_p; the correlations of their entries are given in Table 1 for p ∈ {1, . . . , 25} and d ≤ p. We used the correlation of the entries as a similarity measure of the matrices because it is a simple criterion and independent of the data structure of a phenotype y. Since H^(d)_p does not exist for d > p, no correlation is given for these cases. However, it is clear that the error of the approximation in the previously discussed sense is 100% for these cases. We see that for any fixed d and increasing p, the correlation of the entries of G^∘d_p and H^(d)_p increases, but for any fixed p, the correlation reduces with increasing d.
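The error criterion and its two limits can be reproduced with a few lines of code. The sketch below assumes the reconstruction of Eq. (13) given above, E_d(p) = 1 − d!·(p choose d)/p^d, and evaluates it for Example 1 as well as for fixed d with growing p and fixed p with growing d.

```python
# Sketch of the approximation-error criterion, assuming the reconstruction of
# Eq. (13): E_d(p) = 1 - d! * C(p, d) / p**d (with C(p, d) = 0 for d > p).
from math import comb, factorial

def approximation_error(d: int, p: int) -> float:
    """Share of interactions modeled by the d-th Hadamard power of G that are
    not interactions between d different loci."""
    return 1.0 - factorial(d) * comb(p, d) / p**d

# Example 1: five markers, degree five.
print(round(approximation_error(5, 5), 4))            # ~0.9616

# Fixed d = 2, increasing p: the error shrinks like 1/p.
print([round(approximation_error(2, p), 3) for p in (5, 50, 500)])

# Fixed p = 25, increasing d: the error grows monotonically towards 1.
print([round(approximation_error(d, 25), 3) for d in (2, 5, 10, 25, 30)])
```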
An interesting aspect is that the correlation for d = 2 is directly equal to one, already for the case of p = 2. This is a result of using markers which only have two states coded as {±1}. Also if we consider the cases d = p, we see that the correlation of the matrices tends to 0 for increasing p, which has already been stated by Eq. (14). Limit considerations of models with higher degree interactions In the following, we would like to investigate empirically whether the decreasing quality of the approximation for higher degree interactions has an impact on limit considerations. Limit Problem 1. We would like to build a model that takes all interactions of p different loci into account. We assume for this limit model that the variance component σ 2 β remains the same for any degree d, that is any interaction effect of any degree comes from the same distribution. We can formulate the model which we are interested in as Since it is computationally demanding to calculate the matrices H (d) for higher degree interaction, we are interested in an approximation using Hadamard powers G •d . As discussed above, the matrix G •d counts each interaction which we aim to model in H (d) d! times. Moreover, additional interactions are included in G •d in which we are not interested. To give an equal weight to any interaction which we would like to model, we have to divide each G •d by d! to guarantee that the weights are adapted between degrees. Without this adjustment, we would model the interactions of degree two twice giving them twice the weight of the additive effects. Analogously, the interactions of degree three would be modeled six times, giving each of them six times the weight of an additive effect. An approximation of our desired relationship model can thus be given by where we include the inverse factorial to scale the matrices relatively to each other. Since each entry in Eq. (17) follows the exponential power series, it can be approximated for large p by Please recall that the operations are here meant entry-wise. In particular, the exponential function refers to the entry-wise exponential (and not the matrix exponential): (exp(G)) i,j := exp(G i,j ). Moreover, note here that this limit is not identical to the Gaussian kernel in a reproducing kernel Hilbert space approach, since the exponential function is not applied to the squared Euclidean distance but to the entries of G (which is slightly different from a limit consideration leading to the Gaussian kernel and presented by Jiang and Reif (2015)). A question is how good the approximation of the covariance model which we actually would like to model (Eq. (16)) by Eq. (18) is. Although the presence of the inverse factorials, which give the weights to the Hadamard powers of G in Eq. (17), suggests that the influence of higher degree interactions will quickly vanish, a general theoretical consideration is difficult since the quality of approximation also depends on how fast the entries of G p grow. For this reason, we use the relationship matrices calculated for the wheat data set (Table 1) as well as a {−1, 0, 1} coded maize data (for more details on the data see the section Data at the end of the manuscript) and consider the correlation of the matrices defined by Eqs. (16) and (18) for increasing p. The results are presented in Fig. 1. We see for the wheat data (blue line) an initially high correlation which is a result of only using one marker. 
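A simple numerical sketch of Limit Problem 1 is given below. It uses toy {±1} markers scaled by √p (a choice made here only to keep the entry-wise exponential numerically small, not the coding used for the wheat data) and correlates the factorially weighted sum of Hadamard powers with the entry-wise exponential of G. Note that whether the constant d = 0 term of the exponential series is included does not change the correlation of the entries.

```python
# Hedged sketch of Limit Problem 1: correlate the factorially down-weighted sum of
# Hadamard powers of G with the entry-wise exponential exp∘(G). Toy markers stand in
# for the wheat data; the sqrt(p) scaling is an assumption made for numerical comfort.
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
n, p = 20, 200
M = rng.choice([-1.0, 1.0], size=(n, p)) / np.sqrt(p)   # scaled toy coding
G = M @ M.T

D = 15                                                   # truncation degree of the series
weighted_sum = sum((G ** d) / factorial(d) for d in range(1, D + 1))
entrywise_exp = np.exp(G) - 1.0                          # exponential series without d = 0

iu = np.triu_indices(n)                                  # unique entries incl. diagonal
corr = np.corrcoef(weighted_sum[iu], entrywise_exp[iu])[0, 1]
print(round(corr, 4))
```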
Returning to the wheat curve in Fig. 1: since we have only one marker with the two possible values {±1}, each entry of G_1 has only two possibilities. Applying the non-linear exponential function still gives a matrix with only two values, which is perfectly correlated with G_1. This high correlation is then reduced when a second marker is introduced, and the correlation keeps decreasing until the increase in p improves the approximation sufficiently to push the correlation again towards 1. For the maize data (black line), the correlation starts on a lower level, since we are dealing with markers with three states, but increases quickly towards 1. Limit Problem 2. Let us assume that we would like to use a model in which the variance of the interactions of degree d increases with d! (Eq. (19)). It should be mentioned here that epistatic effects are usually defined as deviations from the fit defined by lower-degree interactions. This concept would translate to the assumption that the variance components decrease with increasing d. However, Eq. (19) is a valid covariance model, and we would like to investigate the effect of these increasing weights on the overall approximation. A reason for defining an increasing variance could be to give the higher-degree interactions more flexibility to capture some important interaction terms. We may approximate Eq. (19) by the unweighted sum of Hadamard powers of G (Eq. (20)), which converges for increasing p to its entry-wise geometric series limit (Eq. (21)) iff |G_{i,j}| < 1 ∀ i, j. The latter condition of the entries having absolute values smaller than 1 is essential, since otherwise the series of Eq. (20) will not converge. Recall again that all operations are meant entry-wise. This condition of no absolute entry being larger than or equal to 1 is, for instance, given when we are dealing with {−1, 0, 1}-coded data, the marker data is divided by the square root of p (which is equal to dividing G by p), and none of the lines is completely homozygous. The wheat data is not appropriate for this limit, since it has only two values and the entries on the diagonal would be equal to 1 (when dividing the markers by √p). However, we consider the behavior of the correlation for the maize data (with the marker values divided by √p).

Fig. 2. Correlation of the entries of the matrices defined by Eqs. (19) and (21) for increasing p and the maize data (coded as {−1, 0, 1}/√p).

Fig. 2 illustrates that, since the weights of the higher-degree interactions are not reduced in Eq. (20), the accumulated error across the different degrees d matters more than in Limit Problem 1. The correlation of the entries of the matrices defined by Eqs. (19) and (21) reduces, and a reversal of this trend cannot be observed up to p = 30, which was the maximal value for which we were able to calculate H^(p) with our approach. Note here that for values of p below five, some of the included lines were completely homozygous, which leads to the maximal value of the additive relationship matrix being 1 and thus Eq. (21) not being defined. Therefore, no correlation is given for these points. Summary and outlook We gave an explicit formula to quantify the error when approximating a model with interactions between different loci by Hadamard powers of the additive genomic relationship (Eq. (13)). The criterion used to quantify the quality of the approximation also struggles with the problem of how to add a new column of markers when p increases, but gives a simple and well-defined equation. We illustrated that when the number of markers p is fixed and d increases, the quality of the approximation of H^(d) by G^∘d decreases.
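Limit Problem 2, discussed above, can be sketched analogously. The toy example below assumes that Eq. (20) is the unweighted sum of Hadamard powers starting at d = 1 and that Eq. (21) is its entry-wise geometric limit; since a constant shift does not affect the correlation of the entries, the conclusion is the same if the series starts at d = 0. Markers are coded {−1, 0, 1}/√p and drawn so that no line is completely homozygous, mirroring the condition |G_{i,j}| < 1.

```python
# Hedged sketch of Limit Problem 2: unweighted sum of Hadamard powers of G versus its
# entry-wise geometric limit, which exists only if every |G_ij| < 1. Toy {-1, 0, 1}
# markers divided by sqrt(p) stand in for the maize data; one locus is forced to be
# heterozygous so that no line is completely homozygous.
import numpy as np

rng = np.random.default_rng(2)
n, p = 15, 30
M = rng.choice([-1.0, 0.0, 1.0], size=(n, p))
M[:, 0] = 0.0                                   # at least one heterozygous locus per line
M /= np.sqrt(p)
G = M @ M.T
assert np.abs(G).max() < 1.0                    # convergence condition of Eq. (20)

D = 40                                          # truncation degree of the series
series_sum = sum(G ** d for d in range(1, D + 1))
geometric_limit = G / (1.0 - G)                 # entry-wise limit of sum_{d>=1} G∘d

iu = np.triu_indices(n)
print(round(np.corrcoef(series_sum[iu], geometric_limit[iu])[0, 1], 4))
```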
For limit considerations such as Limit Problem 1, where the impact of higher degree interactions reduces quickly, this reduced quality does not have a big impact on the quality of approximation of the overall limit. However -as illustrated by Limit Problem 2 -for models in which the weight is not reduced fast enough with increasing degree d, the overall limit can have a lower (but still high) correlation with what is supposed to be approximated. Due to the computational restrictions, we were not able to calculate all H (d) with d ≤ p for p larger than 30. Thus, we cannot judge the behavior of our empirical considerations of Limit Problems 1 and 2 above 30 markers. An interesting theoretical problem -which would also allow to investigate the behavior of limits for larger values of p -would be to find a concise equation generalizing Eq. (4) to any degree d. Data As described above, the wheat data has been published by Crossa et al. (2010) and is also provided by the R package BGLR (Pérez and de Los Campos, 2014). The maize data was provided by the same publication (Crossa et al., 2010). We used the file dataCorn_SS_asi.RData which is available in File S1 of following link https://www.genetics.org/content/186/2/713.supplemental We reduced the set to the 101 lines which had at least one heterozygous (0) marker within the first five markers. Since calculating H (d) is computationally demanding, we restricted us to this subset for which Eq. (21) is already defined for p = 5. This reduced data set was used for Limit Problems 1 and 2. Acknowledgments We are thankful for the financial support provided by CIMMYT, CGIAR CRP WHEAT, the Bill & Melinda Gates Foundation, as well as the USAID projects (Cornell University and Kansas State University) that generated the CIMMYT wheat data analyzed in this study. We acknowledge the financial support provided by the Foundation for Research Levy on Agricultural Products (FFL) and the Agricultural Agreement Research Fund (JA) in Norway through NFR grant 267806. Moreover, we thank two anonymous referees, especially the one who pointed out an important error in the Appendix. Appendix A. Extensions of Eq. (4) to d ∈ {3, 4} and illustration of the general problem As pointed out earlier, the difference between G •d and H (d) is represented by the difference in the sets of possible events when drawing d times from {1, . . . , p} either with or without replacement. For d = 2, we can simply use the matrix corresponding to the set of interactions {1, . . . , p}×{1, . . . , p}, remove the covariance matrix coming from the tuples (i, i) and divide the remaining matrix by 2 to account for not considering the order of the draws, which means (i, j) is considered to be equal to (j, i). The matrix G •2 corresponds to {1, . . . , p} × {1, . . . , p} and the matrix Before, we go to the cases of d ∈ {3, 4}, recall that we are looking for a concise formula, that is one that can be easily calculated using Hadamard products of M and G. It is obvious that there are equations which allow to calculate H (d) , for instance the straight-forward approach used in this manuscript, but we are looking for a formula allowing the use of Hadamard products and simplifying the computation. Let us now consider the case of d = 3. Analogously to the case of d = 2, we identify G •3 with the set of interactions described by {1, . . . , p} × {1, . . . , p} × {1, . . . , p}. We have to subtract the matrix corresponding to {(i, i) i=1,...,p } × {1, . . . , p} and its permutations. 
This can be represented by the matrix 3 However, this matrix also includes the covariance generated by This procedure is similar to the ''principle of inclusion and exclusion" known from basic set theory and gives the following identity Analogously, a formula for degree d = 4 can be derived. However, we have to consider here the sets with their corresponding permutations. Moreover, an important point is to be aware of the fact that the sets are in parts subsets of others. Thus, we obtain . Each summand on the right hand side corresponds to one of the sets mentioned above. For d = 5, the following sets would have to be considered: ) In particular this means that the diagonal will increase with p and ''tend to ∞", but the off-diagonal elements will alternate. Here, the limit ofH is not defined (even if ''∞" is considered as a limit), that is Eq. (11) does not make sense for this example. Note here that one could argue that the limit G p /p would be well-defined and that we have to use this alternative definition of the genomic relationship matrix. However, we can define a more complicated example for which G p /p will neither converge. B.2. An example for which G p /p does not converge and thus Eq. (11) is not defined ) . We would like to thank an anonymous reviewer here who pointed out that -based on the above definition -we can write Consequently, G p /p has a subsequence converging to . Thus, G p /p does not converge. The examples described above aimed at illustrating the problem of possibly alternating entries. Another important situation to consider is the convergence of the entries of G p to zero. Both matrices have the same limit, which means Eq. (11) is fulfilled with c = 1, but inH p the variances will always have the double weight of the covariances, whereas all entries are identical in H p . Thus, the quality of approximation when approximating one by the other should remain on the same level -independent of p.
A miRNA-based signature predicts development of disease recurrence in HER2 positive breast cancer after adjuvant trastuzumab-based treatment Approximately 20% of HER2 positive breast cancer develops disease recurrence after adjuvant trastuzumab treatment. This study aimed to develop a molecular prognostic model that can reliably stratify patients by risk of developing disease recurrence. Using miRNA microarrays, nine miRNAs that differentially expressed between the recurrent and non-recurrent patients were identified. Then, we validated the expression of these miRNAs using qRT-PCR in training set (n = 101), and generated a 2-miRNA (miR-4734 and miR-150-5p) based prognostic signature. The prognostic accuracy of this classifier was further confirmed in an internal testing set (n = 57), and an external independent testing set (n = 53). Besides, by comparing the ROC curves, we found the incorporation of this miRNA based classifier into TNM stage could improve the prognostic performance of TNM system. The results indicated the 2-miRNA based signature was a reliable prognostic biomarker for patients with HER2 positive breast cancer. Several studies investigate the molecular predictors of patients with HER2 positive breast cancer. It was reported that single nucleotide polymorphism (SNP) of metastasis-associated in colon cancer-1 (MACC1) gene, a key regulator of the HGF/MET pathway, was significantly associated with clinical outcome in HER2 positive breast cancer. Increased risk for progression or death was observed in carriers of the G-allele of rs1990172 and T-allele of rs975263, respectively. While C-allele of rs3735615 showed a significant protective impact on event-free survival as well as overall survival 11 . In another study, the presence of HER2/HER3 heterodimers and the loss of p21 expression were also discovered to predict a significantly poorer clinical outcome in patients when submitted to adjuvant chemotherapy and trastuzumab. But these biomarkers still require validation and are not part of standard clinical practice. miRNAs are evolutionally conserved, small (18-25 nucleotides), endogenously expressed RNAs, which has emerged as critical modulators involved in malignant activity 12 . Furthermore, several miRNAs have been reported to be aberrantly expressed in HER2 positive breast cancer cell lines and associated with the resistance to anti-HER2 treatment [13][14][15][16][17] . However, the prognostic value of miRNA expression in clinical tumor tissue is not fully tested in this specific population. In this study, patients with HER2 positive breast cancer who had undergone radical resection and completed adjuvant chemotherapy and trastuzumab were enrolled. We performed the comprehensive miRNA analysis and generated a multi-miRNA based signature to predict DFS. Prognostic accuracy of this classifier was assessed in training set and internal testing set, and further confirmed in an independent testing set. We also compared the prognostic efficacy with traditional clinical factors. Results Development of the miRNA prognostic classifier. A total of 211 patients who have undergone radical surgical resection with histologically negative resection margins followed by adjuvant chemotherapy and trastuzumab were included. Table 1 showed clinicopathological characteristics of the training set (101 patients), internal testing set (57 patients), and external independent testing sets (53 patients). HR positive tumor comprised 50.5%, 65.0%, and 68% of patients in each cohort, respectively. 
The median follow-up time was 58.4 months (IQR 42.8-76.9), and 49 of 211 patients (23.2%) developed tumor relapse during the follow-up period. Firstly, we compared global and targeted miRNA expression profiles in another 14 FFPE primary breast tumor specimens, including seven non-recurrent cases (group A) and seven recurrent cases (group B). There was no statistically significant difference between the two groups in terms of clinical characteristics, except for DFS (Supplementary Table 1). The median DFS was 81.5 ± 37.0 and 24.2 ± 7.1 months in group A and group B, respectively. As a result, nine miRNAs that were significantly differentially expressed between the two groups were identified (Table 2; Supplementary Figure 1). Then, the expression of the nine miRNAs was confirmed using qRT-PCR analysis in the training set. The optimum cutoff values for these candidate miRNAs were generated by X-tile plots, which translate miRNA data from continuous variables into categorical variables (high expression or low expression) (Supplementary Figure 2A-I). After that, we put each miRNA status (high or low expression), together with DFS data, into a Cox regression model, and thus identified two miRNAs (miR-150-5p and miR-4734) that were independently and significantly associated with DFS. The risk score of each patient in the training set was calculated and then entered into an X-tile plot. As a result, −0.1 was selected as the optimal cut-off value of the risk score (Supplementary Figure 2J). Therefore, we classified patients with a risk score ≥ −0.1 as the high-risk group, and those with a risk score < −0.1 as the low-risk group. We further compared the prognostic performance of the 9-miRNA model with the 2-miRNA model in the training set using time-dependent ROC curves. As a result, the 2-miRNA set was significantly superior to the 9-miRNA set (p = 0.034) (Supplementary Figure 3A). Besides, the Kaplan-Meier curves also suggested the 2-miRNA set had an advantage over the 9-miRNA set (Supplementary Figure 3B), supporting the establishment of the 2-miRNA signature. Validation of the miRNA prognostic classifier. Patients in the low-risk group generally had better disease-free survival than those in the high-risk group. There was no significant difference in the distribution of clinicopathological features between the high-risk and low-risk groups in each set (Table 1). In the training set, 5-year disease-free survival was 59.0% for the high-risk group and 89.2% for the low-risk group (hazard ratio [HR] 5.35, 95% CI 2.13-13.44; p < 0.001) (Fig. 1B). We did the same analysis in the internal testing cohort: five-year disease-free survival was 45.8% for the high-risk group and 87.5% for the low-risk group (HR 3.71, 95% CI 1.08-12.74; p = 0.025) (Fig. 1D). To confirm that the 2-miRNA based classifier had consistent prognostic value in different populations, we applied it to the external independent testing set. Five-year disease-free survival was 38.9% for the high-risk group and 80.1% for the low-risk group (HR 3.43, 95% CI 1.35-8.69; p = 0.006) (Fig. 1F). Univariate analysis showed that the 2-miRNA signature was significantly associated with DFS in each cohort (Supplementary Table 2). After multivariable adjustment for other prognostic factors, including HR status, TNM stage, tumor grade and age, the 2-miRNA based model remained a powerful and independent factor in the entire cohort of 211 cases (HR 4.63, 95% CI 2.45-8.74, p < 0.0001) (Table 4).
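The risk-score classification described above amounts to a linear predictor over the two dichotomized miRNAs, thresholded at the X-tile cut-off of −0.1. The sketch below is purely illustrative: the Cox coefficients and patient values are hypothetical placeholders, not the estimates obtained in this study.

```python
# Hypothetical sketch of the risk-score classification: a Cox-model linear predictor
# over the two miRNAs, dichotomized at the X-tile cut-off of -0.1. The coefficients
# below are placeholders, not the values estimated in the study.
import numpy as np
import pandas as pd

coefficients = {"miR_150_5p_high": 1.2, "miR_4734_high": -0.9}   # hypothetical Cox betas

patients = pd.DataFrame({
    "miR_150_5p_high": [1, 0, 1, 0],    # 1 = high expression, 0 = low (per X-tile cut-offs)
    "miR_4734_high":   [0, 1, 1, 0],
})

patients["risk_score"] = (
    patients["miR_150_5p_high"] * coefficients["miR_150_5p_high"]
    + patients["miR_4734_high"] * coefficients["miR_4734_high"]
)
patients["risk_group"] = np.where(patients["risk_score"] >= -0.1, "high-risk", "low-risk")
print(patients)
```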
When stratified by clinicopathological risk factors, the 2-miRNA based classifier still showed clinically and statistically significant prognostic effect in all subgroups (Fig. 2). Comparing the prognostic performance of miRNA classifier with other clinicopathological factors. To further evaluate the prognostic performance of the miRNA signature, we assessed the prognostic accuracy of the 2-miRNA based classifier with time-dependent ROC analysis at five years, and calculated the AUC of the ROC curves for disease recurrence in all 211 patients. Collectively, our results demonstrate that expression of a small set of miRNA, measured from primary breast cancer tissues at initial diagnosis, was a valid prognostic indicator and will improve the prognostic capacity of AJCC stage for the development of disease recurrence. Discussion In this study, we developed and confirmed, for the first time, a novel prognostic model based on 2-miRNA expression to improve the prediction of disease recurrence in patients with HER2 positive breast cancer who completed standard treatment. Our results clearly demonstrated that this classifier can successfully stratified patients into two groups by their risk of tumor recurrence, regardless of the clinical features. Furthermore, this signature predicted the five year DFS better than other clinicopathological factors, and added prognostic value to the TNM staging system. The value of miRNA as prognostic biomarkers has been increasingly explored [18][19][20] . However, it have not been comprehensively studied in patients with primary HER2 positive breast cancer. Jung et al. 14 showed that circulating miR-210 levels were associated with trastuzumab sensitivity, tumor presence, and lymph node metastases in patients who received neo-adjuvant trastuzumab based chemotherapy. In contrast to short-term efficacy, the present study focused on the association between miRNA and long-term benefit of trastuzumab based treatment. We accessed to large numbers of primary tumor tissues with extensive clinical follow-up in distinct study cohorts to confirm the robustness of this miRNA based signature as a useful predictor of long-term prognosis. Our results suggested that incorporation of miRNA signature into the conventional clinical factors can provide more accurate prognostic information. The miRNA signature successfully distinguished patients with similar clinical features into distinct groups depending on their risk of tumor recurrence. Besides, combination of miRNA signature and TNM system improved the prognostic predict performance than other models including miRNA alone or TNM stage alone. Therefore, the novel miRNA classifier is valuable, in supplement of current TNM system, to define more accurate prognosis, and had the potential to improve the management of HER2 positive breast cancer patient. Our data suggested that patients with high risk of recurrence may be inadequately treated with the currently available treatment. Presently, strategies of escalating adjuvant anti-HER2 treatment have been pursued. Neratinib significantly reduced the risk of recurrence by 33% versus placebo at first two years, in patients who have completed the chemotherapy and one year of trastuzumab 21 . The effect of combination of trastuzumab and pertuzumab in adjuvant setting was also examined in phase III APHINITY trial. 
Therefore, development of this miRNA based prognostic assay will contribute to identify patients who may benefit most from more extensive adjuvant therapies, and thus help to tailor the adjuvant anti-HER2 treatment. The biologic function of the two miRNAs in breast cancer still needs to be established. Increased expression of mir150 was reported to correlate with poorer clinical outcome in intrahepatic cholangiocarcinoma 22 and non-small cell lung cancer 23 . Besides, recent evidence showed that mir150 was member of a miRNA based prognostic model for primary melanoma, and was associated with CD45+ TILs in tumor tissues 18 . On the other hand, mir4734 is a newly identified miRNA in breast cancer by extensive next-generation sequencing analysis. It encodes within the ERBB2/Her2 gene, which is amplified in HER2 positive breast cancer and cause the clinically genomic aberration 24 . Our work provided new evidence revealing the association between mir4734 expression and clinical outcome of HER2 positive breast cancer, which may aid further exploration of potent biological function. The major limitation of this work is the small sample size of the original cohort selected for microarray study. Be aware of this issue, several efforts were made to make up for the deficiency. Firstly, the original groups of patients for miRNA screening were carefully selected, matching all the clinical prognostic factors well, and making DFS being the only significantly different factor. Secondly, the preliminary result of microarray experiment was strictly validated with larger sample size using appropriate methods. Besides, a large, multicenter prospective study to assess the robustness of prognostic signature in the general HER2 positive breast cancer population is required. In summary, the described miRNA signature represented the first step to develop a molecular prognostic assay for HER2 positive breast cancer. We believe such a model has the potential to improve the management of this specific population. Methods Study population. We used formalin-fixed paraffin-embedded (FFPE) tissue samples from 211 patients with stage I-III HER2 positive breast cancer. For the training and internal testing set, data were obtained from 158 patients in Cancer Hospital, Chinese Academy of Medical Sciences, Beijing, China, between June 1, 2000, and June 30, 2015. We used computer-generated random numbers to assign 101 of these patients to the training set, and 57 patients to the internal testing set. We enrolled another 53 patients, with the same criteria as above, from other 10 hospitals in China as the independent validation set. HER2-positive was defined as 3+ on immunohistochemical [IHC] analysis or 2+ with gene amplification by fluorescence in situ hybridization [FISH]. Informed consent was obtained from all patients and approval acquired by the Institutional Review Board. Clinical information relevant to this study include date of diagnosis, date of recurrence or last follow-up or death, age, HR status, tumor grade, TNM stage for primary tumors and menopausal status. We defined disease free survival as the time from the date of surgery to the date of confirmed tumor relapse or the date of last follow-up visit for disease-free patients. We excluded patients who had no FFPE tumor sample from initial diagnosis, or insufficient RNA (less than 100 ng/μ L) available. All the patients provided informed consent. 
The study was performed in accordance with Declaration of Helsinki and approved by ethics committee in Cancer Hospital, Chinese Academy of Medical Sciences. Clinical specimens. All specimens were human primary breast cancer samples that were collected, formalin-fixed, and paraffin-embedded at the time of surgery. All tumors were classified according to the 2010 American Joint Committee on Cancer (AJCC) staging system RNA extraction. All the FFPE tissues comprised at least 80% tumor cells. RNA extraction was performed using the miRNeasy FFPE Kit (Qiagen) following manufacturer's recommendations, using the Xylene/Ethanol method for deparaffinization/rehydration. microRNA microarray expression profiling and data preprocessing. To generate miRNA expression profiles, we selected another panel of FFPE tumor samples from 14 patients including seven relapsed disease and seven non-relapsed. The miRNA profiling was performed using Agilent miRNA array, which contained probes interrogating 2006 human mature miRNAs from miRBase R19.0. Microarray experiments were conducted according to the manufacturer's instructions. Briefly, the miRNAs were labeled using the Agilent miRNA labeling reagent. Total RNA (100 ng) was dephosphorylated and ligated with pCp-Cy3, the labeled RNA was purified and hybridized to miRNA arrays. Images were scanned with the Agilent microarray scanner (Agilent), gridded, and analyzed using Agilent feature extraction software version 10.10. The miRNA array data were analyzed for data summarization, normalization and quality control by using the GeneSpring software V12 (Agilent). To select the differentially expressed genes, we used threshold values of 2-fold change and a Benjamini-Hochberg corrected p vlaue of 0.05. The data was Log2 transformed and median centered by genes using the Adjust Data function of CLUSTER 3.0 software then further analyzed with hierarchical clustering with average linkage. Finally, we performed tree visualization by using Java Treeview (Stanford University School of Medicine, Stanford, CA, USA). microRNA real-time qPCR and data processing. On the basis of the miRNA microarray results, we further examined miRNA expression using qRT-PCR to analyze the 211 FFPE samples to validate the prognostic value of every candidate miRNA. One microgramme of RNA was reverse-transcribed in 25-mL reactions using the Superscript II reverse transcriptase (Invitrogen) according to the manufacturer's instructions. Quantitative real-time PCR (qPCR) was conducted using the SYBR Premix Ex TaqTM II (TliRNaseH Plus) kit (TaKaRa, Japan) with the Bio-Rad (USA) machine. U6 small nuclear RNA was used as internal normalized references. Expression levels of individual miRNA were determined by − Δ CT approach (Δ CT = CT miRNA − CT U6 RNA).
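The −ΔCT normalization used for the qRT-PCR data is a one-line calculation; the sketch below shows it with illustrative CT values only.

```python
# Minimal sketch of the -ΔCT normalization described above: relative expression of a
# miRNA is taken as -(CT_miRNA - CT_U6). The CT values here are illustrative only.
def minus_delta_ct(ct_mirna: float, ct_u6: float) -> float:
    return -(ct_mirna - ct_u6)

print(minus_delta_ct(ct_mirna=28.4, ct_u6=22.1))   # -6.3; more negative = lower expression
```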
TRENDS IN SUSTAINABLE CIRCULAR EDUCATION TRANSFORMATION: A CASE OF FINLAND . The research presents the case study aimed at in-depth studying of experience of Finland in transition to sustainable circular economy and education. The country is chosen as it has become the first one in taking measures to integrate the Sustainable Development Goals into national economics and education. These two aspects are analysed to seek patterns and determine trends that can be generalised to other countries. The article investigates the current trends in the transition to sustainable circular economy and education in Finland on macro (the EU and the EHEA) and micro levels (participants of the educational process – national government, the labour market and higher education). Particular attention is paid to higher education and the labour market interaction – key actors enhancing decent work and economic growth as well as environmental awareness. Among the trends in higher education, there are the following: introduction of the circular economy principles in education, compliance of training with the goals of circular economy transformation, digitalisation of educational services at all levels of education, higher education modification, the new paradigm of teaching and learning, circulation of knowledge and skills, changing the composition and role of participants in the educational process. Finally, the research results in recommendations to encourage higher education importance in developing a high level of environmental knowledge, environmental awareness and culture among students and society in perspective. INTRODUCTION In 2015 the United Nations adopted the Sustainable Development Goals as a universal measure to ensure balanced social, economic and environmental sustainability by 2030. Therefore, environmental sustainability is a core goal of the post-2015 agenda (MDGR, 2015, p. 61). Policymakers believe that the only way to gain this goal is to transit to a new economic modelthe circular economy. In this research under "circular economy", we understand a "sustainable economic system where the economic growth is decoupled from the resources use, through the reduction and recirculation of natural resources" (Corona et al., 2019). Furthermore, followingVelenturf and Purnell (2021), we use the term "sustainable circular economy" that can "contribute positively to most of the sustainable development goals" in case if it fully integrates with sustainable development (Velenturf andPurnell, 2021, p. 1456). Properly managed, the transition to a circular economy can have strategic advantages at the macro-and micro-economic levels. The circular economy offers significant potential for innovation, employment opportunities, quality of work and, ultimately, a more inclusive economy that serves the needs of all people (GRSPCE, 2020, p. 4). That is why in 2015, the European Commission adopted an action plan to accelerate Europe's transition to a circular economy, increase global competitiveness, promote sustainable economic growth and create new jobs (FCEAP, 2015). In 2020, the EU developed The EU's new Circular Economy Action Plan, which is an ambitious plan to build a carbon-neutral economy (COM/2020(COM/ /98, 2020. This plan has become a vital element of the European Green Deal presented in 2021. The transition to a sustainable circular economy requires changes in society, the state, the labour market and education. 
Education is a crucial player in the transition because it "must prepare students and learners of all ages to find solutions for the challenges of today and the future" (ESD, 2021) and, on the other hand, must train "green" professionals in the circular economy who are expected to deliver sustainable development. That new educational paradigm requires a sustainable circular transformation of education systems, i.e. providing a new methodology and even a new education model. The situation is more challenging as national education systems are expected to create their own unique way of transformation, whose valuable experience can contribute to other countries' development. Within a short time, a pioneering country has emerged: Finland, which has been the first to gain substantial experience in sustainable circular economy and education transformation. Therefore, Finland attracts our attention, as lessons can be drawn from its successful experience of the last 3-5 years. Thus, our research is focused on revealing trends in sustainable circular education transformation in the case of Finland.

MATERIALS AND METHODS This research is a case study aimed at the in-depth study of the experience of one EU Member State (Finland) in the transition to a sustainable circular economy and education. The country is chosen as it has become the first to take measures to integrate the United Nations and EU sustainable development goals into national economics and education. These two aspects are analysed to seek patterns and determine trends that can be generalised to other countries. This type of study is appropriate for the research needs as it allows the collection and analysis of diverse information presented in national and international reports, policy documents, publications, websites, guidelines, educational programs and even MOOC content available on the Internet. Analysis of the collected material makes it possible to investigate trends in the transition to a sustainable circular economy and education in Finland and to generalise the results to other countries.

Finland as a pioneer in sustainable circular economy and education transformation The EU initiatives and regulations have prompted the EU Member States to move to a circular economy, with Finland being the first country to do so. The strengthening of the circular economy market is likely to positively impact the Finnish economy in the long run (Government Resolution on the Strategic Program for Circular Economy, 2021). In Finland, a positive impact on employment is projected in the consumer electronics, construction and forestry sectors (Gass and Roth, 2019; Bassi and Palaske, 2020). To implement European initiatives, the Finnish Innovation Fund Sitra developed the world's first national "Finnish road map to a circular economy 2016-2025" in 2016. This plan has become a powerful tool for initiating change and building a solid commitment to the circular economy in Finnish society. As a result, 88% of Finns surveyed in 2021 by the Fund Sitra believe that they can play an essential role in promoting the principles of the circular economy, and 82% expect new jobs to be created in the circular labour market (Järvinen and Sinervo, 2021). In addition, based on the Finnish experience, the Fund Sitra has developed guidelines to help the EU Member States create national action plans for the circular transition (Järvinen and Sinervo, 2020). Education has played an essential role in promoting the principles of the sustainable circular economy in society.
The promotion of the circular economy in Finland began in 2015 under the leadership of the Fund Sitra, with the "Circular economy teaching for all levels of education" project that in 2018 covered the entire education system of Finland. The projects focus on implementing circular economy skills and competence in national educational programs (Silvennoinen and Pajunen, 2019). As a result, since 2018, the teaching of circular economy in Finland has implemented at all levels of education, i.e. in secondary, vocational and higher education institutions. Accordingly, the organisation of professional education and training in circular economy is provided in a close relationship of all educational institutions, which is the key to continuing education and lifelong learning, and also helps to meet the students' needs in obtaining a "green job" (see Fig. 1). The content of professional education and training in the circular economy is based on the goals and needs of society in terms of economic transformation, which essential features are the principles of consistency and gradual complication of educational information and skills development. That is, the content begins to form from school with gradual complication and final disclosure in the higher education institution (HEI). The competency in the circular economy is developed vertically from the bottom to the top, when in secondary schools, students are taught to understand the environment and the principles of the circular economy, formed worldview and civic qualities in a world of new challenges and opportunities. In vocational schools, students obtain knowledge and skills, conduct applied research to improve the circular transformation of the selected industry sector. In HEIs, students develop professional competencies in circular economy that conduct basic or applied research considering the prospects for developing society, science, technology in the sustainable circular transition. It is worth noting, that Finland has developed the most significant amount of educational programs in higher education in the world. Finnish Universities of Applied Sciences and universities provide Bachelor's, Master's and PhD programs and courses in the circular economy. Based on successful pilot educational projects, national educators conclude that the circular economy cannot be tied to one discipline or job sector, as its success requires collaboration among different actors. It should be an inter-social economic model needed in various sectors of the economy. Therefore, the teaching of the circular economy should be interdisciplinary and cover different educational fields (Mäkiö and Virta, 2019). Trends in sustainable circular education transformation in macro-and micro-levels It is worth noting that the trends in sustainable circular education transformation in Finland closely intersect with global trends in the transition to a circular economy. Therefore, we identify trends in Finland considering global trends, particularly at the macro and micro levels. Following the recent researches (Mospan, 2019, p. 334;Mospan, 2022, p. 119;Sysoieva and Mospan, 2019, p. 80) the macro level is referred to the EU and the EHEA, and the micro-level includes the participants of the educational process and stakeholders benefited from education and training students in the circular economy. They are the state (national government), the labour market (enterprises/companies) and higher education (universities). 
There are the following trends in the sustainable circular economy and education transformation at the macro level, particularly at the EU and the EHEA levels. Greening of the economy. Countries worldwide are trying to reduce resource shortages, protect livelihoods and combat climate change. To this end, countries with developed economies are beginning to green their economies, where circular strategies are of paramount importance. Circular economy strategies that go beyond a narrow focus on energy consumption and contribute to resource efficiency can reduce 39% of global emissions. The movement towards a circular economy is accompanied by a decline in capital-intensive and extractive industries and an increase in labour-intensive circular processes, including reuse, reconstruction and repair of goods, and labour market automation . The emergence of new "green jobs". New green jobs are emerging to realise the ambitions of greening due to the introduction of new policies, programs and strategies for the transition to a circular economy by governments and enterprises. High labour intensity is provided mainly at the assembly, processing, and reconstruction of goods and materials enterprises. By 2030 the green economy is expected to create 24 million new jobs, while 7 million will be lost (ILO, 2020). Digitalisation of industry. The COVID-19 pandemic has intensified the digitalisation of all sectors of the economy and society as a whole. Experts believe that we have experienced a "turning point in the development of technology" during the lockdowns, which led to the transformation of the usual format of study, work, and life. Businesses and companies have been acutely aware of the rapid transformation of the work format (for example, the emergency transition from office to remote work from home). Digitalisation comes with increasing technological progress to improve resource-and energy-efficient practices to support the transition to a circular economy. The rapid technologies implementation into industry makes basic digital skills and lifelong learning extremely important. Construction is a crucial sector with the increasing number of digital tools. Current traditional construction works are gradually being transformed through the use of secondary materials and digitalisation. In practice, building information management systems (BIM), 3D printing, blockchain, robotics, machine learning, drones are increasingly used. The digitalisation of the construction sector contributes to its sustainable circular transformation, as the construction sector accounts for 28% of global emissions . Increasing life expectancy and work. People all over the world live and work longer. Life expectancy can stimulate new lifelong learning models, which allow employees to improve their skills during their professional careers. In the transition to a circular economy, older workers can offer in-depth knowledge of the economy and society and become a vital resource by reducing the working-age population. The skills of older workers are especially relevant in changing industries and governments through the greening of the economy. The in-depth professional experience and skills of older workers can be a crucial asset. Experienced employees working many years in the industry will have the opportunity to explore new ways to transform the industry into more resource-efficient and sustainable mode and will teach new workers basic skills combined with new ones. 
An essential prerequisite for this is a well-structured knowledge transfer system as a human resources tool. Experienced professionals are good at supporting knowledge sharing and applying inherited skills. Upskilling is crucial to reduce the skills gap - the mismatch of qualifications with labour market requirements. During the rapid transition to the circular economy, the promotion of digitalisation and the lengthening of working life, the paradigm of qualifications in the circular labour market is changing, with growing demand for workers with transversal skills. The development of transversal skills among workers can increase labour mobility and resilience in the circular labour market. These skills are also increasingly sought by students, helping them adapt successfully to a rapidly changing world and lead meaningful and productive lives. The importance of vocational education. Training and development of job-specific skills are essential for unlocking the circular economy's social, economic, and environmental potential. With proper management of this potential, the transition to a circular economy opens up opportunities for labour markets, emission reductions, and the fight against climate change and resource scarcity. Higher education and vocational education, in particular, are crucial mechanisms for providing the circular labour market with a skilled workforce and for stimulating society's transition to a circular economy. Vocational education is crucial to stimulate the implementation of circular strategies, promote equality and reduce the skills gap, support integration into the labour market, and sustain large-scale and continuous education and training. In the transition to a circular economy, the transformation of vocational education to meet the demands of the circular labour market is based on a deep understanding of the key skills needed for circular strategies in different contexts. This understanding can be translated into new qualifications, evaluation criteria, and competency frameworks backed by effective policy, funding, leadership, and stakeholder participation.

There are the following trends in sustainable circular economy transformation at the micro level, particularly at the level of participants in the educational process (state, labour market/enterprise and higher education). The tendencies at the state (national government) level include, in particular: Leadership in the implementation of circular innovations. Finland has become the first country in the world to recognise the transition to a circular economy as a national strategy. In 2016, the Finnish Innovation Fund Sitra, in collaboration with stakeholders, developed the world's first national Roadmap for the Circular Economy 2016-2025 and is disseminating this experience internationally. In recent years, various industries in Finland have improved resource efficiency. As a result, the circular material use rate (CMU) was about 7% in 2018. The modern circular economy accounts for about 5% of Finland's current GDP (GRSPCE, 2020, p. 2). Financing the sustainable development strategy. The transition to a circular economy at the state level is supported by significant investments by independent organisations and foundations. For example, Business Finland is funding the Bio and Circular Finland program with €150 million in 2018-2022. The Finnish Innovation Fund Sitra finances the implementation of the national Roadmap for the Circular Economy 2016-2025 (GRSPCE, 2020). Dissemination of experience in the sustainable circular economy and education transition.
Based on the Finnish experience, the Fund Sitra has developed guidelines to help the EU Member States make the circular transition and develop national plans, which can become a meaningful way to launch new circular economy initiatives (Järvinen and Sinervo, 2020). Moreover, educational resources and materials on the circular economy for all levels of education are freely accessible on the Internet. Interaction of higher education, the labour market and government in the educational process occurs at the national level, where each participant performs specific roles. Higher education, particularly the universities of applied sciences, organises innovative projects in the circular economy. Students in a multidisciplinary team implement a project that starts with an assignment (problem) and ends with a solution presented to a client. The project involves research, mapping, development and testing, innovation and rapid piloting, a Master's or PhD dissertation, or even the production of goods. The labour market, through companies and industrial facilities, puts forward requirements for the content and methods of education and training, as well as the competencies in the circular economy; places orders for educational products (projects); finances projects; directly participates in the educational process and the evaluation of project results; and employs graduates. The government finances and legalises the educational services of universities to train students in the circular economy, develops qualifications frameworks, and regulates the interaction of higher education with the circular labour market.

The tendencies at the labour market level include, in particular: The emergence of a circular labour market. In Finland, construction, textiles, food production, mining, forestry and electronics are considered promising industries for creating new jobs in the transition to a circular economy. In addition, a positive impact on employment is projected in the consumer electronics and forestry sectors (Bassi and Palaske, 2020; GRSPCE, 2021). Employment of students and graduates. Universities do not guarantee employment to graduates; they only outline the sectors of the economy where a graduate can find a job. However, during project-based learning in cooperation with the "client" (employers, professionals, government officials, and research organisations), companies can offer students internships or employ promising graduates (Mäkiö and Virta, 2019).

The trends at the higher education level include, in particular: Implementation of pilot projects in higher education. Professional education and training in the circular economy started with the implementation of pilot projects in 2017. With the financial support of the Fund Sitra for the "Circular economy teaching for all levels of education" project, new subjects in the circular economy have been introduced in vocational schools, universities and Universities of Applied Sciences in Finland. In addition, since 2018, professional training in the circular economy has been implemented at all levels of education based on the close connection of secondary, vocational and higher education. Introduction of the circular economy principles in education. Finland and the Netherlands have become the first EU Member States to implement the circular economy principles in education. As a result, 38% of HEIs in Finland and the Netherlands offer courses in the circular economy (CELL, 2019, p. 11).
Furthermore, since 2017 Finland has taught the circular economy at all levels of education, including primary, secondary and vocational schools, universities and Universities of Applied Sciences (Silvennoinen and Pajunen, 2019). The strategic priority of circular economy principles in education is defined in regulations, particularly in the Finnish Strategic Program for the Circular Economy, adopted in 2021. The program provides for various activities in education, in particular: inclusion of competencies in the circular economy in the education system and work-life skills; joint anticipation of the need for competence in the circular economy by higher education and the labour market; inclusion of the circular economy in curricula, qualification requirements and educational degrees; increasing the teaching of the circular economy in Finnish schools; encouraging universities and vocational schools to include circular economy educational programmes as a strategic priority; increasing continuing education in the circular economy for teachers; and accelerating cooperation, partnerships and research on the circular economy among companies, vocational schools, HEIs and research institutes (GRSPCE, 2021). The HEIs' role in training students in the circular economy. As mentioned above, higher education in Finland offers a significant number of educational programs and training courses in the circular economy. In addition, the principles of the circular economy have been integrated into lifelong learning. Universities of Applied Sciences in Finland have played an essential role in putting the circular economy principles into practice. Training compliance with the sustainable development goals of the circular economy transformation. The education and training goals are to develop competencies in the circular economy and innovative competencies among students. That process starts in secondary schools, which aim at developing awareness of the importance of the circular economy, acquaintance with its principles, and improved skills in mathematics and science. The vocational school goal is to study the circular economy tools related to a particular job. Based on the current demands of the circular labour market, Finnish universities provide programmes and courses in the following sectors: machinery and equipment, forestry and paper production, agriculture, retail and restaurant services, construction, and design (OCEF, 2015). Distance learning is a common mode of circular economy education and training in Finnish HEIs. Traditional face-to-face learning is maintained at the Bachelor's level, where courses are offered in either a traditional or a distance format, whereas Master's programs are designed exclusively for distance learning. Digitalisation of educational services at all levels of education. The widespread use of ICT and the development of open online learning courses in the circular economy on MOOC platforms are evidence of the gradual digitalisation of education. It is worth noting that educational technologies are widely utilised at all levels of education in Finland: in schools these are mainly online educational games and online courses, while in higher education ICT tools are used for teaching, learning and evaluating students' performance. Shift in the form of higher education. Although higher education remains institutional (full-time, part-time, distance), learning based on MOOCs is spreading rapidly. At the university, sessions alternate with workshops in the workplace.
It allows students to apply the knowledge gained during internships under the guidance of a teacher and a professional in a particular sector of the economy. Teamwork is a common form of student interaction in the classroom. Assessment measures include the presentation of the project or Master's thesis; learning outcomes are assessed through different project evaluation strategies, without multiple-choice tests or traditional examinations (Mäkiö and Virta, 2019). Shift in the teaching paradigm in HEIs. Teaching the circular economy differs from traditional classroom teaching in that it integrates three teaching approaches - interdisciplinary, project-based and vocational. It allows students to develop competence in the circular economy and be involved in problem-solving in real working life. Furthermore, the teacher organises the learning process in cooperation with clients of educational services (representatives of business, enterprises, companies, government, and professionals) (Mäkiö and Virta, 2019). Shift in the concept of higher education. Project-based learning is the primary method of organising learning; it involves the development of a project or product to a client's order (company, enterprise, government) by an interdisciplinary team of students. The project-based method aims to develop students' skills and to apply theoretical knowledge of the circular economy in project development or product manufacturing. It is a form of education that allows students to engage with professional, industry and social issues in the sustainable circular economy transformation. Circulation of knowledge and skills occurs through the exchange of experience and knowledge among students, teachers, employers, professionals and government officials. For example, during collaboration in team learning (the project-based method), students share their knowledge with others, and the knowledge of others (teachers, employers, professionals and government officials) is used in problem-solving while working on the project (Mäkiö and Virta, 2019). Shift in the structure and role of participants in the educational process occurs due to the cooperation of higher education with the labour market and the state in training students in the circular economy. The purpose of such cooperation is to retain young people in a particular sector of the economy and to maximise the potential for innovation by sharing knowledge between leading professionals, engineers, trainees and students during their internships in companies. Accordingly, the participants in sustainable circular education are teachers, clients (employers, professionals, government officials and research organisations) and students. Besides traditional teaching functions, the teacher acts as an intermediary between the university and the company in training students in the circular economy. Furthermore, the teacher considers the client's opinion at the stage of project development, consults with students and evaluates their learning results, and makes the final assessment decision together with the client or based on the client's assessment. Moreover, the client takes an active part in training and performs various organisational, training, consulting and assessing functions. In addition, the client orders the educational service (project) and finances its development. Students act as developers of projects or products, with their activities governed by a contract with the client.
DISCUSSION Based on the revealed trends and considering the recommendations offered for key participants in the educational process (the labour market, government and higher education) in bridging the gap in qualifications for the circular labour market, promising areas in sustainable circular education transformation are outlined here. Cooperation between government, higher education, the labour market, and industry is necessary to train students and achieve specific strategies for greening the economy. Professionals from different sectors of the economy, through cooperation with universities, integrate new skills into joint educational programs, consider the demands of the circular labour market, and promote a culture of lifelong learning in which employees, managers and team leaders are encouraged to upskill and pursue continuous professional development in line with innovations and technologies. Governments should support vocational education by providing targeted skills development and access to education and training; allocating funding for vocational education and training; coordinating the interaction of education with the circular labour market and industry; encouraging representatives of the labour market and higher education to take part in decision-making; and affirming the leading role of vocational education during the transition to a circular economy in the post-pandemic era. Knowledge of the circular economy should be improved and the new skills implemented in interdisciplinary courses. HEIs are expected to provide high-quality vocational education and promote adult learning opportunities in the circular economy. Creating new digital tools to combine online learning with on-the-job training will be crucial.
v3-fos-license
2020-05-21T09:18:26.988Z
2020-05-01T00:00:00.000
219430769
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.cureus.com/articles/29147-tumor-lysis-syndrome-caused-by-unrecognized-richters-transformation-of-chronic-lymphocytic-leukemia-treatment-with-venetoclax-for-suspected-disease-progression.pdf", "pdf_hash": "c7036b935d74d2ddcb10594a124344f4f0072a21", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43612", "s2fieldsofstudy": [ "Medicine" ], "sha1": "f5d69b13a138b3fe9b5ae1860526fa0c9df641de", "year": 2020 }
pes2o/s2orc
Tumor Lysis Syndrome Caused by Unrecognized Richter’s Transformation of Chronic Lymphocytic Leukemia: Treatment With Venetoclax for Suspected Disease Progression Richter’s transformation (RT) is defined as the transition of chronic lymphocytic leukemia (CLL) or small lymphocytic leukemia (SLL) into an aggressive lymphoma. The conversion generally leads to diffuse large B-cell lymphoma (DLBCL), but more aggressive forms such as Hodgkin lymphoma (HL) can also occur. RT is a rare complication of CLL. RT can be confused with CLL progression. Its identification is crucial because the management of lymphoma and CLL differ from each other. Furthermore, the use of certain agents for CLL such as venetoclax increases the risk of tumor lysis syndrome (TLS) in neoplasms with rapid replication such as DLBCL or CLL with hyperleukocytosis (blast crisis). We present the case of a 76-year-old man with a history of CLL on chemotherapy who developed fatigue, malaise, night sweats, chills, and unintentional weight loss for which he was started on treatment with venetoclax due to suspected clinical progression of his disease. The patient developed TLS, requiring hospitalization, and he was found to have an acute blast crisis. Also, his CLL was found to have been transformed into an aggressive DLBCL. This case highlights the importance of differentiating a true progression of CLL from RT into an aggressive lymphoma given that treatment would be different for the two and the prognosis with the transformation is worse. Introduction Chronic lymphocytic leukemia (CLL) is a common hematologic malignancy. Richter's transformation (RT) or syndrome is defined as the transition of CLL or small lymphocytic leukemia (SLL) into an aggressive lymphoma such as diffuse large B-cell lymphoma (DLBCL) or Hodgkin lymphoma (HL) [1]. RT is a rare complication of CLL, which occurs in approximately 2-10% of patients with CLL. The transformation rate is approximately 0.5-1% per year [2]. It might be challenging to differentiate when patients are undergoing a blast crisis or hyperleukocytosis versus a transformation into a lymphoma (RT). The clinical features to suspect that a patient may be experiencing a transformation are a marked increase in lymphadenopathy at one or more sites, splenomegaly, or increased "B" symptoms characterized as fevers, night sweats, and weight loss. Lactate dehydrogenase (LDH) elevation is another 1 2 1 useful marker. Worsening anemia and thrombocytopenia can also be seen [3]. The management of CLL and DLBCL or HL differ from each other. Therefore, it is crucial to identify RT when it occurs as there are important implications in its management, complications, and prognosis. We present the case of a patient who developed a suspected progression of CLL for which he was treated with venetoclax; he went on to develop tumor lysis syndrome (TLS). He was found to have hyperleukocytosis and RT into a DLBCL. Case Presentation A 76-year-old man with a history of a B-cell CLL presented to his oncologist's office for a follow-up of laboratory results. He endorsed having fatigue and generalized malaise that had significantly worsened in the last three days. He had been experiencing night sweats, chills, and unintentional weight loss of 8-10 pounds for the last three months. The last visit to his oncologist had been four days prior, and he had been started on venetoclax (a BCL-2 or B-cell lymphoma 2 inhibitor) due to suspicion of clinical progression of his disease. 
His oncologist noted abnormal laboratory results and referred him to the emergency department. In the hospital, his vital signs were within normal limits and no major abnormalities other than signs of dehydration were appreciated on physical examination. His past medical history was significant for a B-cell CLL diagnosed nine years prior. He had been treated with chlorambucil initially and then bendamustine, rituximab, and ibrutinib for two years. Other relevant past medical history included hypogammaglobulinemia treated with intravenous immunoglobulin infusions every month. Initial laboratory workup including a complete blood count showed a white blood cell (WBC) count of 164,600/mm 3 (with a baseline WBC of 14,700/mm 3 ), hemoglobin of 9.5 g/dL, hematocrit of 29.2%, mean corpuscular volume (MCV) of 95 um 3 , and platelet count of 102,000/mm 3 . The WBC differential showed 8% neutrophils (14.8 cells/mm 3 ), 88% of lymphocytes (144.8 cells/mm 3 ), 1% monocytes, 1% basophils, 1% bands, and 1% myelocytes. His chemistry showed a potassium level of 8.6 mEq/L, creatinine of 3.5 mg/dL, calcium of 9.0 mg/dL, phosphate of 3.7 mg/dL, uric acid of 26.4 mg/dL, and an LDH level of 6,861 U/L. His electrocardiogram did not show any abnormalities. A peripheral blood smear demonstrated increased prolymphocytes, anemia, and thrombocytopenia with no macrothrombocytes or spherocytes. A CT scan of the chest done 10 months prior had not shown relevant axillary adenopathies ( Figure 1A). However, new diffuse axillary lymphadenopathies were present in a new CT scan of the chest done during admission ( Figure 1B). Furthermore, there was no prominent mediastinal adenopathy in the last CT scan of the chest ( Figure 1C), but now he also developed worsening mediastinal adenopathies ( Figure 1D). His spleen had not been enlarged in previous CT scans ( Figure 2A), but now he experienced new significant splenomegaly ( Figure 2B). He was admitted to the hospital with a high suspicion of TLS in the setting of a blast crisis and venetoclax use. Furthermore, a flow cytometry analysis of his peripheral blood was performed but was not readily available. The repeat microscopy showed a WBC count of 250,000/mm 3 with marked lymphocytosis of abnormal medium to large-sized lymphoid cells ( Figure 3). These lymphocytes were characterized by an ovoid nucleus, prominent nucleoli, delicate chromatin, and increased basophilic cytoplasm (Figure 4). By flow cytometry and cytogenetic analysis, it was found that 95% of WBCs were abnormal B cells with intermediate forward scatter and mildly increased side scatter for CD5, CD19, CD20 (moderate), CD22, CD23, CD45, FMC7, and kappa restriction. There was a dim partial expression of CD11c and CD25. No significant CD10 or CD103 expression was observed. More than 90% of CD19/CD5 coexpressed B cells displayed CD38. These findings depicted markedly increased levels of monoclonal CD5+ B cells with profound morphologic atypia. The immunophenotype was not characteristic of a B-cell CLL, thus depicting a transformation to an aggressive large B cell lymphoma -a phenomenon termed RT. FIGURE 4: High-power electronic microscopy The image shows abnormal lymphoid cells with medium to large-sized ovoid nuclei, delicate chromatin, prominent nucleoli, and increased basophilic cytoplasm (blue arrow). A difference in cell size can be appreciated when comparing the normal-sized lymphocytes (yellow arrow) to the pathologic lymphoid cells. 
This phenomenon, termed "Richter's transformation", demonstrates the shift from the normal-sized lymphocytes seen in chronic lymphocytic leukemia to the larger cells observed in diffuse large B-cell lymphoma.

The patient was administered intravenous (IV) fluids and rasburicase. His hyperkalemia was immediately treated with insulin and dextrose, calcium, patiromer, and a low-potassium diet. Oncology recommended starting allopurinol after 72 hours of admission. The decision to start chemotherapy in the setting of a blast crisis was considered but complicated by TLS. The metabolic disturbances and kidney injury improved with IV fluids and the management of electrolyte disturbances. Uric acid decreased after rasburicase administration. Unfortunately, his WBC count increased without chemotherapy to 258,000/µL within the first four days of his hospital stay. Due to the aggressive nature of his disease, with an adverse prognosis and progression of a new DLBCL, the patient and his family decided not to pursue restorative treatment. The patient was transitioned to a comfort care approach. No further diagnostic workup or treatment was pursued. The patient expired a few days later.

Discussion CLL is a common form of leukemia in adults. RT is defined as the transition from a low-grade lymphoproliferative disorder such as CLL or SLL into an aggressive lymphoma. The median time from diagnosis of the low-grade B cell malignancy to the transformation into large B cell lymphoma is two to four years [4]. The risk factors for RT differ from CLL risk factors. Certain features increase the risk for RT, such as clinical (Binet stage B/C, lymphadenopathy, performance status), biochemical (LDH elevation), biological [expression of CD38 and ZAP70, unmutated immunoglobulin heavy chain variable gene (IGHV)], and cytogenetic features (del(13q) absence, (tri12), del(11q) and del(17p), TP53, NOTCH1, CDKN2A, c-MYC activation) [5]. Age >65 years and male sex are risk factors for RT. Other risk factors include CLL-treatment regimens such as purine-nucleoside analogs and/or alkylating agents with or without monoclonal antibody therapy and/or kinase inhibitor therapy, radiation therapy, and stem cell transplantation. Some other treatments have also been hypothesized to be responsible [6].
However, LDH elevation (seen in 82% of patients) and/or monoclonal gammopathy (found in 44% of patients) can be important clues for RT [2]. Peripheral blood smears can show atypical large cells with scant cytoplasm and distinct nucleoli [9]. RT cells are usually larger than CLL cells. Flow cytometry and cytogenetic studies using different techniques with analysis of CD62L and CD52, karyotype, MYC abnormalities by fluorescence in situ hybridization (FISH), and other studies are helpful in the differential diagnosis. Loss of expression of CD52 in RT most likely predicts resistance to alemtuzumab, one of the most frequently used therapeutic agents for CLL [10]. Tissue analysis is important in making a definitive diagnosis. The site selection for biopsy and pathologic sample collection is an important step. There are different imaging modalities that can aid in the diagnosis as well as biopsy selection site determination. CT and positron emission tomography (PET) scans are used in the evaluation of the suspected transformation and to select the biopsy site. When using PET/CT scans with a standardized uptake value (SUV) threshold of 5 as a cutoff, the positive predictive value is 53% and negative predictive value is 97% for RT [11]. Having a high negative predictive value reduces the post-test probability of RT in patients without a fluorine 18 fluorodeoxyglucose (FDG) avid lymphadenopathy. The low positive predictive value of PET/CT means that the selection of the biopsy site is crucial to increase the diagnostic yield. Only half of the patients with a tissue biopsy will have a positive result for RT [2]. The other causes of FDG-avid lymphadenopathy are CLL progression, inflammatory conditions, and infections. Once a tissue diagnosis is achieved, the next step is to perform a bone marrow biopsy for staging purposes. FISH studies should be performed in peripheral blood and the bone marrow samples for the identification of del(17p13.1) because this genetic finding has implications in treatment selection [2]. Regarding treatment, one of the most important steps is to determine if the DLBCL is clonally related to the underlying CLL. In RT, 80% of DLBCL are clonally related to the underlying CLL while 20% are not [2]. When patients have clonally unrelated DLBCL, treatment would be similar to the de novo DLBCL with rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP). When patients achieve complete remission (CR), no further treatment is needed, and periodic surveillance is indicated. However, when no CR is reached after R-CHOP therapy, salvage therapy with rituximab, ifosfamide, and etoposide (RICE) or rituximab, dexamethasone, cytarabine, and cisplatin (RDHAP) followed by stem cell transplantation should be considered [12]. For the clonally related DLBCL, standard treatment approaches are usually suboptimal [2]. Therefore, treatment should generally be a clinical trial if available. If a clinical trial is not feasible, the recommendation is to consider R-CHOP as we would do for a de novo DLBCL. When patients have received anthracycline therapy, a platinum-based regimen is preferable. Stem cell transplant is another option for selected patients depending on functional status, age, comorbidities, and chemotherapy sensitivity [2]. Conclusions RT is a known complication of CLL/SLL in which an aggressive lymphoma such as a DLBCL arises. When monitoring patients with CCL/SLL, it is important to keep in mind that the progression of CLL can be confused with RT. 
Treatment of CLL differs from that of DLBCL and hence it is imperative to recognize RT given the implications in management, prognosis, and complications. RT carries a worse prognosis. Before starting treatment with certain agents such as venetoclax, it is crucial to identify RT and the tumor burden to prevent complications such as TLS because DLBCL has a higher risk for lysis than CLL due to rapid replication and tumor bulk. TLS was identified in our case, which was not as expected for CLL as it would have been for DLBCL. Additional Information Disclosures Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
v3-fos-license
2019-12-28T15:04:17.190Z
2019-12-01T00:00:00.000
209492264
{ "extfieldsofstudy": [ "Medicine", "Geology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-019-55743-1.pdf", "pdf_hash": "5341d67dca5d79995563b145043ad51f0216c18e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43613", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "sha1": "5341d67dca5d79995563b145043ad51f0216c18e", "year": 2019 }
pes2o/s2orc
Evidence for a dominantly reducing Archaean ambient mantle from two redox proxies, and low oxygen fugacity of deeply subducted oceanic crust Oxygen fugacity (ƒO2) is an intensive variable implicated in a range of processes that have shaped the Earth system, but there is controversy on the timing and rate of oxidation of the uppermost convecting mantle to its present ƒO2 around the fayalite-magnetite-quartz oxygen buffer. Here, we report Fe3+/ΣFe and ƒO2 for ancient eclogite xenoliths with oceanic crustal protoliths that sampled the coeval ambient convecting mantle. Using new and published data, we demonstrate that in these eclogites, two redox proxies, V/Sc and Fe3+/ΣFe, behave sympathetically, despite different responses of their protoliths to differentiation and post-formation degassing, seawater alteration, devolatilisation and partial melting, testifying to an unexpected robustness of Fe3+/ΣFe. Therefore, these processes, while causing significant scatter, did not completely obliterate the underlying convecting mantle signal. Considering only unmetasomatised samples with non-cumulate and little-differentiated protoliths, V/Sc and Fe3+/ΣFe in two Archaean eclogite suites are significantly lower than those of modern mid-ocean ridge basalts (MORB), while a third suite has ratios similar to modern MORB, indicating redox heterogeneity. Another major finding is the predominantly low though variable estimated ƒO2 of eclogite at mantle depths, which does not permit stabilisation of CO2-dominated fluids or pure carbonatite melts. Conversely, low-ƒO2 eclogite may have caused efficient reduction of CO2 in fluids and melts generated in other portions of ancient subducting slabs, consistent with eclogitic diamond formation ages, the disproportionate frequency of eclogitic diamonds relative to the subordinate abundance of eclogite in the mantle lithosphere and the general absence of carbonate in mantle eclogite. This indicates carbon recycling at least to depths of diamond stability and may have represented a significant pathway for carbon ingassing through time. The melting relations of the convecting mantle and the behaviour of elements during partial melting vary as a function of pressure, temperature and redox state [1][2][3][4][5][6] . At the time of core formation, presuming that the silicate mantle was in equilibrium with metal, the uppermost convecting mantle had ƒO 2 relative to the Fayalite-Magnetite-Quartz oxygen buffer (FMQ, reported as ∆logƒO 2 (FMQ)), of about −4.5, whereas presently values around FMQ are recorded [4][5][6] , but there is disagreement on the timing and rate of this oxidation. The behaviour of multi-valent elements (e.g. Fe, Eu, V), which depends on their redox state 7 , in basalts has been used to infer that ƒO 2 in the convecting mantle has been similar to the present day from ca. 3.9 Ga 5 . In contrast, there is recent evidence for a subtle but significant terrestrial mantle redox evolution between 3.5 and 1.9 Ga based on the behaviour of V [8][9][10] . Moreover, recent studies reveal that garnet in mantle eclogites has low Fe 3+ /ΣFe, typically ≪ 0.10 [11][12][13] , which may be related either to Fe 3+ loss during partial melting in subduction zones or to an intrinsically more reducing convecting mantle source to the eclogites' mafic protoliths 8,12,13 . 
Here, we investigate eclogite and pyroxenite xenoliths derived from cratonic (>2.5 Ga) mantle lithosphere that have unambiguous signatures of a Palaeoproterozoic to Mesoarchaean spreading-ridge origin 14 , using new data from three localities (Orapa, Koidu and Diavik; Supplementary Dataset 1) and published geochemical and isotopic analyses. We use these data to extract information on the physical state of the ambient convecting mantle, analogous to how modern MORB samples are used 5,6 . We simultaneously apply two redox proxies to five eclogite suites: (1) The ratio of Fe 3+ to Fe 2+ in basalts is controlled by oxygen content, such that the average Fe 3+ /ΣFe can be used to obtain their redox state and infer that of their mantle source 6 . (2) The V/Sc redox proxy is based on V becoming more incompatible with increasing valence state as a function of ƒO 2 , whereas the partitioning of Sc is independent of ƒO 2 5,7 . Thus, the bulk peridotite-basalt distribution coefficient for V changes by nearly two orders of magnitude for a change in oxygen fugacity between FMQ and FMQ-4 7 . In this study, a range of major and trace elements in reconstructed bulk rocks as well as δ 18 O in garnet are employed to decipher the processes that have affected these samples from their formation in ancient spreading-ridges to exhumation via kimberlite magmatism. Oxygen fugacity has been suggested to decrease with pressure in eclogite at constant Fe 3+ /ΣFe based on thermodynamic consideration 13 , and is further expected to vary strongly in the subduction environment due to the juxtaposition of rocks with highly variable redox states 15 . Thus, we use Fe 3+ /ΣFe in garnet to estimate ƒO 2 using one of the recently formulated Fe-based oxybarometers suitable for eclogites 16 (Methods), which has implications for the effects of deeply recycled ancient ocean floor on processes in the mantle. Samples and eclogite petrogenesis The study utilises new mineral Fe 3+ /ΣFe acquired by Mössbauer spectroscopy and δ 18 O data acquired by secondary ion mass spectrometry (Methods) for kimberlite-borne eclogite and pyroxenite xenoliths from Orapa (Zimbabwe craton; n = 17), Koidu (West African craton; n = 16) and Diavik (central Slave craton; n = 5). These eclogites have been interpreted as subducted oceanic crust that formed by partial melting of ca. 3.0 Ga, 2.7 Ga and 2.0 Ga convecting mantle sources, respectively (Supplementary Text). Their low-pressure origin as basaltic to picritic oceanic crust is evidenced, inter alia, by the presence of Eu anomalies (Eu/Eu* = chondrite-normalised Eu/(Sm*Gd)^0.5), which anti-correlate with total heavy rare earth element contents (ΣHREE) contents requiring the participation of plagioclase in their petrogenesis, and by non-mantle δ 18 O requiring low-temperature seawater alteration 14 . This igneous protolith was subsequently subducted, metamorphosed and in part overprinted during mantle metasomatism (Supplementary Text). The major and trace element compositions of eclogites reveal them to have variably differentiated protoliths encompassing plagioclase-rich cumulates (referred to as gabbroic eclogites with high Eu/Eu*, low ΣHREE) and residual melts (low Eu/Eu*, high ΣHREE), as also suggested by their major-element relationships 14 . High-Mg and high-Ca eclogites represent protoliths having experienced low and advanced degrees of differentiation, respectively, whereas low-Mg eclogites are also more differentiated or may require more FeO-rich sources 14 . 
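Since the europium anomaly and ΣHREE are used repeatedly below as differentiation indices, a minimal Python sketch of how they can be computed from trace-element concentrations may be helpful. The formula for Eu/Eu* follows the definition given above (chondrite-normalised Eu/(Sm*Gd)^0.5); the chondrite normalising values, the element list and the example concentrations are illustrative placeholders rather than values from this study.

```python
# Minimal sketch: Eu anomaly and total HREE for a (reconstructed) bulk rock.
# The chondrite normalising values below are illustrative placeholders; substitute
# the chondrite composition actually used in the study.

CHONDRITE_PPM = {"Sm": 0.15, "Eu": 0.056, "Gd": 0.20, "Tb": 0.036, "Dy": 0.25,
                 "Ho": 0.056, "Er": 0.17, "Tm": 0.026, "Yb": 0.17, "Lu": 0.025}

HREE = ["Gd", "Tb", "Dy", "Ho", "Er", "Tm", "Yb", "Lu"]

def eu_anomaly(sample_ppm):
    """Eu/Eu* = Eu_CN / sqrt(Sm_CN * Gd_CN), where CN = chondrite-normalised."""
    cn = {el: sample_ppm[el] / CHONDRITE_PPM[el] for el in ("Sm", "Eu", "Gd")}
    return cn["Eu"] / (cn["Sm"] * cn["Gd"]) ** 0.5

def total_hree(sample_ppm):
    """Sum of heavy rare earth element concentrations (ppm) that are present."""
    return sum(sample_ppm[el] for el in HREE if el in sample_ppm)

# Hypothetical plagioclase-cumulate-like composition (ppm): Eu/Eu* > 1, low ΣHREE.
sample = {"Sm": 1.2, "Eu": 0.8, "Gd": 1.6, "Dy": 2.0, "Er": 1.2, "Yb": 1.1, "Lu": 0.17}
print(f"Eu/Eu* = {eu_anomaly(sample):.2f}, ΣHREE = {total_hree(sample):.1f} ppm")
```

On this kind of index, samples with Eu/Eu* > 1 and low ΣHREE would flag plagioclase-rich (gabbroic) cumulate protoliths, whereas low Eu/Eu* and high ΣHREE flag residual melts, as described above.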
The eclogites were subsequently variably affected by seawater alteration (as gauged by δ 18 O, the permil deviation from the VSMOW standard), partial melt loss and metasomatism (as gauged by NMORB-normalised Ce/Yb < 1 and >1, respectively; normalisation indicated by subscript NMORB) 14 . These new data are combined with published studies on mantle eclogites and pyroxenites from Voyageur in the northern Slave craton, which are coeval with their ca. 2 Ga central Slave counterparts 11 , as well as from the Lace kimberlite in the Kaapvaal craton with ca. 3 Ga old protoliths 12 . Discussion Effects of post-formation processes on Fe 3+ /Σfe and V/Sc. Several processes occur between generation of the eclogites' crustal protoliths in palaeo-spreading ridges and their exhumation via kimberlite magmatism that may affect the proxies used to infer the redox state of their mantle source. These are: Degassing on the seafloor, seawater alteration between the ridge and the trench, partial melt loss during metamorphism and metasomatism due to interaction with fluids and melts during their residence in the cratonic lithosphere. Degassing. Depending on their pressure of emplacement and the nature of the volatile species, degassing of basalts can increase or decrease the Fe 3+ /ΣFe and hence redox state inferred for the magma 17 . Recent work finds no evidence that degassing or interaction with polyvalent gas species, such as S, has affected Fe 3+ /ΣFe in modern MORBs, nor that ƒO 2 is externally buffered 6 , and we suggest that this also applies to magma emplacement in palaeo-ridges. For degassing to be important, differences in process between the Archean and today would be required. However, even if degassing had affected Fe 3+ /ΣFe, such changes in valence state do not change elemental redox proxies, such as V/Sc, the ratio of which in the undifferentiated magma is set at source (the effects of differentiation are addressed in a later paragraph). Seawater alteration. Unlike fresh MORB, recycled equivalents have experienced variable degrees of seawater alteration, causing deviation of oxygen isotope compositions from the mantle range 18 . We assess this using δ 18 O in garnet, which has been shown to be a reliable proxy for seawater alteration in mantle eclogites 19 . Figure 1a shows that the Fe 3+ /ΣFe of reconstructed bulk rocks is independent of garnet δ 18 O. This result is not unexpected in light of recent evidence for near-constant and low Fe 3+ /ΣFe in seawater-altered oceanic crust before Neoproterozoic oxygenation of oceanic bottom waters occurred 20 . Similarly, V/Sc in reconstructed bulk rocks is independent of evidence for seafloor weathering (Fig. 1b), consistent with the generally fluid-immobile behaviour of V and Sc 5 . Partial melt loss. Melt extraction from eclogite has been linked to the generation of tonalite-trondhjemitegranodiorite magmas forming Archaean continental crust 21 . The effect of partial melt loss from eclogite, presumably during subduction and metamorphism, is assessed using NMORB-normalised Ce/Yb (denoted with subscript NMORB), which decreases to values < 1 as a function of melt fraction extracted (Supplementary Text). This shows that while a large range of Fe 3+ /ΣFe is observed over similar Ce/Yb NMORB (Fig. 1c, see vertical red bars), Fe 3+ /ΣFe varies little as a function of Ce/Yb NMORB (Fig. 1c, see horizontal red bars, using Orapa as an example). This may be explained by retention in residual cpx where Fe 3+ is less incompatible than Fe 2+ 7 . 
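The screening logic described in this section (garnet δ18O outside the mantle range as a flag for seawater alteration; Ce/Yb NMORB below or above unity as a proxy for melt loss or metasomatism) can be sketched as a simple classifier. The numerical mantle δ18O range and the N-MORB concentrations used below are assumed, illustrative values, not necessarily those adopted in the study.

```python
# Simple sample-screening sketch; thresholds and normalising values are illustrative.

MANTLE_D18O = (5.1, 5.9)             # assumed canonical mantle garnet range, permil vs VSMOW
NMORB_PPM = {"Ce": 7.5, "Yb": 3.05}  # assumed N-MORB concentrations, ppm

def screen(d18o_garnet, ce_ppm, yb_ppm):
    """Return (Ce/Yb)_NMORB and a list of flags for a reconstructed bulk rock."""
    ce_yb_nmorb = (ce_ppm / NMORB_PPM["Ce"]) / (yb_ppm / NMORB_PPM["Yb"])
    flags = []
    if not (MANTLE_D18O[0] <= d18o_garnet <= MANTLE_D18O[1]):
        flags.append("non-mantle d18O: seawater-altered protolith")
    if ce_yb_nmorb < 1.0:
        flags.append("LREE-depleted (Ce/Yb_NMORB < 1): partial melt loss")
    elif ce_yb_nmorb > 1.0:
        flags.append("LREE-enriched (Ce/Yb_NMORB > 1): metasomatised")
    return ce_yb_nmorb, flags

# Hypothetical altered, melt-depleted eclogite:
print(screen(d18o_garnet=6.8, ce_ppm=2.0, yb_ppm=2.5))
```

Only samples passing the metasomatism filter, together with the differentiation filters sketched earlier, would then enter the suite-average comparisons discussed below.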
Likewise, there is no clear indication for dependence of V/Sc on Ce/Yb NMORB (Fig. 1d). Thus, samples from all suites also show a wide range of V/Sc at similar Ce/Yb NMORB and similar V/Sc at a range of Ce/Yb NMORB (Fig. 1d, see vertical and horizontal red bars, respectively, using Lace as an example). The immobile behaviour of V during melt loss is consistent with low ƒO 2 and V compatibility, whereas partial melting at higher ƒO 2 has been modelled to lead to incompatible behaviour and loss from eclogite during metamorphism, as exhibited by some Proterozoic orogenic eclogite suites 8 .

Mantle metasomatism. Any melt formed within or below cratons and affecting the mantle eclogite reservoir at the depth where it resides would most likely leave a garnet-bearing residue and have a small volume because the thickness of cratonic lithospheres leaves little room for decompression melting. As a corollary, mantle metasomatism typically involves LREE-enrichment which is proxied by NMORB-normalised Ce/Yb > 1 (ref. 14 ). Typically oxidising metasomatism is expected to raise Fe 3+ /ΣFe 4 and can decrease V concentrations in metasomatised rocks because of the higher valence state and lower associated distribution coefficients 22 . Indeed, strongly metasomatised, phlogopite-bearing eclogites from Kimberley in the Kaapvaal craton 23 have low but variable V/Sc (3.9 ± 2.1). Thus, mantle metasomatism entails contrasting behaviour of the two redox proxies. There are some hints in the data for a link of metasomatism and an increase in Fe 3+ /ΣFe, as metasomatised eclogites from Lace and Diavik, but not from other suites, have higher bulk-rock Fe 3+ /ΣFe than unmetasomatised varieties (coloured fields in Fig. 1c). Conversely, a link between metasomatism and V/Sc is not evident (Fig. 1d).

Retention of primary Fe 3+ /ΣFe in mantle eclogite xenoliths and sympathetic behaviour with V-based redox sensors. Several observations suggest that Fe 3+ /ΣFe in unmetasomatised mantle eclogites retains a record of igneous differentiation on the ocean floor, hence inheritance from their protoliths: (1) For all eclogite suites, the lowest or one of the lowest Fe 3+ /ΣFe is associated with high Eu/Eu* in reconstructed bulk rocks (Fig. 2a), which corresponds to the expected relationship if the more incompatible Fe 3+ (ref. 6 ) is excluded from plagioclase-rich cumulates characterised by Eu/Eu* > 1. (2) Eclogites in unmetasomatised Lace and Orapa samples show generally increasing Fe 3+ /ΣFe with increasing FeO, which is interpreted as a differentiation trend also recognisable in modern MORB (Fig. 2b). (3) A broad positive correlation between Fe 3+ /ΣFe and V/Sc in unmetasomatised samples is evident (Fig. 3a). As a result of the differential response of Fe 3+ /ΣFe and V/Sc to differentiation, degassing, seawater alteration, partial melt extraction and metasomatism, as detailed above, the variation within each suite composed of eclogites each representing the sum of multiple processes is large. Despite this, the average values obtained per suite clearly show sympathetic behaviour (r 2 = 0.97; Fig. 3b), which is interpreted to reflect inheritance from the protolith and implies an unexpected robustness of Fe 3+ /ΣFe. With respect to Fe 3+ /ΣFe, the two Proterozoic eclogite suites are dominated by samples with cumulate protoliths (gabbroic eclogites), with the consequence that these eclogites have inherited lower Fe 3+ /ΣFe from their protoliths than hypothetical complementary residual melts.
At the same time, more incompatible behaviour of V under oxidising conditions 7 implies stronger exclusion from accumulating minerals 5 and low V/Sc compared to melts. This diminishes the contrast of Proterozoic gabbroic eclogites with their Archaean counterparts, which are dominated by non-gabbroic varieties. Thus, V/Sc and Fe 3+ /ΣFe for the two Proterozoic suites must be considered minima.

Variable redox state of the Archaean convecting mantle. Foley 15 suggested more reducing conditions for the Archaean convecting mantle and proposed that mantle eclogites may give useful constraints. This anticipation was confirmed by applying the V/Sc redox proxy to spreading ridge-derived (meta)basalts, which showed a significant difference between post-Archaean (∆FMQ-0.26 ± 0.44) and Archaean eclogite suites (∆FMQ-1.19 ± 0.33, 2σ) 8 . The latter estimate is now considered a maximum, given recent experimental evidence that V behaves more incompatibly with increasing temperature 24 . Combined with secular mantle cooling, this implies that melts generated in the warmer Archaean mantle have higher V/Sc for a given redox state than those generated later in Earth's history. Taking the temperature effect into account, many mantle and orogenic eclogites record ∆FMQ closer to −2 (Supplementary Fig. 2). The overall robustness of V during the evolution of mantle eclogites is strongly supported by recent work on deep-seated komatiite magmas, which were emplaced near continental margins and which never experienced seawater alteration, partial melting or mantle metasomatism. These samples show an oxidation trend across the Archaean-Palaeoproterozoic boundary of a similar magnitude (1.3 units in ∆logƒO 2 (FMQ)) 9,10 as that obtained from eclogites (at least 1.0 unit) 8 , despite being based on an entirely different approach, namely partitioning of V between olivine or chromite crystals and komatiite melt vs. forward-modelling of V/Sc in a melt as a function of melt fraction and ƒO 2 .

Figure 1 caption (continued): Uncertainties on Fe 3+ /ΣFe are propagated from those on cpx and garnet Fe 3+ /ΣFe, assuming a total 10% uncertainty on the modal proportions, weighted by the proportion of Fe contributed to the bulk rock. Differences in uncertainty between Orapa and other eclogite suites derive from cpx Fe 3+ /ΣFe being measured in the former and calculated in the latter. Uncertainty on V/Sc reflects that resulting from 10% uncertainty on the modal proportions. Canonical mantle range of δ 18 O from 18 ; V/Sc and Fe 3+ /ΣFe of modern fresh MORB from 8 and 25 , respectively. (c) Fe 3+ /ΣFe and (d) V/Sc as a function of NMORB-normalised (denoted with subscript NMORB) Ce/Yb in reconstructed whole rocks, as a proxy for melt loss from eclogite (<1) and metasomatism/enrichment (>1) (NMORB of 37 ). Only in the suites from Lace (blue field) and Diavik (light brown field) do metasomatised samples have higher Fe 3+ /ΣFe than unmetasomatised ones. There is no discernible effect of melt depletion on Fe 3+ /ΣFe or V/Sc, which varies little across a wide range of Ce/Yb N for Orapa and Lace eclogites, respectively (horizontal red bars); conversely, these eclogites show a wide range of Fe 3+ /ΣFe at similar degree of melt depletion (vertical red bars).
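The bulk-rock reconstruction and uncertainty propagation summarised in the Figure 1 caption above can be sketched numerically: the bulk Fe3+/ΣFe is the garnet and clinopyroxene values weighted by the fraction of total Fe each phase contributes, and the quoted 10% uncertainty on the modal proportions is propagated here with a crude Monte Carlo loop. The weighting scheme, the Monte Carlo treatment and the example phase compositions are assumptions about one plausible implementation, not the authors' exact procedure.

```python
import random

def bulk_fe3(modes, feo_wt, fe3_ratio):
    """Fe3+/total-Fe of the reconstructed bulk rock, weighting each phase by the
    fraction of total Fe it contributes (mode x FeO content)."""
    fe_contrib = {ph: modes[ph] * feo_wt[ph] for ph in modes}
    total_fe = sum(fe_contrib.values())
    return sum(fe_contrib[ph] / total_fe * fe3_ratio[ph] for ph in modes)

def propagate_modes(modes, feo_wt, fe3_ratio, rel_err=0.10, n=20000, seed=1):
    """Crude Monte Carlo propagation of a relative uncertainty on the modes."""
    random.seed(seed)
    vals = []
    for _ in range(n):
        perturbed = {ph: max(m * (1.0 + random.gauss(0.0, rel_err)), 1e-6)
                     for ph, m in modes.items()}
        norm = sum(perturbed.values())
        perturbed = {ph: v / norm for ph, v in perturbed.items()}  # renormalise to 1
        vals.append(bulk_fe3(perturbed, feo_wt, fe3_ratio))
    mean = sum(vals) / n
    sd = (sum((v - mean) ** 2 for v in vals) / (n - 1)) ** 0.5
    return mean, sd

# Hypothetical bimineralic eclogite (garnet grt + clinopyroxene cpx), not a real sample:
modes = {"grt": 0.55, "cpx": 0.45}     # modal proportions
feo_wt = {"grt": 18.0, "cpx": 6.0}     # wt% total Fe as FeO in each phase
fe3 = {"grt": 0.04, "cpx": 0.12}       # Fe3+/total-Fe of each phase

print(propagate_modes(modes, feo_wt, fe3))
```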
Rather than consider V/Sc or Fe 3+ /ΣFe in isolation, as in previous studies, we here use the combined systematics in order to obtain new insights into the redox state of the Archaean mantle. Because accumulation and advanced differentiation lead to lower Fe 3+ /ΣFe and higher or lower V/Sc depending on ƒO 2 , respectively, relative to the little differentiated melt (see previous section), samples with evidence for either are excluded from consideration here, as are metasomatised samples. Both the reconstructed bulk-rock V/Sc and Fe 3+ /ΣFe for the remaining samples in two of the three Archaean suites (Orapa and Lace) are markedly and consistently low, with an average V/Sc of 5.07 ± 0.46 and Fe 3+ /ΣFe of 0.050 ± 0.015 (2σ), compared to MORB estimates of 6.8 ± 0.8 and 0.10 ± 0.02, respectively 8,25 . This attests to the significantly more reduced character of at least portions of the Archaean ambient convecting mantle. Despite having exclusively cumulate protoliths with minimum Fe 3+ / ΣFe and V/Sc as discussed in the previous section, unmetasomatised eclogites from both Diavik and Voyageur have higher V/Sc and Fe 3+ /ΣFe than Archaean ones with dominantly non-cumulate protoliths (Supplementary Dataset), and the average for the two Proterozoic suites is significantly higher (outside the 2σ uncertainty) than the average of the two reduced Archaean suites (Fig. 3b,c). Considering further that the convecting mantle has cooled through time leading to increasingly less incompatible behaviour of V 24 , the redox contrast in the source mantle that could be inferred from V/Sc systematics is even larger. This further underscores the postulated increase in convecting mantle f O2 across the Archaean-Proterozoic boundary [8][9][10] . Interestingly, ca. 2.7 Ga eclogites from Koidu have higher average calculated bulk-rock V/Sc and Fe 3+ /ΣFe, both similar to modern MORB (Fig. 3c), which may indicate that part of the Archaean ambient mantle was Figure 2. Effects of differentiation (oceanic crustal protolith). Fe 3+ /ΣFe in reconstructed whole rocks as a function of (a) Eu/Eu* (chondrite-normalised Eu/(Sm*Gd)^0.5) in garnet, as a proxy for plagioclase accumulation and fractionation during protolith formation, and (b) FeO content in reconstructed whole rock mantle eclogite. In each eclogite suite, the lowest or one of the lowest Fe 3+ /ΣFe is observed for samples with a strong cumulate signature (Eu/Eu*≫1). Average modern fresh MORB from 25 . The trend to increasing Fe 3+ / ΣFe with increasing FeO in unmetasomatised eclogites from Lace and Orapa could be related to differentiation in the protoliths, whereas low Fe 3+ /ΣFe in gabbroic eclogites may be due to accumulation; a similar trend is observed in modern fresh MORB (yellow stars) 25 . oxidised to present-day levels. It may be significant that Koidu eclogites have on average higher FeO contents than the other Archaean eclogite suites (Fig. 2b), which cannot be explained by higher pressure of melting or by advanced degrees of differentiation during protolith formation 26 . Their source may be similar to that of coeval alkaline Fe-picrites from the Slave craton, which are characterised by elevated V/Sc (average 9.2) and were suggested to sample Fe-rich heterogeneities in the Archaean mantle that were melted out over time 27 . Redox heterogeneity is also recognised in komatiites where one Archaean suite yields modern MORB-like f O2 10 . 
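The screening used to arrive at these suite averages (excluding metasomatised samples with Ce/Yb_NMORB > 1 and samples with cumulate or strongly differentiated protoliths, then averaging the remaining reconstructed bulk-rock values with a 2σ spread) can be summarised in a short numerical sketch. The records, field names, and numbers below are invented placeholders rather than data from this study; only the screening criteria follow the text.

```python
import statistics

# Hypothetical reconstructed bulk-rock records; all values are placeholders.
samples = [
    {"suite": "Lace",  "ce_yb_nmorb": 0.6, "cumulate": False, "fe3_sum_fe": 0.045, "v_sc": 5.3},
    {"suite": "Lace",  "ce_yb_nmorb": 0.8, "cumulate": False, "fe3_sum_fe": 0.055, "v_sc": 4.9},
    {"suite": "Lace",  "ce_yb_nmorb": 1.9, "cumulate": False, "fe3_sum_fe": 0.090, "v_sc": 4.0},  # metasomatised, excluded
    {"suite": "Orapa", "ce_yb_nmorb": 0.5, "cumulate": True,  "fe3_sum_fe": 0.030, "v_sc": 4.6},  # gabbroic cumulate, excluded
    {"suite": "Orapa", "ce_yb_nmorb": 0.7, "cumulate": False, "fe3_sum_fe": 0.050, "v_sc": 5.1},
    {"suite": "Orapa", "ce_yb_nmorb": 0.9, "cumulate": False, "fe3_sum_fe": 0.048, "v_sc": 5.0},
]

def primary_suite_average(records, suite, key):
    """Mean and 2-sigma of a redox proxy for one suite, keeping only
    unmetasomatised (Ce/Yb_NMORB <= 1), non-cumulate samples."""
    vals = [r[key] for r in records
            if r["suite"] == suite and r["ce_yb_nmorb"] <= 1.0 and not r["cumulate"]]
    mean = statistics.mean(vals)
    two_sigma = 2.0 * statistics.stdev(vals) if len(vals) > 1 else float("nan")
    return mean, two_sigma

for suite in ("Lace", "Orapa"):
    print(suite,
          primary_suite_average(samples, suite, "fe3_sum_fe"),
          primary_suite_average(samples, suite, "v_sc"))
```

The resulting suite means can then be compared directly with the MORB reference values quoted above.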
Thus, Archaean eclogites and komatiites document chemical and redox heterogeneities, and locally oxidising conditions in the Archaean ambient convecting mantle, possibly reflecting differential upward mixing of post-core formation lower mantle that had been relatively oxidised by sequestration of Fe metal in the core 4,8 . The consequences of a -predominantly -more reducing Archaean uppermost mantle are manifold. For example, it implies a decrease in the depth of redox melting (formation of CO 2 -bearing melt by oxidation of diamond in the asthenosphere 3 ), which would have impeded stabilisation of carbonated melts beneath early-formed thick cratonic lithospheres. It further implies the release of a reducing volatile mix to the atmosphere, which helped impede the accumulation of atmospheric O 2 prior to the Great Oxidation Event 8 . . Although the correlation coefficient in (a) is seemingly low (r 2 = 0.32), there is a probability < 0.5% that the two variables are uncorrelated for the number of data points in the regression (n = 33), and the correlation is highly significant 38 www.nature.com/scientificreports www.nature.com/scientificreports/ Low ƒo 2 of deeply subducted precambrian oceanic crust. Contrary to peridotite (e.g. ref. 28 ), there is no discernible effect of metasomatism on calculated ƒO 2 in eclogite (Fig. 4a). Metamorphic reactions in addition to redox reactions involving the metasomatic oxidation or reduction of Fe may in part be responsible for the lack of correlation between ƒO 2 and Fe 3+ /ΣFe in all suites except Orapa (Fig. 4b). A wide range of f O2 at a given pressure, even at a single locality, is also observed in peridotite xenoliths ( Supplementary Fig. 1). In peridotites, this variability is explained by strong reduction upon initial melt extraction and subsequent interaction with metasomatic agents which can be both reducing (e.g. methane-dominated fluids) and oxidising (e.g. carbonated melts) 28 . In eclogites, it is ascribed to a superposition of intrinsic f O2 of metamorphosed oceanic crust, auto-metasomatic redox reactions occurring upon subduction, and metasomatism that also affects peridotites. These processes can also cause shifts in Fe 3+ /ΣFe, without completely resetting inherited crustal signatures, as discussed above. Even disregarding low-Ca samples with compositions far from those on which the oxybarometer was formulated (garnet Ca# < 0.2; Methods), it is evident that (1) eclogites have lower ƒO 2 than either Archaean or modern MORB-like protoliths; (2) fluids in equilibrium with eclogite would be methane-dominated (Fig. 4b), with implications for the peridotite solidus temperature, which is higher in the presence of CH 4 than of CO 2 (1) ; (3) ƒO 2 in the majority of eclogites (20 of 25) is too low to stabilise carbonate or allow percolation of pure carbonatite melt, which requires a ∆logƒO 2 > ~FMQ-1.6 at 5 GPa (Fig. 4c). Rather, graphite or diamond is the stable carbon species. Combined with Re-Os isotopic evidence for eclogitic diamond formation during Mesoarchaean craton amalgamation and Palaeoproterozoic lateral growth 29,30 during collisional processes, these low eclogite ƒO 2 suggest that oceanic crust represented an efficient trap for oxidised carbon in fluids and melts formed in ancient subduction environments. 
This not only helps explain the disproportionate frequency of eclogitic diamonds, relative to the subordinate abundance of eclogite in the mantle lithosphere 31 , but also provides support for carbon recycling at least to depths of diamond stability. Considering that the proportion of oceanic crust recycled to the sublithosphere through time far exceeds that which was captured in the continental lithosphere, diamond formation in reducing subducting oceanic crust may have represented an efficient pathway for carbon ingassing upon deep subduction, consistent with the observation that 35-80% of C has been recycled from the exosphere to the deep mantle 32 . In contrast to pure carbonatite, the f O2 of a higher proportion of eclogites investigated here would be permissive of percolation of a carbonated silicate melt, such as kimberlite. A kimberlite-like melt containing 10% CO 2 would be stable to lower ƒO 2 by ~1 log unit, compared to pure carbonatite melt, by analogy with the peridotite system 3 . This redox "window" allows for the precipitation of additional diamond in mantle eclogite, by reduction of CO 2 in kimberlite-like melts. Mantle eclogites metasomatised after their incorporation into the cratonic lithosphere, either cryptically (identified by high Ce/Yb) or modally (such as phlogopite-bearing eclogites 23 ), are not representative of oceanic crust recycled into the convecting mantle. We suggest that the f O2 of the remaining samples is so low that distributed carbonate grains or carbonate pockets in seawater-altered oceanic crust (as opposed to carbonate sediments) are likely to be reduced to diamond upon subduction. Thus, the inferred (from experiments) or demonstrated (from inclusions in diamonds) presence of CO 2 -dominated fluids or carbonatite in the convecting mantle (e.g. ref. 33 ) cannot be explained by appealing to subduction of seawater-altered oceanic crust except possibly subsequent to Neoproterozoic oxygenation of oceanic bottom waters 20 . For older convecting mantle sources, recycling of carbonate-rich sediments may be required, the oxidising power of which can regionally overwhelm the buffering capacity of the dominantly reducing convecting mantle and the eclogitic/pyroxenitic heterogeneities it contains. Methods oxygen isotope analysis by secondary ion mass spectrometry (SiMS). Sample preparation and analysis by SIMS were carried out at the Canadian Centre for Isotopic Microanalysis (CCIM), University of Alberta. Garnet mineral separates were mounted with CCIM garnet reference materials (RMs) S0068 (Gore Mountain Ca-Mg-Fe garnet) and S0088B (grossularite) and exposed in a 25 mm diameter epoxy assembly (M1506) using diamond grits. The mount was cleaned with a lab soap solution and de-ionized H 2 O, and then coated with 20 nm of high-purity Au prior to scanning electron microscopy (SEM). SEM characterization was carried out with a Zeiss EVO MA15 instrument using beam conditions of 20 kV and 3-4 nA. A further 80 nm of Au was subsequently deposited on the mount prior to SIMS analysis. Oxygen isotope ratios ( 18 O/ 16 O) in garnet from Orapa, Koidu and Diavik were determined with a Cameca IMS 1280 multicollector ion microprobe, using previously described analytical methods and reference materials 34 . Briefly, a 133 Cs + primary beam was operated with an impact energy of 20 keV and beam current of ~2.0-2.5 nA. The ~12 µm diameter probe was rastered (20 × 20 µm) for 30 s prior to acquisition, and then 8 × 8 µm during acquisition. 
Negative secondary ions were extracted through 10 kV potential into the secondary (Transfer) column. All regions of the sputtered area were transferred and no energy filtering was employed. The mass/ charge-separated oxygen ions were detected simultaneously in Faraday cups with 10 10 Ω ( 16 O − ) and 10 11 Ω ( 18 O − ) amplifier circuits, respectively. A single analysis took 240 s, including pre-analysis primary beam implantation, automated secondary ion tuning, and 75 s of continuous peak counting. Instrumental mass fractionation (IMF) was monitored by repeated analysis of S0068 (UAG) and S0088B (δ 18 O VSMOW = + 5.72‰ and + 4.13‰, respectively), with one analysis of S0068 and S0088B taken after every 4 and 8 unknowns, respectively. The data set of 18 O − / 16 O − for S0068 garnet yielded standard deviations of 0.09‰ and 0.08‰, respectively, for each of two analytical sessions and after correction for systematic within-session drift (≤ 0.4‰). Data for S0088B and unknowns were first IMF-corrected to S0068 garnet, and then further corrected according to their measured Ca# (Ca/ [Ca + Mg + Fe]) using the methods outlined by Ickert and Stern 34 . The average 95% confidence uncertainty estimate for δ 18 O VSMOW for garnet unknowns is ± 0.30‰ and includes errors relating to within-spot counting statistics, geometric effects, correction for IMF, and matrix effects relating to Ca# determined by electron microprobe. (2019) 9:20190 | https://doi.org/10.1038/s41598-019-55743-1 www.nature.com/scientificreports www.nature.com/scientificreports/ fe 3+ /Σfe in garnet ± cpx by Mössbauer spectroscopy. The sample preparation and analytical routine for the determination of Fe 3+ /ΣFe in garnet and cpx by Mössbauer spectroscopy at Goethe-University Frankfurt, employing a nominally ~50 mCi 57 Co in Rh source, has been described in 12 . Briefly, handpicked, optically clean mineral separates were powdered under acetone and packed into a hole drilled in 1 mm thick Pb discs. To minimise saturation effects, the amount of sample and hole diameter were chosen such that a sample thickness of < 5 mg Fe cm −2 was obtained. To this end, when necessary, a small amount of sugar was mixed with the mineral powder to create a uniform sample that filled the volume of the drilled hole. This also serves to limit any preferred orientation in the sample that might influence the spectrum. 57 Fe spectra were collected until a target value of > 2 × 10 6 background counts was achieved (representative Mössbauer spectra in Supplementary Fig. 3). Recoil-free fraction effects were corrected as given by 35 . Uncertainties on Fe 3+ /ΣFe are typically ± 0.01 absolute. 16 and denoted as ∆logƒO 2 (FMQ), i.e. calculated relative to the fayalite-magnetite-quartz buffer, as a function of (a) NMORB-normalised Ce/Yb in reconstructed whole rocks (as in Fig. 1) and (b,c) Fe 3+ /ΣFe in reconstructed whole rocks. Error bars on Fe 3+ /ΣFe reflect average propagated uncertainties as described in Fig. 1, on ∆logƒO 2 they reflect average uncertainties propagated from those on garnet Fe 3+ /ΣFe (± 0.01). The symbols of samples with compositions far from the end-members on which the oxybarometer was formulated are shown with thin outline (see Methods). Thick stippled line in (b) separates CH 4 -dominated from CO 2 -dominated diamond-saturated fluid; ƒO 2 corresponding to the iron-wuestite (IW) oxygen buffer is also shown (at 5 GPa and 1140 °C 31 www.nature.com/scientificreports www.nature.com/scientificreports/ fe-based oxybarometry. 
Oxygen fugacity, reported as ∆logƒO 2 relative to FMQ (Fayalite-Magnetite-Quartz buffer; e.g. 36 ), was calculated with a new thermodynamic formulation of the oxybarometer for eclogites by 16 . Details on this barometer are provided in 12 , with additional information in an unpublished PhD thesis that is available in an online repository (link provided in reference list). Although this thesis has undergone and passed examination, we recognise that the lack of peer review may raise doubts regarding use of the new oxybarometer. We thus emphasise that the underlying principles are identical to those in a published barometer 13 , and that the conclusions reached in this study are independent of which oxybarometer is used. The two oxybarometers return highly correlated values, but results according to 13 are off-set towards lower values ( Supplementary Fig. 5), which would require metal saturation in some samples to occur and is inconsistent with petrographic observations. We therefore prefer to report values according to 16 ). We use the iteratively calculated pressures and temperatures (Supplementary Text), mineral compositions as well as garnet Fe 3+ /ΣFe displayed in Supplementary Dataset 1 as input parameters. The oxybarometer is based on activities of garnet solid solution end-members and the hedenbergite component in cpx in equilibrium with a SiO 2 phase, as follows: Although coesite is absent in all but one sample from Koidu, the effect is expected to be minor (for example, a SiO2 = 0.85 instead of 1.0 translates into a shift in ∆logƒO 2 of ~−0.3 log units), except under strongly SiO 2 -undersaturated conditions when corundum would be present 12 , a mineral that is not observed in the sample suite under investigation. Also, at low a SiO2 a significant Tschermaks component would be expected in cpx, which is in conflict with the observed occupancy of essentially 2 Si cations per formula unit (c.p.f.u.). In addition to uncertainty related to the thermobarometer formulation itself (~60 °C), lack of equilibrium to the regional geotherm, for example due to melt-advected heat, entails that pressures would also be overestimated; a temperature uncertainty of 100 °C translates into a pressure difference of 0.7 GPa along a conductive geotherm and an uncertainty in ∆logƒO 2 of 0.23 log units 12 . The largest source of error remains the precision with which Fe 3+ /ΣFe can be determined, which corresponds to ± 0.01, and for very low absolute Fe 3+ /ΣFe can be very large and asymmetric 16 , corresponding to average 1σ + 0.84/ −0.81. Additional uncertainty is introduced when the oyxbarometer is applied to samples with compositions far from those on which it was formulated (see equation above). There is no geological explanation (differentiation, metasomatism etc.) for the vague and marked positive trend of ƒO 2 with jadeite mole fraction and Ca# of garnet, respectively, in Supplementary Fig. 4, which may indicate that samples with grossular-poor garnet, which are furthest from the end-member compositions on which the oxybarometer was formulated, yield underestimated ƒO 2 . However, it is also clear that one of the central Slave samples with high garnet Ca# nevertheless yields very low ∆logƒO 2 and that there is a marked positive correlation between Fe 3+ /ΣFe and ∆logƒO 2 in Orapa samples (Fig. 4b), including those with low Ca#, suggesting that the relationship between Ca# and ∆logƒO 2 is not straightforward to interpret. 
In the interest of caution, we indicate samples with garnet Ca# < 0.2 and only discuss ƒO 2 for samples with higher values. Bulk-rock reconstruction of fe 3+ /Σfe and V/Sc. The distribution D of Fe 3+ /ΣFe between cpx and garnet varies between 3.6 and 20. Increasing temperature leads to increased partitioning of Fe 3+ into garnet at the expense of cpx 12 and this is also the case for garnet-cpx pairs in Orapa eclogites ( Supplementary Fig. 6). In addition, garnet Fe 3+ /ΣFe is higher in high-temperature than in low-temperature eclogites from Lace, Koidu and the central Slave craton, with no temperature-dependence observed for northern Slave eclogites (not shown). Large scatter is evident at low temperatures, where the xenolith population is dominated by metasomatised (LREE-enriched) samples. Clinopyroxene in metasomatised samples tends to be jadeite-poor, and Fe 3+ partitioning into jadeite-poor cpx is reduced, as evident from Supplementary Fig. 6, while partitioning into garnet is enhanced. Thus, there is a superposition of temperature and crystal-chemical effects. To mitigate the latter, we focus on samples with jadeite mole fractions ≥ 0.27. The resultant regression ( Supplementary Fig. 6) has a large uncertainty on the slope and the intercept. Removal of two visual outliers does not significantly change the slope or intercept of the regression. Propagating the ± 0.01 uncertainty on the Mössbauer-derived Fe 3+ /ΣFe in cpx and in garnet, the average resultant uncertainty on cpx/garnet D(Fe 3+ /ΣFe) is ± 5.6. The regression allows calculation of Fe 3+ /ΣFe in cpx as a function of temperature (Supplementary Dataset) and garnet Fe 3+ /ΣFe, which yields Fe 3+ / ΣFe from 0.06 to 0.33 in samples from this and published studies. Propagating the uncertainty on the slope (±0.0039) and on the intercept (±6) of the regression results in an average uncertainty on the calculated Fe 3+ / ΣFe in cpx of ± 0.28. Measured and calculated Fe 3+ /ΣFe in Orapa cpx are compared in Supplementary Fig. 6. Whole rock reconstruction is standard procedure for eclogite xenoliths to avoid kimberlite contamination. The coarse grain size combined with typically small sample size precludes accurate modal determination. Modal abundances of 55% garnet and 45% cpx are considered appropriate for eclogites with picritic protoliths 14 . These values are corroborated by average modal abundances measured in exceptionally large xenoliths (with 1σ of ~5%) and modes determined for experimental subsolidus assemblages in mafic systems where variations as a function of pressure are ~8% (see discussion in 26 ). Here, a blanket uncertainty of 10% is assumed. For Orapa, bulk rock Fe 3+ /ΣFe was reconstructed by weighting measured garnet and cpx Fe 3+ /ΣFe by the wt% Fe contributed by each mineral and applying the aforementioned modal abundances. Propagation of a 5% uncertainty on the garnet and cpx mode each (for a total estimated uncertainty of 10%), as well as the 0.01 uncertainty on mineral Fe 3+ /ΣFe as obtained by Mössbauer spectrometry, results in an average uncertainty on the Orapa bulk-rock Fe 3+ /ΣFe of 0.011. For the remaining samples, bulk rocks were reconstructed using the same modes plus uncertainties from measured garnet Fe 3+ /ΣFe and calculated cpx Fe 3+ /ΣFe. Propagation of (1) the uncertainties on their respective Fe 3+ /ΣFe, (2) a 5% uncertainty each on the garnet and cpx mode, weighted by (3) the contribution of each mineral to the calculated whole rock Fe content by weight results in average uncertainties of ± 0.057. 
Despite the large (2019) 9:20190 | https://doi.org/10.1038/s41598-019-55743-1 www.nature.com/scientificreports www.nature.com/scientificreports/ uncertainties on the Fe 3+ /ΣFe of the calculated cpx, its contribution to the whole rock Fe total content is minor (average ~20%). This explains the comparatively low uncertainty on the calculated whole rock, which is dominated by garnet and the much lower uncertainty on its measured Fe 3+ /ΣFe, though sizable in terms of absolute value relative to the very low Fe 3+ /ΣFe determined. Results for Orapa whole rocks using measured and calculated cpx Fe 3+ /ΣFe are compared in Supplementary Fig. 6. Eclogitic bulk elemental compositions are also reconstructed from mineral compositions weighted by modes assuming 55% garnet and 45% cpx. Mantle eclogite minerals often have very homogeneous compositions (low standard deviations for multiple analyses per sample), and V and Sc are present at concentrations far above the detection limit; since V partitions more strongly into cpx than into garnet, increasing the cpx mode by 10% will lead to an increase in V/Sc of the calculated bulk rock by < 1 8 . Furthermore, rutile, which is a frequent accessory mineral in mantle eclogite, but not always exposed in sections, contains 100 s to 1000 s ppm of V (Aulbach, unpubl. database). Rutile modes are estimated by assuming that Ti is not depleted relative to Sm and Gd, as applies to melts from subduction-unmodified sources 14 , and reported in the Supplementary Dataset. Here, bulk rock V was calculated by considering the measured V concentrations in garnet, cpx and assuming a median V concentration in mantle eclogite rutile of 1270 ppm (Aulbach, unpubl. database). This leads to a small average increase in V/Sc from 5.99 to 6.06. Assuming median FeO contents measured in rutile from Koidu are representative (0.96 wt%, n = 28) 26 , the proportion of FeO controlled by rutile is minute (<0.05; Supplementary Dataset) and is not further considered. To estimate average "primary" Fe 3+ /ΣFe and V/Sc for various sample suites displayed in Fig. 3c, only eclogites and pyroxenites with non-cumulate protoliths that did not experience high degrees of differentiation are considered, which excludes gabbroic and high-Ca eclogites 14 . In addition, metasomatised samples with Ce/Yb NMORB > 1 are excluded, while melt-depletion from eclogite has no discernible effects on the two redox proxies employed, as discussed in the main text. This yields average estimates of Fe 3+ /ΣFe and V/Sc for 13 and 4 samples, respectively, from Koidu, 4 and 3 samples, respectively, from Orapa, and 22 and 3 samples, respectively, from Lace. Eclogites from Voyageur and Diavik in the northern and central Slave craton, respectively, have a strong cumulate character and are therefore considered unrepresentative of melts.
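As a worked illustration of the reconstruction procedure above, the sketch below weights mineral compositions by the assumed 55% garnet and 45% cpx modes, weights Fe3+/ΣFe by each mineral's contribution to bulk Fe, and optionally adds rutile V at the median concentration quoted in the text. The function names and the example input numbers are our own assumptions for demonstration; only the modal proportions and the 1270 ppm rutile V value come from the text.

```python
def bulk_fe3_sum_fe(fe3_gt, feo_gt, fe3_cpx, feo_cpx, mode_gt=0.55, mode_cpx=0.45):
    """Bulk-rock Fe3+/SumFe: mineral Fe3+/SumFe weighted by the share of total
    Fe that each mineral contributes to the reconstructed whole rock."""
    fe_gt = mode_gt * feo_gt
    fe_cpx = mode_cpx * feo_cpx
    return (fe3_gt * fe_gt + fe3_cpx * fe_cpx) / (fe_gt + fe_cpx)

def bulk_v_sc(v_gt, sc_gt, v_cpx, sc_cpx, mode_gt=0.55, mode_cpx=0.45,
              mode_rutile=0.0, v_rutile_ppm=1270.0):
    """Bulk-rock V/Sc from modal weighting; a small rutile mode adds V at an
    assumed median of 1270 ppm but contributes essentially no Sc."""
    v_bulk = mode_gt * v_gt + mode_cpx * v_cpx + mode_rutile * v_rutile_ppm
    sc_bulk = mode_gt * sc_gt + mode_cpx * sc_cpx
    return v_bulk / sc_bulk

# Hypothetical mineral data: garnet dominates the Fe budget, cpx the V budget.
print(bulk_fe3_sum_fe(fe3_gt=0.04, feo_gt=18.0, fe3_cpx=0.20, feo_cpx=6.0))   # ~0.074
print(bulk_v_sc(v_gt=120.0, sc_gt=45.0, v_cpx=350.0, sc_cpx=15.0, mode_rutile=0.003))
```

Because garnet carries most of the Fe, the bulk Fe3+/ΣFe is dominated by the measured garnet value, which is why the propagated uncertainty on the calculated cpx value has only a modest effect on the reconstructed whole rock.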
Discrete Heat Equation Model with Shift Values We investigate the generalized partial difference operator and propose a model of it in discrete heat equation in this paper. The diffusion of heat is studied by the application of Newton’s law of cooling in dimensions up to three and several solutions are postulated for the same. Through numerical simulations using MATLAB, solutions are validated and applications are derived. Introduction In 1984, Jerzy Popenda [1] introduced the difference operator Several formula on higher order partial sums on arithmetic, geometric progressions and products of n-consecutive terms of arithmetic progression have been derived in [5]. In 2011, M. Maria Susai Manuel, et al. [6] [7], extended the definition of α ∆ to for the real valued function v(k), 0 >  .In 2014, the authors in [6], have applied q-difference operator defined as ( ) ( ) ( ) and obtained finite series formula for logarithmic function.The difference operator ∆ with variable coefficients defined as equation equation is established in [6].Here, we extend the operator ∆  to a partial difference operator.Partial difference and differential equations [8] play a vital role in heat equations.The generalized difference operator with n-shift values ( ) , , , , 0 This operator where for some i and ( ) Equation ( 2) has a numerical solution of the form, where ( ) is the basic inverse principle with respect to Here we form partial difference equation for the heat flow transmission in rod, plate and system and obtain its solution. Solution of Heat Equation of Rod Consider temperature distribution of a very long rod.Assume that the rod is so long that it can be laid on top of the set ℜ of real numbers.Let ( ) , v k k be the temperature at the real position 1 k and real time 2 k of the rod.Assume that diffusion rate γ is constant throughout the rod shift value 0 >  . By Fourier law of Cooling, the discrete heat equation of the rod is, where . Here, we derive the temperature formula for ( ) Proof.Taking ( ) ( ) ( ) The proof of (5) follows by applying the inverse principle (3) in (6) Taking 6), using ( 7) and ( 5), The matlab coding for verification of ( 8) for (b).The heat Equation (4) directly derives the relation (c).The proof of (c) follows by replacing The following example shows that the diffusion rate of rod can be identified if the solution ( ) , v k k of ( 4) is known and vice versa.Suppose that ( ) is a closed form solution of (4), then we have the relation , which yields Theorem 2.5.Assume that the heat difference . In this case the heat Equation (4) has a solution ( ) Proof.From the heat Equation ( 4), and the given condition, we derive ( ) which yields either ( ) ( ) and hence ( ) Retracing the steps gives converse. Heat Equation for Thin Plate and Medium In the case of thin plate, let ( ) , , v k k k be the temperature of the plate at position ( ) , v k k and time 3 k .The heat equation for the plate is where Consider the heat Equation ( 16).Assume that there exists a positive integer m, and a real number 3 0 >  such that ( ) , , v k k k ml − and the partial differences , , , , are known functions then the heat Equation ( 16) has a solution ( ) The proof follows by applying inverse principle of Consider the notations in the following theorem: ) ( ) Theorem 3.2.Assume that ( ) ( ) ( ) ( ) Proof.The proof of this theorem is easy and similar to the proof of the Theorem (2.3).From ( 16) and (1), we arrive ) ( ) . 
Now the proof of (a), (b), (c), and (d) follows by replacing and substituting the corresponding v-values in (14), which yields (b). From the above diagrams, when the transmission of heat is known at the boundary points, the diffusion within the material under study can be determined easily.

Conclusion
The study of the partial difference operator has wide applications in discrete fields. In 1989, Miller and Rose [2] introduced the discrete analogue of the Riemann-Liouville fractional derivative and proved some properties of the inverse fractional difference operator.
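Because several of the displayed equations in this extract did not survive extraction cleanly, a small generic sketch may help convey the kind of discrete heat flow being modelled. The explicit update below is the textbook finite-difference form of the one-dimensional heat equation with diffusion rate γ and unit spatial shift; it is a generic illustration written by us, not a reconstruction of the authors' operator with arbitrary shift values ℓ.

```python
def step_heat_1d(v, gamma):
    """One explicit time step of the discrete 1-D heat equation:
    v_new[k] = v[k] + gamma * (v[k+1] - 2*v[k] + v[k-1]).
    Boundary temperatures are held fixed; stable for gamma <= 0.5."""
    v_new = v[:]
    for k in range(1, len(v) - 1):
        v_new[k] = v[k] + gamma * (v[k + 1] - 2 * v[k] + v[k - 1])
    return v_new

# Example: a rod with a hot centre cooling toward its fixed ends.
rod = [0.0, 0.0, 100.0, 0.0, 0.0]
for _ in range(10):
    rod = step_heat_1d(rod, gamma=0.25)
print(rod)
```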
The E ff ect of Leaching Fraction-Based Irrigation on Fertilizer Longevity and Leachate Nutrient Content in a Greenhouse Environment : An experiment was conducted to evaluate the e ff ects of leaching fraction (LF) on the longevity of controlled-release fertilizer (CRF) and leachate nutrient content in a pine bark substrate. The e ff ect of LF-based irrigation was evaluated under six target LFs of 0.05, 0.15, 0.25, 0.35, 0.45, and 0.55. The 2.72 L nursery pots were filled with 100% pine bark substrate amended with dolomitic lime at a rate of 2.97 kg / m 3 and Harrell’s 16-6-13 POLYON ® applied at a rate of 6 g per container. Fertilizer was encased in vinyl-coated fiberglass mesh bags and subdressed 2.5 cm under the substrate surface for recovery at the end of 10 weeks. The total amount of nutrients leached from the container was greater at higher LFs, with twice as much inorganic nitrogen leached at a LF of 0.55 than a 0.15 LF. The amount of dissolved nutrients left in the substrate decreased as the LF treatments increased. There were 29.6% more inorganic nitrogen and 37.7% more phosphorus left in the substrate irrigated with a 0.15 LF as compared to a 0.55 LF. This suggests that at lower LFs, more dissolved nutrients may be available for plant uptake. No di ff erences were seen in the amount of nutrients lost from the CRF or remaining in the prills. Results indicate that reducing the LF did not influence the longevity of POLYON ® CRF in a pine bark substrate, but that a lower LF may be useful in reducing nutrient runo ff into the environment. Targeting a lower LF also resulted in a larger pool of plant-available nutrients, allowing nursery producers to potentially reduce fertilizer rates. Introduction Water issues are an increasing concern for the ornamental container nursery industry. Growers rely on frequent irrigation and applications of controlled-release fertilizer (CRF) to produce saleable plants [1]. These practices contribute to increase runoff of nitrogen and phosphorus, causing detrimental environmental effects such as contamination of local water resources, eutrophication, and death of aquatic species [2,3]. Overirrigation may also lead to a faster release of CRF [4][5][6] requiring additional fertilizer applications during the production cycle, at a significant cost for growers [7]. Leaching fraction (LF) is one method of monitoring irrigation efficiency [8], and is calculated by dividing the amount of water that leaches from a container by the total amount of irrigation applied: Leaching fraction = (leachate recovered)/(total applied irrigation) In previous studies and nursery applications, irrigating based on a 0.15 to 0.2 target LF or monitoring substrate moisture has shown the potential to reduce the loss of nutrients through leaching and preserve CRF longevity [5,9,10]. Owen et al. [11] found that a target LF 0.1 to 0.2 reduced Horticulturae 2020, 6, 43 2 of 8 leachate volume by 64% and reduced dissolved reactive P concentration in leachate by 64% without influencing plant dry weight. Tyler et al. [12] reported that a low LF of 0 to 0.2 decreased nitrate and phosphorus contents in effluent compared to a LF of 0.4 to 0.6. Prehn et al. [13] reported that plants irrigated with a target LF of 0.2 had equivalent growth compared to those that were irrigated with an on-demand irrigation system, suggesting that plants of similar size could be produced with a significantly reduced LF. 
When determining the effects of substrate moisture on CRF release rates, there are conflicting reports in the literature. Kochba et al. [5] reported that coated KNO3 release was essentially equal if the moisture content of the soil was greater than 50% of field capacity. This contrasts with results from Du et al. [14], which demonstrated that rates of release for CRF were approximately 5 to 20% slower in a column of sand at field capacity compared to saturated sand or free water. Finally, Adams et al. [4] reported that although there were no differences between CRF release in a moist solid substrate and pure water, the mass flow of water across the prill surface in fluctuating water potential environments may lead to faster exhaustion of the CRF. The objective of this study differed from previous work in that it evaluated LF influence on the longevity of a CRF and how different LFs affect leachate nutrient content in a pine bark substrate.

Materials and Methods
This study was conducted in a greenhouse at the Paterson Greenhouse Complex, Auburn University in Auburn, Alabama, USA (USDA Cold Hardiness Zone 8a). "Trade gallon" 2.72 L black plastic nursery containers were filled with 1200 g of 100% pine bark with a gravimetric water content of 37.3%, amended with 336 g (rate of 2.97 kg/m3) of dolomitic lime to simulate a common nursery mix. The pots were fallow and contained no plants. SOAX® liquid wetting agent (Smithers Oasis, Kent, OH, USA) at 1200 ppm was applied to the substrate to help with surface wetting and minimize the effects of channeling. Harrell's 16-6-13 POLYON® CRF (Harrell's LLC, Lakeland, FL, USA) was applied at a rate of 6 g to every container. Fertilizer was weighed and encased in 11 cm square bags made from vinyl-coated charcoal fiberglass mesh that were heat sealed around the edges (Phiefer Inc., Tuscaloosa, AL, USA). Mesh bags were applied 2.5 cm below the substrate surface. Containers were irrigated to obtain six different target LFs: 0.05, 0.15, 0.25, 0.35, 0.45, and 0.55. Each container represented an experimental unit; there were four replications of each irrigation treatment for a total of 24 containers arranged in a completely randomized design. To determine initial irrigation application, containers were thoroughly watered in and drained for one hour. Containers were then weighed, left for two days, and weighed again to determine water loss due to evaporation, noted as "water loss". Initial irrigation volumes were calculated by determining the amount of water needed to replace the evaporated water (water loss) and adding the amount needed to reach the target LF for each container (water loss × LF):

Irrigation volume (mL) = water loss (mL) + (water loss (mL) × LF)

After the initial irrigation calculation, adjustments to irrigation volume were determined using the actual LF obtained from each irrigation. The equation used was adapted from Owen et al. [15], where the result is the (positive or negative) correction applied to the next irrigation:

±Irrigation volume (mL) = applied irrigation (mL) × (target LF − actual LF)

Data collection began on 12 August 2019 and took place over 10 weeks. Containers were irrigated by hand three times a week with a syringe. Water was distributed slowly and evenly over the surface of the substrate. During irrigation events, each fallow container was fitted into a 2.5 L leachate collection bucket. The containers fit snugly into the collection buckets, leaving adequate space between the container and bucket for leachate to collect. After irrigating, containers drained for 30 min.
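The leaching-fraction bookkeeping above translates directly into a short routine. The sketch below is a minimal illustration, assuming that the ± adjustment equation is applied as an additive correction to the previously applied volume; the function names and the example numbers are our own and not values from the study.

```python
def leaching_fraction(leachate_ml, applied_ml):
    """LF = leachate recovered / total applied irrigation."""
    return leachate_ml / applied_ml

def initial_irrigation(water_loss_ml, target_lf):
    """Replace evaporative water loss and add enough extra to hit the target LF."""
    return water_loss_ml + water_loss_ml * target_lf

def adjusted_irrigation(previous_applied_ml, target_lf, actual_lf):
    """Correct the next application up or down by the LF error
    (adapted from the Owen et al. adjustment equation)."""
    return previous_applied_ml + previous_applied_ml * (target_lf - actual_lf)

# Hypothetical example: 200 mL evaporated between weighings, targeting a 0.15 LF.
applied = initial_irrigation(200.0, 0.15)      # 230 mL applied
lf = leaching_fraction(45.0, applied)          # ~0.196, overshot the target
applied_next = adjusted_irrigation(applied, 0.15, lf)
print(applied, round(lf, 3), round(applied_next, 1))
```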
Leachate volume per container was then measured with a graduated cylinder and recorded. Leachate pH and electrical conductivity (EC) were measured using a HACH Pocket Pro + Multi 2 Tester (Hach Co., Loveland, CO, USA). A 15 mL aliquot of each leachate sample was placed in a sealed collection tube Horticulturae 2020, 6, 43 3 of 8 and refrigerated. Throughout each week, the three individual samples collected from each replication were combined, a total of one pooled 45 mL sample per container per week. Samples were kept in refrigeration at 3 • C during the collection week, after which the samples were frozen. After 5 and 10 weeks, samples were thawed and sent to Quality Analytical Laboratories in Panama City, FL for a complete soilless media analysis. Leachate samples were analyzed for NO 3 -N and NH 4 -N (fertilizer did not contain urea) with a Lachat Quikchem ® 8500 series flow injection analysis system (Hach Co., Loveland, CO, USA). Total phosphorus, potassium, SO 4 -S, calcium, magnesium and micronutrients (Fe, Mn, B, Cu, Zn, Mo, Na, Al, and Cl) were analyzed using a Thermo Scientific™ iCAP™ 7400 ICP-OES analyzer (Thermo Fisher Scientific™, Waltham, MA, USA). At the end of the study, mesh bags were retrieved to determine nutrients remaining in the fertilizer prills. Bags were separated from the substrate and allowed to air-dry for 14 days. The prills from each recovered bag were weighed after which 100 prills were separated and weighed again. These 100 prills were ground using a mortar and pestle and mixed with 1 L of deionized water. The prill solution was stirred with a stir rod for 5 min before a 45 mL aliquot of the extractant was taken and sampled for pH and EC. The samples were frozen until analyzed using the same methods described above. Initial fertilizer application was determined from an average of four analyses of 6 g of unused CRF. Fertilizer recovered from the mesh bags after the completion of the study were recorded as remaining fertilizer. Fertilizer loss was calculated as: Fertilizer loss = initial fertilizer − remaining fertilizer Total fertilizer leached (mg) was determined by multiplying the concentration of nutrients in the weekly leachate samples by weekly leachate volume and totaled over the 10 weeks. Fertilizer remaining in the substrate or lost to volatilization was calculated by subtracting fertilizer loss from the total fertilizer leached: Fertilizer in substrate or volatilized = fertilizer loss − fertilizer leached Data was analyzed in JMP ® and SAS University Edition by SAS ® (SAS Institute Inc., Cary, NC, USA) using a Tukey's honestly significant difference (HSD) test for means comparison and general linear mixed models (GLIMMIX) for regression. LF and Leachate Nutrient Content The total amount of CRF leached from the containers over 10 weeks was significantly greater at higher leaching fractions, with twice as much NO 3 -N and NH 4 -N leached and over twice as much P leached at a LF of 0.55 than 0.15 LF (Table 1). Irrigating to a 0.15 LF instead of a 0.25 LF was found to reduce leachate volume by 18.9% and NO 3 -N and P in leachate by 11.8% and 11.1% respectively. While not as dramatic as findings by Owens et al. (2008) in a microirrigated system, where decreasing the target LF from 0.2 to 0.1 reduced leachate volume by 64% and dissolved P in leachate by 64%, both agree with previous research by Tyler et al. (1996) that the amount of N and P in effluent can be reduced by decreasing the target LF. 
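The fertilizer mass balance defined in the Methods (initial, remaining, leached, and substrate/volatilized pools) can likewise be expressed as a short bookkeeping routine. The sketch below simply restates the three equations given above; the function name and the example numbers are invented for illustration and are not measurements from this study.

```python
def fertilizer_budget(initial_mg, remaining_mg, weekly_conc_mg_l, weekly_leachate_l):
    """Partition a nutrient between what left the CRF prills, what leached,
    and what remained dissolved in the substrate (or was volatilized)."""
    loss = initial_mg - remaining_mg                       # released from the prills
    leached = sum(c * v for c, v in zip(weekly_conc_mg_l, weekly_leachate_l))
    in_substrate_or_volatilized = loss - leached
    return {"loss": loss, "leached": leached,
            "substrate_or_volatilized": in_substrate_or_volatilized}

# Hypothetical 10-week nitrogen budget for one container.
budget = fertilizer_budget(
    initial_mg=960.0, remaining_mg=120.0,
    weekly_conc_mg_l=[40, 38, 35, 30, 28, 25, 22, 20, 18, 15],
    weekly_leachate_l=[0.20] * 10,
)
print(budget)
```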
Results from this study indicate that irrigating to a lower LF can reduce the amount of NO 3 -N and P, the major nutrients responsible for eutrophication, in runoff from container nurseries. The amount of dissolved nutrients left in the substrate decreased as the LF treatments increased (Table 1). There was 29.6% more inorganic nitrogen and 37.7% more phosphorus left in the substrate irrigated with a 0.15 LF as compared to a 0.55 LF, likely due to flushing of dissolved nutrients out of containers at the higher leaching fractions (Table 1). Although there were some discrepancies in the micronutrients, possibly due to the addition of mineral nutrients from substrate breakdown, irrigation water, and the addition of dolomitic lime to the substrate, there were trends observed with SO 4 -S, Ca, Mg, Fe, Mn, B, Cu, Zn, Mo, Na, Al, and Cl (Table 2). In general, more of these nutrients were lost from the containers in leachate when containers were irrigated to a higher LF, and at a lower LF more nutrients were retained in the substrate (Table 2). This suggests that at lower leaching fractions, a larger pool of plant-available nutrients may be available for uptake, potentially allowing growers to reduce the application rate of CRF. Table 1. N, P, and K in leaching fractions (LF) and retained in pine bark substrate over ten weeks. In weeks 5 through 10, leachate EC correlated inversely with target LF treatment (Table 3). Although irrigating to a lower target LF may reduce the amount of nutrients leached, a grower that is monitoring the leachate for EC may see higher numbers associated with lower LF due to the high concentration of salts in a small volume of leachate ( Figure 1). Leachate EC increased as the weeks progressed due to the control-release mechanism releasing nutrients over time (Table 3). Leachate pH decreased linearly over 10 weeks, but there were no consistent trends of target LF treatment on the pH of the leachate. (Table 4). L *** Q * L *** L *** L *** Q * z Analyzed using the proc glimmix procedure in SAS ® University Edition (SAS Institute Inc., Cary, NC, USA). y µS/cm = Microsiemens per cm; 1 µS/cm = 0.001 S/m. x Significant or nonsignificant (ns) quadratic (Q) or linear (L) trends using regression models at p < 0.001 (***), p < 0.01 (**), and p < 0.05 (*). L *** L *** L *** L *** L *** L *** z Analyzed using the proc glimmix procedure in SAS ® University Edition (SAS Institute Inc., Cary, NC, USA). Significant or nonsignificant (ns) quadratic (Q) or linear (L) trends using regression models at p < 0.001 (***) and p < 0.05 (*). Nitrogen Although irrigating to a lower target LF may reduce the amount of nutrients leached, a grower that is monitoring the leachate for EC may see higher numbers associated with lower LF due to the high concentration of salts in a small volume of leachate ( Figure 1). Leachate EC increased as the weeks progressed due to the control-release mechanism releasing nutrients over time (Table 3). Leachate pH decreased linearly over 10 weeks, but there were no consistent trends of target LF treatment on the pH of the leachate. (Table 4). LF and Fertilizer Longevity There was a quadratic trend in the pH of the fertilizer remaining in the mesh bags between treatments; pH increased as target LF increased from 0.05 to 0.35 and then decreased as target LF increased to 0.55 (Table 5). 
There was a similar quadratic trend between EC of the fertilizer remaining in the mesh bags and target LF treatments, with fertilizer EC decreasing as target LF increased from 0.05 to 0.25 and then increasing as target LF increased toward 0.55. ( Table 5). The fertilizer used in the study was a scheduled three-month longevity at 80 • F. This study occurred over 10 weeks. It is possible that the fertilizer was completely exhausted within the first five or six weeks of the study. However, a linear relationship was observed in most treatments over ten weeks, indicating that fertilizer was still being released from the prills. In future studies, reducing the length of the experiment or including a time factor associated with fertilizer sampling may reveal more obvious differences in fertilizer EC between target LF treatments. Despite a slightly higher EC in treatments with lower LF, there were no differences in the amount of nutrients lost from the CRF or remaining in the prills (Table 6), again potentially due to the extended period of the study. A shorter duration of six weeks may have shown differences in the amount of nutrients lost and nutrients remaining in the CRF prills. The results of this study indicate that reducing the LF did not influence the longevity of POLYON ® CRF in a pine bark substrate over 10 weeks but that a lower LF may be useful in reducing nutrient runoff into the environment. Growers could potentially reduce total loading of nutrients in runoff by reducing target LF, a benefit in areas with strict water-quality requirements or where environmental quality is a concern. Targeting a lower LF may also result in a larger pool of plant-available nutrients, allowing nursery producers to save on input costs by reducing CRF rate. Conclusions Irrigating to different target leaching fractions had no significant effect on the longevity of Harrell's 16-6-13 POLYON ® in pine bark substrate over 10 weeks. The results of this study suggest that there are benefits to targeting a lower LF, including a reduction in the amount of nutrients leached and a greater concentration of dissolved nutrients in the substrate. Lower LF were shown to reduce the total amount of N and P leached, which has implications to reduce the environmental impact of container nursery production. Limiting nutrient leaching may also help growers stay compliant with any current or future federal and state standards regarding water quality and daily nutrient loads. Lower LF treatments were also associated with larger amounts of dissolved nutrients in the substrate. Although this study did not contain any plants, higher concentrations of nutrients available for plant uptake may influence growth rate or plant size. By targeting a lower LF, growers may be able to reduce their application rates of CRFs while still producing salable plants. It is important to note that the use of CRF with a different prill coating and release mechanism may alter results. Further research is necessary to explore the effects of LF on CRF with different release mechanisms as well as the impact of LF on plant growth and salability.
Ecological health assessment using Macroinvertebrate - based Index of Biotic Integrity (M-IBI): case study Lake Gunung Putri, West Java, Indonesia . Macroinvertebrate - Index of Biotic Integrity (M-IBI) is one of the most widely used to assess the health of the aquatic ecosystem. However, few studies of M-IBI on the lake ecosystem. Lake Gunung Putri, which is one of the small lakes in Bogor Regency, West Java affected anthropogenic activities. We collected macroinvertebrates with an Ekman Grab sampler at five different sampling sites of Lake Gunung Putri in February – April 2019. Metric variability, sensitivity, redundancy, and responsiveness to environmental gradients were tested on 22 candidate metrics of properties of richness, taxonomic composition, tolerance, and functional feeding. The selected metrics were the number of taxa, Shannon-Wiener diversity index, percentage of dominant taxa, and Biological Monitoring Working Party (BMWP). Application of M-IBI in the Lake Gunung Putri ranged between 20 to 4 with represented criteria of good, fair, poor, and very poor condition. Introduction Situ Gunung Putri lake is one of the natural small lakes located in Gunung Putri District, Bogor Regency, West Java.Commonly, this lake has been used by people around as a fishery, especially for captured fishery which uses fishing rods.Unfortunately, the bloom of water hyacinth is a serious problem in this lake [1].It indicates the eutrophication condition of the waters due to anthropogenic activities from the environment nearby the lake.In addition, some other sources of pollution such as domestic waste, industry, and agriculture will impact the degradation of the biotic integrity of the ecosystem. The ecosystem health of the lake can be assessed by a biological approach (bioassessment).Some bioassessment development has been dominated by single indices to describe aquatic ecosystem conditions.Integration of more than one index or metric summarized into a single index or multimetric index has been developed on fish by Karr [2].Application of the multimetric index was adapted to other biotic communities for example macroinvertebrate [3], periphyton [4], and plankton [5].Benthic macroinvertebrate is the most popular for bioassessment due to the character of its life which is sessile in the bottom of substrates [6], easy to be sampling [7], and has various tolerances to respond to water pollution [8]. The use of multimetric indices integrates biological data and reflects aquatic conditions comprehensively [9,10].Application of the biotic integrity index for macroinvertebrate involves some aspects such as richness, taxonomic composition, tolerance, and feeding structure [11].Previous research shows the application of this index for river [12,13] and lake ecosystems [14,15].However, the development of this index in Indonesia has been limited especially on macroinvertebrate in Situ or the small lake.Therefore, this research aims to assess the ecological condition of Situ Gunung Putri lake based on the Macroinvertebrate-Index of Biotic Integrity as the approach. 
Collection of macroinvertebrate This research was conducted at five stations in Situ Gunung Putri lake from February to April 2019 (Figure 1).Macroinvertebrate community samples were collected once every two weeks or six times in total during the research.Samples of this community were taken by using Ekman grab sampler with the size 15 x 15 cm 2 .All of these samples then been processed in the laboratory for filtering and preservation in the formaldehyde solution of 4% -5%.The next step was sorting and keeping the samples in ethanol at 70% before they were identified to the group they belong to.Furthermore, the identification process was referred to by Jutting [16], Kathman & Brinkhurst [17], and other references.Besides taking the biota, some environmental variables were also measured including physical and chemical parameters.Physical parameters that had been recorded were water depth, Secchi depth, water temperature, and turbidity.Meanwhile, three chemical parameters which had been measured including dissolved oxygen, pH, and chemical oxygen demand (COD). Determination of reference site The reference site (minimally disturbed site) in this study is determined based on the site with the score on the Shannon-Wiener diversity index (H'≥1.5)modified from Huang [18].Meanwhile, sites with the value of DO met the water quality standard of 4 mgL -1 in Government Regulation of Indonesia No. 22/2021 (class 2) also selected for determined reference site of Situ Gunung Putri lake [19]. Selection of metrics Totally 22 candidates of metric were selected to arrange M-IBI.These candidates consisted of four aspects which included richness, taxonomic composition, tolerance, and feeding structure.Candidate metrics of macroinvertebrate were calculated using Microsoft excel and PAST 4.03 from the methods by Taowu [14]. -Richness, including the total species abundance, the total number of taxa, the number of Chironomidae taxa, and the number of Mollusc and Crustacea. Range and variability Metric which has a low range (0 to 2) should be deleted in the analysis.Metric with a high variability or coefficient of variation upper 1 (CV >1) should not be analyzed to create the indices [20,12]. Sensitivity The sensitivity of metric value is determined with Box-Whisker and Plot to show differences between reference site and impaired sites.A metric that does not has overlapping interquartil value between reference and impaired sites will be considered a strong metric in differencing both sites [12]. Redundancy Only one metric will be counted in the next analysis to arrange the indices if there is a redundant metric or who has a high correlation coefficient (r > 0.8) [12]. Correlation of metrics and environmental variable Relationships between the selected metrics and environmental variables were also identified using Spearman correlation analysis. 
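To make the metric calculations above concrete, the sketch below computes a few of the candidate metrics from a single grab sample and the coefficient of variation used in the variability screen. The taxa names and abundances are invented, and BMWP is omitted because it additionally requires per-family tolerance scores; the function names are ours.

```python
import math

def candidate_metrics(counts):
    """Compute three candidate metrics from a {taxon: abundance} dict:
    total number of taxa, Shannon-Wiener diversity H', and % of dominant taxa."""
    n = sum(counts.values())
    taxa = [c for c in counts.values() if c > 0]
    shannon = -sum((c / n) * math.log(c / n) for c in taxa)
    return {
        "num_taxa": len(taxa),
        "shannon_h": shannon,
        "pct_dominant": 100.0 * max(taxa) / n,
    }

def coefficient_of_variation(values):
    """CV used in the variability screen; metrics with CV > 1 are discarded."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return sd / mean

# Hypothetical grab sample dominated by Chironomidae.
print(candidate_metrics({"Chironomidae": 120, "Tubificidae": 30, "Melanoides": 10}))
```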
M-IBI calculation
The normalization step was carried out by computing the percentile distribution of each metric, followed by three-section scoring (1, 3, and 5) of each metric. If a metric was expected to increase with increasing disturbance or pollution, values from the lowest value up to the 50th percentile were given a score of 5, values between the 50th and 90th percentiles a score of 3, and values above the 90th percentile a score of 1. The minimum, maximum, first quartile, second quartile, and third quartile of each metric were used as thresholds for assigning the scores. The range of the multimetric index value was divided into the criteria good, fair, poor, and very poor [14].

Determination of reference site
Table 1 shows the values of the physicochemical parameters observed in Situ Gunung Putri lake. Station 1 was determined as the reference site in this study, with a dissolved oxygen content of 5.70 mgL-1 (>4 mgL-1). The Shannon-Wiener diversity index at this site was 1.65 (>1.5).

Range and variability of metrics
Metrics with a narrow score range, for instance 0 to 2, were eliminated from the analysis. Ten metrics were eliminated in this step: the number of Chironomidae taxa, % of Chironomidae, % of Pelecypods, % of Corbicula, the number of intolerant taxa, % of intolerant taxa, Beck's Biotic Index, % of predators, % of collector-filterers, and % of shredders. The remaining 12 metrics were not eliminated in the variability analysis because they did not show high variability, and they were therefore carried forward to the analysis of metric sensitivity.

Sensitivity and redundancy
The sensitivity of the metric scores was assessed with Box-and-Whisker plots to show how strongly each metric separates the reference site from the impaired sites. Metrics whose interquartile ranges at the reference site and the impaired sites do not overlap are considered sensitive [12]. Box-and-Whisker plots were produced to test the sensitivity of the 12 remaining metrics: total taxa abundance, total number of taxa, number of Mollusca and Crustacea, % of Mollusca and Crustacea, % of Gastropods, Shannon-Wiener index, Goodnight-Whitley index, % of dominant taxa, HBI, BMWP, % of scrapers, and % of collector-gatherers. Figure 2 shows that four metrics are sensitive enough to differentiate the reference and impaired sites: the total number of taxa, the percentage of dominant taxa, the Shannon-Wiener index, and BMWP. These four metrics did not show redundancy (correlation coefficient > 0.8) and were therefore used in the next analysis.

Correlation of metrics and environmental variables
The four remaining metrics were correlated with the environmental variables that showed significant differences between the impaired sites and the reference site: water depth, Secchi depth, water temperature, and dissolved oxygen. The Spearman correlation shows that the number of taxa, the Shannon-Wiener index, and BMWP correlate positively with dissolved oxygen and water temperature, and negatively with water depth and Secchi depth. The percentage of dominant taxa shows the opposite correlations.
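The three-section scoring and the summation into the multimetric index can be sketched as follows. The percentile convention and the mirrored rule for metrics that decline with disturbance are our assumptions (the text only spells out the increasing-with-disturbance case); function names and the example scores are illustrative.

```python
def percentile(values, p):
    """Linear-interpolation percentile (the exact convention is assumed here)."""
    s = sorted(values)
    idx = (len(s) - 1) * p / 100.0
    lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (idx - lo)

def score_metric(value, all_values, increases_with_disturbance):
    """Three-section (1-3-5) scoring against the 50th and 90th percentiles.
    Metrics expected to rise with disturbance (e.g. % dominant taxa) score 5
    when low and 1 when high; the mirrored rule for metrics that fall with
    disturbance (taxa richness, H', BMWP) is our assumption."""
    p50, p90 = percentile(all_values, 50), percentile(all_values, 90)
    if increases_with_disturbance:
        return 5 if value <= p50 else 3 if value <= p90 else 1
    return 1 if value <= p50 else 3 if value <= p90 else 5

def m_ibi(metric_scores):
    """Sum of the four selected metric scores; with four metrics the index
    ranges from 4 (very poor) to 20 (good)."""
    return sum(metric_scores)

# Hypothetical example for one site.
print(m_ibi([5, 3, 3, 1]))  # 12, which falls in the 'poor' band (9-12)
```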
M-IBI calculation
The score criteria of the four metrics were formulated from the score range of each metric in developing the M-IBI. Each metric's score range is divided into three scores (1-3-5) in line with increasing disturbance or pollution. Merging the four selected metrics into a single multimetric index produced a score range determined by the minimum and maximum possible scores of those four metrics (Table 2). The resulting scores range from 4 to 20 and are divided into four criteria. The score range of the multimetric index is divided into four criteria of water quality: good (17-20), fair (13-16), poor (9-12), and very poor (4-8). The result of developing the M-IBI in Situ Gunung Putri lake shows that only four metrics are suitable for application: the total number of taxa, the percentage of dominant taxa, the Shannon-Wiener index, and BMWP. These four metrics can describe the level of ecological disturbance in Situ Gunung Putri due to anthropogenic activities. Applying the index in this study yielded the following criteria for the ecological condition of Situ Gunung Putri lake: good to very poor (station 1), poor to very poor (station 2), and very poor (stations 3, 4, and 5). The total number of taxa and the Shannon-Wiener index are part of the macroinvertebrate multimetric systems for lake ecosystems studied by Burton [21] and Shah [22]. The metric of percentage of dominant taxa has also been studied by Lewis [23] and Wesolek et al. [24], while BMWP has been studied by O'Toole et al. [25]. Ndatimana [26] stated that the M-IBI is an important tool adopted by developing countries for lake ecosystem management in step with the rise of anthropogenic stressors. The resulting indices may vary in the number and type of metrics depending on the local biota and the environmental variables responsive to the disturbances [27].

Conclusion
The application of the M-IBI in Situ Gunung Putri lake shows that the four metrics sensitive in discriminating the reference site from the impaired sites are the total number of taxa, the percentage of dominant taxa, the Shannon-Wiener diversity index, and BMWP. The resulting index shows that the ecological health of Situ Gunung Putri lake ranges from very poor to good, with scores from 4 to 20.

Fig. 2. Box-and-Whisker plots showing the discriminatory capability of each of the four selected metrics.
Table 1. Physicochemical parameters of Situ Gunung Putri lake.
Table 2. Descriptive statistics of the selected metrics at the reference sites and their scoring criteria.
v3-fos-license
2018-04-03T00:39:29.889Z
2017-09-08T00:00:00.000
26153411
{ "extfieldsofstudy": [ "Geology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-017-11039-w.pdf", "pdf_hash": "80bcb3ec2ed508e533c36dd7fd71e3ec32c2f55e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43620", "s2fieldsofstudy": [ "Geology" ], "sha1": "09481d74c5d7f1092970d5eacda3f8d2c095c134", "year": 2017 }
pes2o/s2orc
On the consistency of seismically imaged lower mantle slabs The geoscience community is increasingly utilizing seismic tomography to interpret mantle heterogeneity and its links to past tectonic and geodynamic processes. To assess the robustness and distribution of positive seismic anomalies, inferred as subducted slabs, we create a set of vote maps for the lower mantle with 14 global P-wave or S-wave tomography models. Based on a depth-dependent threshold metric, an average of 20% of any given tomography model depth is identified as a potential slab. However, upon combining the 14 models, the most consistent positive wavespeed features are identified by an increasing vote count. An overall peak in the most robust anomalies is found between 1000–1400 km depth, followed by a decline to a minimum around 2000 km. While this trend could reflect reduced tomographic resolution in the middle mantle, we show that it may alternatively relate to real changes in the time-dependent subduction flux and/or a mid-lower mantle viscosity increase. An apparent secondary peak in agreement below 2500 km depth may reflect the degree-two lower mantle slow seismic structures. Vote maps illustrate the potential shortcomings of using a limited number or type of tomography models and slab threshold criteria. Seismic Tomography Since the development of seismic tomography, the internal structure of Earth has been vastly sampled and imaged at different wavelengths: from local to global scales, and with a variety of spatial resolutions as a function of latitude, longitude or depth. Global tomography models of compressional (P) or shear (S) wave velocity variations have been obtained from different types of data and modeling techniques. Generally, three types of data are used: body-wave traveltime, surface wave dispersion, and normal mode spectral measurements. In global tomography, teleseismic body-wave travel times are the main source of data in constraining the structure of the lower mantle. Surface waves mainly constrain the upper mantle, and normal modes provide information on very large-scale features within the Earth. For brevity, we herein refer to "S-waves" and "P-waves", even though we technically refer to models of S-or P-wavespeed anomalies. Both S-and P-wave models have long been used to identify deep Earth features (e.g. refs 4 and 22). While the two wave types agree on long wavelength structures, this correlation breaks down at shorter wavelengths 23 . Furthermore, the two wave types have different sensitivities to lateral structure, and while both are sensitive to changes in temperature, the S-waves may be even more so 23 . Our knowledge of structural detail in the lowermost mantle is lacking compared to shallower levels 24 and is reflected in a greater uncertainty in the imaging of the lowermost mantle structures. Subducting slabs are one of the first-order features of the upper and lower mantle, contributing strongly to the organization of convective flow, and may interact with the edges of LLSVPs, possibly instigating plume genesis 25 . In the upper mantle (<660 km), recently subducted slabs are characterized by a strong and short wavelength signal 26 . Slab deformation and stalling may occur within the transition zone between the upper and lower mantle due to phase transitions, however, no known phase transitions have been described for the upper part of the lower mantle, and previously described seismic discontinuities between 700-2000 km have been attributed to a viscosity increase (e.g. ref. 27). 
In the lower mantle (>660 km), the apparent blurring of slabs can be attributed to the effects of thermal diffusion, buckling, advective thickening, and limitations in seismic imaging [28][29][30] . Using a different depth-integrated approach to evaluate the robustness of lower mantle features, ref. 31 applied cluster analysis to tomographic data from five global S-wave models to identify large scale features; notably, two antipodal Large Low Shear Wave Velocity Provinces regions (LLSVP) 32 and a smaller "Perm" anomaly, surrounded by a contiguous, faster than average region. Subsequently, ref. 33 expanded this analysis to five S-wave and five P-wave models and utilized a moving depth-integrated data window to allow for some assessment of the depth-dependence of lower mantle structures. Here we follow a strongly depth-dependent approach and aim to focus on the smaller wavelength component of fast seismic anomalies. Choices in data input as well as techniques involved in the inversion, including parameterization and regularization can also yield mantle velocity structures that are regionally variable. It is therefore desirable to discern a feature that is consistent across several tomography models. Whilst a velocity structure in a single given tomography model may be a result of a parameterization choice and/or belong to the "null space, " it is unlikely to be imaged across a suite of different models 31 . Here we compare fourteen global tomography models, which have variable levels of input data overlap and parameterization choices (Table 1, Methods). Plate Reconstructions Crucial constraints on plate motions come from the ocean basins, where tectonic structures such as fracture zones and abyssal hills, and measurements of magnetic anomalies and hotspot tracks directly record the speed and direction of seafloor spreading, and the timing and nature of plate boundary reorganizations. However, due to the persistent recycling of oceanic lithosphere by subduction, these key constraints for reconstructing plate motions are lost over time. By the Early Cretaceous ~60% of the present-day seafloor record is lost 34 and alternative datasets for constraining plate kinematic histories must be sought. Considering that the tomographic visibility of the mantle lies around 200 Ma (e.g. ref. 35), though older timescales of ~250 to 300 million years have been proposed (e.g. ref. 5), slabs identified in the lowermost mantle provide such an alternative constraint for refining plate motions, specifically the post-Triassic subduction record. Indeed, recent work 8 has confirmed that a significant and time-depth progressive correlation exists between reconstructed subduction zone locations and the occurrence of positive wavespeed velocities in the mantle below, providing a solid foundation for further work on a slab-based reference frame. However, such an analysis is dependent on the resolution of the tomography model used and moreover relies on several qualitative and quantitative assumptions, namely that the identified slabs are representative of the true mantle state, have undergone near vertical sinking, did not stall for a significant amount of time in the transition zone, and that the base plate reconstruction itself and key tie-point events such as orogenesis are known accurately. Traditional "anchor" slabs such as the Farallon, Mongol-Okhotsk and Aegean Tethyan slabs have been recognized (e.g. ref. 5), and can form the basis of subduction-based absolute reference frames. 
However, these slabs have recently come under reinterpretation as to their origins and ages (e.g. refs 9 and 36), highlighting the importance of a renewed assessment of slab identification. Results Depth-dependent variation in mean positive value (MPV). Figure 1 shows histograms of the seismic velocity anomalies for each of the tomography models at depth, before the positive wavespeeds have been extracted but after the LDM (layer dependent mean; see Methods) removal. Overall, the P-wave models show a smaller range of % wavespeed perturbations (amplitudes) than the S-wave models; i.e. the wavespeed distribution is tighter about 0. Most models exhibit a shift from normally distributed wavespeeds in the shallowest depths, to negatively skewed distributions in the lower mantle (i.e. a lengthened tail in the negative wavespeeds and an increased relative frequency of positive wavespeeds). Furthermore, this effect in the S-wave models is pronounced, leading to a long tail in the negative wavespeed space, in excess of −3% in some models. Figure 2a shows the variability in the depth-dependent mean positive value (MPV) for each tomography model with the LDM removed ( Figure S1, Table S2). The depth-dependent MPV profiles range from near-vertical to strongly convex with depth. The P-wave models generally present a lower MPV (i.e. less positive contour value) (average 0.18%) than for the S-wave models (0.41%). The S-wave models also exhibit a larger relative increase in the MPV in the lowermost mantle, below around 2200 km. Figure 2b shows the corresponding depth-dependent variability in the surface area associated with the wavespeeds that are equal to or exceed the MPV for each tomography model (Table S3). On average for a given tomography model at a given depth, the surface area contoured is 21%, with the lowest coverage at 1000 km depth (18%, σ 1.42) and the maximum at 2700 km (25%, σ 3.61). There is not a significant distinction between the proportion of the average area covered by the P-waves combined (21%) versus the S-waves combined (22%), but there is variability between the individual models. Figure 2c shows these same results according to their depth-scaled surface area. Overall, with increasing depth the amount of surface area identified by the MPV is reduced. When viewed in isolation, this panel might suggest that the area of slabs decreases as you descend in the lower mantle, however, by considering the vote maps, an improved insight into the "agreement" of the most robust features can be discussed. Figure 3 shows the vote maps for the reference case of the 14 combined models, a function of combining the P-wave models (Fig. 4) and S-wave models ( Fig. 5) (LDM retained Figure S2; Standard deviation [STD] contour value shown in Figure S5; Root Mean Square [RMS] contour value shown in Figure S6, polar projection Figure S8). The vote maps show that with increasing depth, higher vote regions transition from elongate to progressively longer wavelength aggregate and sub-rounded structures, ultimately portraying the well-known degree-two structure of the lowermost mantle. At 800 km, maximum-vote regions (identified by a 14-vote count only) are imaged under the eastern US, central and northern South America, Mediterranean, India, easternmost Eurasia including near Kamchatka, Southeast Asia and the western Pacific (from east of the Philippines to north of New Zealand). At 2800 km, maximum-vote regions are predominantly restricted to regions under the Americas and eastern Eurasia. 
Vote maps -a visual comparison. Combined vote maps. Difference between P-wave and S-wave anomalies. Figure 6 shows maps generated by subtracting the S-wave votes (Fig. 5) from the P-wave votes (Fig. 4), and spatially illustrates the regions where the two groups differ. The maps generally show that as depth increases, the differences between P-and S-wave votes become more pronounced and the pattern more defined (differences are clustered). The difference maps may highlight the bias that can be made in slab identification by using only one type of dataset, particularly on a regional or depth-restricted scale. At shallow lower mantle depths, short-wavelength (~100's km) scattering of difference structures both under the continents and the oceans is observed. By around 1600 km, and below, the pattern becomes more defined. There is significant regional variability throughout the depths, though some are worth noting: S-wave models display high votes under the northern Pacific at 2000-2200 km, Australia from 1600-2800 km, eastern US at 1800 km, central east Atlantic (2000-2800 km; variation also in P-waves), Arctic at 2600 km, and east of Sumatra between 1600-2200 km. Conversely, there is a tendency towards more P-wave votes under the southeast Pacific at 1400-2200 km and northwest Africa at 1600 km. The overall agreement (white/light colours) under central to southern Africa is not unexpected considering the lack of post Paleozoic subduction in the region (plate reference frame of ref. 37). Notably at 2800 km, the S-wave models image a belt of high votes that are not captured in the P-wave models to the same extent, running under the Americas, Antarctica, Australia and southern Eurasia. This pattern matches documented regions of long-lived subduction and may also be related to the LLSVPs (Fig. 3). The P-wave models show a strong positive anomaly band at 30°N running across the Pacific at 2800 km depth that is not as strongly captured by the S-waves. Vote maps -maximum agreement and surface area. The vote map characteristics can be quantified based on the calculation of the % surface area of votes as compared to the total surface area ( Fig. 7; values listed in Table S3; RMS and STD shown in Figure S7). Figure 7a shows a consideration of all non-zero votes, however, this should be treated with caution because areas of low vote count may actually belong to the "null space", and can be considered analogous to noise. It follows that the prediction of a large surface area by considering all (non-zero) votes does not necessarily translate to greater agreement when considering higher vote counts, and that the collection of all non-zero votes over predicts the coverage of slabs. The stark differences in profiles from cases with all the non-zero votes (Fig. 7a), the upper half of votes ( Fig. 7b), uppermost votes ( Fig. 7c), and the maximum agreement case (Fig. 7d) demonstrate the influence of the low vote areas. When considering only the maximum votes in the combined models (i.e. 14/14 votes, Fig. 7d), surface coverage is greatest at 1350 km (2.2% surface area) and least at 2000 km (0.2%). The P-and S-wave classes follow a similar depth-dependent trend to each other. Notably, for the shallower depths, the maximum agreement (i.e. 7/7 votes) in the P-waves is greatest at 1300 km (3.8%) below which there is a sharp decrease to a minimum agreement at 2000 km (0.7%). 
For the S-waves, maximum agreement is slightly deeper, at 1700 km (3.6%), and follows a similar, but less pronounced, decrease than the P-waves to a minimum between 2000-2250 km (~2.3%). There is another increase to a secondary peak at 2700 km in both models (~3.8%), followed by a decrease in the lowermost mantle. On average there is slightly less maximum consensus in the P-waves (mean for all depths 2.3%) than the S-wave models (2.8%), with a notable difference between the two models in the mid-lower mantle depths. By extension this analysis suggests that the S-waves image more slabs (in terms of the absolute surface area of the most robust slabs) than the P-waves in the mid lower mantle. Vote maps -comparison to subduction flux. The surface area results for maximum votes (here in millions of km 2 instead of % surface area) were scaled by the equivalent radius of the depth slice (Fig. 8b, green line). The trends of the individual wave groupings are comparable to those for Fig. 7d and only the combined models are shown for simplicity. An increase in the amount of surface area contoured is observed between 700-1400 km depth (~3 to 8 million km 2 ), a decrease from 1400-2000 km (~1 million km 2 ), followed by an increase to a second maximum at 2700 km (~5 million km 2 ) and a decrease towards the core-mantle boundary (~3 million km 2 ). Figure 8a shows a comparison of the subduction flux as determined from the seafloor production proxy (refs 38, 39) and as calculated directly from the plate reconstruction model of ref. 37. To the first order, the two curves are similar and thus we focus on the recent plate reconstruction model (Fig. 8b). We are limited to discussing surface area rather than volume, however, it is worth noting that the surface area of the 14 votes (green line) and subduction flux curves (black, grey and red lines) are of the same magnitude; between 1000-1400 km the maximum vote surface area is larger by around 3-4 million km 2 (the equivalent size of India). Conversely, between 1800-2600 km the vote values are around 2 million km 2 less than modeled by the subduction flux. In terms of the trends, using the whole mantle average sinking rates (1.1 and 1.3 cm/yr), Panel 8c shows a reasonable first order match between the maximum vote curve and the subduction flux curve. A sinking rate of 1.1 cm/yr (black line) matches the best, including the peak in area/rates around 1400 km, the pronounced decline around 1600-1800 km and the increase below 2400 km. However, when using the age-depth conversion with a faster average sinking rate in the upper mantle (5 cm/yr; dark grey and red line) than for the lower mantle (1.1 and 1.3 cm/yr), the match between the depth/age changes in the maximum vote curve and the subduction flux are almost opposite, notably so between 1800-2600 km depth. Discussion The vote maps presented here are only as robust as the individual tomography models which they are comprised of, and thus the varying degrees of overlap of data input and parameterization renders the votes somewhat biased as they are not truly independent. Bias could also be manifest in a regional/depth sense due to resolution and model regularization 21 , which can be different between tomography models. Furthermore, the absence or presence of a high or low vote count, or "agreement", does not necessarily mean that one suite of models are better or more robust than the other; a high vote could reflect bias in the same data input. 
Nonetheless, a maximum vote count of 14 out of 14 for widely used, global tomography models, including both P- and S-waves, provides some of the strongest evidence for subducted material. We are confident that the maximum vote class, i.e. 14/14, or 7/7 models to a lesser extent, represents the most robust slabs and illustrates key depth-dependent trends - we herein refer to these highest MPV contour votes as "slabs". Due to our removal of the LDM, the location of the MPV for any given model and depth is a function of the full distribution of wavespeeds at that depth, including negative wavespeeds. For example, for a negatively-skewed distribution (such as the lowermost mantle), the LDM will be less positive than the median wavespeed value, such that removal of the LDM will result in a larger number of wavespeed values being classified as "positive", with respect to a normal distribution of like-variance. However, this would also result in a relative increase in the calculated MPV, partly mitigating the inclusion of possibly "null-space" wavespeed values. The variance of the full wavespeed distribution also plays an important role, as the calculated MPV will become more positive with increasing variance. The common reduction in the variance of the wavespeed distribution of most models in the mid-lower mantle thus partly explains the observed decrease in the MPV at these depths, whereas the increased variance and negative skewness of the distributions of many of the models in the lowermost mantle explains the pronounced increase in their MPVs at those depths. Because of these competing effects, it is not straightforward to interpret a changing MPV as a direct measure of the changing volume of slab material in the mantle - in fact their trends are largely decoupled - and it is for this reason that we use a voting map to infer true slab volume changes. Nevertheless, the depth-dependence of the MPV presents a simple and useful measure of the changing character of the wavespeed distributions with depth.
(Fig. 6 caption: negative (red) values indicate regions where the S-wave models predict a vote count that is higher, i.e. more agreement between the S-waves, than that predicted in the P-wave models; vice versa for positive (blue) values.)
On average, while the MPV calculated from the P-wave models is generally lower than that of the S-wave models (Fig. 2a), this does not translate to a significant difference in area contoured by the MPV (% or absolute surface area, Fig. 2b,c) due to the P-wave amplitudes being lower (more restricted). Most of the variability between individual tomography models is a function of data input and parameterization methods. While a larger area of non-zero votes is observed in the P-wave models (relative to the S-waves; Fig. 7a) in the mid-lower mantle, this trend does not persist in higher vote count considerations (see below), and we consider it insignificant. However, this highlights an important point, namely that the specific MPV threshold is arbitrary and only used as a filter with which to concentrate potentially meaningful votes. This means that some wavespeed values that belong to the "null space" will pass through the MPV filter, contributing noise to the analysis. The voting process should highlight these features, which, if not associated with a significant positive wavespeed value, should not consistently appear across the bulk of model votes, and will therefore present as low vote regions. We can therefore consider low vote regions as noise. It follows that a depth with a large non-zero vote area (i.e.
1-14 votes) does not necessarily translate to a depth with great agreement; rather the highest vote counts should be considered (i.e. 14 votes), which is a more appropriate measure of the presence of robust slabs. This effect is further demonstrated in the disparity between the depth profiles of the P- and S-waves when considering all non-zero votes, the upper half of votes, uppermost votes, and only the maximum votes (Fig. 7). To this end, our combination of the MPV contour value and the use of 7 or 14 models is a controlling factor on the total surface area measured for the votes. It suggests that considering all non-zero votes rather than just the maximum votes in isolation is misleading, and that the analysis should either be further restricted to at least (the upper) half of the number of models, or ideally only the maximum votes when looking at depth-dependent trends. This indicates that, generally, the more individual tomography models used in the study, the more robust is the analysis based on the MPV contour. In other words, if using fewer models, the match between them could be based on a more restricted/higher contour threshold, e.g. the RMS or upper third quartile. A comparison with the results from using the less positive STD metric or the more positive RMS metric yields similar results to those described, albeit shifted in magnitude as expected from the threshold value. To the first order, the P-wave and S-wave vote maps image a similar distribution and amount (surface area, within 3%) of positive wavespeed anomalies (Figs 4 and 5), and exhibit comparable depth-dependent trends. The main difference between the classes for the maximum vote case is between 1400-2300 km depth (Fig. 7), where the S-waves predict a larger coverage. A minimum in correlation (in terms of coverage) between P- and S-waves at 2000 km has also been noted in other studies 23 . This difference is possibly due to ray coverage or compositional changes (e.g. ref. 40). It could also be related to S-wave resolution, leading to an apparently larger slab expression, though we note that such an effect might be expected to be seen across all depths and not just the mid-lower mantle. Nonetheless, mid-mantle depths generally exhibit the lowest P- and S-wave amplitudes (Fig. 1). Notably, this highlights the potential under/over prediction of slabs when only using the P/S-wave cases, respectively. Our analysis of the surface area of the maximum vote counts for the combined case (Fig. 7d) reveals that the upper part of the lower mantle between ~1000-1400 km shows the greatest coverage of maximum agreement, or potential slabs. This effect is also seen in the uppermost votes (Fig. 7c), which might be more appropriate for considering slab geometries (as the maximum votes are very restricted in overall coverage, see panel e). Slab thickening by a factor of 2-3 upon entering the viscous lower mantle is generally expected 41 . A change in spectral character of seismic wavespeeds from the upper to lower depths of the lower mantle has also been noted, whereby fast anomalies dominate above about ~1500 km and slow anomalies dominate below 42 . This led ref. 43 to identify a "mid lower mantle transition zone" from around 1200-1600 km. The increase in maximum slab agreement/coverage is followed by a decrease in the lower half of the lower mantle towards 2000 km. This effect is also seen in the half and uppermost vote panels (Fig. 7b,c).
A depth with a smaller area of maximum agreement, such as between 1800-2600 km depth could indicate a range in which there truly are fewer slabs but could alternatively reflect a decreased ability of tomography to accurately resolve true slab features. Thermal diffusion and dissipation mean that slabs in the lower mantle appear more smeared than those in the upper mantle. This is also compounded by limited data coverage, leading to relatively coarser image resolution and discrepancies across tomographic models in the lower mantle, even for large-scale features 24 . Unfortunately, with the slab vote map methodology it is not possible to rigorously distinguish between these scenarios (i.e. true slab volume or seismic resolution), but by considering changes in the apparent area of slabs with the use of less stringent vote criteria it is possible to infer what may be real depth-dependent slab volume changes. Accordingly, the similar depth-dependent trends observed between the upper votes and maximum votes suggest that there may be a true reduction in slab volumes in the mid-lower mantle relative to the upper-lower mantle above. In the case of the lowermost mantle, the wavespeed distributions are generally negatively skewed, due to the strongly negative seismic wavespeeds associated with the LLSVPs, and the variance is greater, together leading to a general increase in the calculated MPV. In this respect, the dominance of the LLSVPs appears to promote the strong imaging of "slabs", as otherwise ambient mantle, now spatially confined to the area outside the LLSVPs, is more likely to be classified as "positive". This is reflected in the increased agreement at around 2600 km in both classes of models (Fig. 7). Furthermore, when the maximum agreement is changed to the upper half of models (i.e. 8-14 or 5-7 inclusive count) the S-waves show a particularly pronounced dominance over the P-waves in the lower half of the mantle because they are strongly affected by the presence of the LLSVPs. Due to the restriction of ambient mantle to the same common areas outside the LLSVPs, votes coming from the "null space" may still reach a high vote count, and could be difficult to distinguish from the "true" slab signal. Thus, increased caution is warranted for any attempt to image and interpret slabs below 2600 km based on the vote maps. While the accuracy of the tomographic imaging (spatial resolution and model amplitude errors) is inherently critical in this analysis, a possibility is that these depth-dependent slab volume changes are a reflection of true changes in the time-dependent mass flux to the mantle, especially for depths shallower than those occupied by the LLSVPs (i.e. shallower than the few hundred kms above the core-mantle boundary). The first order match in magnitude of the surface area of the maximum 14-vote slabs and subduction flux estimates suggests that the use of the maximum vote criterion may allow an appropriate measure of depth-dependent trends in subducted slab material. We note that both estimates, however, lack the third dimension of depth (to provide volume). Considering a more generous threshold e.g. 11/14, which also follows a similar depth-dependent trend, would be more appropriate in considering actual slab contours. The magnitudes of the depth-dependent changes in surface area are higher for the individual P-or S-wave cases (not shown on the scale of Fig. 8b), illustrating the effect of using 7 over 14 models. 
With the application of a globally averaged mantle sinking rate to the independent subduction flux curve, a direct comparison with the depth-dependent vote map results can be made. Use of a sinking rate of ~1.1 cm/yr shows a good first-order match between these trends. If a faster upper mantle sinking velocity of 5 cm/yr is considered, and with no slab stagnation in the transition zone, a near anti-correlation is observed, requiring an alternative explanation for the slab area depth-dependence. The independent observation of slab stagnation or thickening at ~1000-1500 km has been attributed to a smooth density and/or viscosity change (e.g. refs 43 and 44). This effect may be partly driven by the Fe++ spin transition 45,46 , an intrinsically dense lower mantle component (subducted MORB) 47 , or some (other) thermo-chemical transition 48 . Such a transition may cause flattening, fragmentation and thermal dissipation of the slabs at these depths, therefore leading to an apparent increase in slab area (volume) (e.g. refs 42, 43, 49 and 50). Observed mid-mantle seismic reflections 51,52 may also be related to subduction-related features but require further analysis. Present-day subduction zones exhibit a dichotomy between long linear subduction zones, such as those around the circum-Pacific (Fig. 9a), and smaller, more isolated subduction zones, such as those in Southeast Asia and the Mediterranean. This reflects an interplay between plate velocities, oceanic basin size, lithospheric structures and age, trench motion and curvature, and intra-oceanic versus continent-proximal settings, among other factors (e.g. ref. 53). The combination of both broad and short subduction zones is also presented back in time (Fig. 9a,b); however, we note that decreasing constraints will simplify reconstructed subduction zone lengths and geometries. The sizes and shapes of the maximum vote slabs are highly variable (Fig. 3); some of the slabs show a lateral linearity, for example under North America, which suggests long-lived and contiguous subduction zones. Others are smaller, sub-rounded or patchy, which might be attributed to shorter lived, isolated, or strongly migrating subduction zones, including in intra-oceanic or back-arc settings. While key Cretaceous-Jurassic plate tectonic events such as fluctuating ridge activity in Panthalassa and the breakup of Pangea 37 may explain the slab distribution shown here, we do not speculate further (Fig. 9). Conclusions Our vote map technique is neither a measure of the existence of an actual slab nor a critique of individual tomography models. The quality of vote maps is highly dependent on the independence of the tomographic models, as implicitly defined for each model by the data inputs, the model parameterization used to construct the tomography model, the attained resolution and model amplitude error, and assumptions regarding data errors and model regularization. Nonetheless, the vote maps provide a useful indication as to the distribution of the most robust slabs that are imaged across a selection of 14 tomography models. We show that these remnants of subduction can be identified through the use of a model- and depth-dependent threshold metric, together with a voting approach. The use of alternative threshold metrics produces similar depth-dependent results. On average 20% of a depth's surface area is contoured by the MPV but only 1% is in maximum agreement when considering 14 models.
A peak in the amount (coverage and agreement) of the most robust slabs is identified between 1000-1400 km, and a minimum between 1400-2500 km depths. These trends may match an independent measure of subduction flux using an average mantle sinking rate of 1.1 cm/yr. The use of a faster rate yields an anti-correlation between the subduction flux and our results, and may indicate that the observed depth-dependent slab volume changes are due to slab stagnation, either in the upper mantle transition zone, or in the mid-lower mantle where there may be a viscosity increase. On average, the S-wave models generally agree more than the P-wave models, whilst noting that the P- and S-wave models are parameterized differently, and that S-waves have a lower sensitivity to the detail of mantle structure. The identification of slabs in the lowermost mantle, below 2500 km, is likely to be greatly complicated by the presence of the large, antipodal LLSVPs and thus interpretations derived from the lowermost mantle should be treated with caution. Our vote maps constitute an open-source workflow and can be added to and refined with advances in seismology and geodynamics. Methodology For this study we analysed a total of 14 global seismic tomography models, split evenly between P-wave and S-wave models (7 of each; Table 1). We do not filter the models to exclude any spherical harmonic degrees. Because we are interested in the relative, depth-dependent velocity structure of the models, which were constructed against different 1-D reference models (Table 1), we remove the layer-dependent mean (LDM) from each model. In other words, removing the mean permits a discussion that is not dependent on the reference model used to construct the models (Figures S1-S3). We also include a comparison in which all tomography models have been recalculated with the same background model, noting that regularization, ray path geometry and event positions depend on the reference model initially chosen. Nonetheless, for our purposes we find that the use of a background model has a negligible effect (Figure S4). Here we focus on the lower mantle and so only consider depths greater than 700 km. Regular grids (0.5° cells) in depth increments of 50 km are derived from the raw tomography models by linear interpolation from the original grids (Table S1). We used Generic Mapping Tools (GMT, version 5.3.1) 54 , and Fig. 10 shows a summary of the methodology and the GMT commands utilized. The choice of a contour value (% δlnVs, % δlnVp; seismic velocity perturbation) to represent a subducted slab can be derived from a fixed value (e.g. +0.3%), or one that depends on depth- and model-dependent characteristics, which we consider here. For each model (described above; LDM removed and depth interpolated), we extract the positive values at each depth and calculate the mean positive value (MPV, to distinguish from the LDM). Areas with wavespeeds in excess of the MPV are then classified as a slab "vote". Two alternative threshold statistics, the standard deviation (STD) and root mean square (RMS), are shown in the supplementary material (Figures S5-S7) and do not significantly change the overall results. To summarize, our reference cases shown herein have the LDM removed, are contoured based on the MPV, and are based on depth-interpolated grids. The resulting grids can then be added across the different models to generate a vote count, with higher counts signifying greater agreement on any given positive wavespeed feature.
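To illustrate the vote-counting step described above, here is a minimal numpy sketch rather than the authors' GMT workflow: it assumes each model has already been interpolated onto a common grid for one depth slice, ignores area weighting of grid cells, and uses hypothetical function and variable names.

```python
import numpy as np

def slab_vote_map(models):
    """Return a per-cell vote count from a list of 2-D wavespeed-perturbation
    grids (one per tomography model, same depth slice). Each model votes where
    its perturbation, after removal of the layer mean, meets or exceeds the
    mean positive value (MPV) for that model and depth."""
    votes = np.zeros_like(models[0], dtype=int)
    for dv in models:
        dv = dv - dv.mean()               # remove the layer-dependent mean (LDM)
        mpv = dv[dv > 0.0].mean()         # depth- and model-dependent threshold
        votes += (dv >= mpv).astype(int)  # one "slab vote" per qualifying cell
    return votes

# toy example with three synthetic "models" on a 5 x 5 grid
rng = np.random.default_rng(0)
print(slab_vote_map([rng.normal(0.0, 0.3, (5, 5)) for _ in range(3)]))
```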
Here we present vote maps for the 7 P-wave models and 7 S-wave models individually, and the 14 models combined. To quantify differences in the vote counts between depths and the two classes of tomography models, the percentage of surface area of the respective votes can be calculated. There are several ways of quantifying both the coverage and the agreement between the tomography model groupings, depending on the vote that is considered (e.g. all non-zero votes, or the maximum vote, or a selection thereof), and the reference area (e.g. the total surface area of the depth slice, or all non-zero votes, etc.). For simplicity, the results are presented as % surface area with respect to the given depth horizon and can thus be scaled to absolute area (depth conversion is shown in Section 2.4). We illustrate the effect of choosing only the maximum votes (i.e. 14/14 or 7/7 votes only), or a relaxed selection with lower vote counts included. A comparison between the scaled surface area of the maximum vote counts and two independent measures of subduction flux derived from two plate reconstructions is also undertaken. The first is a subduction flux "proxy" 69 , whereby seafloor production rates 38 derived from the plate reconstruction in ref. 39 measure the amount of crustal accretion at mid-ocean ridges from 200-0 Ma. The second directly measures subducted seafloor area based on the recently updated global plate reconstruction of ref. 37. Because the rates from the two subduction flux curves are presented in age (Ma) versus rate (km 2 /yr), they can be converted to depth based on a mantle sinking rate. To satisfy a range of proposed slab sinking speeds, we present a first set based on average whole mantle sinking rates of 1.1 and 1.3 cm/yr, in line with global studies (e.g. refs 5, 7 and 8). However, plate convergence rates, typically higher than 1-2 cm/yr, have also been used to approximate upper mantle sinking rates 70,71 , so to satisfy an upper range of speeds we also apply the sinking rates assuming that a slab has already reached 700 km just 14 Myrs after subduction (5 cm/yr). Data availability. Datasets for the vote maps, or any additional information, can be requested by emailing the corresponding author Grace Shephard at g.e.shephard@geo.uio.no. Data are also provided at http://folk.uio.no/gracees/Shephard_SlabVoteMaps/. Additional vote map figures can be generated at http://submachine.earth.ox.ac.uk. All figures in the manuscript were generated using Generic Mapping Tools (GMT v5.3.1; http://gmt.soest.hawaii.edu/).
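The age-to-depth conversion used for the subduction flux comparison can be sketched as follows; this is a hypothetical helper (not the authors' code) implementing the two scenarios described: a constant whole-mantle sinking rate, and a faster upper-mantle rate under which a slab reaches 700 km about 14 Myr after subduction.

```python
def age_to_depth(age_ma, lower_rate_cm_yr=1.1, upper_rate_cm_yr=None):
    """Convert a subduction age (Ma) to an approximate slab depth (km).
    1 cm/yr corresponds to 10 km/Myr."""
    if upper_rate_cm_yr is None:
        return lower_rate_cm_yr * 10.0 * age_ma
    t_upper = 700.0 / (upper_rate_cm_yr * 10.0)   # ~14 Myr at 5 cm/yr
    if age_ma <= t_upper:
        return upper_rate_cm_yr * 10.0 * age_ma
    return 700.0 + lower_rate_cm_yr * 10.0 * (age_ma - t_upper)

print(age_to_depth(100))             # 1100 km with a 1.1 cm/yr whole-mantle rate
print(age_to_depth(100, 1.1, 5.0))   # about 1646 km with a 5 cm/yr upper-mantle rate
```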
v3-fos-license
2014-10-01T00:00:00.000Z
2007-05-15T00:00:00.000
19498344
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/1472-6920-7-12", "pdf_hash": "7a478703730221219769f0f347b69eb8b67a3cf3", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43622", "s2fieldsofstudy": [ "Medicine" ], "sha1": "7a478703730221219769f0f347b69eb8b67a3cf3", "year": 2007 }
pes2o/s2orc
Teaching medical students about children with disabilities in a rural setting in a school Background To describe and implement a community paediatric placement in a school setting that teaches undergraduate medical students about intellectual disability, provides benefit to the community, and is acceptable to both students and teachers. Methods Twenty-six 4th year undergraduate medical students of the University of Newcastle completed their Paediatric studies based in Tamworth in 2004 and 2005, including an 8 week placement at Bullimbal School for Specific Purposes. The placement involved the students actively assisting with the delivery of a variety of activities aimed at improving the motor skills of a group of disabled children. De-identified data were obtained from completed evaluation surveys from 75% (21 of 26) of the medical students and from 100% (5 of 5) of the teachers. Results All students and teachers found the placement acceptable, enjoyed it, and felt that it gave the medical students a greater understanding of children with disabilities. 80% (4 of 5) of the teachers involved in the program did not feel that its implementation added to their workload and all were enthusiastic to continue with the program. Conclusion Medical students can be effectively taught about children with a disability, and can have a valuable clinical experience, in a school setting. This educational innovation has provided a mutual benefit for both the medical students and the school children who participated in the program, without impacting on the workloads of teachers. Background In Australian Medical Schools there is an increasing recognition that tertiary hospitals are not able to provide for all facets of a comprehensive undergraduate medical education. Modern medical curricula recognize that student directed learning and community orientation are important components of an undergraduate medical education. Recently in Australia there has been a rapid growth in the delivery of medical education in rural settings, which has led to several innovations in community based medical education [1]. This paper will describe an innovation in the delivery of the community based components of the Paediatrics course of the Bachelor of Medicine program of the University of Newcastle delivered in a rural setting. In Australia over the last decade, there have been several specific programs, including the Rural Clinical Schools (RCS), University Departments of Rural Health (UDRH) and the Rural Undergraduate Support Committee (RUSC), that have increased the amount of undergraduate medical education occurring in rural settings across Australia [2]. By 2008 the integration of these programs will see approximately 20% of all clinical education of undergraduate medical students occurring in rural communities in Australia. Delivering a community based undergraduate medical curriculum requires engagement with local services and people. Ideally the process of attaching students to the community should provide a service to the community, and in return the students receive an educational experience. The challenge facing the providers of the current Australian Government strategies to promote rural undergraduate medical education is to make the provision of medical education a benefit and not a burden on the local community.
Bullimbal is a government school for children with a moderate to severe intellectual disability, located in Tamworth, a large regional town in New South Wales with a population of 55,000. The school also caters for children with physical disabilities and for children with severe autism. There are 24 children aged 4 to 19 years educated at Bullimbal. Prior to the introduction of this educational pilot, Bullimbal had accepted medical students coming to visit the school. Each year in Tamworth, approximately 8 students would visit for about 1 hour during their paediatric placement. Although the staff of the school were happy to conduct these "tours", the task was seen as burdensome and the students did not value this opportunity during their paediatric placement. The community aspects of the paediatric placement at that time were largely a series of "visits" to observe various community services but did not involve the students doing anything active. In 2004 the Newcastle curriculum was changed and Paediatrics and Child Health was taught as a single continuous 8 week block of the fourth year of the Undergraduate Medical Degree. Students were placed in Tamworth in groups of four. The community placements in Tamworth were reviewed and it was decided to try to make the community placement more active. There is a well documented reduction in the burden of acute infectious diseases in Australian children and recognition of an increasing burden of developmental, behavioural and mental health problems occurring in children [3]. Recent surveys indicate inadequate training and exposure to these problems even amongst consultant paediatricians [4]. To prepare undergraduate students and give them the core skills and knowledge in these contemporary child health issues requires exposure to child health in settings outside of acute care and tertiary children's hospitals. This educational innovation is one example of how students can be exposed to child health issues in the community. One of the aims of this innovation was to give medical students an understanding of the crucial role that the education system has in caring for children with disabilities. Another aim was to help medical students gain a greater understanding through a longitudinal exposure to children. A third aim was to design an attachment that provided a benefit to the children the medical students worked with and was seen as valuable by the teaching staff of the school. This paper describes an innovation in the medical education program that has enabled a special school to deliver a motor skills program for their children and at the same time allowed several groups of medical students to gain a valuable insight into the care and special needs of children with disabilities. Methods In 2004 and 2005 up to 16 students a year were offered the opportunity to complete the Paediatrics and Child Health course of Newcastle University's Bachelor of Medicine program in Tamworth. Students completed the 8 week placement in groups of 3 or 4. The students were all volunteers. The students based in Tamworth completed a parallel curriculum but were assessed using the same instruments as students based in Newcastle. The attachment was supervised by specialist consultant general paediatricians who worked as staff specialists at Tamworth Rural Referral Hospital.
In the attachment the students gained clinical experience in a 16 bed children's ward and a 7 bed neonatal unit, with the opportunity to participate in outpatient paediatric sessions. The clinical experience was supported by weekly problem based learning and bedside teaching sessions. A component of the course delivered in Newcastle included a series of 4 visits to community services that each lasted approximately 1.5 hours. Approval to substitute single visits to several community sites with a number of visits to the one site was gained from the Bachelor of Medicine program committee and the Discipline of Paediatrics and Child Health, on the condition that the student experience was evaluated. The community component of the Paediatric course is compulsory. It is assessed by students attending all rostered community visits. The students based in Tamworth were not given an alternative community visit. They were assessed by attendance. Students and staff who participated in the educational pilot were asked to complete an anonymous feedback questionnaire. Completion of the feedback questionnaire was voluntary. Those who completed the questionnaire gave their consent to having the results used for publication. The principal of Bullimbal special school designed a placement where up to four students came to the school at the same time for a period of 90 minutes each week. The objective of the community placement was to give the students an insight into the care required for children with special needs, in the hope that they would have a greater understanding of the challenges which confront the parents of these special children. This extended attachment was introduced as a pilot over 2004 and 2005 to seven groups of fourth year students. The planned schedule of activities which the students undertook included the following:
• In the first week students would be orientated to the school and meet and spend time with the child they were to predominantly work with over the ensuing 8 weeks of the placement. Students would also meet staff at the school.
• For the next 7 weeks, students assisted a physiotherapist in helping to transport 5 of the more able bodied children at Bullimbal to the Police Citizen's and Youth Club Gymnasium, approximately 1.5 km away, in the school bus.
• The students were then available to help contain the children in the Gymnasium and allow the children to receive an active motor skills program that would have been impossible to deliver as a lone therapist.
• This activity also freed other teaching staff from duties to allow other work at the school to progress.
• Students completed their questionnaire at the end of each attachment and staff completed their questionnaire at the end of 2005.
During the first rotation of students, one of the children at the school had a grand mal epileptic seizure. The medical students administered appropriate first aid to the child and were found to have reacted very appropriately to a stressful emergency situation. There were no other adverse events during the exercise sessions. Over the 2 year period, the placement permitted a total of 42 extra activity sessions to be delivered to this group of children. There were 7 weeks in the two year period of the pilot where the students were completing their attachment during the school holidays, and another occasion when the designated day fell on a "pupil-free" day. On average students attended the school 7 times during each attachment for approximately 10.5 hours in total.
Results A standard feedback questionnaire was developed that included four questions using a 5-point Likert scale. The range was strongly disagree, disagree, neutral, agree and strongly agree. The students were asked if the placement at Bullimbal was a positive experience, if the placement gave the students a greater understanding of children with special needs, if the placement was relevant to Paediatrics and Child Health, and if it should be assessed. The staff were asked to give an overall rating for the placement, if the placement gave the students a greater understanding of children with special needs and was a good opportunity to teach the next generation of doctors about children with special needs, if the placement was relevant to Paediatrics and Child Health, and if having the students adversely affected their workload. The students and staff were then asked to give their written comments about the placement. Their comments are shown in Table 1.
Student comments: "I think this placement is really great. Much better being in the one place each week and feeling as though we are involved." "Only problem was not enough time and trouble getting activities organized." "The kids, teachers and staff were appreciative of us helping." "Helping out with gym classes, bowling etc gives you a better understanding of the needs of these children and the resources required to care for them." "Found this placement very valuable. It was great to get to know the kids over the weeks and I found it very rewarding to know their names and have them recognise us every week." "The Kids were great." "Time spent in classrooms prior to sport time didn't seem very well spent (for us to be there, that is)."
Staff comments: "The placement offered the medical students some insight into the life of a disabled person. I feel this would assist them in understanding the difficulties that the carers/parents possibly face." "So far the placement has been a great experience for students and staff. It has allowed us to conduct activities that, due to supervision requirements, would not have been possible without the support of visiting medical students." "That the medical students were able to get to know the children well enough to be able to work confidently with them." "Students learnt and understood the various communication strategies now used with the students who are non-verbal." "The placement will allow the students to remember their interactions with our students so when they are practicing medicine and a family with a child with a disability arrive in their surgery or Emergency Department that they will have a stronger empathy due to their experience in Bullimbal." "Having the extended placement medical students allowed staff at Bullimbal to take students to the Gym and Tenpin Bowling on a ratio of 1:1 which is wonderful therapy for our children." "The medical students completing their placement with us has been a pleasure and most helpful. If this is how our new Doctors are going to be then we are very happy."
21 of 26 students who participated in the attachment completed feedback questionnaires. 10 students agreed and 11 strongly agreed that the placement was a positive experience. All 21 strongly agreed that the placement gave them a better understanding of children with special needs. 2 students were neutral and 19 either agreed or strongly agreed that the placement was important to Paediatrics and Child Health.
4 were neutral and 17 agreed or strongly agreed that completing the placement should contribute towards assessment. There were five staff members who completed questionnaires at the end of 2005. Four staff strongly agreed and 1 staff member agreed that the placement was a positive experience. Four staff strongly disagreed that the placement had increased their workload and one was neutral. All strongly agreed that it was an opportunity to teach the next generation of doctors an understanding of people with special needs and that the attachment was very relevant to the study of paediatrics and child health. The evaluation of the placement has been considered positive and the placement has continued to remain an important part of the Paediatrics and Child Health course delivered in Tamworth. The students, as part of their evaluation, have the opportunity to provide both written and verbal feedback about their attachments. Discussion This paper demonstrates that this type of active community placement is seen as a positive experience by both the students and the teachers involved in its implementation. The opinions of staff were also encouraging because, through this attachment, the students have been seen as an extra resource for the school, and implementing this type of educational program has not been seen as another burden placed on education staff with very demanding professional roles. A limitation of this study is that it has only looked at a small sample of students participating at just one school and is a description of an educational innovation from the perspectives of the medical students and the teachers of the children at the school. The evaluation of the innovation did not include trying to measure any improvement in the children at the school or whether the children enjoyed the activity, and relied on the perceptions of the staff and medical students. Such a study was beyond the scope of a quality assurance evaluation of this educational activity. There is research that has documented how clients involved in clinical education can benefit from the experience [5]. To further expand on an approach where medical student education becomes a measurable benefit to the community would be an exciting future research project. There is a need for universities to be both socially accountable and community orientated in the way they provide education. This innovation shows that a mutual benefit for both the medical students and the staff of the school can accrue from an educational innovation that has not been constrained by the usual boundaries that exist around the hospital based delivery of clinical and academic training for health professional students. Previously the teaching of Paediatrics at Australian universities has predominantly taken place in tertiary children's hospitals where paediatricians usually practice as sub-specialists. The advent of the RCS program has demonstrated that students can effectively learn their paediatric clinical skills outside of the tertiary setting and that their paediatric clinical skills may be effectively learnt in a primary care medical setting [6]. This pilot community placement demonstrates how other professionals from outside of health can contribute effectively to the Paediatric education of medical students and may give students a broader understanding of the chronic care issues facing the families of children with disabilities.
Previous studies on this issue have tried to achieve this through looking at students' understanding of language [7] or trying to understand the child's and parent's perspective of illness [8]. This educational innovation demonstrates that these skills can be learnt outside either a hospital or primary health care setting and may be experienced in an educational community setting. This is consistent with attempts to broaden undergraduate medical student experiences [9]. In Australia there is a current medical workforce shortage [10] and the government is hoping to address this problem by training more doctors. In the next five years there is going to be a doubling in the number of medical students in Australia and there will be a need to think of new ways and places to train these extra undergraduate medical students. This paper has described an innovative partnership with a school as one of the potential new places where students can obtain valuable clinical experience. The UDRH was established in collaboration with the University of New England and Hunter New England Area Health Service, funded by the Australian Government Department of Health. The UDRH took responsibility for the funding of the design, data collection, writing and decision to submit the manuscript for publication.
v3-fos-license
2018-08-06T13:06:28.652Z
2018-08-06T00:00:00.000
51922004
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2018.00845/pdf", "pdf_hash": "698a45917652f9119cd0f9a7537928f6b6f85cfe", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43625", "s2fieldsofstudy": [ "Biology", "Psychology" ], "sha1": "698a45917652f9119cd0f9a7537928f6b6f85cfe", "year": 2018 }
pes2o/s2orc
Antiepileptic Drugs Elevate Astrocytic Kir4.1 Expression in the Rat Limbic Region Inwardly rectifying potassium (Kir) channel subunits Kir4.1 are specifically expressed in astrocytes and regulate neuronal excitability by mediating spatial potassium buffering. In addition, it is now known that astrocytic Kir4.1 channels are closely involved in the pathogenesis of epilepsy. Here, to explore the role of Kir4.1 channels in the treatment of epilepsy, we evaluated the effects of the antiepileptic drugs, valproate, phenytoin, phenobarbital and ethosuximide, on Kir4.1 expression in astrocytes using immunohistochemical techniques. Repeated treatment of rats with valproate (30–300 mg/kg, i.p., for 1–10 days) significantly elevated the Kir4.1 expression levels in the cerebral cortex, amygdala and hippocampus. Up-regulation of Kir4.1 expression by valproate occurred in a dose- and treatment period-related manner, and did not accompany an increase in the number of astrocytes probed by glial fibrillary acidic protein (GFAP). In addition, repeated treatment with phenytoin (30 mg/kg, i.p., for 10 days) or phenobarbital (30 mg/kg, i.p., for 10 days) also elevated Kir4.1 expression region-specifically in the amygdala. However, ethosuximide (100 mg/kg, i.p., for 10 days), which can alleviate absence but not convulsive seizures, showed no effects on the astrocytic Kir4.1 expression. The present results demonstrated for the first time that the antiepileptic drugs effective for convulsive seizures (valproate, phenytoin, and phenobarbital) commonly elevate the astrocytic Kir4.1 channel expression in the limbic regions, which may be related to their antiepileptic actions. INTRODUCTION Epilepsy is a chronic neurologic disease characterized by recurrent convulsive and/or nonconvulsive seizures, affecting approximately 70 million people worldwide (nearly 1% of the population) (Banerjee et al., 2009;Ngugi et al., 2010;Zack and Kobau, 2017). Various antiepileptic drugs, which predominantly act on the neuronal ion channels (e.g., blockers of voltage-gated Na + and Ca 2+ channels) and the inhibitory GABAergic system (e.g., stimulants of GABA A receptor/Cl − channel complex and inhibitors of GABA transaminase), are currently used in the treatment of epilepsy (Meldrum and Rogawski, 2007). Therapy with these standard antiepileptic drugs provides adequate control in about 70% of epilepsy patients; however, the remaining 30% of patients still suffer from refractory (treatment-resistant) symptoms and are sometimes subjected to surgical treatments (e.g., ablation of seizure foci, deep brain stimulation and vagus nerve stimulation) (Mattson, 1998). It is now known that Kir4.1 plays an important role in inducing and developing epilepsy (epileptogenesis). Kir4.1 knockout mice showed severe motor impairment (e.g., ataxia and tremor), epileptic symptoms (e.g., jerky movements and convulsive seizures), and early mortality (Kofuji et al., 2000;Neusch et al., 2001;Djukic et al., 2007). In addition, astrocytic Kir4.1 expression was reported to be reduced (down-regulated) in the brain regions related to seizure foci in patients with epilepsy and animal models of epilepsy (Ferraro et al., 2004;Inyushin et al., 2010;Das et al., 2012;Heuser et al., 2012;Steinhäuser et al., 2012;Harada et al., 2013). 
Furthermore, it has been shown that loss-of-function mutations (i.e., missense and nonsense mutations) in the human KCNJ10 gene encoding Kir4.1 caused the epileptic disorders known as "EAST/SeSAME" syndrome (Bockenhauer et al., 2009;Scholl et al., 2009;Reichold et al., 2010). Patients with EAST/SeSAME syndrome manifested generalized tonic-clonic seizures (GTCSs) within a few months after birth, in addition to sensorineural deafness, ataxia and electrolyte imbalance. Therefore, it is likely that Kir4.1 channels are closely involved in the pathogenesis of epilepsy. However, the roles of Kir4.1 channels in the treatment of epilepsy or the influences of antiepileptic drugs on Kir4.1 expression are still unknown. Besides the acute neural inhibition, repeated treatments with antiepileptics are known to exert, to some extent, prophylactic effects in chronic epilepsy, although the underlying mechanisms remain unclear (Iudice and Murri, 2000;Michelucci, 2006;Torbic et al., 2013). This raises the hypothesis that antiepileptics may enhance Kir4.1 expression to prevent epileptogenesis. In the present study, therefore, we evaluated the effects of the antiepileptic drugs, valproate, phenytoin, phenobarbital and ethosuximide, on astrocytic Kir4.1 expression to explore the potential role of Kir4.1 expression in the treatment of epilepsy. Animals Male 6-week-old SD rats (Japan SLC, Shizuoka, Japan) were used. The animals were kept in air-conditioned rooms (24 ± 2 °C and 50 ± 10% relative humidity) under a 12-h light/dark cycle (lights on: 8:00 a.m.) and allowed ad libitum access to food and water. The animal care methods complied with the Guide for the Care and Use of Laboratory Animals of the Ministry of Education, Science, Sports and Culture of Japan. The experimental protocols of this study were approved by the Animal Research Committee of Osaka University of Pharmaceutical Sciences. FIGURE 1 | Schematic illustration of a brain section selected for quantitative analysis of immunoreactivity (IR) of Kir4.1 or GFAP. Filled squares in each brain region indicate the areas analyzed for counting of Kir4.1-IR- or GFAP-IR-positive cells, which were set according to the rat brain atlas (Paxinos and Watson, 2007). Motor, motor cortex; S1BF, primary somatosensory cortex barrel field; PRh-Ect, perirhinal-ectorhinal cortex; MePV, medial amygdaloid nucleus posteroventral part; MePD, medial amygdaloid nucleus posterodorsal part; BLA, basolateral amygdaloid nucleus anterior part; BMP, basomedial amygdaloid nucleus posterior part; CA1 medial or lateral, CA3 and DG, hippocampal CA1 medial, CA1 lateral, CA3 and dentate gyrus; L, lateral coordinates (mm); H, horizontal coordinates from interaural line (mm). Drug Treatments and Brain Sampling Animals (6 rats/group) were intraperitoneally injected with a daily dose of an antiepileptic drug as follows: valproate (30, 100, and 300 mg/kg), phenytoin (30 mg/kg), phenobarbital (30 mg/kg), or ethosuximide (100 mg/kg) for 10 days. To evaluate the time-course, animals were treated with valproate (300 mg/kg) for 1 or 5 day(s). The test doses of each drug were set to anticonvulsive doses in rodents, according to previous papers (Walton and Treiman, 1989;Lothman et al., 1991;Löscher, 1999;Gören and Onat, 2007). Twenty-four hours after the last drug treatment, the animals were deeply anesthetized with pentobarbital (80 mg/kg, i.p.), transcardially perfused with ice-cold phosphate-buffered saline (PBS) and then with 4% paraformaldehyde solution.
The brains were then removed from the skull and placed in fresh fixative for at least 24 h. Drugs Sodium valproate, phenytoin, phenobarbital, and ethosuximide were purchased from Sigma-Aldrich. Other common laboratory reagents were also obtained from commercial sources. Statistical Analysis All data are expressed as the mean ± S.E.M. Comparisons between two groups were performed by Student's t-test. Statistical significance of differences among multiple groups was determined by one-way ANOVA followed by Tukey's post hoc test. A P-value of less than 0.05 was considered statistically significant. Effects of Valproate on Astrocytic Kir4.1 Expression We first confirmed the expression pattern of Kir4.1 in rat brains using the immunofluorescence double staining method. As reported previously, confocal laser microscopic analysis revealed that Kir4.1-IR was specifically expressed in astrocytes (somata and processes of stellate-shaped cells) probed by GFAP (Figure 2A). DISCUSSION Evidence is accumulating that the dysfunction (reduced function or expression) of astrocytic Kir4.1 channels causes epileptic disorders, including not only EAST/SeSAME syndrome with KCNJ10 mutations (Bockenhauer et al., 2009;Scholl et al., 2009;Reichold et al., 2010), but also idiopathic epilepsy (Das et al., 2012;Heuser et al., 2012;Steinhäuser et al., 2012). These findings suggest that enhancement of Kir4.1 channel activities can prevent the development of epilepsy (epileptogenesis) by facilitating astrocytic spatial potassium buffering. The present study demonstrated for the first time that several antiepileptic drugs, which are commonly effective for GTCSs in patients, enhance the astrocytic Kir4.1 expression in the limbic regions. Valproate significantly elevated the astrocytic Kir4.1 expression in the amygdala, hippocampus and cerebral cortex, in a dose- and time-dependent manner. Phenytoin and phenobarbital also increased the Kir4.1 expression in the amygdala region. In addition, up-regulation of Kir4.1 expression by these agents did not accompany an increase in the number of astrocytes (astrogliosis). Limbic structures such as the amygdala have been generally recognized as sites closely related to epileptogenesis in animal models of epilepsy (McNamara, 1984;Morimoto et al., 2004). Moreover, human limbic regions are also involved in seizure generation not only in temporal lobe epilepsy, the most common type of adult localization-related epilepsy, but also in epilepsy induced by autoimmune encephalitis (Tatum, 2012;Melzer et al., 2015). Thus, our results suggest that the elevation of astrocytic Kir4.1 expression in limbic regions by the antiepileptic drugs contributes to their antiepileptic actions. Indeed, in our preliminary studies using audiogenic seizure-susceptible Lgi1 L385R mutant rats (Baulac et al., 2012;Fumoto et al., 2014), repeated treatment with valproate alleviated epileptogenesis (development of seizure susceptibility) of the Lgi1 L385R mutant rats, which exhibited down-regulation of astrocytic Kir4.1 expression (Kinboshi et al., 2017b). Valproate inhibits GABA transaminase and increases GABA levels, thereby enhancing inhibitory GABAergic activities (Vajda and Eadie, 2014). Phenobarbital also activates the GABAergic system by prolonging the opening time of chloride ion channels within GABA A receptors. In addition, both valproate and phenytoin possess an inhibitory action against voltage-gated Na + channels.
All these actions of antiepileptic drugs reduce neural excitability and contribute to an acute inhibitory action on seizure induction. Besides the acute actions, repeated treatments with these antiepileptics are known to exert, to some extent, prophylactic effects in chronic epilepsy, although such usage is sometimes limited by their side effects and/or drug interactions (e.g., enzyme-inducing properties) (Iudice and Murri, 2000;Michelucci, 2006;Torbic et al., 2013). Indeed, valproate reportedly had the potential to prevent epileptogenesis, although the underlying mechanisms remain unclear (Silver et al., 1991;Bolanos et al., 1998;Hashimoto et al., 2003). The fact that the up-regulation of Kir4.1 channels by antiepileptics was mostly manifested after repeated treatments suggests that the elevated expression of Kir4.1 channels may contribute to the seizure-preventive (prophylactic) actions of these agents. Ethosuximide specifically alleviates absence seizures and does not affect (or sometimes worsens) GTCSs. It inhibits the low-threshold T-type Ca 2+ currents in thalamic neurons, although other mechanisms (e.g., inhibition of the non-inactivating Na + currents and the Ca 2+ -activated K + currents) are also proposed (Crunelli and Leresche, 2002). Interestingly, ethosuximide failed to affect Kir4.1 expression in any brain regions examined. Therefore, Kir4.1 channels may not be involved in the preventive effects of ethosuximide on absence seizures. This is consistent with our previous findings that down-regulation of Kir4.1 expression was observed only in the GTCS model (e.g., Noda epileptic rats), but not in the absence seizure model (Groggy rats), implying that pathophysiological alterations of Kir4.1 are not linked to non-convulsive absence seizures (Harada et al., 2013, 2014; Ohno et al., 2015). In conclusion, we evaluated the effects of the antiepileptic drugs, valproate, phenytoin, phenobarbital and ethosuximide, on the expression levels of astrocytic Kir4.1 channels in rats. Valproate, phenytoin and phenobarbital, which commonly alleviate GTCSs, significantly increased Kir4.1 expression in the limbic regions (e.g., amygdala) without affecting the number of astrocytes. Up-regulation of Kir4.1 channels by valproate occurred in a dose- and treatment period-dependent manner. In contrast, treatment of rats with ethosuximide, which selectively ameliorates absence seizures, did not affect Kir4.1 expression. The present results demonstrated for the first time that antiepileptics (e.g., valproate, phenytoin and phenobarbital) up-regulate astrocytic Kir4.1 channels in the amygdala, which may contribute to their clinical efficacy in chronic epilepsy. However, it remains uncertain how these antiepileptics elevated the expression of Kir4.1 channels region-specifically in the limbic regions. Further studies are required to clarify the mechanisms underlying the control of astrocytic Kir4.1 expression by antiepileptic drugs.
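As a concrete illustration of the group comparisons described in the Statistical Analysis subsection (Student's t-test for two groups; one-way ANOVA followed by Tukey's post hoc test for multiple groups), the following is a minimal sketch in Python. The cell-count arrays, group labels, and sample values are invented placeholders, not data from the study.

```python
# Minimal sketch of the two-group and multi-group comparisons described above.
# The "counts" below are hypothetical Kir4.1-IR-positive cell counts (n = 6 rats/group).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
vehicle = rng.normal(100, 10, 6)   # placeholder control group
vpa_100 = rng.normal(115, 10, 6)   # placeholder valproate 100 mg/kg group
vpa_300 = rng.normal(130, 10, 6)   # placeholder valproate 300 mg/kg group

# Two groups: Student's t-test
t, p = stats.ttest_ind(vehicle, vpa_300)
print(f"t-test: t = {t:.2f}, p = {p:.3f}")

# Multiple groups: one-way ANOVA followed by Tukey's post hoc test
f, p_anova = stats.f_oneway(vehicle, vpa_100, vpa_300)
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.3f}")

values = np.concatenate([vehicle, vpa_100, vpa_300])
labels = ["vehicle"] * 6 + ["VPA 100"] * 6 + ["VPA 300"] * 6
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```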
v3-fos-license
2018-12-27T19:43:21.975Z
2013-10-07T00:00:00.000
72165454
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nepjol.info/index.php/JNPS/article/download/8254/7237", "pdf_hash": "110ee4858e17d6266634e1f0dd690a233e21e7d8", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43628", "s2fieldsofstudy": [ "Medicine" ], "sha1": "110ee4858e17d6266634e1f0dd690a233e21e7d8", "year": 2013 }
pes2o/s2orc
Study of Risk Factor for Congenital Heart Diseases in Children at Rural Hospital of Central India Introduction Congenital Heart Disease (CHD) is defined as a gross structural abnormality of the heart or intrathoracic great vessels that is actually or potentially of functional significance. It is the most common congenital problem and accounts for up to 25% of all congenital malformations that present in the neonatal period 1. The etiology of CHD is largely unknown and so prevention is almost impossible. A multifactorial inheritance is gaining ground, which includes genetic and environmental interaction in 90% of cases, with a solely genetic origin in 8% (chromosomal in 5% and single mutant gene in 3%) 2. CHD may present in any age group from the neonatal age to the adolescent age group, and it may present with or without cyanosis, rapid breathing, perspiration, some with congestive cardiac failure or cyanotic spells, while some children may be asymptomatic but with a cardiac murmur detected during examination for any other illness 3. There are certain risk factors responsible for congenital heart disease; therefore, this study was conducted to determine the risk factors for the development of congenital heart diseases in children at a rural hospital. Materials and Methods This study was conducted in the Department of Paediatrics, Mahatma Gandhi Institute of Medical College, Sevagram over a period of three years from March 2004 to April 2007. All children with clinical suspicion of CHD were evaluated with history and clinical examination. They were initially investigated by performing a complete blood cell count, chest x-ray and electrocardiography, and the final diagnosis was confirmed by echocardiography. A detailed history was taken in the congenital heart disease cases (n = 209) regarding consanguinity and family history of congenital malformation. Antenatal history regarding drug intake, tobacco intake, alcohol intake, exposure to smoking, number of previous abortions, and history of diabetes was inquired. The control group (n = 418) was randomly selected from children without congenital heart disease who were admitted during the same period. Children with acquired heart disease were excluded from the study. The statistical software EPI 6 was used to calculate the odds ratio (OR) to evaluate the strength of association of risk factors.
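The case-control odds ratio calculation mentioned above can be sketched as follows; this is a minimal illustration with a Woolf-type 95% confidence interval, and the 2x2 counts used in the example are hypothetical placeholders rather than the study's data.

```python
# Minimal sketch of an odds ratio with 95% CI from a 2x2 case-control table.
import math

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Return the odds ratio and its Woolf 95% confidence interval."""
    or_value = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    se_log_or = math.sqrt(1/exposed_cases + 1/unexposed_cases +
                          1/exposed_controls + 1/unexposed_controls)
    lower = math.exp(math.log(or_value) - 1.96 * se_log_or)
    upper = math.exp(math.log(or_value) + 1.96 * se_log_or)
    return or_value, (lower, upper)

# Hypothetical example: an exposure among 209 cases and 418 controls (placeholder counts)
or_value, (lo, hi) = odds_ratio(exposed_cases=8, unexposed_cases=201,
                                exposed_controls=4, unexposed_controls=414)
print(f"OR = {or_value:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```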
Results A total of 11,748 admissions occurred in the paediatric wards during the three-year period. There were 209 (1.77%) children with CHD among total admissions. 117 (56%) cases were males and 92 (44%) females, giving a male-to-female ratio of 1.25:1. Approximately 70% of children had acyanotic CHD. 104 (49.76%) cases presented in the first year of life, 68 (32.53%) between 1 and 5 years, and 37 (17.70%) cases after 5 years of life. Table 1 shows the distribution of risk factors according to type of congenital heart disease. In 92% of cases, symptoms started in infancy. The commonest symptoms were repeated chest infection, breathlessness, palpitation, failure to thrive, and cyanosis. 13 (6.22%) cases had extracardiac malformations, mainly limb and renal abnormalities. Table 2 shows the distribution of risk factors in the cases and control groups. Exposure to smoking (OR = 10.45, 95% CI 2.13-69.71), tobacco intake by the mother (OR = 8.28, 95% CI 1.62-56.93) and family history of congenital heart disease (OR = 7.21, 95% CI 1.48-35.01) were the significant risk factors present in the cases group as compared to the control group. Discussion Congenital heart diseases are a leading cause of neonatal and infant mortality. During cardiogenesis, various genetic and non-genetic environmental etiological factors can trigger the pathogenetic mechanisms that result in the development of CHD 4. In ~90% of CHD cases, no identifiable cause is detected and the defects can be attributed to multifactorial causes 2. Hereditary factors may play a role, since the incidence of CHD in siblings is significantly higher than in the general population. If one child has the defect there is a 2.5-5% chance that the second baby may have a defect, and a 5-10% chance if two children have the defect 5,6. The chances of the next child being affected by a congenital cardiac malformation are higher if the parents are consanguineous. In the present study, 15.78% were born of consanguineous marriages, mostly first-cousin marriages. In our study, nearly 46% of cases were born to primiparous mothers; a similar finding has been reported by Sugunabai 7. The Baltimore Washington Infant Study (BWIS) 8 reported a two-fold excess of familial CHD in cases compared with controls, whereas 3.34% of our cases had a history of CHD in the family. CHD occurs more often when the mother comes in contact with certain substances during the first few weeks of pregnancy, while the baby's heart is developing. A mother who had viral diseases like rubella or mumps in the first three months of pregnancy is more prone to have a child with multiple congenital anomalies, including defects of the heart. Campbell 9 revealed that maternal rubella was responsible for 1-2% of malformations of the heart; Sugunabai 7 reported <2%, whereas it was 2.3% in the present study. The American Heart Association (2006) 10 concluded that women who smoke or are exposed to tobacco smoke early in their pregnancies are more likely to have children with certain types of CHD. Begic H et al 11 reported that 11.08% of mothers were exposed to nicotine, whereas in our study 3.82% of mothers had a history of tobacco intake and 4.78% had exposure to smoking. Use of tobacco by the mother during the 1st trimester of pregnancy may act as a teratogen. In the rural areas of Maharashtra, where more than 50% of adolescents use tobacco, this was found to be a significant risk factor for the development of congenital heart disease.
Women who have seizure disorders and need to take anti-convulsant medications may have a higher risk of having a child with congenital heart disease 12. Preexisting maternal diabetes is associated with a fivefold increase in the risk of cardiovascular malformations 13. BWIS 8 concluded that maternal diabetes is highly correlated with cyanotic CHD. The risk of CHD remains high for infants of women with poorly controlled elevated phenylalanine levels; no infant in our study had a history of maternal phenylketonuria. A maternal history of previous abortions and stillbirths was a significant risk factor for CHD in offspring. Begic H et al 11 reported that most CHD cases (83.14%) occurred in children whose mothers were 20-35 years old, while only 5.11% of mothers were aged >35 years; the same result was found in our study. Major genetic defects such as chromosomal abnormalities were recognized as being associated with congenital heart disease with the identification of numerical excesses or deficiencies 14. A chromosomal abnormality like trisomy 21 is mainly associated with endocardial cushion defect. We found that out of 10 cases of chromosomal abnormality, 8 cases had trisomy 21. Conclusion To our knowledge, this was the first study carried out in a rural hospital to assess the risk factors for congenital heart disease in children in the rural area of Maharashtra state. The risk factors for CHD identified were intake of tobacco, exposure to smoking, family history of CHD, antenatal infection in the 1st trimester and a diabetic mother. More elaborate studies are still required to assess the various etiological factors associated with congenital heart disease. Table 1: Distribution of risk factors according to type of congenital heart disease. Table 2: Distribution of risk factors in cases and controls.
v3-fos-license
2021-08-11T01:16:04.772Z
2021-08-10T00:00:00.000
236965630
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://authors.library.caltech.edu/111838/1/Hu_2021_ApJ_921_27.pdf", "pdf_hash": "58bdf40ab29dc19267dfe503af33f4179c63dcdc", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43629", "s2fieldsofstudy": [ "Physics", "Geology" ], "sha1": "58bdf40ab29dc19267dfe503af33f4179c63dcdc", "year": 2021 }
pes2o/s2orc
Photochemistry and Spectral Characterization of Temperate and Gas-Rich Exoplanets Exoplanets that receive stellar irradiance of approximately Earth's or less have been discovered and many are suitable for spectral characterization. Here we focus on the temperate planets that have massive H2-dominated atmospheres, and trace the chemical reactions and transport following the photodissociation of H2O, CH4, NH3, and H2S, with K2-18 b, PH2 b, and Kepler-167 e representing temperate/cold planets around M and G/K stars. We find that NH3 is likely depleted by photodissociation to the cloud deck on planets around G/K stars but remains intact in the middle atmosphere of planets around M stars. A common phenomenon on temperate planets is that the photodissociation of NH3 in the presence of CH4 results in HCN as the main photochemical product. The photodissociation of CH4 together with H2O leads to CO and CO2, and the synthesis of hydrocarbons is suppressed. Temperate planets with super-solar atmospheric metallicity and appreciable internal heat may have additional CO and CO2 from the interior and less NH3 and thus less HCN. Our models of K2-18 b can explain the transmission spectrum measured by Hubble, and indicate that future observations in 0.5-5.0 µm would provide the sensitivity to detect the equilibrium gases CH4, H2O, and NH3, the photochemical gas HCN, as well as CO2 in some cases. Temperate and H2-rich exoplanets are thus laboratories of atmospheric chemistry that operate in regimes not found in the Solar System, and spectral characterization of these planets in transit or reflected starlight promises to greatly expand the types of molecules detected in exoplanet atmospheres. INTRODUCTION The era of characterizing temperate exoplanets has begun. Kepler, K2, and TESS missions have found a few tens of exoplanets cold enough for water to condense in their atmospheres in transiting orbits (from the NASA Exoplanet Archive). Another handful of temperate planets may be confirmed in the next few years with ongoing validation and follow-up of TESS planet candidates (Barclay et al. 2018). A small subset of these planets has been observed by HST for transmission spectra (De Wit et al. 2018;Zhang et al. 2018;Tsiaras et al. 2019;Benneke et al. 2019). For example, a transmission spectrum obtained by Hubble at 1.1 -1.7 µm of the temperate sub-Neptune K2-18 b shows spectral features (Tsiaras et al. 2019;Benneke et al. 2019), and the spectrum indicates that the planet hosts an atmosphere dominated by H 2 , and has H 2 O and/or CH 4 in its atmosphere (Benneke et al. 2019;Madhusudhan et al. 2020;Blain et al. 2021). TOI-1231 b is another temperate planet suitable for atmospheric studies with transits (Burt et al. 2021). With > 7 times more collecting area and infrared instruments, JWST will be capable of providing a more detailed look into the atmospheres of these temperate exoplanets (Beichman et al. 2014). We refer to the exoplanets that receive stellar irradiance of approximately Earth's as "temperate exoplanets" and those that receive less irradiance by approximately an order of magnitude as "cold exoplanets" in this paper. Temperate and cold exoplanets include both giant planets and small planets and potentially have diverse atmospheric composition. Giant planets (Jupiters and Neptunes) have massive H 2 /He envelopes (e.g., Burrows et al.
2001), and small planets (mini-Neptunes, super-Earths, and Earth-sized planets) can have H 2 /He atmospheres with variable abundances of heavy elements, steam atmospheres mostly made of water, or secondary atmospheres from outgassing (e.g., Fortney et al. 2013;Hu & Seager 2014). In this paper, we focus on temperate/cold and gas-rich exoplanets, which include temperate/cold giant planets and mini-Neptunes. We assume that the atmospheres are H 2 /He-dominated and massive enough for thermochemical equilibrium to prevail at depths. This condition determines that the dominant O, C, N, S species should be H 2 O, CH 4 , NH 3 , and H 2 S on temperate and cold planets in most cases (e.g., Fegley Jr & Lodders 1996;Burrows & Sharp 1999;Heng & Tsai 2016;Woitke et al. 2020). Thermochemical equilibrium may also produce N 2 as the dominant N species and substantial abundance of CO and CO 2 if the planet has a hot interior (e.g., Fortney et al. 2020). On temperate and cold planets, H 2 O can condense to form a cloud and the above-cloud H 2 O is partially depleted as a result (e.g., Morley et al. 2014;Charnay et al. 2021). Cold planets may additionally have NH 4 SH (from the combination reaction between NH 3 and H 2 S) and NH 3 condensed to form clouds (e.g., Lewis 1969; Atreya et al. 1999). This paper primarily concerns the photochemical processes above the clouds, with H 2 O, CH 4 , NH 3 , and H 2 S as the feedstock. Past work on the atmospheric photochemistry of low-temperature and gas-rich planets in the exoplanet context is rare. Moses et al. (2016) studied the thermochemistry and photochemistry in directly imaged young giant planets, and discussed the photochemical production of CO 2 and HCN in their atmospheres. Zahnle et al. (2016) showed that sulfur haze can form photochemically in the young Jupiter 51 Eri b, and the level of the sulfur haze would move upward in the atmosphere when the eddy diffusion coefficient increases. Gao et al. (2017) further modeled the effect of the sulfur haze on the reflected starlight spectra of widely separated giant planets. Here we systematically study the atmospheric photochemistry of H 2 O, CH 4 , NH 3 , and H 2 S in low-temperature exoplanetary atmospheres and model the abundance of the photochemical gases to guide the future observations of temperate/cold and gas-rich exoplanets. The paper is organized as follows: Section 2 describes the models used in this study; Section 3 presents the results in terms of the main behaviors of atmospheric chemistry, key photochemical mechanisms, and the corresponding spectral features in transmission and reflected starlight; Section 4 discusses the prospects of detecting photochemical gases in temperate and gas-rich exoplanets and potential areas of further development; and Section 5 summarizes the key findings of this study. Atmospheric Structure Model We use the atmospheric structure and cloud formation model in to simulate the pressure-temperature profile and potential gas depletion by condensation in temperate and cold exoplanets. We have updated the model with a routine to compute the condensation of the NH 4 SH cloud, in a similar way to the equilibrium cloud condensation model of Atreya & Romani (1985). In short, we compare the product of the partial pressures of NH 3 and H 2 S with the equilibrium constant of the reaction that produces NH 4 SH solid (Lewis 1969), and partition the NH 3 and H 2 S in excess to form the NH 4 SH solid cloud in each atmospheric layer.
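To make the layer-by-layer check just described more concrete, the following is a minimal sketch: compare the product of the NH 3 and H 2 S partial pressures with the equilibrium constant of the NH 4 SH-forming reaction and move the excess into the solid phase. The equilibrium-constant coefficients below are illustrative placeholders standing in for the Lewis (1969) expression, not the values adopted in this model.

```python
# Minimal sketch of the NH4SH condensation check (NH3 + H2S -> NH4SH(s)).
import math

def keq_nh4sh(T):
    """Illustrative equilibrium constant for p_NH3 * p_H2S (pressures in bar); placeholder fit."""
    return 10.0 ** (14.82 - 4705.0 / T)

def condense_nh4sh(p_nh3, p_h2s, T):
    """Return (p_nh3, p_h2s, condensed) after enforcing p_NH3 * p_H2S <= Keq(T)."""
    keq = keq_nh4sh(T)
    if p_nh3 * p_h2s <= keq:
        return p_nh3, p_h2s, 0.0
    # Remove equal amounts x from both gases (1:1 stoichiometry) until equilibrium:
    # (p_nh3 - x) * (p_h2s - x) = keq  ->  quadratic in x, take the smaller root
    b = p_nh3 + p_h2s
    x = 0.5 * (b - math.sqrt(b * b - 4.0 * (p_nh3 * p_h2s - keq)))
    return p_nh3 - x, p_h2s - x, x

print(condense_nh4sh(p_nh3=2e-4, p_h2s=5e-5, T=180.0))  # placeholder layer values
```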
We have verified that the resulting NH 4 SH cloud density and pressure level is consistent with the previously published models when applied to a Jupiter-like planet (e.g., Atreya et al. 1999). Another update is that the model now traces the concentration of NH 3 in liquid-water cloud droplets when applicable. The model of has included the dissolution of NH 3 in the liquid-water droplets. By additionally tracing the concentration of NH 3 in droplets, we have now taken into account the non-ideal effects when the NH 3 solution is non-dilute. When the mass ratio between NH 3 and H 2 O in the droplet is > 0.05, we replace Henry's law with the vapor pressure of NH 3 in equilibrium with the solution (Perry & Green 2007). The latter merges the solubility in the Henry's law regime with that in the Raoult's law regime smoothly. We also apply the vapor pressure of H 2 O in equilibrium with the solution, which can be substantially smaller than that with pure water when the solution is non-dilute (i.e., the Raoult's law). While the impact of these processes on the overall atmospheric composition of the planets studied in this paper - planets warmer than Jupiter - is small, these processes may control the mixing ratio of H 2 O and NH 3 in the atmospheres of even colder planets (Romani et al. 1989). Atmospheric Photochemical Model We use the general-purpose photochemical model in Hu et al. (2012, 2013) to simulate the photochemical products in the middle atmospheres of temperate and cold exoplanets. The photochemical model includes a carbon chemistry network and a nitrogen chemistry network and their interactions (Hu et al. 2012). The photochemical model also includes a sulfur chemistry network and calculates the formation of H 2 SO 4 and S 8 aerosols when applicable (Hu et al. 2013). We have made several updates to the original reaction network (Hu et al. 2012), and they are listed in Table 1. We have checked the main reactions that produce, remove, and exchange C 1 and C 2 hydrocarbons in the Jovian atmosphere (Gladstone et al. 1996;Moses et al. 2005) and updated rate constants when more recent values in the relevant temperature range are available in the NIST Chemical Kinetics Database. We have added low-pressure or high-pressure rate constants for three-body reactions if any of them were missing in the original reaction rate list. Certain reactions important for the hydrocarbon chemistry do not have a directly usable rate constant expression in the NIST database; rather, their rates are fitted to experimental data or estimated by Moses et al. (2005). We have also added several reactions that involve NH because it may be produced by NH 3 photodissociation, and updated the rate constant of an important reaction NH 2 + CH 4 −−→ NH 3 + CH 3 to the latest calculated value. Lastly, we have removed two reactions that were incorrectly included in the original network. The photochemical model is applied to the "stratosphere" of the atmosphere, where the "tropopause" is defined as the pressure level where the temperature profile becomes adiabatic. We define the lower boundary of the model as the pressure level 10-fold greater than the tropopause pressure, and thus include a section of the "troposphere" in the model. These choices are customary in photochemical studies of giant planets' atmospheres (e.g., Gladstone et al. 1996), and reasonable because the photochemical products in the stratosphere (and above the condensation clouds) are the objective of the study.
Including a section of the troposphere makes sure that the results do not strongly depend on the lower boundary conditions assumed. We apply fixed mixing ratios as the lower boundary conditions for H 2 , He, H 2 O, CH 4 , NH 3 , and, when applicable, H 2 S according to the assumed elemental abundance. When interior sources of CO, CO 2 , and N 2 are included in some scenarios (see Section 2.4 for details), fixed mixing ratios are also applied to these gases at the lower boundary. We assume that all other species can move across the lower boundary (i.e., dry deposition when the lower boundary is a surface in terrestrial planet models) at a velocity of K zz /H, where K zz is the eddy diffusion coefficient and H is the scale height. This velocity is the upper limit of the true diffusion velocity, which could be damped by the gradient of the mixing ratio (Gladstone et al. 1996); however, the velocity only matters for long-lived species (e.g., C 2 H 6 in Jupiter). Our choice of lower boundary conditions thus results in conservative estimates of the abundance of long-lived photochemical gases. The upper boundary is assumed at 10 −4 Pa, i.e., small enough so that the peaks of photodissociation of all species are well within the modeled atmosphere. Following Gladstone et al. (1996), we assume a zero-flux boundary condition for all species except for H, for which we include a downward flux of 4 × 10 9 cm −2 s −1 (Waite et al. 1983) to account for ionospheric processes that produce H. This influx of H was calculated for Jupiter and the actual flux can conceivably be different. The impact of this additional H is limited to the upper atmosphere and, in most of our cases, is swamped by the H from the photodissociation of H 2 O (see Section 3.4). Since the modeled domain of the atmosphere includes the stratosphere and a small section of the upper troposphere, the standard mixing-length scaling (Gierasch & Conrath 1985) is not applicable to estimate the eddy diffusion coefficient. We instead anchor our choice of the eddy diffusion coefficient on the value in the upper troposphere of Jupiter (∼ 1 × 10 3 cm 2 s −1 ; Conrath & Gierasch 1984) and explore a larger value in the study. Above the tropopause, we assume that mixing is predominantly caused by the breaking of gravity waves and the eddy diffusion coefficient is inversely proportional to the square root of the number density (Lindzen 1981). Because the pressure range of the photochemical model typically includes the condensation of NH 3 and H 2 O, we have added a scheme to account for the condensation of NH 3 into the photochemical model, with that for H 2 O already included in the model of Hu et al. (2012). In addition, we have added the schemes of condensation for N 2 H 4 and HCN, the two main photochemical gases expected to condense in Jupiter's upper troposphere (e.g., Atreya et al. 1977;Moses et al. 2010). The low-temperature vapor pressures of N 2 H 4 and HCN are adopted from Atreya et al. (1977) and Krasnopolsky (2009), respectively. As such, these gases are treated in the photochemical model and their production and removal paths, including chemical reactions and condensation, are self-consistently computed. This is important because, for example, NH 3 above the clouds in Jupiter is expected to be completely removed by photodissociation and converted to N 2 H 4 and N 2 , followed by condensation and transport to the deep atmosphere (Strobel 1973;Atreya et al. 1977;Kaye & Strobel 1983a,b;Moses et al. 2010).
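A minimal sketch of the eddy diffusion prescription described a few sentences above (a constant, Jupiter-anchored value up to the tropopause, and K zz scaling as the inverse square root of the number density above it) is given below. The tropopause pressure and the isothermal temperature used here are illustrative placeholders, not values adopted for any specific planet.

```python
# Minimal sketch of the Kzz profile: constant below the tropopause, Kzz ~ N^(-1/2) above it.
import numpy as np

K_TROP = 1.0e3        # cm^2 s^-1, anchored at Jupiter's upper troposphere
P_TROP = 1.0e4        # Pa, assumed tropopause pressure (placeholder)
T = 150.0             # K, assumed isothermal middle atmosphere (placeholder)
K_B = 1.380649e-23    # J K^-1

def number_density(p_pa):
    """Ideal-gas number density (m^-3)."""
    return p_pa / (K_B * T)

def kzz(p_pa):
    """Eddy diffusion coefficient (cm^2 s^-1) at pressure p_pa."""
    if p_pa >= P_TROP:                        # troposphere: constant value
        return K_TROP
    n_ratio = number_density(P_TROP) / number_density(p_pa)
    return K_TROP * np.sqrt(n_ratio)          # stratosphere: Kzz ~ N^(-1/2)

for p in [1e5, 1e4, 1e2, 1e0, 1e-2]:
    print(f"P = {p:9.2e} Pa  ->  Kzz = {kzz(p):9.2e} cm^2 s^-1")
```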
As we will show in Section 3, the condensation of N 2 H 4 and HCN limits their abundance in the middle atmosphere of cold planets like Kepler-167 e. For H 2 S, we make a binary choice: if the cloud model indicates NH 4 SH formation, we remove sulfur chemistry from the model, because NH 4 SH should completely sequester H 2 S (Atreya & Romani 1985); and we include the sulfur chemistry if the NH 4 SH cloud is not formed. This simplifies the calculations of sulfur photochemistry and is broadly valid when N/S > 1 in the bulk atmosphere. We calculate the cross-sections and single scattering albedo of ammonia and water cloud particles using their optical properties (Palmer & Williams 1974;Martonchik et al. 1984) and the radiative properties of the sulfur haze particles in the same way as Hu et al. (2013). NH 4 SH and HCN condensates are treated the same way as NH 3 clouds. N 2 H 4 condensates have very low abundance in all models and do not contribute significantly to the opacity. Thus, our model includes the absorption and scattering of cloud and haze particles when calculating the radiation field that drives photochemical reactions in the atmosphere. Jupiter as a Test Case As a test case, we have applied the coupled cloud condensation and photochemical model to a Jupiter-like planet and compared the results with the measured gas abundance in Jupiter and previous models of Jupiter's stratospheric composition (Gladstone et al. 1996;Moses et al. 2005;Atreya et al. 1977;Kaye & Strobel 1983a,b;Moses et al. 2010). Figure 1 shows the pressure-temperature profile, eddy diffusion coefficient, and the mixing ratios of CH 4 , NH 3 , and major photochemical gases of the test case. The atmospheric structure model adequately predicts the tropospheric temperature profile and the pressure level of the tropopause, but it cannot generate a temperature inversion in the middle atmosphere (Figure 1, panel a). We have run the photochemical model with the pressure-temperature profile measured in Jupiter and the modeled pressure-temperature profile (i.e., without the temperature inversion) to see how much the photochemical gas mixing ratios change. We find that the photochemical model can predict the mixing ratios of C 2 H 6 , C 2 H 2 , and C 2 H 4 measured in Jupiter's stratosphere, and the modeled profile of HCN is consistent with the upper limit in Jupiter's upper troposphere when the measured pressure-temperature profile is adopted (Figure 1, panel c). The only exception is the C 2 H 2 mixing ratio at ∼ 1 Pa, where the modeled mixing ratio is greater than the measured value by 2 ∼ 3σ. This less-than-perfect performance may be due to the lack of C 3 , C 4 , and higher hydrocarbons in our reaction network. For example, Moses et al. (2005) was able to fit the C 2 H 2 mixing ratio at ∼ 1 Pa together with other mixing ratio constraints, with a more complete hydrocarbon reaction network and specific choices in the eddy diffusion coefficient profiles for Jupiter's stratosphere. In terms of nitrogen photochemistry, our photochemical model finds that NH 3 is depleted by photodissociation to the cloud deck, and the vast majority of the net photochemical removal of NH 3 becomes N 2 H 4 and then condenses out. A small fraction becomes N 2 and HCN. The abundance of HCN is low (∼ 10 −9 ) in the troposphere due to the photolysis of NH 3 and CH 4 occurring at well-separated pressure levels, and is limited by the cold trap near the tropopause (Figure 1).
These behaviors are qualitatively similar to the past models of Jupiter's nitrogen photochemistry (Atreya et al. 1977;Kaye & Strobel 1983a,b;Moses et al. 2010). Figure 1 also indicates that adopting the modeled pressure-temperature profile that does not have a stratosphere, while preserving the overall behavior of the atmospheric photochemistry, would under-predict the mixing ratios of C 2 H 6 and C 2 H 2 by approximately half an order of magnitude. We use the atmospheric structure model in this study for speedy exploration of the main photochemical behavior, and one should keep this context in mind when interpreting the results shown in Section 3. Another interesting point to make is that the quantum yield of H in the photodissociation of C 2 H 2 has been convincingly measured to be 100% by recent experiments (Läuter et al. 2002). When producing the models shown as the solid and dashed lines in Figure 1, panel c, we have applied a quantum yield of 16% so that the top-of-atmosphere rate of C 2 H 2 + hν −−→ C 2 H + H would match the models of Gladstone et al. (1996); Moses et al. (2005). Revising the quantum yield to 100%, as shown by the dotted lines in Figure 1, panel c, slightly reduces the steady-state mixing ratio of C 2 H 6 and reduces the mixing ratio of C 2 H 2 and C 2 H 4 by a factor of ∼ 5 in the lower stratosphere (∼ 10 3 Pa). The photodissociation of C 2 H 2 is the main source of H in the lower stratosphere (e.g., Gladstone et al. 1996) and thus its quantum yield is important for the hydrocarbon chemistry in the lower stratosphere. However, a quantum yield of 100% would result in poor fits to the measured mixing ratios of C 2 H 2 and C 2 H 4 , and this potential discrepancy suggests that additional consideration of the atmospheric photochemistry of Jupiter might be warranted. We adopt the quantum yield of 100% in the subsequent models. Figure 1. Jupiter as a test case. The planet modeled is a Jupiter-mass and Jupiter-radius planet at a 5.2-AU orbit of a Sun-like star, having an atmospheric metallicity of 3×solar. (a) The solid line is the pressure-temperature profile adopted from Galileo probe measurements and Cassini CIRS measurements in Jupiter (Seiff et al. 1998;Simon-Miller et al. 2006) and the dashed line is the pressure-temperature profile calculated by the atmospheric structure model. (b) The eddy diffusion coefficient profile adopted in this work. (c) The calculated mixing ratio profiles of CH4, NH3, and major photochemical products. The solid lines are the results using the measured temperature profile, the dashed lines are the results using the modeled temperature profile (i.e., without the temperature inversion), and the dotted lines are the results using the modeled temperature profile and the photodissociation quantum yield of C2H2 set to unity (see discussion in Section 2.3). In comparison are the abundance data of major hydrocarbons and HCN in Jupiter's atmosphere, as compiled in Morrissey et al. (1995); Gladstone et al. (1996); Davis et al. (1997); Yelle et al. (2001); Moses et al. (2005). Planet Scenarios We use the temperate sub-Neptune K2-18 b as a representative case of temperate and gas-rich planets around M dwarf stars, and use the gas giants PH2 b and Kepler-167 e as the representative cases of temperate and cold planets around G and K stars. The atmosphere of K2-18 b has been modeled previously (Blain et al. 2021;Charnay et al. 2021), but the effects of atmospheric photochemistry remain to be studied.
Kepler-167 e is considered a "cold" exoplanet because it only receives stellar irradiation 7.5% of Earth's. The equilibrium cloud condensation model would predict NH 3 to condense in its atmosphere and form the uppermost cloud deck, below which NH 4 SH solids form and scavenge sulfur from the above-cloud atmosphere. In the atmospheres of K2-18 b and PH2 b, only H 2 O is expected to condense and form the cloud deck - and thus the physical distinction between "temperate" and "cold". The UV spectrum of K2-18 has not been measured and so we adopt that of GJ 176, a similar M dwarf star with the UV spectrum measured in the MUSCLES survey (France et al. 2016). The reconstructed Ly-α flux of GJ 176 is similar to the measured flux of K2-18 (dos Santos et al. 2020). We adopt the UV spectrum of the Sun for the models of PH2 b and Kepler-167 e, even though Kepler-167 is a K star. Figure 2 shows the incident stellar flux at the top of the atmospheres adopted in this study. K2-18 b, while having similar total irradiation to PH2 b, receives considerably less irradiation in the near-UV. For these planets, we simulate H 2 -dominated atmospheres having 1 − 100× solar metallicities. The higher-than-solar metallicity scenario may be particularly interesting for sub-Neptunes like K2-18 b because of a proposed mass-metallicity relationship that posits a less massive planet should have a higher metallicity (Thorngren et al. 2016). For PH2 b and Kepler-167 e, we assume as fiducial values a surface gravity of 25 m s −2 and an internal heat flux that corresponds to T int = 100 K, similar to the parameters of Jupiter. Changing the surface gravity to 100 m s −2 results in slightly different cloud pressures and above-cloud abundance of gases on these planets, but does not change the qualitative behaviors of the atmospheric chemistry. For K2-18 b we assume an internal heat flux that corresponds to T int = 60 K, similar to that of Neptune. In the standard models, we assume that the dominant O, C, N, and S species are H 2 O, CH 4 , NH 3 , and H 2 S at the base of the photochemical domain. Gases and aerosols produced in the photochemical domain can be transported through the lower boundary, and thus the standard model setup implicitly assumes that thermochemical recycling in the deep troposphere effectively recycles the photochemical products into H 2 O, CH 4 , NH 3 , and H 2 S. Here we quantitatively assess how realistic this assumption is based on the quench-point theory (e.g., Hu & Seager 2014;Zahnle & Marley 2014;Tsai et al. 2018). In that theory, the "quench point" is defined as the pressure level where the chemical lifetime of a gas equals the vertical mixing timescale (typically at the pressure of 10 7 Pa or higher). The gas is close to thermochemical equilibrium at the quench point, and its mixing ratio is carried to the atmosphere above the quench point by vertical mixing. Figures 3 -5 show the pressure-temperature profiles of the three planets calculated by the atmospheric structure model, and the mixing ratios of major C and N molecules at the respective quench points. We adopt the chemical lifetime of the CO ↔ CH 4 and N 2 ↔ NH 3 conversions from Zahnle & Marley (2014) and estimate the eddy diffusion coefficient in the deep troposphere using the mixing-length theory in Visscher et al. (2010). The eddy diffusion coefficient depends on the assumed internal heat flux and has a typical value of ∼ 10 4 m 2 s −1 at the pressure of 10 6 − 10 8 Pa.
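The quench-point estimate just described can be sketched as follows: walk down an assumed adiabat and find the pressure where the chemical conversion timescale first drops below the vertical mixing timescale H 2 /K zz . The timescale expression and the adiabat below are illustrative placeholders with the qualitative form t_chem ∝ p −1 exp(E/T); they are not the published Zahnle & Marley (2014) or Visscher et al. (2010) fits.

```python
# Minimal sketch of locating the CO <-> CH4 quench point (illustrative numbers only).
import numpy as np

KZZ = 1.0e8          # cm^2 s^-1 (~1e4 m^2 s^-1, as quoted in the text)
G = 12.0             # m s^-2, illustrative gravity
MU = 2.3e-3          # kg mol^-1, H2/He-dominated mean molecular weight
R_GAS = 8.314        # J mol^-1 K^-1

def scale_height_cm(T):
    return 100.0 * R_GAS * T / (MU * G)

def t_mix(T):
    return scale_height_cm(T) ** 2 / KZZ            # s

def t_chem(T, p_bar):
    # Placeholder CO <-> CH4 conversion timescale (illustrative coefficients)
    return 1.5e-6 / p_bar * np.exp(42000.0 / T)     # s

pressures_bar = np.logspace(0, 4, 400)              # 1 to 1e4 bar
temperatures = 400.0 * pressures_bar ** 0.3         # placeholder deep adiabat
for p, T in zip(pressures_bar, temperatures):
    if t_chem(T, p) <= t_mix(T):
        print(f"CO quench point near P ~ {p:.0f} bar (~{p*1e5:.1e} Pa), T ~ {T:.0f} K")
        break
```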
The quench point of CO 2 follows that of CO, and similarly, that of HCN occurs at a similar pressure and temperature as N 2 (Zahnle & Marley 2014; Tsai et al. 2018). The mixing ratios of gases at the quench points are calculated using the thermochemical equilibrium model of Hu & Seager (2014). Figures 3 -5 show that a solar-metallicity atmosphere is likely deep in the CH 4 - and NH 3 -dominated regime at the quench points on all three planets. Specifically, we find the mixing ratio of CO ≤ 10 −8 , that of CO 2 ≤ 10 −11 , and the mixing ratio of NH 3 greater than that of N 2 by > 10-fold. With 10×solar metallicity, the atmosphere remains CH 4 -dominated, but the mixing ratio of CO transported from the deep troposphere can be on the order of 10 −6 ∼ 10 −5 and thus non-negligible. With the assumed internal heat flux and the modeled strength of deep tropospheric mixing, the mixing ratio of N 2 can be comparable to that of NH 3 at the quench point. As N 2 does not have strong spectral features and is not a feedstock molecule for photochemistry, the effect of a hot interior would be mostly seen as a reduction of the mixing ratio of NH 3 . The impact of the hot interior is the most significant in the 100×solar-metallicity atmosphere. Both CO and CO 2 have mixing ratios > 10 −4 at the quench point, and in the hottest case (PH2 b), the mixing ratio of CO is greater than that of CH 4 . For nitrogen, the mixing ratio of NH 3 can be reduced by a factor of 10 ∼ 100 at the thermochemical equilibrium in the deep troposphere. As a general trend, a higher deep-atmosphere temperature favors CO, CO 2 , and N 2 , and reduces the equilibrium abundance of NH 3 . We have thus run variant models for the 10× and 100×solar-metallicity cases, and used the mixing ratios of CH 4 , CO, CO 2 , NH 3 , and N 2 at the quench points as shown in Figures 3 -5 as the lower-boundary conditions. Technically the mixing ratio of deep H 2 O is also affected, but the photochemical models have lower boundaries that are well above the base of the water cloud, and are thus immune to small changes in the input water abundance. Also, we do not fix the lower-boundary mixing ratio of HCN in these models, because the mixing ratio of HCN at the quench point does not exceed the mixing ratio found by the photochemical models at the lower boundary in any case. We emphasize that specific quantities of the input gas abundance depend on the detailed thermal structure of the interior, which is related to the thermal history of the planet and exogenous factors like tidal heating, as well as the strength of vertical mixing in the interior (Fortney et al. 2020). For example, applying an internal heat flux that corresponds to T int = 30 K (similar to Earth) largely restores the CH 4 and NH 3 dominance for the three planets. While these factors are likely uncertain for many planets to be observed, the standard and variant photochemical models presented in this paper give an account of the range of possible behaviors that manifest in the observable part of the atmosphere.

Figure 3. Pressure-temperature profiles of the temperate sub-Neptune K2-18 b for varied atmospheric metallicities and an internal heat flux of Tint = 60 K (similar to Neptune). The short horizontal bars show the lower boundary of the photochemical model (i.e., the pressure level 10-fold greater than the tropopause pressure). The green and red lines show the equal-abundance boundaries for major carbon and nitrogen gases in a solar-metallicity gas in thermochemical equilibrium, and the green and red dots show the expected quench point for CO and that for N2 respectively. The equilibrium mixing ratios of major C and N molecules at the respective quench points are shown.

3.1.1. K2-18 b: a temperate planet around an M star For K2-18 b, our model predicts that water condenses to form a cloud at the pressure of ≥ 10 4 Pa for the solar and 10×solar cases, and at the pressure of ∼ 10 3 Pa for the 100×solar case. Above the cloud, the mixing ratio of water is depleted by approximately one order of magnitude, but not totally depleted. The cloud pressure for the 100×solar abundance case we model is consistent with predictions of a non-gray radiative-equilibrium model and a 3D climate model, but those models do not predict a water cloud for the solar and 10×solar abundance (Charnay et al. 2021). Given the small degree of water depletion found in our models, this discrepancy does not lead to substantial errors in the results of the above-cloud photochemistry. Both CH 4 and H 2 O are photodissociated at the pressure of approximately 0.1 -1 Pa. The photodissociation results in the formation of C 2 H 6 , C 2 H 2 , CO, and CO 2 . C 2 H 2 has a high mixing ratio at the pressure where the photodissociation takes place but is quickly depleted towards higher pressures. In the middle atmosphere (∼ 10 -10 3 Pa), CO, CO 2 , and C 2 H 6 can have a mixing ratio of ∼ 1 part per million (ppm) for the 100×solar abundance case, and the mixing ratio of these photochemical gases is < 1 ppm for lower metallicities. When the deep tropospheric source of CO and CO 2 is applied to the bottom of the photochemical domain, the mixing ratio of CO at 10 2 Pa is ∼ 1 ppm for the 10×solar cases, but it can reach ∼ 4000 ppm for the 100×solar case. The mixing ratio of CO 2 at 10 2 Pa can reach ∼ 500 ppm for the 100×solar case. NH 3 is photodissociated at the pressure of 1 ∼ 10 Pa. The photodissociation results in the formation of N 2 and HCN with similar yields. The mixing ratio of HCN at ∼ 10 2 Pa is ∼ 6, 50, and 500 ppm for the solar, 10×solar, and 100×solar abundance cases, respectively. If the mixing ratio of NH 3 in the deep troposphere is applied to the bottom of the photochemical domain, the resulting mixing ratio of HCN does not change significantly in the 10×solar case but decreases to ∼ 100 ppm in the 100×solar case. Lastly, H 2 S is photodissociated at approximately the same pressure as the water cloud. The photodissociation leads to the formation of elemental sulfur (S 8 ) haze, as predicted previously (Zahnle et al. 2016). The haze layer extends to an altitude only slightly higher than the water cloud deck. 3.1.2. PH2 b: a temperate planet around a G/K star PH2 b has a slightly higher insolation and temperature than K2-18 b, but it receives much more near-UV irradiation (Figure 2). The water condensation and small degree of depletion above the cloud, as well as the photodissociation of H 2 S and the location of the sulfur haze layer, are similar to those predicted for K2-18 b. CH 4 is photodissociated at the pressure of 0.1 -1 Pa, and H 2 O is photodissociated at 1 -10 Pa. The main products of these photodissociations are still C 2 H 6 ,
Instead of CO in the case of K2-18 b, CO 2 is the most abundant photochemical gas in the middle atmosphere (∼ 10 -10 3 Pa), and its mixing ratio is 2 -10 ppm, 5 -40 ppm, and 40 -200 ppm for the solar, 10×solar, and 100×solar abundance cases, respectively. The mixing ratio of CO is less by approximately one order of magnitude, and that of C 2 H 6 is ∼ 1 ppm for the 100×solar case and < 1 ppm for lower metallicities. As a striking difference from the M star case (K2-18 b), NH 3 is fully depleted by photodissociation above the water cloud deck. The mixing ratio of NH 3 in the middle atmosphere is minimal. The photodissociation also leads to the formation of N 2 and HCN, with HCN being the most abundant photochemical product. The mixing ratio of HCN in the middle atmosphere reaches ∼ 100, 700, and 10,000 ppm for the solar, 10×solar, and 100×solar abundance cases, respectively. With a Jupiter-like internal heat flux, the equilibrium chemistry in the deep troposphere may substantially change the chemical composition in the photochemical domain. In the 10×solar cases, the mixing ratio of CO in the middle atmosphere can reach ∼ 10 ppm and that of CO 2 ∼ 60 ppm. HCN would no longer be the most abundant nitrogen product, and its mixing ratio in the middle atmosphere can be reduced to ∼ 40 ppm. In the 100×solar cases, both CO and CO 2 can have very high mixing ratios (> 10 −2 , and on the same order of CH 4 ) in the middle atmosphere, and the above-cloud H 2 O would be consumed by photochemistry and have a mixing ratio of ∼ 10 ppm at 10 2 Pa. The mixing ratio of HCN would be further reduced to ∼ 10 ppm, while still marginally greater than the mixing ratio at the quench point. 3.1.3. Kepler-167 e: a cold planet around a G/K star The atmosphere of Kepler-167 e is much colder than that of K2-18 b or PH2 b, and its atmospheric chemistry is more akin to that of Jupiter (Gladstone et al. 1996;Moses et al. 2005;Atreya et al. 1977;Kaye & Strobel 1983a,b;Moses et al. 2010). Both H 2 O and H 2 S are fully depleted by condensation or NH 4 SH formation, and the uppermost cloud predicted by the atmospheric structure model is NH 3 ice. However, the steady-state results of the photochemical model indicate that photodissociation of NH 3 should deplete the NH 3 ice cloud. NH 3 is photochemically depleted to the pressure of 7×10 4 -10 3 Pa from the solar to 100×solar abundance cases. The main product of the photodissociation that can accumulate in the middle atmosphere is N 2 , while the mixing ratios of HCN and N 2 H 4 are limited by condensation. The mixing ratio of HCN can reach > 1 ppm below the condensation level in the 100×solar case. The main photochemical gases of carbon are C 2 H 6 and C 2 H 2 , with no CO or CO 2 at appreciable mixing ratios. While the mixing ratio of C 2 H 2 strongly peaks at 0.1 Pa, where the photodissociation of CH 4 takes place, the mixing ratio of C 2 H 6 can be significant in the middle atmosphere. At 10 2 Pa, the mixing ratio of C 2 H 6 is ∼ 2, 4, and 30 ppm for the solar, 10×solar, and 100×solar abundance cases, respectively. If the deep tropospheric source of CO and CO 2 is applied to the bottom of the photochemical domain, they can have substantial mixing ratios in the 100×solar case, while the mixing ratio of C 2 H 6 is not strongly impacted. Photochemical Depletion of NH 3 From Figures 6 -8, we see that NH 3 is depleted to the cloud deck in temperate and cold planets around G/K stars but remain intact in the middle atmosphere of temperate and cold planets around M stars. 
This finding is significant because it implies that NH3 should be detectable on temperate planets around M stars but not around G/K stars (see Section 3.5). The root cause of this different behavior is that M stars (represented by GJ 176 here) emit substantially less irradiation at near-UV wavelengths than G/K stars (represented by the Sun here, Figure 2). The radiation that dissociates NH3 in the H2-dominated atmosphere is the radiation that is not absorbed by the typically more abundant CH4 and H2O. NH3 has a dissociation limit at ∼ 230 nm, while CH4 has one at ∼ 150 nm and H2O at ∼ 240 nm, but the cross section and hence the shielding effect of H2O are small at > 200 nm (Hu et al. 2012; Ranjan et al. 2020). C2H2 also absorbs photons up to ∼ 230 nm, but it typically does not strongly interfere with the NH3 photodissociation due to its relatively low abundance. Thus, photons in 200 - 230 nm are the most relevant for the photodissociation of NH3 in K2-18 b and PH2 b, and photons in 150 - 230 nm are the most relevant for Kepler-167 e. Although the two planets have similar bolometric irradiation, the photon flux in 200 - 230 nm received by PH2 b exceeds that received by K2-18 b by > 2 orders of magnitude (Figure 2). The photon flux received by Kepler-167 e is one order of magnitude more than that received by K2-18 b, and the removal of NH3 by condensation further pushes down the pressure of photochemical depletion (see below).

Criterion of Photochemical Depletion

How does the photon flux control the pressure of photochemical depletion? Guided by the numerical results, here we develop a simple theory that estimates the pressure of photochemical depletion. Assuming that photodissociation is the only process that removes NH3, with no recycling or production, its mixing ratio profile at the steady state should obey the following differential equation:

$$ \frac{d}{dz}\left( K N \frac{df}{dz} \right) = J N f , \qquad (1) $$

where z is altitude, K is the eddy diffusion coefficient, N is the total number density of the atmosphere, f is the mixing ratio, and J is the photodissociation rate (often referred to as the "J-value" in the atmospheric chemistry literature). The number density has a scale height of H, and the equation can be rewritten as

$$ K\left( \frac{d^2 f}{dz^2} - \frac{1}{H}\frac{df}{dz} \right) = J f . \qquad (2) $$

Assuming J, H, and K to be constant with respect to z, the equation above has the analytical solution

$$ f = f_0 \exp\left( -\alpha \frac{z}{H} \right) , \qquad (3) $$

where f_0 is the mixing ratio at the pressure of photochemical depletion (z = 0 for simplicity), and α is

$$ \alpha = \frac{1}{2}\left( \sqrt{1 + \frac{4 J H^2}{K}} - 1 \right) . \qquad (4) $$

Therefore, when the product 4JH^2/K is small, α → 0 and the mixing ratio profile is close to a constant; and when 4JH^2/K is large, α can be ≫ 1 and thus the mixing ratio drops off very quickly. This explains the vertical profiles of NH3 seen in Figures 6-8. Going back to Equation (1), it can be integrated from the pressure of photochemical depletion to the top of the atmosphere, giving

$$ \left[ K N \frac{df}{dz} \right]_{z=0}^{z=\infty} = \int_0^\infty J n \, dz , \qquad (5) $$

where n ≡ f N is the number density of NH3. Assuming that the photoabsorption of NH3 itself is the sole source of opacity, J can be expressed as

$$ J = J_\infty \exp\left( - \int_z^\infty \sigma n \, dz' \right) , \qquad (6) $$

where J_∞ is the top-of-atmosphere J-value and σ is the mean cross section of NH3. The differential of Equation (6) is

$$ \frac{dJ}{dz} = \sigma n J . \qquad (7) $$

Combining Equations (5) and (7), and recognizing that df/dz vanishes at z = ∞, we obtain

$$ -\left. K N \frac{df}{dz} \right|_{z=0} = \frac{J_\infty - J(z=0)}{\sigma} . \qquad (8) $$

With J(z = 0) ∼ 0 (i.e., the J-value immediately below the pressure of photochemical depletion is minimal), and J_∞ = σ I, where I is the photon flux at the top of the atmosphere, we obtain

$$ -\left. K N \frac{df}{dz} \right|_{z=0} = I . \qquad (9) $$

Note that to derive Equation (9), no specific profiles for J or n (f) need to be assumed.
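As an aside, the behaviour of Equations (3) and (4) is easy to evaluate numerically. The short sketch below is a standalone illustration rather than part of the photochemical model: it computes α for a few photodissociation rates, and the scale height and eddy diffusion coefficient are round illustrative numbers, not the fitted values used in this work.

```python
import numpy as np

def alpha(J, H, K):
    """Exponent of the analytic mixing-ratio profile f = f0*exp(-alpha*z/H),
    obtained from the quadratic relation alpha**2 + alpha = J*H**2/K (Eqs. 3-4)."""
    return 0.5 * (np.sqrt(1.0 + 4.0 * J * H**2 / K) - 1.0)

# Illustrative (not fitted) values: a ~50 km scale height and a Jupiter-like
# eddy diffusion coefficient near the tropopause.
H = 5.0e4      # m
K = 1.0e3      # m^2 / s

for J in (1e-9, 1e-7, 1e-5):          # photodissociation rate, s^-1
    a = alpha(J, H, K)
    print(f"J = {J:.0e} s^-1 -> 4JH^2/K = {4*J*H**2/K:.2e}, alpha = {a:.2f}")
    # small 4JH^2/K -> alpha ~ 0 (flat profile); large -> alpha >> 1 (sharp cutoff)
```

For the smallest J the profile is nearly flat, while for the largest it falls off within a fraction of a scale height, mirroring the shape of the NH3 profiles seen in Figures 6-8.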
The physical meaning of Equation (9) is that the number of NH3 molecules that diffuse through the pressure of photochemical depletion should be equal to the number of photons received at the top of the atmosphere. This physical condition becomes evident if one regards the column of NH3 above the pressure of photochemical depletion as a whole and recognizes that one photon dissociates one molecule. To the extent that the photoabsorption of NH3 itself is the dominant source of opacity, the criterion expressed by Equation (9) does not depend on the mean cross section. Similarly, the criterion will be applicable to any molecule subject to photodissociation in a wavelength range largely free of interference by other molecules.

[Figure 9. Pressure of photochemical depletion of NH3 predicted by the criterion in Equations (9) and (10) for K2-18 b, PH2 b, and Kepler-167 e. We compare the left-hand side (solid line) and right-hand side (dashed line) of Equation (10), assuming a solar-abundance atmosphere. Where the solid line and the dashed line meet defines the pressure of photochemical depletion.]

It should be noted that Equation (9) cannot be derived by requiring the pressure of photochemical depletion to occur roughly at the optical depth of unity for the photodissociating radiation. This is because the mixing ratio profile in Equation (3) is valid only locally and depends on J, which in turn depends on the vertical profile of the mixing ratio. As such, one cannot integrate Equation (3) directly to find the pressure of photochemical depletion, and the optical-depth-of-unity condition is not as predictive as Equation (9). The left-hand side of Equation (9) can be evaluated locally using Equation (3), and Equation (9) becomes

$$ \frac{K N_0 f_0 \alpha}{H} = I , \qquad (10) $$

where N_0 and f_0 are the total number density and the mixing ratio at the pressure of photochemical depletion. The pressure is thus P_0 = N_0 k_b T, where k_b is the Boltzmann constant and T is temperature. α can be evaluated with Equation (4) for a J value that corresponds to 5% of the top-of-atmosphere value. Equation (10) thus provides a closed-form criterion that determines the pressure of photochemical depletion, and explains why the pressure of photochemical depletion is sensitive to the top-of-atmosphere flux of photons that drive photodissociation. Figure 9 shows both sides of Equation (10) for the three planets modeled, assuming a solar-abundance atmosphere. We can see that the pressure of photochemical depletion implied by Equation (10) for K2-18 b is ∼ 100 Pa, consistent with the pressure where the photochemical model starts to substantially deviate from the equilibrium cloud condensation model (Figure 6). Figure 6 also shows that the mixing ratio of NH3 decreases very slowly near the pressure of photochemical depletion, but the decrease becomes faster at lower pressures, where the J value and the 4JH^2/K product become greater (see Equations 3 and 4). The mixing ratio of NH3 eventually drops below 10^-6 at a pressure lower than the pressure of photochemical depletion by approximately one order of magnitude. The pressures of photochemical depletion implied by Equation (10) for PH2 b and Kepler-167 e are close to or below the cloud deck (i.e., 10^4 - 10^5 Pa), which is consistent with the numerical finding that NH3 is photodissociated down to the cloud deck on these planets (Figures 7 and 8).
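The closed-form criterion can also be evaluated directly, in the spirit of Figure 9. The sketch below assumes an isothermal atmosphere and uses placeholder values for the photon flux, J-value, eddy diffusion coefficient, and NH3 mixing ratio (they are not the inputs of the paper's models); it simply finds the pressure at which the diffusion-limited flux of Equation (10) matches the incident photon flux.

```python
import numpy as np

# Minimal sketch of evaluating the depletion criterion, Eq. (10): K*N0*f0*alpha/H = I.
# All numbers below are illustrative placeholders, not the values used in the paper.
kb    = 1.38e-23      # J / K
T     = 250.0         # K, assumed isothermal
H     = 5.0e4         # m, scale height
K     = 1.0e3         # m^2 / s, eddy diffusion coefficient
f0    = 1.0e-4        # NH3 mixing ratio below the depletion level
I     = 1.0e17        # photons m^-2 s^-1 in the NH3-dissociating band (illustrative)
J_top = 1.0e-5        # s^-1, top-of-atmosphere J-value (illustrative)

def alpha(J):
    return 0.5 * (np.sqrt(1.0 + 4.0 * J * H**2 / K) - 1.0)

# Following the text, alpha is evaluated for a J-value of 5% of the top-of-atmosphere value.
a   = alpha(0.05 * J_top)

# Left-hand side of Eq. (10) as a function of candidate depletion pressure P0 (Pa).
P0  = np.logspace(0, 5, 500)
N0  = P0 / (kb * T)                      # total number density, m^-3
lhs = K * N0 * f0 * a / H                # upward diffusion-limited NH3 flux
idx = np.argmin(np.abs(lhs - I))         # crossing point, cf. Figure 9
print(f"Depletion pressure where LHS = I: P0 ~ {P0[idx]:.1f} Pa")
```

With planet-specific inputs for I, K, f0, and T, this is the same comparison that Figure 9 performs for the three modeled planets.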
Therefore, although Equation (10) cannot replace the full photochemical calculation due to the underlying assumptions (e.g., no recycling or production, self-shielding only), it provides a guiding estimate of whether a gas is likely depleted by photodissociation in the middle atmosphere. Sensitivity to the eddy diffusion coefficient From the criterion of photochemical depletion (Equation 10), we see that when the eddy diffusion coefficient increases, the pressure of photochemical depletion decreases. In other words, stronger mixing would sustain a photodissociated gas (e.g., NH 3 ) to a lower pressure or higher altitude. We have used the photochemical model to conduct a sensitivity study of the eddy diffusion coefficient, and the results confirm this understanding (Figure 10). The most significant sensitivity happens with PH2 b: the standard model predicts the photodissociation would deplete NH 3 to the cloud deck, while with a 10-fold or 100-fold greater eddy diffusion coefficient, NH 3 would be depleted at 10 2 ∼ 10 3 Pa. For Kepler-167 e, with a 10-fold or 100-fold greater eddy diffusion coefficient, photodissociation can no longer deplete the NH 3 ice cloud, while the mixing ratio of NH 3 in the middle atmosphere remains small due to condensation and photodissociation above the cloud deck. The top of the sulfur haze layer moves up in the atmosphere when the eddy diffusion coefficient increases. For both K2-18 b and PH2 b, the top of the sulfur haze would be at ∼ 10 3 Pa and 10 2 Pa for 10-fold and 100-fold greater eddy diffusion coefficient (Figure 10). A haze layer that extends to 10 2 Pa would greatly interfere with transmission spectroscopy and also affect the spectra of the reflected starlight (see Section 3.5). This trend is consistent with the findings of Zahnle et al. (2016) and is produced by two effects acting together. First, the pres-sure of photochemical depletion of H 2 S, the feedstock of sulfur haze, decreases for a greater eddy diffusion coefficient. Second, a stronger eddy diffusion helps increase the lifetime of haze particles against falling (see the formulation in Hu et al. 2012). For PH2 b, the extended sulfur haze layer further keeps NH 3 from photochemical depletion by absorbing the ultraviolet photons that can dissociate NH 3 . The sensitivity of main photochemical gases' abundance to the eddy diffusion coefficient is complex (Figure 10), which indicates several factors at work. For N 2 and HCN (the dominant photochemical gases of nitrogen), their mixing ratios at the lower boundary decrease with the eddy diffusion coefficient. This is because, in our model, gases move across the lower boundary at a velocity that is proportional to the eddy diffusion coefficient, and the loss to the lower boundary is the main loss mechanism for both N 2 and HCN. Their mixing ratios in the middle atmosphere do not necessarily follow the same trend as that also depends on the photochemical production (see Section 3.3). The abundance of the photochemical gases of carbon does not depend on the eddy diffusion coefficient monotonically, and this is because the formation rates of CO, CO 2 , and C 2 H 6 largely depend on the abundance of H, OH, and O, which is in turn controlled by the full chemical network involving the photodissociation of CH 4 , H 2 O, and NH 3 (see Section 3.4). 
For example, in K2-18 b with the solar metallicity, both CO and CO2 have very small mixing ratios in the middle atmosphere in the standard case; the two would be substantially more abundant in the middle atmosphere with a 10-fold greater eddy diffusion coefficient, and CO would become more abundant than CO2 with a 100-fold greater eddy diffusion coefficient. These examples highlight the richness and complexity of atmospheric photochemistry in temperate and cold planets.

[Figure 10. Sensitivity of the abundance profiles of NH3 and main photochemical gases to the eddy diffusion coefficient. The profiles of H2O and CH4 are not shown because their abundance in the middle atmosphere is not sensitive to the eddy diffusion coefficient. The horizontal orange lines show the top of the sulfur haze layer. The solid lines show the standard model, and the dashed and dash-dot lines show the models with 10-fold and 100-fold greater eddy diffusion coefficients, respectively. These models assume the solar abundance. A greater eddy diffusion coefficient causes the photodissociation of NH3 to occur at a lower pressure.]

3.3. Photolysis of NH3 in the Presence of CH4

A common phenomenon that emerges from the photochemical models is the synthesis of HCN in temperate and H2-rich exoplanets. The photodissociation of NH3 in Jupiter leads to N2 but not significant amounts of HCN, and this is mainly because NH3 is dissociated at much higher pressures than CH4 (e.g., Atreya et al. 1977; Kaye & Strobel 1983a,b; Moses et al. 2010). HCN in Titan's N2-dominated atmosphere mainly comes from the reactions between atomic nitrogen and hydrocarbons and the associated chemical network (Yung et al. 1984; Lavvas et al. 2008; Krasnopolsky 2014; Vuitton et al. 2019). Similar processes, as well as the reactions between CH and NO/N2O, may also lead to the formation of HCN on early Earth or rocky exoplanets with N2-dominated atmospheres irradiated by active stars (Airapetian et al. 2016; Rimmer & Rugheimer 2019). In addition, the formation of HCN has been commonly found in warm and hot H2-rich exoplanets (e.g., Moses et al. 2011; Line et al. 2011; Venot et al. 2012; Agúndez et al. 2014; Mollière et al. 2015; Moses et al. 2016; Blumenthal et al. 2018; Kawashima & Ikoma 2018; Molaverdikhani et al. 2019; Hobbs et al. 2019; Lavvas et al. 2019), and the mechanisms identified include quench kinetics (Venot et al. 2012; Agúndez et al. 2014) and photochemistry (Line et al. 2011; Kawashima & Ikoma 2018; Hobbs et al. 2019). Here we show that HCN can also build up to significant amounts in temperate exoplanets with H2-dominated atmospheres. Figure 11 shows the chemical network that starts with the photodissociation of NH3 and ends with the formation of N2 and HCN as the main photochemical products. The key condition for the synthesis of HCN is the photodissociation of NH3 in the presence of CH4 and at a temperature >∼ 200 K. This condition allows CH3, one of the ingredients for the synthesis of HCN, to be produced locally by the reaction between CH4 and H, and this H is produced by the photodissociation of NH3 itself. We describe the details as follows.

[Figure 11. Chemical network from the photodissociation of NH3 in temperate and H2-dominated atmospheres. Not all reactions are shown, and the importance of the shown reactions changes from case to case. In the presence of CH4, HCN is one of the main photochemical products.]
The photodissociation of NH3 mainly produces NH2,

NH3 + hν → NH2 + H, (R1)

and some of the NH2 produced is returned to NH3 via

NH2 + H2 → NH3 + H, (R2)

and

NH2 + H + M → NH3 + M. (R3)

Another channel of the photodissociation of NH3 is to produce NH (Reaction R4). The NH2 channel requires photons more energetic than 230 nm and the NH channel requires photons more energetic than 165 nm. Therefore, the photons that produce NH are more easily shielded by H2O and CH4. For the three planets modeled, the NH channel is important in K2-18 b and Kepler-167 e, but not in PH2 b. This is because the photodissociation of NH3 occurs at higher pressures in PH2 b and is subject to the shielding effect of both H2O and CH4. The NH channel mostly leads to N2 (Figure 11). The NH2 that is not recombined to form NH3 can undergo

NH2 + NH2 + M → N2H4 + M, (R5)

and the N2H4 produced (if not condensed out) can then become N2H3. N2H3 can react with itself to form N2H2, whose photodissociation produces N2, or with H to return to NH2 (Figure 11). The other loss of NH2 is to react with CH3,

NH2 + CH3 + M → CH3NH2 + M, (R6)

followed by photodissociation of the product to form HCN (Reaction R7). Reaction (R6) is the critical step in this HCN formation mechanism, and it requires the CH3 radical to be available. The CH3 in Reaction (R6) is mainly produced by

CH4 + H → CH3 + H2. (R8)

Note that the photodissociation of CH4, which also produces CH3, does not contribute significantly to the source of CH3 in Reaction (R6) because the photodissociations of CH4 and NH3 typically occur at very different pressures. Another formational path of HCN is through the reaction of NH2 with C2H3 (Reaction R9), followed by photodissociation of the product (Reaction R10). The C2H3 in Reaction (R9) is mainly produced by H addition to acetylene,

C2H2 + H + M → C2H3 + M, (R11)

and C2H2 is ultimately produced by the photodissociation of CH4 and then transported to the pressure of the photodissociation of NH3. The HCN produced in Reactions (R7 and R10) is photodissociated to form CN, but CN quickly reacts with H2 and C2H2 to return to HCN. Thus, HCN does not have a significant net chemical loss and is transported together with N2 through the lower boundary. The NH3-CH4 coupling (Reactions R6-R8) dominates the formation of HCN over the NH3-C2H2 coupling (Reactions R9-R11) in temperate H2-dominated atmospheres by several orders of magnitude. This is because the mixing ratio of C2H2 at the pressure of NH3 photodissociation is typically very small on temperate planets like K2-18 b and PH2 b (Figures 6 and 7). On colder planets like Kepler-167 e, more C2H2 is available and the NH3-C2H2 coupling can contribute 1-10% of the HCN formation, consistent with the results for Jupiter. We also note that past models of warm and hot H2-rich exoplanets suggested different reactions to represent the NH3-CH4 coupling, including NH + CH3 (Line et al. 2011) and N + CH3 (Kawashima & Ikoma 2018; Hobbs et al. 2019); in our models, N + CH3 → HCN + H2 contributes less to the formation of HCN than Reactions (R6-R8) by > 3 orders of magnitude. The efficacy of the NH2 path to produce N2 and HCN, and the branching between N2 and HCN, depend on the abundance of H and the temperature. Reaction (R8) has an activation energy of 33.60 kJ/mol (Baulch et al. 1992) and does not occur at very low temperatures. At the pressure of NH3 photodissociation, the temperature is 220 - 240 K in K2-18 b and PH2 b, and ∼ 110 K in Jupiter. This makes Reaction (R8) faster by six orders of magnitude in K2-18 b and PH2 b than in Kepler-167 e or Jupiter, eventually leading to an efficient HCN production and a high abundance in the middle atmosphere.
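To see why the ∼ 220-240 K versus ∼ 110 K contrast matters so much for Reaction (R8), one can evaluate the Arrhenius exponential alone with the quoted activation energy. This is only an illustration: the full recommended rate expression also carries a power-law temperature dependence, so the precise enhancement factor quoted above comes from the complete kinetics rather than from this simple exponential.

```python
import numpy as np

# Temperature sensitivity of the CH4 + H -> CH3 + H2 step (Reaction R8), using only
# the Arrhenius exponential with the quoted activation energy Ea = 33.60 kJ/mol.
# The numbers illustrate why the reaction effectively shuts off at Jupiter-like
# temperatures while remaining fast at the ~220-240 K NH3-photolysis level of
# K2-18 b and PH2 b.
R  = 8.314      # J / (mol K)
Ea = 33.60e3    # J / mol

for T in (110.0, 220.0, 240.0):
    print(f"T = {T:5.0f} K:  exp(-Ea/RT) = {np.exp(-Ea / (R * T)):.1e}")
```

The Boltzmann factor changes by many orders of magnitude between ∼ 110 K and ∼ 220-240 K, which is the essence of why the mechanism builds up HCN only in the warmer atmospheres.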
This strong temperature dependence is why the HCN production mechanism (Reactions R6-R8) does not operate efficiently in giant planets in the Solar System but can build up HCN in warmer exoplanetary atmospheres. The abundance of H is another important control. From Figure 11, we can see that a higher abundance of H would enhance the recycling from N2H4 to NH2, produce more NH3 to react with NH2, and help the return of NH2 to NH3. In other words, a higher abundance of H would reduce the overall efficacy of the NH2 path but favor the branch that leads to HCN. At the pressure of NH3 photodissociation, the main source of H is the combination of Reactions (R1 and R2), whose net result is the dissociation of H2 but not NH3. The sink of H is mainly Reaction (R3) and the direct recombination H + H + M → H2 + M. In high-metallicity atmospheres, another sink of H is Reaction (R5) followed by

N2H4 + H → N2H3 + H2, (R12)

and

N2H3 + H → NH2 + NH2. (R13)

The net result of Reactions (R5, R12, and R13) is H + H → H2. Therefore, the chemical network that starts with the photodissociation of NH3 is both a source and a sink of H, which feed back to determine the outcome of the network in a non-linear way. For example, the NH2 channel is a minor pathway to form N2 in the solar or 10×solar abundance atmosphere of K2-18 b, but it becomes an important pathway in the 100×solar atmosphere. The abundance of H at the pressure of NH3 photodissociation also explains the different sensitivity of the HCN mixing ratio to the inclusion of the deep-tropospheric source of CO/CO2 and the partial depletion of NH3. For K2-18 b, the reduction in the HCN mixing ratio is small or proportional to the reduction in the input NH3 abundance, but more reduction in the HCN mixing ratio is found for PH2 b (Figures 6 and 7). This is because the photodissociation of NH3 occurs at higher pressures in PH2 b than in K2-18 b. When abundant CO exists, the reactions CO + H + M → HCO + M and HCO + H → CO + H2 efficiently remove H. Note that the first reaction in this cycle is three-body and only significant at sufficiently high pressures. This sink of H results in the reduction of CH3 production (Reaction R8) and thus disfavors the branch in the NH2 path that leads to HCN. To summarize, the numerical models and the HCN formation mechanism presented here indicate that HCN and N2 are generally the expected outcomes of the photodissociation of NH3 in gaseous exoplanets that receive stellar irradiance of approximately Earth's, regardless of the stellar type.

3.4. Photolysis of CH4 Together with H2O

The formation of CO and CO2 as the most abundant photochemical gases of carbon on K2-18 b and PH2 b is another significant finding of our numerical models. The photodissociation of CH4 in colder H2-dominated atmospheres - such as the giant planets' atmospheres in the Solar System - produces hydrocarbons such as C2H6 and C2H2 but not oxygenated species (e.g., Gladstone et al. 1996; Moses et al. 2005). This is because H2O condenses out and is almost completely removed from the above-cloud atmosphere (such as in Kepler-167 e, Figure 8). External sources such as comets and interplanetary dust can supply oxygen to the upper atmospheres of Jupiter and the other giant planets (e.g., Moses et al. 2005; Dobrijevic et al. 2020), but we do not include this source in the present study.

[Figure 12 caption (fragment): ... (e.g., Gladstone et al. 1996; Moses et al. 2005). The photodissociation of H2O provides oxidizing radicals such as OH. When CH4 is photodissociated together with H2O, CO and CO2 can be formed in addition to hydrocarbons.]
For warmer planets, however, H2O is only moderately depleted by condensation. The above-cloud water is photodissociated at approximately the same pressure as CH4 (Figures 6 and 7). The photodissociations of CH4 and H2O together in H2-dominated atmospheres produce a chemical network beyond hydrocarbons (Figure 12) and eventually lead to the formation of CO and CO2. Even warmer atmospheres (e.g., the atmosphere of GJ 1214 b with an effective temperature of 500 - 600 K) may also have CO and CO2 as the most abundant photochemical gases of carbon (e.g., Kempton et al. 2011; Kawashima & Ikoma 2018). The photodissociation of CH4 and the subsequent chemical reactions produce a wealth of hydrocarbons and radicals, and many of them (e.g., C, CH, CH3, C2H2, and C2H4) lead to chemical pathways that form CO (Figure 12). Between K2-18 b and PH2 b and among the modeled metallicities, we do not see a monotonic trend regarding the relative contribution of these CO-forming pathways, probably due to the many chemical cycles and feedbacks in hydrocarbon photochemistry. CO is converted to CO2 by the reaction with OH:

CO + OH → CO2 + H. (R14)

Reaction (R14) is the dominant source of CO2 in all models, and the only significant chemical loss of CO2 is to form CO via either photodissociation or the reaction with elemental sulfur when available (Figure 12). The CO2 that is not returned to CO is then transported through the lower boundary. What are the sources of OH, O, and H that power the chemical pathways shown in Figure 12? At the pressure of CH4 and H2O photodissociation, the source of OH is the photodissociation of water, and the main sink is the reaction with H2:

H2O + hν → OH + H, (R15)

OH + H2 → H2O + H. (R16)

Reaction (R16) is the main sink of OH in all models, which means that the use of OH in the chemical pathways shown in Figure 12 does not usually become the dominant sink of OH. Reactions (R15 and R16) together are equivalent to the net dissociation of H2, which overtakes the photodissociation of CH4 and subsequent hydrocarbon reactions as the dominant source of H in temperate atmospheres. Lastly, the main source of O is the photodissociation of CO and CO2, which eventually traces back to OH and the photodissociation of water. At this point we can explain the ratio between CO2 and CO in the middle atmosphere, which is ∼ 1 on K2-18 b and ∼ 10 on PH2 b (Figures 6 and 7). Because Reaction (R14) is the main source of CO2 and photodissociation is the main sink, the number density of CO2 is ∼ k_R14[CO][OH]/J_CO2, where k is the reaction rate constant and [] denotes the number density of a molecule. Because Reaction (R15) is the main source of OH, the number density of OH is ∝ J_H2O[H2O]. Therefore, the ratio between CO2 and CO is ∝ J_H2O[H2O]/J_CO2. For any given metallicity, the abundance of H2O in the middle atmosphere of PH2 b is 3 - 5-fold greater than that in K2-18 b because PH2 b is slightly warmer (Figures 6 and 7). In addition, J_H2O at the top of the atmosphere on PH2 b is approximately twice that on K2-18 b, while J_CO2 is similar between the two planets (Figure 2). Together, this causes the [CO2]/[CO] ratio to be greater in the atmosphere of PH2 b than in K2-18 b by a factor of ∼ 10. This tendency to maintain the [CO2]/[CO] ratio also controls how the atmosphere reacts to a deep-tropospheric source of CO and CO2 that is applied as input at the lower boundary. The input CO2 is always less than CO by one or more orders of magnitude (Figures 3-5).
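Written out explicitly, the scaling argument above is simple bookkeeping of the two quoted contrasts between the planets (J_CO2, which is similar for both, cancels in the ratio):

$$ \frac{[\mathrm{CO_2}]/[\mathrm{CO}]\,\big|_{\mathrm{PH2\,b}}}{[\mathrm{CO_2}]/[\mathrm{CO}]\,\big|_{\mathrm{K2\text{-}18\,b}}} \;\approx\; \underbrace{\frac{J_{\mathrm{H_2O}}^{\mathrm{PH2\,b}}}{J_{\mathrm{H_2O}}^{\mathrm{K2\text{-}18\,b}}}}_{\approx\,2} \times \underbrace{\frac{[\mathrm{H_2O}]_{\mathrm{PH2\,b}}}{[\mathrm{H_2O}]_{\mathrm{K2\text{-}18\,b}}}}_{\approx\,3\text{-}5} \;\approx\; 6\text{-}10, $$

which is consistent with the [CO2]/[CO] ratio being ∼ 1 on K2-18 b and ∼ 10 on PH2 b.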
On PH2 b, photochemical processes convert CO into CO 2 in the middle atmosphere (∼ 10 2 Pa), and cause the steady-state mixing ratio of CO 2 to be greater than that of CO. This conversion even becomes a significant sink of H 2 O and causes H 2 O to be depleted in the middle atmosphere in the 100×solar metallicity case (Figure 7). The CO to CO 2 conversion is not so strong in the atmosphere of K2-18 b or Kepler-167 e, and their mixing ratios in the middle atmosphere are largely the input values at the lower boundary ( Figure 6). Finally, let us turn to the impact of H 2 O and NH 3 photodissociation onto the hydrocarbon chemistry. Compared with Kepler-167 e, the mixing ratio of C 2 H 6 -the dominant, supposedly long-lived hydrocarbon -in K2-18 b and PH2 b is smaller and sometimes features an additional peak near the cloud deck (Figures 6-8). Particularly, the atmospheres of K2-18 b and PH2 b have a strong sink of C 2 H 6 at ∼ 1−10 Pa, while the atmosphere of Kepler-167 e does not. This sink is ultimately because of the high abundance of H produced by the photodissociation of H 2 O (Reactions R15 and R16). The detailed reaction path involves the formation of C 2 H 5 from C 2 H 6 (by direct reaction with H or photodissociation to form C 2 H 4 followed by H addition), and then C 2 H 5 + H −−→ 2 CH 3 . Because of the abundance of H, CH 3 mostly combines with H to form CH 4 , rather than recombines to form C 2 H 6 . It is well known that the abundance of hydrocarbons is fundamentally controlled by the relative strength between H + CH 3 M −−→ CH 4 and CH 3 + CH 3 M −−→ C 2 H 6 (e.g. Gladstone et al. 1996;Moses et al. 2005). Here we find that the added H from H 2 O photodissociation results in a net sink for C 2 H 6 in K2-18 b and PH-2 b at ∼ 1 − 10 Pa and limits the abundance of hydrocarbons in their atmospheres. This sink does not exist in the atmosphere of Kepler-167 e, because little H 2 O photodissociation occurs in its atmosphere. Additionally, near the cloud deck, the temperature is warmer, and Reaction (R8) that uses H from the photodissociation of NH 3 provides an additional source of CH 3 , and some of the CH 3 becomes C 2 H 6 and thus its peak near the cloud deck. The formation of hydrocarbons is thus strongly impacted by the water and nitrogen photochemistry. 3.5. Spectral Features of H 2 O, CH 4 , NH 3 , and Photochemical Gases 3.5.1. Transmission spectra Figures 13-15 show the transmission spectra of the temperate and cold planets K2-18 b, PH2 b, and Kepler-167 e, based on the gas and sulfur haze profiles simulated by the photochemical models. These modeled spectra can be regarded as the canonical examples of a temperate (Earth-like insolation) planet irradiated by an M dwarf star (K2-18 b, and also TOI-1231 b), a temperate (Earth-like insolation) planet irradiated by a G/K star (PH2 b), and a cold (∼ 0.1× Earth insolation) planet irradiated by a G/K star (Kepler-167 e). Here we focus on the wavelength range of 0.5 -5.0 µm, where several instruments on JWST will provide spectral capabilities (e.g., Beichman et al. 2014). For K2-18 b, the equilibrium gases CH 4 , H 2 O, and NH 3 , as well as the photochemical gas HCN have potentially detectable spectral features in the visible to mid-infrared wavelengths ( Figure 13). Adding deeptropospheric source of CO, CO 2 , and N 2 and sink of NH 3 does not cause a significant change of the spectrum of a 10×solar metallicity atmosphere. 
However, a 100×solar metallicity atmosphere with deep-tropospheric source and sink would be free of the spectral features of NH 3 or HCN, but instead have potentially detectable features of CO 2 and CO. Strikingly, the models from 1× to 100× solar abundance and with the standard eddy diffusion coefficient provide good fits to the existing transit depth measurements by K2, Hubble, and Spitzer (Tsiaras et al. 2019;Benneke et al. 2019). The models with a 100-fold greater eddy diffusion coefficient would have the sulfur haze layer extending to 10 2 Pa and mute the spectral features in 1.1 -1.7 µm, at odds with the Hubble data. Both CH 4 and H 2 O contribute to the spectral modulations seen by Hubble, which may have caused the difficulties in the identification of the gases by spectral retrieval (Tsiaras et al. 2019;Benneke et al. 2019;Blain et al. 2021). HCN, one of the most abundant photochemical gases in the middle atmosphere, is likely detectable in K2-18 b via its spectral band at ∼ 3.0 µm. The HCN is produced from the photodissociation of NH 3 in presence of CH 4 . Also at 3.0 µm are the absorption bands of NH 3 and to a lesser extent C 2 H 2 . It would be possible to disentangle these bands with a reasonably wide wavelength coverage because NH 3 has multiple and more prominent bands in the mid-infrared (Figure 13), and because C 2 H 2 should have a minimal abundance in the middle atmosphere ( Figure 6) and contribute little to the transmission spectra. The spectral bands of CO 2 and CO can be seen in the modeled spectra (in 4 -5 µm) of K2-18 b only when the atmosphere has super-solar metallicity and the transport from the deep troposphere is taken into account ( Figure 13). In other words, the CO and CO 2 that are produced from the photodissociation of CH 4 together with H 2 O would have too low mixing ratios to be detected. The photodissociation of CH 4 also produces C 2 H 6 . While C 2 H 6 has strong bands at 3.35 and 12 µm, they would not be detectable due to its relatively low abundance and the strong CH 4 and NH 3 bands at the same wavelength, respectively ( Figure 13). For PH2 b, prominent spectral bands of CH 4 , H 2 O, and the photochemical gases CO 2 and HCN can be expected ( Figure 14). NH 3 is not detectable because it is depleted by photodissociation to the cloud deck (Figure 7). Even though its pressure of photochemical depletion can be reduced to ∼ 10 2 Pa for a large eddy diffusion coefficient, the sulfur haze in that case would mute spectral features that are generated from approximately the same pressure levels ( Figure 10) and thus cause NH 3 to be undetectable. HCN, CO 2 , and CO are the most abundant photochemical gases ( Figure 7); but the CO bands are intrinsically weaker and so CO 2 and HCN are the detectable photochemical gases via their spectral bands at 4.2 and 3.0 µm, respectively. Similar to K2-18 b, adding deep-tropospheric source of CO, CO 2 , and N 2 and sink of NH 3 does not cause a significant change of the spectrum of a 10×solar metallicity atmosphere. However, a 100×solar metellicity atmosphere with deep-tropospheric source and sink would not have the spectral features of H 2 O or HCN and have more prominent features of CO 2 and CO, as predicted by the photochemical model (Figure 7). Figure 13. Modeled transmission spectra of the temperate sub-Neptune K2-18 b for varied metallicities (a) and varied eddy diffusion coefficients at the solar metallicity (b). 
The dashed lines show model spectra with deep-tropospheric source of CO, CO2, and N2 and sink of NH3. All models with the standard eddy diffusion coefficient fit the observed transit depths. The equilibrium gases (CH4, H2O, and NH3) and the photochemical gas HCN are detectable in the wavelength range of 0.5 -5.0 µm. The 100×solar metallicity atmosphere with deep-tropospheric source and sink can have detectable features of CO2 and CO. Lastly for the cold planet Kepler-167 e, the transmission spectra will be dominated by the absorption bands of CH 4 (Figure 15), as H 2 O is completely removed by condensation and NH 3 by condensation and photodissociation. For a large eddy diffusion coefficient, the pressure of photochemical depletion of NH 3 can be reduced to ∼ 10 3 Pa ( Figure 10) and this can produce a spectral band of NH 3 at ∼ 3.0 µm. Thus, a search for this absorption band in the transmission spectra may constrain the eddy diffusion coefficient, although to distinguish it with a small peak due to the combined absorption of the photochemical gases HCN and C 2 H 2 (Figure 15) may involve quantification through photochemical models. The main photochemical gas in this cold atmosphere C 2 H 6 has spectral bands at 3.35 and 12 µm. The 3.35-µm band is buried by a strong CH 4 band, and while not shown in Figure 15, the 12-µm band might be detectable given appropriate instrumentation with the spectral capability in the corresponding wavelength range. Finally, the deep-troposphere-sourced CO 2 and CO in a 100×solar metallicity atmosphere may produce detectable spectral features in 4 -5 µm. To summarize, transmission spectroscopy from the visible to mid-infrared wavelengths can provide the sensitivity to detect the equilibrium gases CH 4 and H 2 O, and the photochemical gases HCN, and in some cases CO 2 in temperate/cold and H 2 -rich exoplanets. We do not expect C 2 H 6 to be detectable. NH 3 would be de- tectable on temperate planets around M dwarf stars but not detectable on temperate planets around G/K stars. The deep-tropospheric source and sink can have a major impact only on the transmission spectrum of a 100×solar metallicity atmosphere, where typically the features of NH 3 and HCN would be reduced and those of CO 2 and CO would be amplified. The detection and non-detection of these gases will thus test the photochemical model and improve our understanding of the photochemical mechanisms as well as tropospheric transport in temperate/cold and H 2 -dominated atmospheres. Spectra of the reflected starlight The temperate and cold planets around G/K stars are widely separated from their host stars and may thus also be characterized in the reflected starlight by direct imaging. Figure 16 shows the geometric albedo spectra of PH2 b and Kepler-167 e in the visible and nearinfrared wavelengths that approximately correspond to the Roman Space Telescope's coronagraph instrument (Kasdin et al. 2020) and its potential Starshade Rendezvous (Seager et al. 2019) and the HabEx concept (Gaudi et al. 2020). While PH2 b and Kepler-167 e themselves are not potential targets for these missions, their albedo spectra broadly resemble the targets in the temperate (PH2 b) and cold (Kepler-167 e) regimes. The spectral features of CH 4 and H 2 O can be seen in the reflected starlight of PH2 b. This ability to detect H 2 O in giant planets warmer than Jupiter is consistent with MacDonald et al. (2018). 
In addition to the absorption features of CH4 and H2O, the albedo spectra of PH2 b feature the absorption of the sulfur (S8) haze layer at wavelengths shorter than ∼ 0.5 µm. This result is consistent with the findings of Gao et al. (2017). For a greater eddy diffusion coefficient, the sulfur haze layer is higher and the spectral features of CH4 and H2O become weaker. Interestingly, the absorption features of H2O are the most prominent in the solar-abundance case, and they are somewhat swamped by the adjacent CH4 features at higher metallicities. This is because, as H2O condenses out, the above-cloud mixing ratio of H2O only slightly increases with the metallicity, while that of CH4 increases proportionally (Figure 7). Only the absorption of CH4 can be seen in the albedo spectra of Kepler-167 e, as H2O is depleted by condensation. On both planets, the spectral features of NH3 are not seen due to its weak absorption (Irwin et al. 2018) and photochemical depletion down to the cloud deck (Figures 7 and 8). The deep-tropospheric source and sink have minimal impact on the albedo spectra, except in the 100×solar metallicity atmosphere of PH2 b, where a reduction of the CH4 features can be seen.

DISCUSSION

The results and analyses presented in Section 3 indicate that the temperate and H2-rich exoplanets, particularly [...] (Patel et al. 2015), and the exoplanet observations may constrain the photochemical pathways for its formation in primordial planetary atmospheres. For K2-18 b, our model predicts that the spectral features of CH4 can have a size of ∼ 80 ppm in the transit depth, and those of H2O, NH3, HCN, and CO2 (from the deep troposphere) would have a size of 30 - 60 ppm. These quantities are substantially above the current estimate of the potential "noise floor" of the near-infrared instruments on JWST (<∼ 10 ppm, Schlawin et al. 2020, 2021), and are thus likely measurable. These spectral features may also be within the reach of ARIEL (Tinetti et al. 2018; Changeat et al. 2020). As an example, we have used PandExo (Batalha et al. 2017) to estimate the overall photometric uncertainties achieved by observing the transits of K2-18 b with the G235H and G395H gratings of the NIRSpec instrument on JWST. These two channels would cover the wavelength range of 1.7 - 5.2 µm and thus provide the sensitivity to the spectral features shown in Figure 13. We find that with two visits in G235H and four visits in G395H, the overall photometric precision would be ∼ 20 ppm per spectral element at the resolution of R = 100 in both wavelength channels, and this precision should enable the detection of CH4, H2O, NH3, the photochemical gas HCN, and possibly CO2. If the spectral resolution were reduced to R = 50, the number of visits could be halved, but this could cause spectral ambiguity between NH3 and HCN because they both have absorption bands at ∼ 3.0 µm (Figure 13). Spectral ambiguity in transmission spectra with a resolution of R ∼ 50 or less has recently been shown with Hubble at 1.1 - 1.7 µm (Mikal-Evans et al. 2020). The size of the transmission spectral features expected for temperate and cold gas giants around G/K stars, such as PH2 b and Kepler-167 e, is small but probably not prohibitive. For example, our model predicts that the spectral features of CH4 can have a size of ∼ 50 ppm in the transit depth, and those of H2O, CO2, and HCN would have a size of 20 - 30 ppm.
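These feature amplitudes are consistent with a simple scale-height estimate. The sketch below uses approximate literature values for K2-18 b and its host star (they are assumptions of this illustration, not the parameters adopted in the paper) together with the standard rule of thumb that a strong molecular band modulates the transit depth by a few scale heights.

```python
import numpy as np

# Back-of-the-envelope transmission feature size: delta ~ 2 * n * H * Rp / Rs^2
# for n scale heights of extra blocked area.  The planet and star parameters below
# are approximate values assumed for this sketch only.
G, kb, mH = 6.674e-11, 1.381e-23, 1.673e-27
Rearth, Mearth, Rsun = 6.371e6, 5.972e24, 6.957e8

Rp, Mp = 2.6 * Rearth, 8.6 * Mearth      # K2-18 b (approximate)
Rs     = 0.44 * Rsun                     # M-dwarf host (approximate)
T, mu  = 270.0, 2.3                      # temperate, H2-dominated

g = G * Mp / Rp**2                       # gravity, m/s^2
H = kb * T / (mu * mH * g)               # scale height, m
per_scale_height = 2.0 * H * Rp / Rs**2  # transit-depth change per scale height

print(f"H ~ {H/1e3:.0f} km; ~{per_scale_height*1e6:.0f} ppm per scale height")
print(f"A 2-3 scale-height band -> ~{2*per_scale_height*1e6:.0f}-{3*per_scale_height*1e6:.0f} ppm")
```

A band spanning 2-3 scale heights then corresponds to a few tens of ppm, in line with the ∼ 30-80 ppm feature sizes quoted above for K2-18 b.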
Several visits may need to be combined to achieve the photometric precision to detect these gases. Complementary to transmission spectroscopy, future direct-imaging missions can readily detect CH 4 , H 2 O, and clouds (e.g., Damiano & Hu 2020), as well as the sulfur haze produced by atmospheric photochemistry. While we focus on temperate and cold planets in this paper, the photochemical mechanisms and the predictions on the gas formation and spectral features should remain applicable to the planets that are only slightly warmer than K2-18 b and PH2 b. This is because the results on these planets do not rely on the formation of water clouds. We suspect that the results should be applicable as long as the dominant O, C, N, S species in thermochemical equilibrium with H 2 are H 2 O, CH 4 , NH 3 , and H 2 S and the assumptions on other atmospheric parameters (e.g., the eddy diffusion coefficient) remain broadly valid. The eddy diffusion coefficient adopted in this work corresponds to that of Jupiter (Conrath & Gierasch 1984) and features a minimum at the bottom of the stratosphere. This minimum value is also close to the eddy diffusion coefficient at the tropospherestratosphere boundary of Earth's atmosphere (Massie & Hunten 1981). However, the adopted eddy diffusion coefficient at the bottom of the stratosphere is smaller than the values used in past photochemical models of warmer exoplanets (e.g., GJ 1214 b and GJ 436 b, Kempton et al. 2011;Hu & Seager 2014) or the values derived from a 3D particulate tracertransport model conditioned on hot Jupiters (Parmentier et al. 2013) by several orders of magnitude. We note that Earth, the cold giant planets in the Solar System, and the modeled K2-18 b Charnay et al. 2021) all have temperature inversion and thus a true stratosphere, while atmosphere models of the warm exoplanets GJ 1214 b and GJ 436 b do not predict temperature inversion (e.g., Kempton et al. 2011;. The lower temperature and the temperature inversion may both contribute to the lower eddy diffusion coefficient on temperate and cold exoplanets. Predictive models of the eddy diffusion coefficient in exoplanets are being developed (e.g., Zhang & Showman 2018a,b) and can be tested by future observations as shown in Figures 13-15. We have also shown in Section 3 that the deeptropospheric source of CO, CO 2 , and N 2 and sink of NH 3 can substantially change the composition of the observable part of the atmosphere -and the transmission spectrum -if the atmosphere has 100×solar metallicity. The main change is the reduction of NH 3 and HCN and the enhancement of CO and CO 2 in the spectrum. As such, detecting and measuring the abundance of these gases in the temperate H 2 -dominated atmosphere may provide constraints on the temperature and the strength of vertical mixing in the deep troposphere (e.g., Fortney et al. 2020). One should note that modification of the deep-tropospheric abundance of gases by photochemical processes will be important in this endeavor: NH 3 is expected to be depleted anyway and CO 2 should overtake CO as the main carbon molecule in the middle atmosphere of temperate and H 2 -rich exoplanets of G/K stars. A recently published study of atmospheric photochemistry in the atmosphere of K2-18 b (Yu et al. 2021) came to our notice during the peer-review phase of this work. The "no-surface" case in Yu et al. (2021) has a comparable physical picture as the 100×solar metallicity case with the deep-tropospheric source and sink presented in Figure 6. 
A common feature is that such an atmosphere would be rich in CO and CO 2 , and the difference in the profiles of HCN and other photochemical gases between the models may be due to the assumed profile of eddy diffusivity. Lastly, we emphasize that several effects of potential importance have not been studied in this work. A more accurate pressure-temperature profile from 1D or 3D models may improve the prediction on the extent of water vapor depletion by condensation. A temperature inversion would result in higher temperatures in the upper stratosphere than what has been adopted here, and this may have an impact on the efficacy and relative importance of chemical pathways. A more accurate pressure-temperature profile and vertical mixing modeling for the deep troposphere may improve the prediction and perhaps remove the need for the endmember scenarios as presented. On planets that are expected to be tidally locked, the transmission spectra are controlled by the chemical abundance at the limb (e.g., Steinrueck et al. 2019;Drummond et al. 2020), and thus the horizontal transport of long-lived photochemical gases such as HCN and CO 2 may be important. Finally, we have not included hydrocarbon haze in this study, while it can form with both C 2 H 2 and HCN in the atmosphere (Kawashima et al. 2019). We hope that the present work will help motivate future studies to address these potential effects. CONCLUSION We have studied the photochemical mechanisms in temperate/cold and H 2 -rich exoplanets. For the H 2 -rich planets (giants and mini-Neptunes) that receive stellar irradiance of approximately Earth's, we find that the main photochemical gases are HCN and N 2 . The synthesis of HCN requires the photodissociation of NH 3 in presence of CH 4 at a temperature >∼ 200 K. NH 3 is dissociated near the water cloud deck and thus has a minimal mixing ratio in the middle atmosphere (10 -10 3 Pa) if the planet orbits a G/K star, but NH 3 can remain intact in the middle atmosphere if the planet orbits an M star. Additional photochemical gases include CO, CO 2 , C 2 H 6 , and C 2 H 2 . CO and CO 2 are the main photochemical gas of carbon because of the photodissociation of H 2 O together with CH 4 . The photodissociation of H 2 O also strongly limits the abundance of photochemical hydrocarbons in the atmosphere. For the planets that receive stellar irradiance of approximately 0.1× Earth's, the formation of HCN is limited by the low temperature, CO 2 or CO is not produced due to nearly complete removal of H 2 O by condensation, and the main photochemical gases are C 2 H 6 and C 2 H 2 . The photochemical models of the temperate sub-Neptune K2-18 b assuming 1 − 100×solar abundance result in transmission spectra that fit the current measurements from K2, Hubble, and Spitzer. Both CH 4 and H 2 O contribute to the spectral modulation seen by Hubble. Transmission spectroscopy with JWST and ARIEL will likely provide the sensitivity to detect the equilibrium gases CH 4 , H 2 O, and NH 3 , the photochemical gas HCN, and in some cases CO 2 . C 2 H 6 is unlikely to be detectable due to its low mixing ratio and spectral feature overwhelmed by CH 4 . Transmission spectroscopy of the temperate giant planets around G/K stars will likely provide the sensitivity to detect CH 4 , H 2 O, and the photochemical gases HCN and CO 2 , complementing future spectroscopy in the reflected light by direct imaging. 
If the eddy diffusion coefficient is greater than that in Jupiter by two orders of magnitude, the sulfur haze layer would subdue the transmission spectral features -but this situation is unlikely for K2-18 b because of the detected spectral modulation. These results are also applicable to similarly irradiated H 2 -rich exoplanets, including TOI-1231 b and LHS-1140 b if they have H 2 -dominated atmospheres. The results here indicate that the temperate/cold and H 2 -rich exoplanets, which often represent a temperature and atmospheric composition regime that is not found in the Solar System, likely have rich chemistry above clouds that leads to a potpourri of photochemical gases, some of which will build-up to the abundance detectable by transmission spectroscopy soon. The detection of atmospheric photochemical products in K2-18 b and other temperate exoplanets would expand the types of molecules detected in exoplanet atmospheres and greatly advance our understanding of the photochemical processes at works in low-temperature exoplanets.
A Comprehensive Renormalisation Group Analysis of the Littlest Seesaw Model We present a comprehensive renormalisation group analysis of the Littlest Seesaw model involving two right-handed neutrinos and a very constrained Dirac neutrino Yukawa coupling matrix. We perform the first $\chi^2$ analysis of the low energy masses and mixing angles, in the presence of renormalisation group corrections, for various right-handed neutrino masses and mass orderings, both with and without supersymmetry. We find that the atmospheric angle, which is predicted to be near maximal in the absence of renormalisation group corrections, may receive significant corrections for some non-supersymmetric cases, bringing it into close agreement with the current best fit value in the first octant. By contrast, in the presence of supersymmetry, the renormalisation group corrections are relatively small, and the prediction of a near maximal atmospheric mixing angle is maintained, for the studied cases. Forthcoming results from T2K and NOvA will decisively test these models at a precision comparable to the renormalisation group corrections we have calculated. Introduction Despite the impressive experimental progress in neutrino oscillation experiments, [1], the dynamical origin of neutrino mass generation and lepton flavour mixing remains unknown [2,3]. Furthermore, the octant of the atmospheric angle is not determined yet, and its precise value is uncertain. While T2K prefers a close to maximal atmospheric mixing angle [4], NOvA excludes maximal mixing at 2.6σ CL [5]. The forthcoming results from T2K and NOvA will hopefully clarify the situation. An accurate determination of the atmospheric angle is important in order to test predictive neutrino mass and mixing models. The leading candidate for a theoretical explanation of neutrino mass and mixing remains the seesaw mechanism [6][7][8][9][10]. However the seesaw mechanism involves a large number of free parameters. One approach to reducing the seesaw parameters is to consider the minimal version involving only two right-handed neutrinos, first proposed by one of us [11,12]. In such a scheme the lightest neutrino is massless. A further simplification was considered by Frampton, Glashow and Yanagida [13], who assumed two texture zeros in the Dirac neutrino mass matrix M D and demonstrated that both neutrino masses and the cosmological matter-antimatter asymmetry could be explained in this economical setup via the seesaw and leptogenesis mechanisms [14]. The phenomenology of the minimal seesaw model was subsequently fully explored in the literature [15][16][17][18][19][20][21]. In particular, the normal hierarchy (NH) case in the Frampton-Glashow-Yanagida model has been shown to be already excluded by the latest neutrino oscillation data [20,21]. An alternative to having two texture zeros is to impose constraints on the Dirac mass matrix elements. For example, the Littlest Seesaw (LS) model consists of two right-handed (RH) neutrino singlets N atm R and N sol R together with a tightly constrained Dirac neutrino Yukawa coupling matrix, leading to a highly predictive scheme [22][23][24][25][26][27]. Since the mass ordering of the RH neutrinos as well as the particular choice of the Dirac neutrino Yukawa coupling matrix can vary, it turns out that there are four distinct LS cases, namely cases A, B, C and D, as defined later. These four cases of the LS model will be discussed in detail in the present paper. 
In particular we are interested in the phenomenological viability of these four cases of the LS model defined at the scale of some grand unified theory (GUT) when the parameters are run down to low energy where experiments are performed. A first study of the renormalisation group (RG) corrections to the LS model was performed in [28]. The purpose of the present paper is to improve on that analysis and to focus on the cases where the RG corrections are the most important. It is therefore briefly reviewing the progress and limitations of the approach and results in [28]. In [28] the authors focussed on analytically understanding the RG effects on the neutrino mixing angles for cases A and B in great detail and threshold effects were discussed due to two fixed RH neutrino masses, taken as 10 12 GeV and 10 15 GeV, close to the scale of grand unified theories Λ GUT = 2 × 10 16 GeV [28]. These analytical results were verified numerically. Furthermore, cases C and D were investigated numerically. However, the RG running of neutrino masses and lepton flavour mixing parameters were calculated at low energies, always assuming phenomenological best fit values at high energies, which was justified a posteriori by the fact that in most cases the RG corrections to the neutrino mass ratio 3 as well as the mixing angles were observed to be rather small [28]. Such cases with small RG corrections lead to an atmospheric mixing angle close to its maximal value, which is in some tension with the latest global fits. To account for the running of the neutrino masses, Ref. [28] modified the Dirac neutrino Yukawa matrix by an overall factor of 1.25 with respect to the best fit values obtained from tree-level analyses. This factor was chosen based on scaling the neutrino masses for case A to obtain appropriate values at the EW scale, and subsequently used for all four LS cases. In other words, the numerical analysis of Ref. [28] chose input parameters that where extracted from a tree-level best fit, and adjusted them by an overall factor based on one specific case to include some correction for the significant running in the neutrino masses. There are several problems with the above approach [28], as follows: • The overall factor of 1.25 to the Dirac neutrino Yukawa matrix implies that only the running of the neutrino masses themselves is significantly affected by the choice of input parameters, while the neutrino mixing angles are still stable. Furthermore, it assumes that keeping the ratio of the input parameters unchanged when incorporating RG effects is reasonable. Both assumptions turn out to be incorrect. • Having modified the Dirac neutrino Yukawa matrix based on case A, Ref. [28] employs the same factor for cases B, C and D, although the running behaviour can change fundamentally with the LS case. • Most importantly, as mentioned above, the RG running of neutrino masses and lepton flavour mixing parameters were calculated at low energies, assuming phenomenological best fit values at high energies. Clearly the correct approach would be to perform a complete scan of model input parameters in order to determine the optimum set of high energy input values from a global fit of the low energy parameters. This is what we will do in this paper. As a consequence, the measure of the goodness-of-fit 4 yields less than mediocre results for the input parameters used in Ref. [28]: χ 2 A,B (Λ EW ) ≈ 50, and χ 2 C,D (Λ EW ) ≈ 175. 
In comparison, our complete scan here will reveal much improved best fit scenarios with χ 2 A (Λ EW ) = 7.1, χ 2 B (Λ EW ) = 4.2, χ 2 C (Λ EW ) = 3.2 and χ 2 D (Λ EW ) = 1.5. In the present paper, then, we will perform a detailed RG analysis of the LS model, including those cases where the RG corrections can become significant. As such it is no longer sufficient to fix the input parameters by fitting to the high energy masses and mixing angles. Consequently, we perform a complete scan of model parameters for each case individually, to determine the optimum set of high energy input values from a global fit of the low energy parameters which include the effects of RG running, and to reassess whether RG corrections might still be sufficient to obtain a realistic atmospheric mixing angle. We shall find that the largest corrections occur in the Standard Model (SM), although we shall also perform a detailed analysis of the Minimal Supersymmetric Standard Model (MSSM) 5 for various values of tan β for completeness, however, since the RG corrections there are relatively small, we relegate those results to an Appendix. In all cases we perform a χ 2 analysis of the low energy masses and mixing angles, including RG corrections for various RH neutrino masses and mass orderings. The layout of the remainder of the paper is as follows. In Sec. 2 we review the LS model and define the four cases A,B,C,D which we shall analyse. In Sec. 3 we discuss qualitatively the expected effects of RG corrections in the LS models. We focus on some key features that will help understand the findings in later sections, instead of aiming at a complete discussion of the RG effects. In Sec. 4 we introduce the χ 2 function that we use to analyse our results. In Sec. 5 we discuss the SM results in some detail, since this is where the RG corrections can be the largest, serving to reduce the atmospheric angle from its near maximal value at high energy to close to the best fit value at low energy in some cases. Sec. 6 discusses the results for the RG analysis of the LS model in the MSSM. In Sec. 7 we compare the MSSM results to those of the SM, and show that the RG corrections in the SM are more favourable. Sec. 8 concludes the paper. Appendix A introduces the notation needed to discuss benchmark scenarios for the LS model in the MSSM, and Appendix B displays tables with the results of all MSSM scenarios investigated. Littlest Seesaw The seesaw mechanism [6][7][8][9][10] extends the standard model (SM) with a number of righthanded neutrino singlets N iR as, where L andH ≡ iσ 2 H * stand respectively for the left-handed lepton and Higgs doublets, E R and N R are the right-handed charged-lepton and neutrino singlets, Y l and Y ν are the charged-lepton and Dirac neutrino Yukawa coupling matrices, M R is the Majorana mass matrix of right-handed neutrino singlets. Physical light effective Majorana neutrino masses are generated via the seesaw mechanism, resulting in the light left-handed Majorana neutrino mass matrix The Littlest Seesaw Model model (LS) extends the SM by two heavy right-handed neutrino singlets with masses M atm and M sol and imposes constrained sequential dominance (CSD) on the Dirac neutrino Yukawa couplings. The particular choice of structure of Y A,B,C,D ν and heavy mass ordering M A,B,C,D R defines the type of LS, as discussed below. All four cases predict a normal mass ordering for the light neutrinos with a massless neutrino m 1 = 0. 
In the flavour basis, where the charged leptons and right-handed neutrinos are diagonal, the Cases A,B are defined by the mass hierarchy M atm M sol , and hence M R = Diag{M atm , M sol }, and the structure of the respective Yukawa coupling matrix: with a, b, η being three real parameters and n an integer. These scenarios were analysed in [28] with heavy neutrino masses of M atm = M 1 = 10 12 GeV and M sol = M 2 = 10 15 GeV. Considering an alternative mass ordering of the two heavy Majorana neutrinos -M atm M sol , and consequently M R = Diag{M sol , M atm } -we have to exchange the 4 two columns of Y ν in Eq. (3), namely, which we refer to as Cases C,D. For M atm = M 2 = 10 15 GeV and M sol = M 1 = 10 12 GeV, both these cases were studied in [28]. We apply the seesaw formula in Eq. Note the seesaw degeneracy of Cases A,C and Cases B,D, which yield the same effective neutrino mass matrices, respectively. Studies which ignore renormalisation group (RG) running effects do not distinguish between these degenerate cases. Of course in our RG study the degeneracy is resolved and we have to separately deal with the four physically distinct cases. The neutrino masses and lepton flavour mixing parameters at the electroweak scale Λ EW ∼ O(1000 GeV) can be derived by diagonalising the effective neutrino mass matrix via From a neutrino mass matrix as given in Eqs. (5) and (6), one immediately obtains normal ordering with m 1 = 0. Furthermore, these scenarios only provide one physical Majorana phase σ. As discussed above, we choose to start in a flavour basis, where the righthanded neutrino mass matrix M R and the charged-lepton mass matrix M l are diagonal. Consequently, the PMNS matrix is given by U P M N S = U † νL . We use the standard PDG parametrisation for the mixing angles, and the CP-violating phase δ. Within our LS scenario, the standard PDG Majorana phase ϕ 1 vanishes and −ϕ 2 /2 = σ. The low-energy phenomenology in the LS model case A has been studied in detail both numerically [22,23] and analytically [24], where it has been found that the best fit to experimental data of neutrino oscillations is obtained for n = 3 for a particular choice of phase η ≈ 2π/3, while for case B the preferred choice is for n = 3 and η ≈ −2π/3 [22,26]. Due to the degeneracy of cases A,C and cases B,D at tree level, the preferred choice for n and η carries over, respectively. The prediction for the baryon number asymmetry in our Universe via leptogenesis within case A is also studied [25], while a successful realisation of the flavour structure of Y ν for case B in Eq. (3) through an S 4 × U (1) flavour symmetry is recently achieved in Ref. [26], where the symmetry fixes n = 3 and η = ±2π/3. With the parameters n = 3 and η = ±2π/3 fixed, there are only two remaining real free Yukawa parameters in Eqs. (3) and (4), namely a, b, so the LS predictions then depend on only two real free input combinations m a = a 2 v 2 /M atm and m b = b 2 v 2 /M sol , in terms of which all neutrino masses and the PMNS matrix are determined. For instance, if m a and m b are chosen to fix m 2 and m 3 , then the entire PMNS mixing matrix, including phases, is determined with no free parameters. 
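As a check of how constrained the LS predictions are, the short sketch below builds the effective neutrino mass matrix from the two input combinations and diagonalises it. It assumes the commonly quoted LS form m_ν = m_a Φ_atm Φ_atmᵀ + m_b e^{iη} Φ_sol Φ_solᵀ with Φ_atm = (0,1,1) and Φ_sol = (1,n,n−2); since the explicit Yukawa textures are not reproduced above, the mapping of this particular column choice onto case A versus case B is an assumption of the sketch. The benchmark values used are those quoted in the following paragraph.

```python
import numpy as np

# Sketch: construct an LS effective neutrino mass matrix and extract mixing angles.
# The column vectors and the case assignment are assumptions of this illustration.
ma, mb = 26.57e-3, 2.684e-3           # eV, benchmark values quoted in the text
n, eta = 3, 2.0 * np.pi / 3.0

phi_atm = np.array([0.0, 1.0, 1.0])
phi_sol = np.array([1.0, float(n), float(n - 2)])
m_nu = ma * np.outer(phi_atm, phi_atm) + mb * np.exp(1j * eta) * np.outer(phi_sol, phi_sol)

# Diagonalise the Hermitian combination m_nu m_nu^dagger; its eigenvectors give the
# PMNS matrix up to phases (charged leptons are diagonal in this basis, m1 = 0).
w, U = np.linalg.eigh(m_nu @ m_nu.conj().T)   # eigenvalues ascending: m1^2, m2^2, m3^2
masses = np.sqrt(np.abs(w))

theta13 = np.degrees(np.arcsin(np.abs(U[0, 2])))
theta12 = np.degrees(np.arctan2(np.abs(U[0, 1]), np.abs(U[0, 0])))
theta23 = np.degrees(np.arctan2(np.abs(U[1, 2]), np.abs(U[2, 2])))

print("m2, m3 [meV]:", np.round(masses[1:] * 1e3, 2))
print("theta12, theta13, theta23 [deg]:", round(theta12, 1), round(theta13, 1), round(theta23, 1))
```

Because the charged-lepton and heavy-neutrino mass matrices are diagonal in this basis, the unitary matrix that diagonalises m_ν m_ν† is the PMNS matrix up to phases, which is all that is needed for the mixing angles; RG running, which is the subject of this paper, is of course not included in this tree-level sketch.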
Using benchmark parameters (m_a = 26.57 meV, m_b = 2.684 meV, n = 3, η = ±2π/3), it turns out that the LS model predicts close to maximal atmospheric mixing at the high scale, θ_23 ≈ 46° for Case A, or θ_23 ≈ 44° for Case B [26]. Both predictions are challenged by the latest NOvA results in the ν_μ disappearance channel [29], which indicate that θ_23 = 45° is excluded at the 2.5σ CL, although T2K measurements in the same channel continue to prefer maximal mixing [30]. Since no RG running is included so far, Cases C and D predict the same atmospheric angles upon inserting the benchmark parameters.

RGE Running in Littlest Seesaw Scenarios

Although the best-fit input parameters in the present paper were determined by numerically solving the RGEs, we briefly recap some features of the RG running in the LS model to facilitate understanding the distinctive behaviour of the different cases. This qualitative discussion is based on the more thorough analytical approaches in Refs. [28,31]. We switch from denoting the heavy right-handed neutrino masses by M_atm, M_sol to labelling them by M_1, M_2, to avoid mixing up the different cases and their opposite orderings of heavy neutrino masses. That is to say, irrespective of the case discussed, M_2 always denotes the higher scale and M_1 the lower one. For the LS, there are three different energy regimes of interest. Starting at the GUT scale, we can use the full theory's parameters and RGEs to describe the evolution down to μ = M_2. At μ = M_2, the heavier N_R is integrated out, and the light neutrino mass matrix as well as the RGEs have to be adapted. It is important to carefully match the full theory onto the effective field theory (EFT) below this seesaw scale, denoted by EFT 1. Using the modified RGEs, the parameters are further evolved down to μ = M_1, where the remaining N_R is integrated out, and the parameters of this intermediate EFT 1 are matched onto the EFT below M_1, denoted by EFT 2. Once again, the light neutrino mass matrix along with the RGEs have to be determined anew. As we assume a strong mass hierarchy M_2 ≫ M_1, it is important to decouple the heavy neutrinos successively, and to describe the intermediate RG behaviour accordingly. Taking a closer look at the highest regime, we specify the LS input parameters at the GUT scale, and additionally choose the flavour basis, i.e. both Y_l(Λ_GUT) and M_R(Λ_GUT) are diagonal. For now, we are interested in the evolution of the neutrino mixing parameters, which means tracking how the mismatch develops between the basis in which the charged-lepton Yukawa matrix Y_l is diagonal and the one in which the light neutrino mass matrix m_ν is diagonal. Consequently, we track the RG running of Y_l and m_ν. Above the seesaw threshold μ = M_2, the evolution of the flavour structure of m_ν is mainly driven by Y_ν Y_ν^†. Consequently, the differing flavour structures of the Dirac neutrino Yukawa matrix need to be examined more thoroughly:

• Case A: Whether we take the benchmark input parameters as stated above or the global-fit input parameters, there is a hierarchy b² ≫ a², so that Y_ν Y_ν^† is dominated by the entries proportional to b², the largest of which is the 9b² term (for n = 3). Consequently, Ref. [28] only considers the dominant 9b² term and thereby solves the simplified RGE for m_ν analytically.

• Case B: In analogy to Case A, there is a hierarchy with respect to the input parameters, b² ≫ a², and the dominant entry of Y_ν Y_ν^† now sits in the (33)-position. Therefore, the simplified RGE of m_ν, which only takes the dominant (33)-entry into account, can be solved analytically.
• Case C: Due to the opposite ordering of heavy neutrino masses, the hierarchy arising from either the benchmark or the global-fit input parameters is also reversed, namely a² ≫ b². Even when considering only the dominant contributions arising from a², the resulting simplified RGE of m_ν cannot be solved analytically any more, due to the non-diagonal elements strongly affecting the flavour structure of m_ν.

• Case D: In analogy to Case C, there is a hierarchy a² ≫ b² with respect to the input parameters. Thus, even the simplified RGE of m_ν turns out to be too involved to be solved analytically.

Consequently, Case C and Case D are both investigated via an exact numerical approach in Ref. [28]. Note that, as apparent from the discussion below Eqs. (5) and (6), Cases A and C (and likewise Cases B and D) share the same effective neutrino mass matrix at tree level. However, due to the inverted hierarchy with respect to a, b (stemming from the inverted heavy neutrino mass ordering), different entries dominate the RG evolution of m_ν, leading to different RG running behaviour. Thus, the degeneracy of the cases is resolved. This means that although (when starting from the same set of benchmark input parameters) the neutrino masses and mixing angles of Cases A, B, C, and D at the GUT scale are all identical, the running behaviour of the mixing angles, which is mainly governed by Y_ν Y_ν^†, is quite different. Moreover, the discussion above uncovers a deeper connection among the cases A ↔ B and the cases C ↔ D, manifest in the shared respective dominant input parameter (b for A,B; a for C,D) as well as the similar or identical structure of Y_ν Y_ν^† dominating the running of m_ν. Having determined m_ν(M_2) from either the analytical or the numerical RG evolution, we need to diagonalise the light neutrino mass matrix. That way, we obtain not only the neutrino masses m_{2,3}(M_2) but also the transformation matrix U_ν. The latter, in combination with the unitary transformation U_l diagonalising Y_l, yields the PMNS matrix, and thereby the neutrino mixing parameters at the scale μ = M_2. Thus, still within the high-energy regime, we focus on the charged-lepton Yukawa matrix. Since we are interested in the flavour mixing caused by the running of Y_l, flavour-independent terms are neglected. Besides that, the RGE for Y_l can be solved analytically without further simplifications, and once again Y_ν Y_ν^† drives the flavour mixing. Finally, at μ = M_2, Y_l is diagonalised by means of the unitary transformation U_l. Consequently, one has all necessary parameters at hand to extract approximations for the mixing angles, see Ref. [28]. Taking a closer look at the intermediate energy regime, M_2 > μ > M_1, we need to employ EFT 1 to describe the parameters and the RG running. At the threshold μ = M_2, the effective light neutrino mass matrix can be written as a sum of two terms, Eq. (12), where κ^(2) ∝ Ŷ_ν M_2^{-1} Ŷ_ν^T stems from decoupling the heavier right-handed neutrino with mass M_2. The expression Ỹ_ν (Ŷ_ν) is obtained from Y_ν by removing the column corresponding to the decoupled heavy neutrino of mass M_2 (to the right-handed neutrino of mass M_1). Please note that the two terms on the right-hand side of Eq. (12) are governed by different RGEs, leading to so-called "threshold effects": the RGEs of κ^(2) and of Ỹ_ν M_1^{-1} Ỹ_ν^T have different coefficients for the terms proportional to the Higgs self-coupling and to the gauge coupling contributions within the framework of the SM [31], as sketched below.
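The statements above can be made slightly more explicit. The following expressions are schematic renderings, written down as assumptions consistent with the surrounding text; the coefficients, vev factors and the precise form of the paper's Eq. (12) are not reproduced in this extract. Above μ = M_2 the flavour-dependent part of the running of m_ν has the generic one-loop form

16\pi^2\,\frac{d m_\nu}{dt} \;\simeq\; X^{T} m_\nu + m_\nu X + \alpha\, m_\nu, \qquad X = C_e\, Y_e Y_e^{\dagger} + C_\nu\, Y_\nu Y_\nu^{\dagger},

with model-dependent coefficients C_e, C_ν and a flavour-blind term α, so whichever entries of Y_ν Y_ν^† dominate (the 9b² entry for Cases A,B, the a² block for Cases C,D) dictate how the flavour structure of m_ν evolves. Using m_a = a²v²/M_atm and m_b = b²v²/M_sol together with the benchmark values quoted above, one finds b/a ≈ 10 for Cases A,B and a/b ≈ 100 for Cases C,D, which is why a single coupling controls the running in each pair of cases. Between the thresholds, the decomposition referred to as Eq. (12) reads, schematically,

m_\nu(\mu) \;\propto\; \kappa^{(2)}(\mu) \;+\; \widetilde{Y}_\nu(\mu)\, M_1^{-1}\, \widetilde{Y}_\nu^{T}(\mu), \qquad M_2 > \mu > M_1,

where κ^(2) is the dimension-5 operator generated when the heavier right-handed neutrino is integrated out (κ^(2) ∝ Ŷ_ν M_2^{-1} Ŷ_ν^T at μ = M_2) and the second term is the remaining tree-level seesaw contribution of the lighter right-handed neutrino. Because the two terms obey different RGEs, their relative size and orientation change between M_2 and M_1, which is the origin of the threshold effects discussed in the text.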
In combination with the strong mass hierarchy of the heavy right-handed neutrinos, which enforces a successive decoupling, the threshold effects become significant, and thereby enhance the running effects on the neutrino mixing parameters. From the discussion in Ref. [28], we learn that the threshold-effect-related corrections to the neutrino mixing angles between M_2 and M_1 are dominated by an expression proportional to κ^(2). Hence, we examine the combination Ŷ_ν M_2^{-1} Ŷ_ν^T for the four cases. (This enhancement can be understood by noting that a transformation U which diagonalises the sum of the two terms in Eq. (12) does so only for one particular relative weight x = x̂ of the terms; since the two terms scale differently under the RG, this relative weight is not preserved, and an additional "off-diagonalness" is generated.)

• Case C and Case D: It is evident that the different order of the heavy neutrino decoupling once again evokes distinct flavour structures, demonstrating that the connection between Cases A,B and Cases C,D carries over to lower energy regimes as well.

Note that, although the flavour structure of κ^(2) drives the running of the mixing parameters through threshold effects, its contribution comes with a suppression factor. Moreover, bear in mind that we have only considered the threshold effects arising in EFT 1, but no further contributions from the neutrino and charged-lepton sectors. These additional contributions may compete with the threshold effects in some cases, and lead to deviations from the otherwise similar features of Cases A,B and Cases C,D. Going below the lower threshold, μ < M_1, the running effects on the mixing angles become insignificant. This is not the case for the running of the light neutrino masses, which is too complicated to describe analytically in all regimes, and was therefore not discussed above. Nevertheless, there are a few details of the running of the neutrino mass matrix that we want to mention briefly: depending on the size of the Y_ν entries, the sign of the flavour-independent contribution to the RGE of m_ν can switch; and the coefficients of the flavour-dependent contributions differ between the SM and the MSSM, including a sign switch in some of them. As a consequence, a parameter can run in the opposite direction in the framework of the SM in contrast to the MSSM. This feature is most apparent for the light neutrino masses, which exhibit strong overall running in opposite directions when comparing the LS in the context of the SM and in the context of the MSSM. In order to access all parameters (neutrino masses, mixing angles and phases) at all scales, we turn to an exact numerical treatment using the Mathematica package REAP [31]. There are two conclusions to be emphasised from the discussion above:

• Despite yielding identical neutrino masses and mixing parameters at the GUT scale (for identical input parameters (a, b)), Case A,C and Case B,D show fundamentally different running behaviour.

• There is an intrinsic connection between the evolution of Case A ↔ B (Case C ↔ D) which is reflected in the parameter b (a) dominating the running, as well as in Y_ν Y_ν^† being mainly diagonal (being driven by the same block matrix).

This distinction between the properties of Cases A,B versus Cases C,D becomes even more evident when taking a closer look at the energy regime M_2 > μ > M_1.

The χ² Function

In the following, we fix n = 3 and η = ±2π/3. Consequently, there are only two free real parameters remaining to predict the entire neutrino sector.
In order to find the best-fit input parameters m_a and m_b while keeping η = ±2π/3 and n = 3 fixed, we perform a global fit using the χ² function of Eqs. (16) and (17) as a measure for the goodness-of-fit [23]; schematically,

χ²(x) = Σ_{i=1}^{N} [ (P_i(x) - μ_i) / σ_i ]².

Here, we collect our model parameters in x = (m_a, m_b, n, η), and predict the physical values P_i(x) from the Littlest Seesaw model. The latter are compared to the μ_i that correspond to the "data", which we take to be the global fit values of [32] listed in Tab. 1. Furthermore, the σ_i are the 1σ deviations for each of the neutrino observables. In case the global fit distribution is Gaussian, the 1σ uncertainty matches the standard deviation, which is the case for several of the neutrino parameters depicted in Tab. 1. However, there are a few cases where the deviations are asymmetric. To obtain conservative results, we assume the distribution surrounding the best fit to be Gaussian, and choose the smaller uncertainty, respectively. That way, we slightly overestimate the χ² values. Since the CP-violating phases δ and σ are either measured only with large uncertainties or not at all, we define two different χ² functions:

• χ², for which N = 5, i.e., δ is not included in the sum above,
• χ²_δ, for which N = 6, i.e., δ is included when performing the global fit.

A χ² function is required to have a well-defined and generally stable global minimum in order to be an appropriate measure for the goodness-of-fit. This is the case for all CSD(n) models under the assumption that the sign of η is fixed [23]. From former analyses of the LS [23,28], we know in which ballpark the best-fit values of m_a and m_b are to be expected, respectively.

Table 1: Best-fit values with 1σ uncertainty ranges from the global fit to experimental data for the neutrino parameters in the case of normal ordering, taken from [32].

That way, we can define a grid in the (m_a, m_b)-plane over which we scan, meaning that we hand over the respective input parameters x = (m_a, m_b, n = 3, η = ±2π/3) at each point of the grid to the Mathematica package REAP [31]. REAP numerically solves the RGEs and provides the neutrino parameters at the electroweak scale, i.e. the predictions P_i(x). The latter are used to determine how good the fit is with respect to the input parameters (m_a, m_b) by giving an explicit value for χ²_(δ). In the next step, we identify the region of the global χ²_(δ) minimum, choose a finer grid for the corresponding region in the (m_a, m_b)-plane, and repeat the procedure until we determine the optimum set of input values. As we use the Mathematica package REAP [31] to solve the RG equations numerically, it is important to mention that the conventions used in REAP differ slightly from the ones discussed in Sec. 1. First of all, with the help of Ref. [31], we can relate the two neutrino Yukawa matrices, which leads to Ỹ_ν = Y_ν^†. This needs to be taken into account when entering explicit LS scenarios into REAP. Secondly, note that REAP also uses the PDG standard parametrisation, which means that the mixing angles are identical to ours, and the Majorana phase can be related directly to σ.

SM Results

We investigate the running effects on the neutrino parameters m_2, m_3, ϑ_12, ϑ_13, ϑ_23, δ and σ numerically by means of REAP [31]. Our analysis involves not only the four different cases A, B, C, and D but also four settings for the heavy RH neutrino masses, namely (M_2, M_1) = (10^12, 10^10), (10^15, 10^10), (10^15, 10^12), (10^14, 10^13) GeV.
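To make the fitting procedure described above concrete, the sketch below mimics the grid scan in Python. It is illustrative only: it uses the tree-level LS mass matrix (the assumed structure quoted earlier) instead of the RG-evolved one, a hypothetical predict/scan interface rather than the Mathematica package REAP, and placeholder data values of roughly the right magnitude rather than the Tab. 1 inputs.

import numpy as np

def m_nu_tree(m_a, m_b, eta, n=3, case="A"):
    # Tree-level LS mass matrix (assumed structure): m_a*phi_atm phi_atm^T + m_b*e^{i eta}*phi_sol phi_sol^T.
    phi_atm = np.array([0.0, 1.0, 1.0])
    phi_sol = np.array([1.0, float(n), float(n - 2)])
    if case in ("B", "D"):  # mu and tau entries of the solar column interchanged (assumed convention)
        phi_sol = phi_sol[[0, 2, 1]]
    return m_a * np.outer(phi_atm, phi_atm) + m_b * np.exp(1j * eta) * np.outer(phi_sol, phi_sol)

def predict(m_a, m_b, eta=2 * np.pi / 3, n=3, case="A"):
    # Diagonalise via m m^dagger; eigenvalues ascending, so columns of U correspond to (nu1, nu2, nu3).
    m = m_nu_tree(m_a, m_b, eta, n, case)
    vals, U = np.linalg.eigh(m @ m.conj().T)
    masses = np.sqrt(np.clip(vals, 0.0, None))
    return {
        "m2": masses[1], "m3": masses[2],
        "theta12": np.degrees(np.arctan2(abs(U[0, 1]), abs(U[0, 0]))),
        "theta13": np.degrees(np.arcsin(abs(U[0, 2]))),
        "theta23": np.degrees(np.arctan2(abs(U[1, 2]), abs(U[2, 2]))),
    }

# Placeholder "data" and 1-sigma errors (masses in meV, angles in degrees); NOT the Tab. 1 numbers.
DATA  = {"m2": 8.7, "m3": 50.0, "theta12": 33.6, "theta13": 8.46, "theta23": 41.6}
SIGMA = {"m2": 0.1, "m3": 0.5, "theta12": 0.8, "theta13": 0.15, "theta23": 1.3}

def chi2(pred):
    return sum(((pred[k] - DATA[k]) / SIGMA[k]) ** 2 for k in DATA)

def grid_scan(case="A", eta=2 * np.pi / 3, steps=60):
    # Coarse pass over the (m_a, m_b) plane; in the paper the grid is then refined
    # around the chi^2 minimum and the predictions come from the RG-evolved matrix.
    best = (np.inf, None)
    for m_a in np.linspace(10.0, 50.0, steps):    # meV
        for m_b in np.linspace(1.0, 6.0, steps):  # meV
            val = chi2(predict(m_a, m_b, eta=eta, case=case))
            if val < best[0]:
                best = (val, (m_a, m_b))
    return best

print(grid_scan(case="A"))

In the paper the role of predict() is played by REAP, which returns the RG-evolved low-energy observables, and the χ² is evaluated both with and without the Dirac phase δ.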
For each case and RH mass setting, we furthermore perform vacuum stability checks which validate all scenarios under consideration. As we fixed two of the four input parameters of the LS, namely (n, η), depending on the case, we minimalise χ 2 (δ) with respect to the free input parameters (m a , m b ). From the scan of the free input parameters, we determine the optimum set of (m a , m b ) at the GUT scale, which are presented in Tab. 2 together with their corresponding χ 2 (δ) values (obtained at the EW scale). Overall, it turns out that the values for χ 2 δ are only slightly inferior to the ones for χ 2 -by about a few percent at most -and both measures for the goodness-of-fit point towards the same input values (m a , m b ). Thus, we will refer to χ 2 in the following discussion. • The first and foremost observation is that the RH mass setting (10 15 , 10 12 ) makes for the best fit to the global fit values given in Tab. 1 for each of the LS cases individ-ually; closely followed by the mass setting (10 15 , 10 10 ). The scenario (10 14 , 10 13 ) is already significantly poorer, and the goodness-of-fit further deteriorates for (10 12 , 10 10 ). This shows that it is beneficial for the running effects to have M 2 closer to the GUT scale. In addition, the mass of M 1 barely -as long as still viable for a seesaw scenario -changes the outcome which is to say that the heavier of the RH neutrinos plays the dominant role regarding RG running behaviour and the goodness-of-fit. The detailed results for the RH mass setting (10 15 , 10 12 ) are shown in Figs. 1 to 4. The results for the remaining three mass settings are displayed in Tabs. 5 and 6. • For case A the best-fit values for m b for mass settings (10 15 , 10 12 ) and (10 15 , 10 10 )which yield nearly identical χ 2 's -are almost the same, while the m a differ notably. Furthermore, m b decreases with M 2 . The same is true for case B. For cases C and D, respectively, it is the best-fit values for m a that are almost identical for the comparatively good RH mass settings (10 15 , 10 12 ) and (10 15 , 10 10 ), and m b that does vary. Moreover, m a lowers with M 2 . Recalling the qualitative discussion in Sec. 3, these observations can most likely be traced back to the deeper connection between Case A ↔ B as well Case C ↔ D. For A ↔ B, the parameter b ∝ √ M 2 m b dominates the RG effects of the mixing angles, whereas for C ↔ D, the parameter a ∝ √ M 2 m a does so. This already hints towards the overall importance of the running of the mixing angles in order to predict feasible neutrino parameters at the EW scale, which we will come back to when investigating the different LS cases. This line of reasoning also explains the first observation, namely that the mass of the heavier RH neutrino impacts the goodness-of-fit predominantly. • Case A and B yield a nearly identical input parameter m a for each RH neutrino mass setting individually, which hints towards yet another correlation between Case A and B. The same holds true for Case C and D with slightly more deviation in m a in comparison to Case A ↔ B. For the input parameter m b , there does not seem to be a correlation between the different LS cases. While the discussion above did feature equivalent RG behaviour of two LS cases, respectively, this observation shows a correlation with respect to the absolute value of m a . The reason behind this connection, however, proves more elusive because m a is related to the lighter RH neutrino scale for Case A,B but to the heavier scale for Case C,D. 
Nevertheless, we will return to discussing this feature towards the end of this section. To emphasise the importance of performing global fits to the experimental data at the EW scale for each LS case separately, we compare the χ² values of the modified benchmark scenarios from Ref. [28] with the best-fit scenarios obtained from our analysis.

Table 3: χ² values for the four cases, where the subscript "old" denotes the input parameters used in Ref. [28], namely m_a = 41.5156 meV and m_b = 4.19375 meV. In order to compare these to the results from this paper's analysis, we also include their χ² values in the two right-hand columns of this table. Please bear in mind that the latter are based on varying input parameters m_a, m_b, which are specified in Tab. 2.

As already mentioned in Sec. 1, the input values (m_a, m_b) in Ref. [28] are taken from a tree-level best fit, and adjusted by an overall factor of 1.25, which was obtained from Case A and aims at including the significant running of the neutrino masses. In contrast, our analysis scans over the model input parameters in order to determine the optimum set of high energy input values from a global fit of the low energy parameters. The χ² values for the input parameters used in Ref. [28] are listed in Tab. 3. Comparing these to the χ² values presented in Tab. 2, there are two striking characteristics. First of all, the overall values for the goodness-of-fit improve drastically, moving the χ² values from "in tension with experimental data" to "predict experimental data nicely". Secondly, the χ² values listed in Tab. 3 suggest that Case A is most compatible with experimental data, followed closely by Case B and, after a significant gap, by Cases D and C. It turns out that quite the opposite is true when performing global fits for each case individually, resulting in the order: Case D yields the best fit, followed by Case C, Case B and Case A. Both these features can be traced back to Ref. [28] superficially modifying the input parameters to fit Case A. As we have already seen in the discussion above, the cases A and B are connected intrinsically while displaying behaviour detached from the likewise connected cases C and D, which concerns not only the running effects but also the absolute value of a suitable input parameter m_a. Consequently, the input parameters adopted from Ref. [28], being tailored to Case A, cannot do justice to the other cases.

• Next, we analyse the mixing angle ϑ_13. From Case A, we obtain ϑ^A_13 = 8.42°. From Case B, the reactor angle is predicted to be ϑ^B_13 = 8.51°, while we obtain ϑ^C_13 = 8.44° for Case C and ϑ^D_13 = 8.48° for Case D. The first and somewhat unexpected observation is that for ϑ_13 there seems to be no clear correlation between the cases in the predicted angle at the EW scale. Second of all, the measured value ϑ^exp_13 = 8.46°, see Tab. 4, lies right in the middle of the range of predicted angles. Including the 1σ deviations around the measured best-fit angle, one obtains a region within which all the predicted reactor angles lie, or to which they are at least in close proximity.

• The atmospheric angle behaves differently: only Case C comes somewhat close to the 1σ region for ϑ_23, whereas Cases A and B are well beyond the upper margin. Furthermore, ϑ_23 also differs from the other mixing angles in terms of its connections between the LS cases. Not only do the best-fit scenarios for the different LS cases predict quite distinct values at the EW scale, but they also exhibit no connection with respect to the values at the GUT scale and the RG running behaviour.
The latter manifests in Case A displaying a decrease by 0.33 • in between the GUT and the EW scale, whereas Case B has an increase by 0.24 • . It is striking that Case A does not only differ from Case B in running strength but in direction. Case C, moreover, displays a decrease by 1.46 • while Case D shows an even stronger decrease of 2.77 • . In combination with the already dissimilar GUT scale values, we obtain a strong preference towards Case D based on its predicted ϑ 23 . Thus, the atmospheric angle plays the decisive role with regard to the compatibility of the LS cases with experimental data. • When including the CP violating Dirac phase in the goodness-of-fit analysis, we also need to discuss its predicted values with respect to the measured value and the 1σ region. Since the 1σ region encompasses values within [−158 • , −48 • ] around a best-fit experimental value of δ = −99 • , all Dirac phases derived from the LS cases lie within this range. Moreover, they are also within a -relative to the 1σ region -narrow band above the best-fit value, namely δ A = −92.11 • , δ B = −87.14 • , δ C = −85.97 • , and δ D = −90.35 • . This explains why the difference between χ 2 and χ 2 δ is negligible. The running behaviour with respect to δ differs among the four LS cases. While Case B and C have δ increasing with decreasing energy scale, Case A and D display a decreasing δ. Nonetheless, the strength of the running differs with running effects in between 1 • and 7 • . So, overall, there is no hint towards a relation between any of the four LS cases in δ -neither in the starting values at the GUT scale or the values obtained at the EW scale, nor in the total running behaviour. Since involving the Dirac phase in the global fit does not alter the results, we will focus on the other five neutrino parameters in the discussion that is to follow. From the discussion of the neutrino parameters, we can summarise the following. First of all, the absolute value predicted for the parameters ϑ 12 , ϑ 13 and m 3 at the GUT scale are nearly identical for Case A,B as well as for Case C,D. As opposed to this, the predictions for ϑ 23 and m 2 at the GUT scale are without case induced pattern. Second of all, the RG running of m 3 and ϑ 13 are similar for Case A,B and Case C,D, respectively. On top of that, m 2 and ϑ 12 exhibit the same RG running behaviour for all four LS cases. The only parameter not showing any case-dependent pattern is ϑ 23 . What can we learn from these observations and where do they come from? As we already realised when investigating the different RH neutrino mass settings, there are two additional connections between Case A and B as well as Case C and D, namely the absolute value of the input parameter m a for the best-fit scenario and the predominant dependence on either m b or m a of the RG running of the mixing angles. In order to understand the reasoning behind the above observations, we need to briefly recap some basic features of the LS and its RG running: • From Ref. [28], we can extract the following estimates for the neutrino parameters at the GUT scale derived for Case A: with tan 2θ ≈ √ 6m b (n−1)/|m a +m b e iη (n−1) 2 | and ω = arg[m a +m b e iη (n−1) 2 ]−η. Without running effects, these estimations also hold true for Case C. The mixing parameters for Case B, and since we do not need to consider running effects at the GUT scale also Case D, are m B 2,3 = m A 2,3 , ϑ B 12 = ϑ A 12 , ϑ B 13 = ϑ A 13 and ϑ B 23 = π/2 − ϑ A 23 . 
Although we have only drawn a connection between Cases A,B and Cases C,D with respect to the input parameter m_a, one has to bear in mind that the input values m_b are all within a close range, namely within [4.16 meV, 4.20 meV]. A variation of only 0.04 meV does not alter tan 2θ or ω significantly. Consequently, Cases A and B yield similar tan 2θ and ω, as do Cases C and D. These estimates already answer why, for similar m_a, as given for Cases A,B and Cases C,D, the neutrino parameters m_3, ϑ_12 and ϑ_13 are almost identical at the GUT scale. It also explains why the GUT scale values of the parameter m_2, which depends predominantly on the input parameter m_b, are within a close range without exhibiting a clear case-dependent structure. And at last, it unveils why the ϑ_23 values at the GUT scale do not show any indication of the connection between the different cases. The connection between the cases appears in the choice of m_a, and would suggest similar atmospheric angles for Cases A and B (or analogously for Cases C and D). However, due to the relation between the atmospheric angles for A and B, as given above, there is an offset of a few degrees. The same is true for Cases C and D.

• Furthermore, from the derivation in Ref. [28] of the RG running of the mixing angles for Cases A and B, we know that for μ > M_2 only the running of ϑ_23 differs between Case A and Case B. The latter is significant, as most of the running of the atmospheric angle occurs within that region. Moreover, the corrections to the GUT scale value of ϑ_23 come with opposite signs for Case A and Case B, which explains why one decreases and the other increases its atmospheric angle. The running behaviour of the mixing angles in EFT 1 differs for Cases A and B but is still quite similar, since only the coefficients in front of a few terms are different. As the same structure is responsible for the running above M_2 for Cases C and D, there is no sign change there, which agrees with our numerical observations. On the other hand, our numerical results indicate that, in the regime M_2 > μ > M_1, ϑ_23 increases for Case C but decreases further for Case D, which gives an edge to Case D regarding the global fit to data. A more in-depth investigation of this feature, however, is beyond the scope of this work.

In summary, the connection between Cases A ↔ B and C ↔ D stems from a combination of two features. Due to the similar running in most of the five neutrino parameters, the parameters at the GUT scale have to be similar. On top of that, we know from the estimates in Eqs. (18) and (19) that similar GUT scale neutrino parameters enforce similar input parameters. Take, for example, the neutrino masses: from our numerical analysis, we learn that m_2 and m_3 exhibit nearly identical running for Cases A and B. Since each of the neutrino masses is directly linked to an input parameter, this already determines the suitable range of said input parameters, which is then refined by including the mixing angles in the fit. The same can be done for Cases C and D. As the running of m_3 is stronger than for Cases A,B, the input parameter m_a has to be higher for Cases C and D, which can be observed in our results. Due to the intrinsic features of the LS cases and their connections among each other, it is possible to obtain comparably good values for m_2, m_3, ϑ_12 and ϑ_13 at the EW scale in all cases. For ϑ_23, however, both the running behaviour and the relation between the GUT scale value and the input parameters do not follow the connections between cases seen for the other neutrino parameters.
As a consequence, the EW scale atmospheric angles show the widest spread depending on the case, and thus are most important with respect to the compatibility with experimental data. It is therefore not surprising that the hierarchy with respect to how well a scenario predicts ϑ 23 is reflected in the goodness-of-fit values χ 2 . Thereby favouring Case D with a remarkable χ 2 = 1.49 over also excellent goodness-of-fit results between 3.24 and 7.14 for Cases A, B and C. MSSM Results In this section we examine the LS within the framework of the MSSM. We vary the SUSY breaking scale, considering M SU SY = 1, 3, 10 TeV. For each MSSM setting with fixed M SU SY , we furthermore investigate how tan β as well as the threshold effects, comprised in the parameter η b and explained in Appendix A, affect the goodness-of-fit. To this end, we consider tan β = 5, 30, 50 and η b = −0.6, 0, 0.6. The results are collected in Tab. 7 and Tab. 8 with the corresponding predictions for neutrino masses and PMNS parameters in Figs. 5 to 8 and in Appendix B, Tabs. 11 to 14. Note that we display detailed results for the setting with M SU SY = 1 TeV, tan β = 5 and η b = 0.6 in Figs. 5 to 8. We choose this MSSM setting for a more detailed representation of the neutrino parameters' running behaviour because it yields the most compatible results with experimental data for cases B, C and D. The MSSM results indicate the following: • Independent of the SUSY breaking scale and/or tan β, Case B yields the best fit to experimental data. The next best scenario with respect to the goodness-of-fit is Case D, which depending on the specific settings can follow Case B closely. The compatibility with experimental data deteriorates for Case A and further for Case C. How strongly the four cases vary in terms of χ 2 depends on the choice of M SU SY and tan β. • Looking at the influence of M SU SY on the overall performance of a scenario, we keep tan β fixed and compare the goodness-of-fit measure χ 2 for the three SUSY breaking scales. Performing this task for each LS case individually, we find that changing M SU SY barely affects the compatibility with data. There are only slight changes in χ 2 . We observe an increase in the absolute value of χ 2 with higher M SU SY for tan β = 5. For tan β = 30, Case A prefers higher M SU SY while cases B, C and D prefer lower ones. And for tan β = 50, the goodness-of-fit increases with the SUSY breaking scale -meaning χ 2 declines. • Moreover, we find that -for each M SU SY and LS case -the higher tan β the higher χ 2 , which means the poorer the overall agreement with experimental data. • Just as we have ascertained for the SM, all MSSM settings yield only slightly poorer values when including the Dirac phase δ into the measure for the goodness-of-fit than they do without. Their difference is below 1 % due to the comparably large uncertainty on the Dirac phase. On these grounds, we will refer to the χ 2 values when further discussing the fundamental behaviour with respect to the different MSSM settings. • By including observations from Tabs. 7a to 8c, we learn that for each LS case and setting, i. e. fixed SUSY breaking scale and tan β, it is always the highest value of η b under consideration that yields the best fit. How strongly the goodness-of-fit, and thereby its measure χ 2 , vary with η b depends predominantly on tan β. The higher tan β, the more variation with η b one observes in χ 2 . • When taking a closer look at Tabs. 
7a to 8c displaying the varying threshold effects for tan β = 30, we observe unusually large values for χ 2 for the threshold effects η b = −0.6. The latter can be explained by considering that this setting is at the border to the region where we run into trouble regarding non-perturbativity, which means that at least one of the Yukawa couplings becomes non-perturbative. As discussed later in Sec. 7, we know that most neutrino parameters do not only exhibit connections between cases A↔B and C↔D for the SM but also for the benchmark MSSM scenario with M SU SY = 1 TeV, tan β = 5 and η b = 0.6. The analogous behaviour observed among the cases is connected to their similar input parameter m a , which we examine in Sec. 5 for the SM. In Sec. 5, we learn that the connection for Case A↔B and C↔D originates from a combination of the similar running behaviour in most neutrino parameters, see also Sec. 3, which enforces similar starting values at the GUT scale, and the way the GUT scale parameters are linked to the two input parameters m a and m b . The line of reasoning employed for the SM caries over to the MSSM -with minor modifications, see Ref. [28]. We, thus, expect similar m a for Case A, B and C, D, respectively, within a fixed MSSM setting, as well as an overall narrow range for m b . This can indeed be observed in Tab. 7 and Tab. 8, where we give the input parameters m a and m b (in meV). In the following, we briefly discuss how varying M SU SY and tan β influences these connections: • As already discussed above, we expect the input parameter m a to reflect the connections between Case A↔B and C↔D. As well, we expect that the input parameter m b does not display any such connections but lies in a narrow region for all cases. Both projections prove to be right. How close the input parameter m a for Case A is to the one for Case B, however, depends on tan β. The same is true for Case C and D. In other words, the higher tan β, the further apart are the m a of the connected cases. This can be traced back to the RG running, which depends on tan β 8 . That is to say that there is -in general -more running for higher tan β, and consequently, more deviation in GUT scale values depending on the case, which translates most directly to m a . • Fixing tan β to either of the three settings, one can observe an increase in both m a and m b with M SU SY . • Fixing M SU SY , on the other hand, does not yield any such clear tendency for neither m a nor m b . • The overall range of values obtained by varying the SUSY breaking scale and tan β is similar for all four LS cases, namely about 1 meV for m a and roughly 0.11 meV for m b . This means that a variation in the MSSM setting has a nearly identical impact on all four LS cases, which is further supported when taking a closer look at the relative changes in m a in between the settings studied. One could in principle elaborate further on the discussion above, and also study the correlations of the LS cases on the level of neutrino parameters and that way confirm the key role of the atmospheric angle for the goodness-of-fit for all MSSM settings. This is, however, beyond the scope of this work. Comparing SM and MSSM Results The purpose of this section is to compare the SM and MSSM behaviour. To this end, we choose one benchmark MSSM scenario with the SUSY breaking scale at M SU SY = 1 TeV and a threshold effect parameter of η b = 0.6. The meaning of the latter is explained in Appendix A. 
A more thorough discussion of the MSSM behaviour including different SUSY breaking scales and varying threshold effects can be found in the previous Sec. 6. Note that we employ the RH neutrino mass setting (10 15 , 10 12 ) GeV throughout the following analysis. In Tab. 9, we collect the goodness-of-fit values for the SM and the benchmark MSSM scenario with varying tan β. There are several observations worth mentioning: • First and foremost, we note that the SM scenarios make for significantly better fits to the experimental data for each LS case individually. In fact, the poorest fit from the SM, namely Case A at χ 2 = 7.14, outperforms the best for the MSSM, namely Case B with tan β = 5 at χ 2 = 8.56. • While for the SM, the goodness-of-fit deteriorates from Case D via C and B to Case A, the order changes for the MSSM benchmark scenario, leading to Case B being most compatible with experimental data -followed somewhat closely by Case D, and then by Case A and C. • The four LS cases of the MSSM benchmark scenario all yield a χ 2 δ value that is only marginally poorer than the one for χ 2 -by below 1 %. The difference between χ 2 and χ 2 δ for the SM, on the other hand, can be up to a few percent. To understand why the SM does yield better agreement with experimental data than the MSSM scenario as well as to understand the distinct characteristics with respect to the relative suitability of the different LS cases, we investigate and compare the behaviour of the neutrino parameters. As we strive to compare SM and MSSM, we focus the discussion on generic differences in the initial values (meaning at the GUT scale) and the RG running behaviour of the neutrino parameters without delving into the specifics of the MSSM. Since tan β = 5 makes for the most suitable predictions from the MSSM benchmark scenario, we use its predicted neutrino parameters when comparing to the SM. From the upper left panels of Figs. 1 to 4 for the SM in combination with Figs. 5 to 8 for the MSSM benchmark scenario, we can condense the following characteristics with respect to the neutrino parameters: • The mixing angle ϑ 12 is predicted to be in between [ . Consequently, the SM predictions for Case A,B are encompassed in and those for Case C,D close to the standard deviation, whereas the MSSM predictions for Case C,D lie about as close as the SM's Case C,D and the MSSM's Case A,B are further above. Thereby, the solar angle has a bias towards the SM for cases A and B, while there is no preference when considering cases C and D. As observed in the previous section, there is an intrinsic connection between Case A ↔ B and Case C ↔ D for the SM, which also appears for the MSSM benchmark scenario. That is to say, that -in case of this MSSM scenario -cases A and B generate quite similar values at the GUT scale, display an overall identical but minor increase based on the RG running between the GUT and the EW scale, and thus predict similar ϑ 12 at the EW scale. For the MSSM benchmark scenario, cases C and D behave analogously apart from a decline in the solar angle with the decrease of the energy scale and deviating absolute values at the GUT scale. • Analysing the predictions for the mixing angle ϑ 13 , we obtain a LS-case-dependent range of [8. , both predicted ranges are centered around the measured value and fully encompassed within the 1σ region. Thus, there is no general bias towards either the SM or the MSSM scenario from the reactor angle. From the SM discussion in Sec. 
5, we recall that cases A and B generate similar initial values at the GUT scale, undergo the same overall decline with the energy scale and thereby predict similar values at the EW scale. The same holds true for cases C and D, but with an increase in ϑ 13 from the GUT to the EW scale and absolute values that differ from Case A,B at the GUT scale. Nevertheless, all four cases converge to a narrow region and predict similar reactor angles within the framework of the SM. Since the MSSM scenario displays a nearly identical range of predicted ϑ 13 , one might assume that the underlying behaviour is equivalent. This, however, does not stand up to scrutiny. From Figs. 5 to 8, we learn that the starting values at the GUT scale are spread. The RG running, on the other hand, does yet again display the connection between the cases; leading to hardly any alteration of ϑ 13 due to running effects for cases A and B, and an increase by 0.17 • from the GUT to the EW scale for cases C and D. This allows for the EW scale values of cases A and C to be close. The same is observed for the EW scale reactor angles of cases B and D. Since the measured mixing angle lies centered in between the different LS cases, there is no strong preference for any case to be discerned within the framework of the MSSM -which is also true for the SM. Furthermore, the spread of the predicted values depending on the LS case is large in comparison to the other two mixing angles, which is true for both frameworks, SM and MSSM. The atmospheric angle also differs from the other mixing angles in terms of connections between different LS cases. For neither the SM nor the MSSM framework, there are connections for the prediction at the GUT scale, or the RG running behaviour. Consequently, the atmospheric angle plays a decisive role with respect to the compatibility of a scenario with experimental data -and it favours the SM over the MSSM as framework for the respective LS cases. It is, therefore, not surprising that the goodness-of-fit, measured by χ 2 , reflects the order of how well a case and/or scenario predicts ϑ 23 . As an example of this feature take the atmospheric angles predicted by the SM's Case A, ϑ SM,A Since the magnitude of the increase varies slightly, we obtain a marginally wider region of m 2 values at the EW scale than we do for the SM. The opposite direction of the RG running can be traced back to the coefficients in the RGEs that differ for the SM and the MSSM, including a relative sign [31,33]. Despite the fundamental differences in terms of RG behaviour, the prediction of m 2 only gives a narrow edge to the SM over the MSSM for Case A. For the remaining three LS cases, there is no preference for either the SM or the MSSM from the light neutrino mass. Although there is no bias towards any scenario or case from the heavier of the light neutrino masses, the features leading to the EW scale value differ. As already observed for the lighter neutrino mass m 2 , m 3 undergoes different alterations due to the RG effects. Recall that for the SM cases A and B start from roughly the same value at the GUT scale, as do cases C and D. The initial GUT scale values are significantly higher for the latter. All four LS cases exhibit a decrease of m 3 with the energy scale -with stronger effects for Case C,D. Taking a closer look at the MSSM, we note that both Case A,B and Case C,D start from nearly identical values at the GUT scale -with the latter being a bit higher. 
The RG running effects are opposite to those of the SM, meaning that m 3 increases from the GUT to the EW scale, which in analogy to m 2 is attributed to the coefficients of the RGEs [31,33]. Nevertheless, both frameworks and all four scenarios within predict the measured value perfectly, and thus give no bias regarding the goodness-of-fit. Intriguingly, both the SM as well as the MSSM framework can generate comparably good values for the neutrino parameters ϑ 13 , m 2 and m 3 , which are the parameters that have the lowest spread with respect to the LS case. Note that for ϑ 13 and m 3 all four LS cases in both frameworks are within the 1σ region, and for m 2 there is only one outlier, namely the MSSM's Case A. The latter allows for a slight preference of the SM over the MSSM but only when considering case A. A more important distinction stems from the mixing angle ϑ 12 . First of all, ϑ 12 has a bias towards the SM for the cases A and B while it does not display a bias for cases C and D -giving an overall edge to the SM. Secondly, the reshuffled order with respect to how well the different LS cases do hints towards the observation that the hierarchy among the LS cases changes depending on the framework. The most decisive role with respect to compatibility with data, however, falls to the atmospheric angle ϑ 23 once again. For ϑ 23 , there is not only the widest spread regarding the different LS cases but also the most explicit gap between the values predicted by the SM and those derived from the MSSM. In addition, the ordering of LS cases by means of how well they predict the atmospheric angle directly translates to the overall performance. It is therefore, once again, the atmospheric angle that is most significant and makes for the substantially better fits of the SM scenarios to the experimental data. Conclusions We have performed a detailed RG analysis of the LS models, including those cases where the RG corrections can become significant. Unlike a previous analysis, where the input parameters were fixed independently of RG corrections, we have performed a complete scan of model parameters for each case individually, to determine the optimum set of high energy input values from a global fit of the low energy parameters which include the effects of RG running. In all cases we perform a χ 2 analysis of the low energy masses and mixing angles, including RG corrections for various RH neutrino masses and mass orderings. We have made complete scans for each LS case individually within the framework of the SM and the MSSM to determine the optimum set of input values (m a , m b ) at the GUT scale from global fits to experimental data at the EW scale. Perhaps not surprisingly, the values of χ 2 that we obtain here are significantly lower than those obtained in the previous analysis where the input parameters were determined independently of RG corrections. We have found that the most favourable RG corrections occur in the SM, rather than in the MSSM. Amongst the three mixing angles, we find that the atmospheric angle is often the most sensitive to RG corrections in both the SM and the MSSM, although in the latter the corrections are relatively small. Without including RG corrections the LS predictions are in some tension with the latest global fits, mainly because the atmospheric angle is predicted to be close to maximal. 
The sensitivity of the atmospheric angle to RG corrections in the SM then allows a better fit at low energies, corresponding to an atmospheric angle in the first octant, close to the current best-fit value for a normal hierarchy. For the SM, we have performed the analysis with various RH neutrino masses, and for the MSSM we investigated different SUSY breaking scales, values of tan β and threshold effects. In the case of the SM, it turns out that it is beneficial for the running effects if the heavier of the RH neutrinos is closer to the GUT scale, with masses (10^15, 10^12) GeV yielding the best results. In this case we found for the SM: χ²_A = 7.1, χ²_B = 4.4, χ²_C = 3.2 and χ²_D = 1.5, corresponding to exceptionally good agreement with experimental data, especially for Case D. We emphasise that the atmospheric angle plays a key role in our analysis, and is the crucial factor in obtaining low χ² values for a given set-up. While it is possible to obtain comparably good results for m_2, m_3, ϑ_12 and ϑ_13 at the EW scale for all LS cases, it is ϑ_23 that varies most between the different cases within the SM or the MSSM. While the SM and MSSM can generate comparably good m_2, m_3 and ϑ_13, and there is some preference of ϑ_12 in favour of cases A and B of the SM, the most decisive parameter is ϑ_23, for which the SM predictions are significantly better. This is partly a result of the fact that RG corrections in the MSSM are relatively small compared to the SM, so that the prediction of near maximal atmospheric mixing is maintained at low energies in the MSSM. Forthcoming results from T2K and NOvA on the atmospheric mixing angle will test the predictions of the LS models. The inclusion of RG corrections in a consistent way, as done in this paper, will be crucial in confronting such theoretical models with data.

Appendix A

We follow Ref. [34] in the following. The first step is to derive the Yukawa couplings at M_Z from the experimental data. The latter are handed over to REAP, which calculates their RG running up to M_SUSY. At the SUSY breaking scale, the SM has to be matched to the MSSM. As the radiative corrections can be tan β enhanced, and can therefore even exceed the one-loop running contributions, we must include them at the matching scale. This leads to a correction to the down-type quark as well as the charged-lepton Yukawa matrix, which can be simplified to the form of Eq. (20) [35]. Here, one chooses a basis where the up-type Yukawa matrix is diagonal. Note that only contributions enhanced by tan β are included, which is accurate up to the percent level. Furthermore, the threshold corrections to the first two generations of down-type quarks and charged leptons are assumed to be of the same size, respectively. This is a good approximation in many SUSY scenarios provided that the down and strange squarks as well as the selectron and smuon are of nearly the same mass. The corrections in Eq. (20) depend on the specific SUSY scenario under consideration, and need to be computed correspondingly. The parameters η_q and η′_q originate predominantly from gluino contributions in combination with some wino and bino loop corrections, whereas η_ℓ and η′_ℓ are caused by electroweak gauginos. The correction η_A is related to the trilinear soft SUSY breaking term A_u [35]. Note that all parameters η contain the factor tan β. The six parameters used in Eq. (20) can be combined into four, namely a difference combination η̄ ≡ η − η′ and a rescaled angle defined through cos β̄ ≡ (1 + η̄) cos β.
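Schematically, and under the simplifications described above, tan β-enhanced threshold corrections of the type parametrised in Eq. (20) amount to matching relations of the following generic form at μ = M_SUSY. This is the standard structure of such corrections, written down here as an assumption for orientation rather than as a quotation of Eq. (20) itself:

Y_u^{\rm MSSM} \;\simeq\; \frac{Y_u^{\rm SM}}{\sin\beta}, \qquad y_{d,s}^{\rm MSSM} \;\simeq\; \frac{y_{d,s}^{\rm SM}}{\cos\beta\,(1+\eta_q)}, \qquad y_b^{\rm MSSM} \;\simeq\; \frac{y_b^{\rm SM}}{\cos\beta\,(1+\eta_b)}, \qquad y_\tau^{\rm MSSM} \;\simeq\; \frac{y_\tau^{\rm SM}}{\cos\beta\,(1+\eta_\ell)},

so that with the leptonic corrections neglected (η_ℓ ≈ 0, as assumed below) the charged-lepton matching reduces to the usual 1/cos β rescaling, and the only threshold parameter that matters for the RG evolution of the neutrino parameters is η_b, entering through the bottom-quark Yukawa coupling.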
Starting from the basis, where the SM Yukawa matrices Y u and Y are diagonal at M SU SY , the expressions for the MSSM Yukawa matrices at the SUSY breaking scale are given by [34] Y MSSM with the CKM parameters fully included in the down-type quark matching condition. As the parameters η and η q only affect the first two generations of Y d and Y , which are small in comparison, their effect on the RG running can be neglected to good approximation. In other words, there are four parameters needed for the matching procedure at the SUSY breaking scale, but only two out of these, namely η b and tan β, in order to perform the RG evolution to the GUT scale. The authors of Ref. [34] derived the GUT scale MSSM quantities for three different SUSY breaking scales, namely M SU SY = 1, 3, 10 TeV, and provided them in form of data tables at http:/particlesandcosmology.unibas.ch/RunningParameters.tar.gz. From these tables, one can extract the GUT scale values depending on the choice of the parameters η , η q , η b and tan β. The proper translation between the data made available and the Yukawa couplings as well as CKM parameters we employ as input at the GUT scale is given in the captions of Figs. 1 to 3 and 5 of Ref. [34]. In order to further reduce the number of possible MSSM settings, we assume that the leptonic corrections η and η can be neglected. As a consequence, it is η = 0. For tan β ≥ 5, it is − −− → tan β. By this approximations only, we can extract the charged lepton Yukawa couplings, the up-type Yukawa couplings as well as the coupling of the bottom quark. In order to also extract the strange and down Yukawa couplings, we also need to specify η q . Since the RG running of the neutrino parameters, which is the ultimate goal of this work, depends mostly on the bottom quark's coupling, and not on the down and strange quark, we can neglect η q . We could have used a similar argument when setting η to zero as we mostly care for the effect of the τ lepton on the RG running of the neutrino parameters. As a consequence of these simplifications, we are left with the parameter η b comprising the threshold effects and tan β when fixing the MSSM setting. Note, furthermore, that the CKM mixing angle θ 12 and the CP violating phase δ are barely affected by threshold effects and RG running. As a consequence, we use their REAP default values. The CKM mixing angles θ 13 and θ 23 , on the other hand, depend on η q and η b . With the simplifications discussed above, we also extract their GUT scale values from the data tables in http:/particlesandcosmology.unibas.ch/RunningParameters.tar.gz. Based on the data provided by the authors of Ref. [34], we investigate MSSM scenarios with the SUSY breaking scales M SU SY = 1, 3, 10 TeV. Furthermore, we choose tan β = 5, 30, 50 and threshold effects within the range of η b = −0.6 → 0.6. For the latter, the range needs to be adapted depending on tan β to avoid non-perturbative Yukawa couplings. The MSSM settings investigated throughout this work are supposed to be benchmark settings that give an overview on the LS's RG behaviour within the framework of the MSSM. The corresponding initial values extracted as discussed above and handed over to REAP are given in Tab. 10. In case one has a more specific MSSM scenario in mind and aims at a more precise analysis of its SUSY threshold corrections, there is a software extension to REAP called SusyTc that generates the appropriate input values from the SUSY breaking terms [36].
v3-fos-license
2020-12-03T09:07:29.492Z
2020-12-01T00:00:00.000
227253282
{ "extfieldsofstudy": [ "Psychology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0243106&type=printable", "pdf_hash": "1a099ce1c81e860128ab5e4cbf70a427753bde2e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43636", "s2fieldsofstudy": [ "Sociology", "Medicine" ], "sha1": "fd2ef13d9395633ddf6887a30ac22b35a9aeb475", "year": 2020 }
pes2o/s2orc
Violence and hepatitis C transmission in prison—A modified social ecological model Background Transmission of hepatitis C virus (HCV) among the prisoner population is most frequently associated with sharing of non-sterile injecting equipment. Other blood-to-blood contacts such as tattooing and physical violence are also common in the prison environment, and have been associated with HCV transmission. The context of such non-injecting risk behaviours, particularly violence, is poorly studied. The modified social-ecological model (MSEM) was used to examine HCV transmission risk and violence in the prison setting considering individual, network, community and policy factors. Methods The Australian Hepatitis C Incidence and Transmission Study in prisons (HITS-p) cohort enrolled HCV uninfected prisoners with injecting and non-injecting risk behaviours, who were followed up for HCV infection from 2004–2014. Qualitative interviews were conducted within 23 participants; of whom 13 had become HCV infected. Deductive analysis was undertaken to identify violence as risk within prisons among individual, network, community, and public policy levels. Results The risk context for violence and HCV exposure varied across the MSEM. At the individual level, participants were concerned about blood contact during fights, given limited scope to use gloves to prevent blood contamination. At the network level, drug debt and informing on others to correctional authorities, were risk factors for violence and potential HCV transmission. At the community level, racial influence, social groupings, and socially maligned crimes like sexual assault of children were identified as possible triggers for violence. At the policy level, rules and regulations by prison authority influenced the concerns and occurrence of violence and potential HCV transmission. Conclusion Contextual concerns regarding violence and HCV transmission were evident at each level of the MSEM. Further evidence-based interventions targeted across the MSEM may reduce prison violence, provide opportunities for HCV prevention when violence occurs and subsequent HCV exposure. Introduction Hepatitis C virus (HCV) infection is a major public health threat with estimated global prevalence of 1% chronic infection [1]. HCV is a blood-borne virus (BBV), frequently transmitted through unsafe injection practices such as sharing of contaminated equipment, especially in high-income countries [2,3]. Among people who inject drugs (PWID), estimated 52% have detectable antibodies against HCV, and 58% have a history of imprisonment across the world [4]. Multiple factors contribute to the higher prevalence of HCV infection in prison than those in the community with criminalisation of drug use being the major contributor [5]. Globally, HCV antibody prevalence among the prisoner population is estimated to be 15% [6], with surveillance of the Australian prisoner population revealing a 22% prevalence [7]. Injecting drug use within the prison carries a high per injecting episode risk of HCV exposure [8]. This is largely attributed to the lack of access to sterile injecting equipment which leads to frequent sharing of injecting equipment [5,9,10]. 
Beside injecting risk exposures, several non-injecting risk behaviours including physical violence in which blood-to-blood contact occurs [11], tattooing [12], reuse of barber's shears [13], sexual transmission among males via anal sex [14], and in rare occasions vaginal intercourse [15] have been linked to transmission of HCV within the prison setting. However, the contextual concerns around transmission of HCV associated with violence has been poorly studied [16,17]. Exposure to bleeding caused by intimate partner violence were independently associated with transmission of HCV [18]. The evidence of HCV transmission following bloody fist fight has been reported [19]. A qualitative study in Australian prisons pertaining to economics of drug use and blood borne virus transmission examined physical violence, drug debt and potential HCV transmission as a complex interrelated issue [20]. Holistic understanding beyond individual risk factors for HCV transmission creates an opportunity to craft appropriate organisation-wide preventive strategies. The modified social-ecological model (MSEM) is a comprehensive approach to identifying contextual concerns regarding disease transmission considering individual, network, community, public policy levels, and the stage of epidemic [21,22]. The individual level of the model includes biological or behavioural characteristics associated with the vulnerability to acquire or transmit the pathogen and developing infection [23,24]. At the network level, social networks are considered which include interpersonal relationships such as family, friends, neighbours and others that directly influence health and health behaviours that might predispose the transmission [24]. Community level of the framework considers the cultural, economic, religious, geographic lines, prison walls, community norms, stigma, race, code of conduct in prison, or any combination that may bind communities [25]. Policy level examines policies and laws from the stakeholders' perspectives and subsequent decision on programmes to prevent disease transmission [26][27][28]. Ultimately, the epidemic level is determined by disease burden across different settings [26,29]. The framework has been utilised to characterise BBV infections, including HIV infection, with the aim of guiding development of HIV prevention strategies [21]. Centres for Disease Control and Prevention, United States (US CDC) has developed a technical package for violence prevention based on the social ecological framework [30]. The MSEM was previously adopted to identify the concerns among vulnerable population groups (i.e., injecting drug users and men who have sex with men) for HIV prevention [21]. Identification of concerns might help to implement further research on specific levels for increased understanding and intervention responses. The MSEM framework enabled useful insights into the complex public health problem of HCV transmission in prison. The objective of this study was to describe the context and concerns among prisoners regarding HCV transmission in prison associated with violence using the MSEM framework. Methods This qualitative study was conducted as part of a broader prospective cohort study, the Hepatitis C Incidence and Transmission Study in prisons (HITS-p) [31,32]. The objective of HITS-p was to estimate HCV incidence and identify risk factors for transmission in the prison setting. 
Participant enrolment commenced in 2005 across 30 correctional centres in New South Wales (NSW), Australia and concluded in 2014. A total of 590 persons in prison were enroled in this cohort study [9]. Participants enrolled in the HITS-p study were eligible for the qualitative sub-study. The objective of the qualitative sub-study was to understand the broader contexts and concerns regarding HCV transmission in prison. Thirty participants were recruited in the sub-study. Among them, a subset of participant interviews describing contexts and concerns regarding violence in prison and HCV transmission was analysed for this study. The remaining participant interviews focused on decisions about hepatitis C treatment. Corrective Services NSW, Justice Health and Forensic Mental Health Network, and University of New South Wales human research ethics committees provided approval for the HITS-p cohort study, including the qualitative sub-study. Prisoners aged 18 years or above who reported either a history of ever injecting drug use or had non-injecting risk behaviours (including tattooing, piercing or fighting), and had a documented negative anti-HCV test result in the 12 months prior to enrolment were eligible to participate in HITS-p cohort study. Prisoners with detectable antibodies against HCV, insufficient English or current psychiatric disorder to preclude consent, or were pregnant, were excluded from the study. The qualitative data explored the complex and inter-related nature of practices and environments surrounding HCV risk and potential prevention strategies among prisoners. An interview-based method was chosen to allow participants to fully discuss and explore the context of violence and potential blood-borne viruses including HCV transmission by physical violence in prison. During HITS-p cohort study, all participants were screened for HCV antibodies and viraemia; and then monitored every 3 to 6 months via blood testing. An interviewer-administered questionnaire was completed at each visit to record both injecting (e.g., frequency of injecting, frequency of sharing/use of personal equipment) and non-injecting risk behaviours (e.g., tattooing, piercing and physical violence). During 2013-2014 HITS-p study participants were invited to be enrolled in this qualitative study. The qualitative study methods and results adhere to the Consolidated Criteria for Reporting Qualitative Research (COREQ) [33]; see Supporting information S1 COREQ checklist. The psychology-trained HITS-p research nurse (LM), who was engaged in the HITS-p study for several years during which he collected blood samples and conducted behavioural surveys with HITS-p participants, also informed the prisoners about the qualitative study and offered the opportunity to participate. The interviewer (LM) was trained and supervised by the experienced qualitative social researcher (CT) of the HITS-p study. All participants were selected purposively to represent injecting and non-injecting drug use, and risk exposures among prisoners with and without HCV infection. The nurse explained the purpose of the study and the prisoner's rights to accept or decline the offer (a decision to not participant in the qualitative component had no bearing on their involvement in the larger HITS-p study or their relationships with Corrective Services NSW or Justice Health & Forensic Mental Health Network). 
When the in-depth interviews were scheduled and the prisoners attended, the nurse reiterated the ethical principles of informed consent and confidentiality, withdrawal without penalty, and the importance of avoiding discussion of specific serious incidents which would require legislated mandatory reporting to authorities. Written informed consent was obtained. To protect participants' privacy, interviews were conducted in a private clinic room in the absence of correctional officers. Participants received AU$10 into their inmate account for their participation in the interview through the approved prison inmate banking system to compensate for their time and effort in completing the research interview. This amount was recommended by the research ethics committee as being sufficient to constitute 'reimbursement for time and convenience' (as was stated on the consent form), but insufficient to provide a strong or coercive incentive to participate. In practice, these monies are typically expended for 'buy-ups', that is, the purchase of food or toiletry items not otherwise available in the prison.

An interview guide was developed by the authors, who are experienced in prison-based health research. Probes were used to facilitate discussions. The interview schedule included topics such as HCV risk perceptions (what risks are perceived by inmates; what risks can be compromised or negotiated and what cannot); participants' knowledge and perception of susceptibility to HCV infection, as a highly prevalent BBV, especially among PWID in the prison setting; and injecting and non-injecting risk behaviours, including tattooing and violence. During specific discussion regarding violence, the types of violence and the situations that lead to violence, as well as prisoners' concerns around violence, were explored (S1 Appendix). Importantly, how HCV transmission risk was configured in relation to this violence in prison was also discussed. Demographic information was collected from all participants. The duration of the interviews ranged from 30 to 70 minutes. At the conclusion of each interview, participants were provided with written information about HCV, an opportunity to discuss any further issues with the research nurse, and information about access to the Prison Hepatitis Infoline (a toll-free service connecting people in custody with the state's community-based hepatitis organisation).

Interview transcripts were assessed for data saturation, with no new themes emerging in the final interviews. The responses regarding violence in prison from this subset of participants achieved saturation, hence the focus of interviews conducted with the remaining participants shifted to HCV treatment. Interviews were audio-taped and transcribed verbatim. Transcripts were checked for accuracy against recordings, de-identified and cleaned. The data were then read closely, and a number of themes were identified as relevant to the research questions, specifically relating to violence in prison. The research team then collaborated on the construction of a "coding frame", a set of organising, interpretive themes to aid analysis [34]. CT and LM developed the first-round coding framework. The coding frame was then used to organise interview data within NVivo 12 (QSR International Pty Ltd. Version 12, 2018). Memos were written between close reading of the transcripts and development of the MSEM coding framework. HS and LL developed the secondary coding framework to elicit responses pertaining to the MSEM.
LL conducted secondary coding of random samples to ensure consistency and to establish interrater reliability. The primary or initial coding frame and the MSEM coding frame are separate projects with distinct analyses to interpret different aspects of HCV risks within the prison setting. However, the importance of violence was a primary interest of the project and is examined here in a secondary analysis using the MSEM. The analysis was informed by both a deductive and inductive approach to cover the contexts and concerns regarding HCV transmission following violence considering the MSEM [35]. Specifically, each participant's response was reviewed to examine the specific circumstances of violence in prison, and their concerns regarding HCV transmission through violence at every stage of the MSEM. Consideration of the stage of the epidemic in the social ecological framework highlights the fact that the high burden of HCV infection in the prison setting, impacts on the consequences of prisoner violence (with a heightened risk of transmission) and the importance of recognising this context in prevention strategies. Each aspect of the thematic analysis, that is, the interpretations and meanings drawn from the interview data was critically examined and summarised along with supporting quotes. Quotes are presented by participant number, gender, and nature of risk behaviours-injecting and/or other risk behaviours, including tattooing, piercing and physical violence. Results Twenty-three people in prison participated in this study, eight (35%) of whom were female. During June 2014, 2591 (7.7%) prisoners were female in Australian prisons [36]. The median age of participants was 27 years; range 22-51 years. Thirteen (57%) participants were white and 9 (39%) participants identified as Indigenous. One participant's racial background was not provided. Of this group, 10 had no detectable HCV antibody (not exposed to HCV) at the time of interview, 5 had chronic HCV infection (persistent infection for greater than six months) and 8 had recent HCV infection (Table 1). All acute and chronic HCV infections had been acquired during incarceration. Among 30 participants of the broader qualitative study, the final seven participant interviews focused on decisions about HCV treatment and did not include discussion of violence as a risk factor for HCV transmission. Information is not available in relation to participants refusing invitation to participate. Risk factors and behaviours varied among the participants. Ten participants reported only injecting drug use; nine participants reported both injecting drug use and other possible exposure to blood (via tattooing, piercing, violence or haircuts) and four participants reported only other non-injecting related possible exposure to blood (via tattooing, piercing, violence, or haircuts). Physical violence among participants was a clear concern for potential transmission of blood-borne viruses, including HCV. Importantly, the risks and concerns were varied across different levels of the social ecological model-individual, network, community, public policy and stage of epidemic. The framework provided understanding of risk contexts, and of concerns around violence and HCV and other blood-borne pathogen transmission following violent episodes. Individual-level contexts At the individual level of the socio-ecological framework, the concerns regarding HCV transmission expressed were related to the nature of violence encountered by the prisoners. 
Participants reported different types of violence in prison. The minimum level of violence was verbal aggression among prisoners, which was reported by 18 participants. In the context of increasing disagreement of opinion, 9 participants reported that verbal aggression escalated to physical violence. A minority of participants (n = 3) reported seeing others involved in almost daily physical violence in any part of the prison (wing, yard, and cell). Sometimes violence involved only two protagonists, while other episodes involved several prisoners. Physical violence included boxing, stabbing, slashing, or 'blading' (with a razor). These events occasionally required hospital admission for one or more of those involved. In rare occurrences, some participants reported being aware of violent altercations that had resulted in the death of a prisoner. In addition, a small number of respondents expressed their concern regarding permanent physical harms, including loss of teeth and facial disfiguration. Sexual violence, such as rape (which may result in damage to the skin or mucosa and subsequent blood exposure), was not raised by participants when discussing violence. Yeah, I've seen all sorts. I've seen, you know, I've seen blokes get stabbed. I've seen all-in brawls. I've seen one-on-one fights. I've seen arguments. I've seen, you know, I've seen fights between officers and inmates. I've seen pretty much all forms of violence that there is. I busted his mouth, smashed his tooth in, and it ripped up all my knuckles. Women participants reported either observing or hearing about particularly violent acts between prisoners. I've seen scissors go into a girl's temple. I've seen people get stabbed. I've seen hot water get thrown on somebody. I've seen a lot of bashings. I've seen broomsticks get thrown over somebody's head. Vacuum cleaners get thrown. I've seen chairs get thrown over somebody's head. Obviously just fists. A lot of fists. A screwdriver. Home-made shivs [a knife made in prison]. A few months ago here there was three ladies who went in. Girl got a drop and she-[She got drugs brought in]. They held her in the room, bashed her and then went up inside her to get the drop but there was no drop up there. She already had a mate holding onto it because there was out in the air that these girls were going around doing this to women in here. [Interviewer asked whether she was holding the drugs anally or vaginally]-Vagina. A majority of participants reported verbal disagreements quickly escalating into physical violence as a result of anger. With such altercations occurring sporadically and 'in the heat of the moment', people in prison do not have opportunity to contemplate whether their opponent has any blood borne viral infection or to consider the potential risk of acquiring infection. However, a few participants expressed self-restraint, holding back from fighting with other prisoners, because of their concern regarding HCV exposure. Everyone makes an angry decision or snap [participant clicks fingers] on the spur of a second, but it takes insight to, you know, to, to sit there and go, "Fuck, if I [fight] this bloke, I know I'm gonna hurt this bloke. But, but if I split him [cause an open wound] then I split meself in the process and I get, does he have, does he have a blood-borne virus that I may then contract?" you know. It doesn't normally get to that point. 
People in here aren't, I can't say aren't capable of getting to that second level of thinking but it's, people let their anger control their, their level of thinking. Like I've been in situations before where, where I'm arguin' with a bloke that I know is a known scumbag, you know. And I mean he's mouthin' off and carryin' on. I'd love more than anything to just go and punch him in the head but, even to look at him, he looks like a disease-riddled scumbag, you know. So, you know, I'll hold back 'cause I don't, wouldn't want his blood to get-some scabby-lookin', fuckin' toothless fuckin' thing, you know. Like I don't want it, you know. I don't want his blood anywhere near me. So just try and shake it off or try and ignore it the best way you can. The concern regarding BBV transmission, including HCV, through physical violence was variable. Some participants were concerned about the potential transmission. One respondent described an incident in which he understood two people had contracted HCV following fighting. However, concerns of risk of HCV transmission through fighting were not consistent among all prisoners. Some prisoners had not considered, or were unaware, that HCV transmission can occur through fighting. I've only known two guys [who] have contracted hep C through punch-ups. And, or through blood-to-blood contact through combat, yeah. So it's not that, you know, like needles take the cake when it comes to hep C spread. Like needles is the top, the top one but you do run the risk. If you bust your knuckles or your mouth gets busted by someone and blah, blah, blah, blood transference. If you get hit hard enough and both people are bleedin', it's gonna push the blood into your open wound. It's gonna get into your system. You're gonna get infected. There's a high chance you won't but there's a high, you know, there's still the chance that you will. I thought about that before I come to jail. I had a friend that always worries about getting into a fight and catching hepatitis C. I don't know. I never really, it never really crossed my mind that you could catch it from fighting. Use of protective gear during fighting was inconsistent among participants. There was some mention of wearing protective gear during physical altercations. However, this required preparedness and planning. Planned physical altercations were much less frequent than spontaneous altercations. The majority of the blokes I've seen fight, they've always got a couple of pairs of gloves on and, you know, they'll always take precaution. And 90 per cent of them don't have hepatitis you know. And they're good blokes but they're big men that will fight the right battle. The decision about fighting with an individual was occasionally influenced by the HCV status of the opponent, with at least one participant indicating that a person with known HCV infection may be immune to violent altercations as potential opponents seek to protect themselves from possible exposure. At the time, when you're punching on, I don't think girls even think twice about it. But there are a lot of fights that don't happen because of the fact that you know that girl's got hep C and you don't wanna be splitting your knuckle on their teeth or anything like that, or splitting them open and, you know, damaging your knuckles or hands in any way, or, to contract it. And so having hep C sometimes saves girls from getting a beating. 
I mean, but then again, I do know girls that have gone and put several pairs of gloves on before they've gone up and hit a certain person because they know she's got hep C. They usually just go and put the gloves on, and go and have a fight, yep. Network level At the network level, the interpersonal relationships and social network among participants influenced some activities and perceptions that made them vulnerable to violence and subsequent HCV transmission. Participants' networks included those both within, and outside, prison. The major issues that led a participants to be vulnerable to violence in prison, included drug involvement, and reporting other prisoners' infractions to correctional officers, known associates or adversaries of incoming prisoners (including from community and prison transfers). In addition to the physical injury to participants, these activities ultimately framed the social standing of that participant, making them vulnerable to be 'stood over' or subjected to intimidation including violence throughout their incarceration. Drug use in the prison is costly and reliant on social networks [37]. As such, a strong coherent network is crucial to maintain drug and equipment supply in prison. However, drug dealing in prison makes people in prisons vulnerable to violence. A majority of the participants (n = 15) reported drug debt and the associated need for intimidation as a major cause of violence in prison. Drug debts is a big one, you know. Drug debts arise, people can't pay the debt. The guys know they're not gonna get the money-they smash him-but it's out of ego and pride. They have to do it 'cause, if they don't do it, they'll be perceived as weak. If they let him get away with it, even though he's a raging junkie, the guy's just, he's a fuckin' mess, it's like, "Oh well, he can't pay. He's gotta get smashed." And like guys might not go on with it so hard. Reporting others' wrongdoings in prison to correctional officers (such as drug dealing inside prison or an incidence of fighting among persons in prison) was regarded by participants as warranting physically violent punishment within the network. This behaviour was considered as a breach of trust to the other prisoners, which was therefore a violation of the largely unspoken inmate code of conduct [38]. The number two [considering number one cause of violence in prison as drug debt] is somebody, you know, dobbin' on someone. Talkin' out of school or, or givin' up somebody else. The network level context of violence was also influenced by factors outside the prison, such as previous disputes and grievances between existing and incoming prisoners. These ongoing conflicts may escalate into physical violence. Now say if something happened on the outside that you got into an altercation with and you come in jail, now that person's in jail. You worry about, "Is he here? Am I gonna get into a confrontation? Are we gonna fight?" Do you know what I mean? That's one way of worrying about violence. Community level The prison community was described as being stratified into groups based on racial identity and social status of the prisoners. People of specific community groupings, from same geographic region or cultural backgrounds were often bonded together inside prison. Within these groupings, people often followed a similar code of conduct. However, hierarchies and conflicts existed between groups, whereby the influence of one group or community over another made some groups more vulnerable to violence. 
Interestingly, some respondents indicated that the initial cause of violence may be issues like drug dealing, which then escalated to become racial conflicts after involvement of people in prison of the same racial group, typically their peers, with whom they had already formed allegiances. However, in a few occasions, the racial influence was so strong that one respondent reported to be victimised for having previously participated in racially-motivated riots in the community [39]. It was like an all-in brawl. Brothers [Aboriginal people] against Asians. Aboriginals against Asians. All over drugs that was. Like three of 'em got taken out of here on stretchers. Three or four of 'em. [. . .] There could have been about 13, 14 people involved. When [violence] breaks out, it breaks out. (Respondent 13, male, incident HCV positive, IDU). I nearly got killed over the [racially motivated riots in the community], so I nearly got stabbed in the throat-because they knew that I was a rioter. The groups inside prison were influenced by some motorcycle gang members, who were engaged in drug deal. The network level drug dealing was linked to the community level influence of the groups which predisposed physical violence in prison. Guys will come into the wing and it's like, "Oh yeah, well he's from a different bike group," or, "He's from a rival group around. . ." Like, cause there's a lot of gangs around south-west Sydney and, and around Sydney itself, and a lot of it's gang-related and drug-related. Most are bashings and stabbings again related, drug-related in gaol. A social hierarchy exists within the prison based on the person's crime, whereby people who have committed specific crimes considered abhorrent, notably sexual assault of children, are vulnerable to violence from other prisoners. I've seen a few punches thrown but here it's lethal. They're, they're wild men. They'll jump on you, even 'til you're not movin'. They'll kick the shit out of ya. I seen it, you know. A week ago that happened to two blokes at once. And 10 blokes just got into 'em. And they didn't walk out. They found out they were in for child sex offences or something. Plus you get-heard that, you know, in the main, they'd have been hidin' there for 16 months and it's only just come out, but it come out. And yeah, they didn't want them in the yard so they got rid of them the only way they could. They could go and ask the officers to move 'em. The officers aren't gonna move 'em. So it comes to violence to get rid of 'em, and they leave their mark on them. Policy and law level Policies and laws relating to the prison setting are designed to control liberties, and can be enacted as a means to dissuade violence between people in prison and towards others. As punishment for perpetrators of repeated violence in prison, correctional officers often lock inmates in their cells for a period (lock-down) or place them in segregation (i.e. a separate cell with no opportunity to interact with other prisoners) and the security status (classification) made more restrictive. Participants in segregation are also deprived of their standard prison privileges including phone communication with family and friends. These increased restrictions, resulting from violent altercations, can have significant social and emotional implications for people in prison. Oh if it kicks up again, there could be tension in the yard. Can be anything from gettin' locked in . . . You know what I mean? 
If it's a big enough altercation, they could, we could get locked in, locked down for a week. You know what I mean? You don't get phone calls-Oh well there goes your privileges like phone calls, anything. You know what I mean? Just in general. Participants with an imminent parole hearing responded that they did not want to get into physical altercations, because such interactions might affect their chance of being released on parole (thereby reducing the time they are incarcerated in prison). A spurious allegation of fighting could also be used to defame a participant. [What are the things people worry about violence?] Gettin' tipped. Losin' their classification. Yeah, like gettin' sent to another gaol, you know. That's a big thing. The prison that you're at currently could be close to family, close for your girlfriend, you know. It could be good for visits. You get into a fight and then get sent somewhere like [a regional prison], then you're up shit creek. You get no visits then. So that could be a main thing. Like you could have parole coming up, you know. More charges laid on you could mean that you could have difficulties comin' up for parole, you know. But the main thing is, is people just don't want the drama. They don't wanna get hurt or whatever, or they don't want the, you know, the bullshit that comes afterwards. Without the screws [correctional officers] finding out [about violence against other prisoners] because it will go against my name when I go up for parole in five months. You know, having a violent charge already it won't look good, you know. The physical structures of prisons regularly include surveillance equipment such as cameras. This had implications for where and when violence occurred. You sort of get it [the violent incident] over and done with. The guys who are real, real proper, they won't sit there, "Well come in the cell, come in the cell." They don't care where the guy is. If he's sittin' in the middle of the wing in front of the cameras, whatever, he'll just run straight down. If he's got a problem with him then and there, he'll just go straight up and bang! (Respondent 18, Male, HCV negative, other risk exposed). Prison policies mandate that disinfectant be used for cleaning up blood spills, such as spills following violence [40]. However, the supply and availability of proper cleaning equipment and disinfectant chemicals were not always optimal across all prisons. One time we had the crystals [granules to aid in solidifying fluid for clean-up], another time we didn't-we just had to use bloody toilet paper to sop up most of the blood that was on the ground. Like this guy got stabbed once and then we were sweepers [prisoner cleaners] in the wing and the screws [officers] come in, and they said, "All right, here's the blood clean up kit." And you have to go and put little crystals [granules] on all the blood that's on the ground. And you've gotta glove up, sweep it up then Fincol [a disinfectant, bleach alternative] it out. See that's the other time is. . . inmates are expected to clean up the mess after. Discussion This qualitative study has identified contextual insights regarding violence at different levels of the social-ecological framework, describing perceptions of HCV transmission risks among those who are incarcerated. Our findings showed that physical violence in particular was inextricably intertwined with unique socio-ecological factors in the prison setting. 
The risk factors across the framework were complex and inter-related, with individual-level risk factors impacting on, and impacted by, the interpersonal network, community, and policy levels. Our study provides a unique integrated opportunity to frame the intricate context of HCV transmission in prison with violence as a key factor.

At the individual level of the framework, there was a variable degree of awareness and concern about the risks of HCV transmission associated with violent behaviours. There was considerable concern among some participants, evident in the practice of using protective gear where possible, such as in fights which were planned ahead. By contrast, participants also raised concerns about the impromptu nature of fights, which frequently occurred on the spur of the moment without the potential for personal protective equipment such as gloves. Our findings were consistent with another qualitative study in NSW correctional centres where people in prison were asked about concerns around reinfection with HCV whilst incarcerated [16]. The participants in that study perceived the risk of acquiring HCV infection through blood exposure during physical violence as comparable to the risk from injecting drug use. Previous studies have identified higher rates of violent victimisation among female prisoners than male prisoners [41,42]. Within our research, women reported more episodes of violence than male participants. One instance described highlighted the vulnerability of women who smuggle in drugs via insertion. No similar occurrences were described among male participants. Several previous studies have explored the contexts and risk factors for engaging in violence in the prison setting irrespective of gender; these include younger age (≤21 years), being unmarried, prior incarceration, prior violent behaviour, use of drugs, depression or personality disorder, and gang involvement [43][44][45][46][47][48]. The context portrayed at the individual level of our framework reflects a similar socio-demography. Our findings suggest that fighting is viewed as inevitable, and that there are few means to adequately protect oneself against HCV transmission. Consequently, HCV screening should be routinely offered to all people in prison who have engaged in physical violence in which blood exposure is likely to have occurred, irrespective of other risk factors.

At the network level, drug debt was the major risk context that predisposed to violence. This is in line with other studies which have found drug debts to be a major contributor to violence in prisons [49][50][51][52][53]. Illicit drug use and drug dealing are inherently linked to the criminalised activity that leads people to prison [54]. In this regard, coupled with risk factors for engaging in violence (e.g., incarceration, use of alcohol and/or drugs, history of violence), prison acts as a hotspot for violence and potential blood-borne virus transmission [50]. Our data revealed specific instances of violence where the network of influence involved multiple prisoners and possibly also individuals external to the prisons. Other risk behaviours, such as disclosing information to correctional officers ('dobbing' on others), made people in prison vulnerable to violence due to violation of the unspoken prisoner code of behaviour [38]. These findings suggest that interventions against violence to reduce HCV transmission through networks should be different from those targeting individual-level factors.
For example, legislation regarding drug supply and needle and syringe exchange in the prison setting should be addressed at the policy level, rather than at the individual or network level.

At the community level, racial and other social influences on violence were evident, apparent across multiple ethnic groups such as those of Aboriginal or Arabic backgrounds, as well as motorcycle gang members. Previous studies have identified comparable ethnic and socially driven violence amongst people in prison [55]. To prevent racial conflict, ethnic clustering (also known as 'yarding') has occasionally been implemented in NSW prisons. For example, at Goulburn Correctional Centre, yard 6 is allocated to prisoners of Asian background, yard 7 to Islanders, and yard 8 to Arabic prisoners [56]. However, this ethnic clustering is done on an ad hoc basis and is not maintained in a well-organised manner. Similar racial segregation has been applied in California prisons [57]. However, separating people in prison based on race is an ethical concern; a court order in California stipulated that racial segregation can only be imposed for an intermittent period in special circumstances, such as when an imminent racial conflict in prison seems likely [58]. Moreover, previous research which explored differences in social capital among Aboriginal and non-Aboriginal men in prison found that bonding and linking social capital varied between the groups [37]; hence, bonding social capital, particularly among Aboriginal men in prison, could be utilised to promote appropriate health interventions. Collectively, these findings suggest that race can be a source of violence within prison, yet racial connections can also be a resource for health promotion. Prisons should consider the ways in which race manifests within individual prisons to ensure that health benefits outweigh harms to health and wellbeing (such as may arise from unnecessary segregation).

In addition, there was clear recognition that some crimes which had led to incarceration, notably sexual assault of children, were a trigger for violence. Furthermore, as per the inmate code of prison sub-culture, sex offenders are ranked at the bottom of the prisoner hierarchy and are liable to be beaten or killed; among those offenders, child sex offenders are ranked lowest [59,60]. These findings suggest that interventions against violence to reduce HCV transmission at the community level should be different from those targeting individual-level factors. For example, people in prison convicted of sexual assault of children could be housed separately, as opposed to individual-level measures such as enhanced provision of gloves and disinfectant to reduce blood exposure during fights. Although the correctional authorities in NSW prisons occasionally practise separation of sex offenders, segregation of these prisoners upon prison entry might reduce violence within the prison.

At the policy and legislative level of the framework, previous studies have explored the utility of rewards in the form of shortened sentences, for which people in prison must meet some minimum standard of good behaviour [61,62]. Fulfilling this standard imposes an obligation on prisoners, such as abstaining from drug use or avoiding violence in prison [61].
By contrast, the prison regulations regarding people in prisons implicated in violent behaviours which impose additional punishments, such as segregation, deprivation of privileges, and deferred parole (and hence prolonged incarceration) may reduce the occurrence of violence among some prisoners [63,64]. In addition, provision of personal protective gear, including disposable gloves for cleaning and supply of disinfectants for possible blood borne virus transmission prevention in different prisons are widely varied across the globe [65]. The variable nature of choosing protective gloves during physical violence in prison identified in our study suggests that adequate supply of protective gloves and disinfectants may reduce potential HCV transmission from bloody violence. However, as indicated by participants, the use of protective gear was not always an option as some fights were spontaneous, occurring on-the-spot without forward planning or opportunity for preparation against possible blood exposure. Policy-level decisions regarding supply of disinfectant may help the prevention of blood borne virus transmission in prisons. There should be constant supply of proper disinfectants and protective gear in prisons in a location accessible to prisoners. This study had limitations. Although participants did not discuss the stage of the HCV epidemic in the prison setting, several HIV researchers have demonstrated how the epidemic stage of HIV is reflected through the individual, network and community level of the framework [21,26,66]. As expected, there was limited discussion by participants of the policy level implications on HCV transmission as this study only included prisoner participants and did not include other stakeholder participants such as correctional administrators and policymakers. Stakeholders, including prison healthcare providers and policymakers might consider appropriate strategy from the broader context of the intricate framework of violence in prison, e.g. prison health education and strengthening harm reduction programmes could be beneficial. At the time of data collection, women represented slightly less than 8% of the Australian prisoner population [36], though women participants comprised 35% of participants within our study. The small qualitative study findings are not generalisable to other countries of the globe, especially considering the individual, network and community practice and concerns in multicultural Australian prisons. However, stratified purposeful sampling including prisoners who either had a history of ever injecting drug use or had non-injecting risk behaviours (including tattooing, piercing or fighting) and acute, chronic and negative HCV infection status of the prisoners might ensure better representation of the prisoners' perspective on violence and HCV transmission risk in the Australian prison setting. Intervention strategies considering the complex risk contexts should be integrated at the policy level to improve correctional facilities' responses. Hepatitis C screening should routinely be offered to the people engaged in fighting. Like any part of the globe, illicit drug use and drug dealing, factors associated with violence, are unavoidable. The correctional authorities should ensure adequate supply and access of bleach or other disinfectant preparations to the people in prison, which might de-contaminate blood spills following violence, and injecting equipment during sharing. 
Enhanced communication and engagement between the prison population and the correctional authority, together with timely segregation, might reduce violence; further studies might inform what constitutes appropriate segregation. Programmes organised by correctional services with inmate representation, such as an inmate development committee [67], and violence prevention programmes should take the initiative to prevent violence based on the outlined framework. Ultimately, evidence-based interventions, particularly cost-effective public health interventions targeting violence at every level of the MSEM, should aim to reduce transmission of HCV in the prison setting.
v3-fos-license
2022-03-11T16:17:57.746Z
2022-03-05T00:00:00.000
247368882
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.ajol.info/index.php/aas/article/download/222424/209867", "pdf_hash": "5eea9887375f7f45b3e799277ae46b33dc3f9fce", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43639", "s2fieldsofstudy": [ "Medicine" ], "sha1": "433c156f8e61cd856376d14396ecd258e39b2da5", "year": 2022 }
pes2o/s2orc
Transarterial Chemoembolization and Microwave Ablation for Early Hepatocellular Carcinoma in a Nigerian The West African subregion has a high number of cases of hepatocellular carcinoma (HCC), and this is partly because of a lack of expertise and health infrastructure for the delivery of effective locoregional therapies for patients who present with early disease. This report documents the successful treatment of a case of early HCC in a Nigerian patient using a combination of transarterial chemoembolization and microwave ablation techniques. We showed that, despite difficulties, such techniques are possible. It is our hope that this publication will help stimulate discussion, policy changes, and other alterations necessary to establish beneficial high-end techniques for the alleviation of the health burden of HCC patients in Nigeria. Introduction Serum sodium abnormalities are common in traumatic brain injury (TBI), and are usually associated with the primary brain injury or interventions such as hyperosmolar therapies used in the management of raised intracranial pressure (1,2). Hypernatremia, defined as serum sodium ion concentration >145 mmol/L, can result from a primary brain injury resulting in central diabetes insipidus or as a result of hyperosmolar therapies such as the use of hypertonic saline (3,4). Hypernatremia is associated with increased mortality, longer hospitalization and greater hospital costs (3)(4)(5). Hyponatremia, serum sodium ion concentrations <135 mmol/L, may also occur after TBI and contributes to secondary brain insults by causing cerebral edema, seizures, and depression of consciousness (6). Hyponatremia in TBI is usually caused by cerebral salt wasting syndrome and syndrome of inappropriate secretion of antidiuretic hormone (7). Severe TBI, defined as Glasgow Coma Scale (GCS) ≤8, is a major cause of death and incapacity worldwide and is associated with huge direct and indirect costs to the public (8)(9)(10). In addition, the World Health Organization projected that by 2020, TBI would be the main cause of death and disability (11). TBI is more prevalent in developing nations because of the increasing number of road traffic accidents (12,13). In our setup, most hospital-based studies have revealed that severe head injury is associated with mortality of >50% and poor functional outcomes (14)(15)(16). These bad outcomes may be associated with secondary brain insults such as electrolyte abnormalities that arise from inflammatory and biochemical cascades initiated by the primary injury insult to the brain (9,17,18). This study aimed at determining the incidence of serum sodium ion abnormalities in severe TBI patients, and their association with specific clinical and radiological parameters. Case report A 60-year-old man with a 5-month history of unintentional weight loss, anorexia, and malaise was referred to the hepatology clinic after an abdominal ultrasound scan had revealed a hyperechogenic, intrahepatic lesion on a background of cirrhotic disease. He had no family history of liver ailment nor was there any history of alcohol ingestion. A 15-year history of well-controlled type 2 diabetes mellitus was noted. Two years earlier, he had been informed of a liver "condition" after he underwent computed tomography (CT), but was not followed up with any consultation or therapy at that time. A review of the previous CT scan showed that it had reported features of cirrhosis and portal hypertension as well. On examination, he was lucid and not obese. 
Hepatitis A, B, and C screens were negative, and his hemoglobin, transaminase, and alphafetoprotein levels were normal. Further evaluation with a repeat CT image confirmed the cirrhotic disease and a 1.5x1.5x1.9-cm arterially enhancing lesion in segment VIII with associated portal venous washout on delayed imaging ( Figure 1). Figure 1. CT scan showing cirrhotic disease and an arterially enhancing lesion in segment VIII The Liver Imaging Reporting and Data System score of the mass was LR-5. There was no ascites. His Child-Pugh score was 6, indicating the least severe, compensated cirrhotic liver disease. In view of the Barcelona Clinic Liver Cancer (BCLC) classification of stage 0 (early HCC), the option of liver resection along with the possibility of other locoregional interventions that had potential for cure, were discussed with the patient. The patient refused surgery. Thus, an interventional radiology (IR) specialist was consulted, who recommended dual therapy with simultaneous TACE and MWA for this case. The intraoperative selective arteriogram of the segment VIII branch of the right hepatic artery confirmed that the tumor was highly vascularized. This access allowed for the lipiodol and doxorubicin mixture to be injected under fluoroscopic visualization to this branch of the right hepatic artery. Post-embolization arteriography of the treated vessel and real-time ultrasound scan of the liver showed stasis of blood flow within the treated vascular territory and staining of tumor with lipiodol. Subsequently, a 14-gauge ECO medical MWA needle was advanced into the tumor from a left approach. Ablation was then performed initially for 3 minutes, then for an additional 1 minute. Track cauterization was performed. Appropriate cloud-type pattern in real-time post-ablative ultrasound was seen following ablation ( Figure 2). The patient returned after 10 weeks with a follow-up magnetic resonance imaging (MRI) (Figure 3), which showed a pre-contrast ablation cavity in the same area of the previously treated lesion and T1 hyperintensity around the tumor, representing hemorrhage and ablation margins. The post-contrast image shows an ablation cavity in the same area as the previously treated lesion, with no evidence of residual enhancement. Discussion In Africa, a major contributor to the high health-related morbidity and mortality associated with HCC is the prevalent phenomenon of late-disease presentation along with problems related to the availability, accessibility, and affordability of interventions for the few who do present early enough (2,4). This, to the best of our knowledge, is the first time that such a potentially curative combination therapy was administered, and with successful results, in a patient with early HCC in Nigeria. Hepatitis B infection and alcohol consumption are the commonest factors associated with HCC causality in our environment, but these two elements were not found in this patient (5). The long-term history of diabetes mellitus might point us to an alternate underlying cause of his cirrhosis being non-alcoholic fatty liver disease, which has been playing an ever-increasing role in HCC on the continent (5). Whatever the underlying etiology, the finding of HCC on a cirrhotic background usually carries poor prognosis for many black Africans (4). The importance of early detection on the impact of disease cannot be overemphasized, as pointed out by guidelines for HCC management from major hepatology/oncology bodies (6,7). 
Surveillance remains key to the detection of early HCC, which is usually asymptomatic and whose clinical picture is deceitfully protean. Studies among populations in developed countries have shown that surveillance is associated with improved overall survival through detection of HCC at early stages, when patients' conditions were amenable to potentially curative treatments (8,9). Some researchers have thus advocated for its widespread adoption in Africa (4). The authors note that Nigeria, as is the case for many other sub-Saharan countries, has yet to develop any such structured surveillance schemes. The great impact of such health policy gaps is illustrated in this index patient who had been diagnosed with cirrhosis 2 years earlier but had not done anything to address the potential for HCC. Various schemes and modalities have tried to classify HCC using different parameters, but perhaps the most widely deployed is the BCLC classification scheme (10). A particular benefit of its widespread use is that it is more of a treatment guide and clinical directory as to what therapeutic options are available for use in which stage of HCC (11). The good news for our patient was that he qualified for liver resection surgery and other possible locoregional therapies. Since he refused the surgery, management for potential curative therapy was offered with locoregional interventions, which, when carried out properly, have been shown to be equally effective (10,12). It is noted that the ablative therapies are being used as first choice in many centers around the world for such small lesions (13). Either of the two local ablative methods, MWA or, more commonly, radiofrequency ablation (RFA), is usually deployed for BCLC stage 0 lesions. Although it appears that MWA shows a more favorable profile in terms of duration of procedure and ability to generate higher and more efficient temperatures, neither modality has much superiority to the other (3,14). However, specific scenarios appear to favor the use of one over the other. The presence of a large vessel near the hepatic mass is one such scenario. In these cases (as in our patient), the likelihood of heat dissipation by conduction reduces the efficiency of RFA, and thus, MWA is usually preferred (14). MWA or RFA can be used in combination with TACE, but views on this topic differ. On the one hand, some consider that the addition of RFA has no advantage to TACE therapy for HCC lesions <3cm perhaps because RFA alone can achieve complete necrosis, making TACE a redundant addition (15). On the other hand, Chen et al. showed that the TACE-MWA combination was superior to TACE alone in terms of total ablative rates in lesions ≤3 cm as well as those 3-5 cm (16). Several other studies as well as a recent meta-analysis have confirmed the superiority of combined use of TACE and RFA/MWA (17). The science was valid, and since the necessary tools were available, the choice was made to administer both procedures in the index case. The 10-week post-procedure MRI results show no evidence of residual enhancement in the tumor, and the patient also reported improvement in his clinical condition. These brought us great professional satisfaction, as, in our country, we hardly ever get to see lesions as early as this or when we do find such early HCC masses, we usually do not have the capacity or finances to deliver definitive therapy locally. Conclusion Dual locoregional therapy was successfully administered to an early HCC lesion in a cirrhotic Nigerian patient. 
This case report documents how such pioneering work was carried out here in Lagos State. We acknowledge that the price of such procedures is prohibitive and that these services and the necessary expertise are scarcely available in the country. Nonetheless, we sought to show that a handful of our patients do present with early HCC, when such minimally invasive interventions are still viable. We also hope to emphasize that such high-end and technically demanding life-saving procedures are possible in the country, despite the many challenges.
v3-fos-license
2023-10-30T15:06:00.033Z
2023-10-27T00:00:00.000
264566677
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2023.1277194/pdf?isPublishedV2=False", "pdf_hash": "1c6c50567b9324e70cc2fa8f9384276fb4192f9d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43640", "s2fieldsofstudy": [ "Medicine" ], "sha1": "66958a6f8c93c939382b35a6f5b3fa9054412fe9", "year": 2023 }
pes2o/s2orc
Development and validation of a diagnostic model for the identification of chronic ocular graft-versus-host disease (oGVHD)

Purpose To verify the International Chronic Ocular Graft-Versus-Host Disease (ICCGVHD) Group diagnostic criteria and establish an easy-to-use and reliable diagnostic model for quick identification of chronic oGVHD.

Methods This study included 180 patients (355 eyes) who underwent allogeneic hematopoietic stem cell transplantation (allo-HSCT) and visited the Peking University Third Hospital Cornea and Ocular Surface Disease Specialist Clinic from July 2020 to February 2021. The proportion of chronic oGVHD was 76.06% (279/355).

Results Five complaints (eye dryness, photophobia, foreign body sensation, eye redness, and burning sensation), six ophthalmic examinations (Ocular Surface Disease Index (OSDI) score, corneal fluorescein staining (CFS), tear break-up time (TBUT), Schirmer's test score without anesthesia, conjunctival score, and tear meniscus height), and non-ocular GVHD-involved organs were significantly different between patients with chronic oGVHD and the control group (p < 0.05). Binary logistic regression (backward LR algorithm) selection demonstrated that three variables retained diagnostic significance for chronic oGVHD: CFS (OR = 2.71 (1.92–3.81), p < 0.001), Schirmer's test score without anesthesia (OR = 0.83 (0.76–0.91), p < 0.001), and conjunctival score (OR = 1.96 (1.13–3.42), p = 0.031). A nomogram for the identification of chronic oGVHD was developed, and its performance was examined using an internal validation cohort (118 eyes). The areas under the curve (AUCs) for the three-variable-based nomogram were 0.976 (95% CI (0.959–0.992), p < 0.01) and 0.945 (95% CI (0.904–0.986), p < 0.01) in the development and internal validation cohorts, respectively.

Conclusion This concise three-variable-based nomogram based on the ICCGVHD criteria could serve as an easy-to-use and reliable tool for rapid screening of chronic oGVHD.

Introduction Allogeneic hematopoietic stem cell transplantation (allo-HSCT) has been generally accepted as the ultimate treatment for malignant hematologic diseases, aplastic anemia, mucopolysaccharidosis, lysosomal storage diseases and other metabolic diseases (1,2). After more than 20 years of development, an increasing number of patients have received allo-HSCT and benefited from it. More than 20,000 HSCTs are performed in the United States each year. With the improvement of patient survival rates and survival times, many complications after transplantation have gradually been discovered and recognized (3,4). Graft-versus-host disease (GVHD) is the main complication of allo-HSCT, occurring in 30-70% of post-HSCT patients. In addition, 60-90% of GVHD patients experience eye involvement (5)(6)(7). When ocular GVHD (oGVHD) occurs, a large number of inflammatory cells and inflammatory factors infiltrate the ocular surface tissues, such as the meibomian glands, lacrimal glands, conjunctiva, and cornea, causing acute and chronic inflammation, which can produce extensive necrosis and apoptosis of normal tissue in a short time. The quality and quantity of the tear fluid and the stability of the tear film are all seriously affected. If not treated in time, oGVHD may cause severe pain and vision loss, significantly reducing patients' quality of life (QOL) (8)(9)(10)(11)(12)(13).
There are currently two diagnostic metrics for oGVHD. The 2014 National Institutes of Health (NIH) criteria define a symptom-based diagnosis of oGVHD (8,9), in which oGVHD is the new onset of dry, gritty, or painful eyes with decreased values in Schirmer's test without anesthesia in a patient after allogeneic HSCT. These criteria are more concise and facilitate the determination of oGVHD by transplant clinicians.

In 2013, the International Chronic Ocular GVHD Consensus Group (ICOGCG) proposed new diagnostic metrics to increase objectivity in the diagnosis and follow-up of chronic GVHD (5). The ICOGCG identified four subjective and objective variables to measure in patients following HSCT: Ocular Surface Disease Index (OSDI) score, Schirmer's score without anesthesia, corneal fluorescein staining (CFS) and conjunctival injection. Each variable is scored 0-2 or 0-3, with a maximum composite score of 11. Taking the presence or absence of systemic GVHD into consideration as well, patients are eventually assigned to one of three diagnostic categories: no, probable, or definite oGVHD. The ICCGVHD diagnostic criteria consider multiple clinical test parameters and have been noted to be better at differentiating oGVHD patients from those with dry eye disease (DED). In 2022, Yoko Ogawa validated the ICCGVHD criteria, finding good sensitivity, specificity, predictive value and correlation between the ICCGVHD and NIH 2014 criteria (10).

Other diagnostic criteria have also been used in studies on oGVHD. An extension of the Tear Film and Ocular Surface Society Dry Eye Workshop II (TFOS DEWS II) criteria requires ocular surface discomfort symptoms with an OSDI score ≥13 along with any one of the following: TFBUT <10 s; tear osmolarity >308 mOsm/L in either eye (or an inter-eye difference >8 mOsm/L); ocular surface staining (>5 corneal spots, >9 conjunctival spots or lid wiper epitheliopathy of ≥2 mm in length and/or ≥25% sagittal width) (11). The Japanese Dry Eye Society criteria for diagnosing dry eye place a greater focus on an unstable tear film (tear film break-up time [TFBUT] <5 s) and subjective symptoms (12).

The different sets of criteria introduce great subjectivity and variability into the best clinical practices (BCPs), diagnosis, staging, and treatment of chronic oGVHD. Which indicators, at which thresholds, diagnose chronic oGVHD most effectively, and what weights should they carry? The diversity and complexity of diagnostic criteria make it difficult for some community ophthalmologists and hematologists to become familiar with and accurately recognize chronic oGVHD. However, the burden of treating more life-threatening complications often prevents patients from visiting specialty ophthalmology clinics for routine examinations, which may lead to misdiagnosis and delayed treatment of chronic oGVHD. Therefore, we hoped to summarize an easy-to-use and reliable diagnostic method to screen for chronic oGVHD, one that can be mastered by most ophthalmologists and hematologists to recognize chronic oGVHD rapidly. In this way, more patients can still receive accurate diagnoses and timely management when they cannot visit specialty ophthalmology clinics for routine examinations.
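To make the ICCGVHD composite scoring described above concrete, the following is a minimal sketch in R (the software later used for the statistical analysis) of how the four graded subscores could be combined into the 0-11 composite and mapped to a diagnostic category. It is not the consensus group's own tool: the inputs are assumed to have already been graded on the published 0-2 or 0-3 scales, and the category cut-offs shown are placeholders reflecting our reading of the published ICCGVHD criteria, to be verified against the original publication before any use.

```r
# Sketch of the ICCGVHD composite score and diagnostic category assignment.
# Inputs: subscores already graded on the published 0-2 or 0-3 scales
# (OSDI, Schirmer's test without anesthesia, CFS, conjunctival injection),
# giving a composite of 0-11. Cut-offs below are assumptions to verify.
iccgvhd_category <- function(osdi_s, schirmer_s, cfs_s, conj_s, systemic_gvhd) {
  composite <- osdi_s + schirmer_s + cfs_s + conj_s
  if (systemic_gvhd) {
    if (composite >= 6) "definite oGVHD" else if (composite >= 4) "probable oGVHD" else "none"
  } else {
    if (composite >= 8) "definite oGVHD" else if (composite >= 6) "probable oGVHD" else "none"
  }
}

# Example: marked staining and aqueous deficiency with systemic GVHD present
iccgvhd_category(osdi_s = 2, schirmer_s = 3, cfs_s = 3, conj_s = 1, systemic_gvhd = TRUE)
```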
According to the diagnostic criteria listed above, we took all six ophthalmic examination variables, the five subjective complaints, and non-ocular GVHD-involved organs into consideration. The purpose of this study was to determine which of these indicators are predictive of an oGVHD diagnosis, to verify the ICCGVHD criteria, and to establish a practical and reliable tool based on the ICCGVHD criteria for the rapid identification of chronic oGVHD, minimizing the subjectivity and variability of clinical diagnosis. Data sources and characteristics In this study, we aimed to establish and validate a simple and practical tool for the early identification of chronic oGVHD in China using onset symptoms and simple ophthalmic examinations. This study was approved by the ethics committee of the Peking University Third Hospital. A total of 233 patients (413 eyes) who visited the Peking University Third Hospital Cornea and Ocular Surface Disease Specialist Clinic after HSCT were enrolled in our study from April 2021 to November 2021. To test the generalizability of our model, we split the overall cohort according to the order of patients' visit times and used 67% for model training and 33% for internal model validation, ensuring a balanced data distribution. Patients were excluded according to the following criteria: (1) signs of allergy, infection, glaucoma, retinopathy, or other immune diseases; (2) lack of complete medical records; and (3) inability to be followed up and interviewed in the clinic. The study was approved by the Peking University Third Hospital Medical Science Research Ethics Committee (protocol number: M20200489) and conducted in accordance with the Declaration of Helsinki, and all participants in this study provided written consent (7,13). Demographic variables collected for the study Patient characteristics included demographics, type of and reason for the transplant, HLA compatibility match, and non-ocular GVHD organ involvement. The general ocular conditions recorded included, among other items, best observed visual acuity (measured in logMAR). Patients' complaints, including eye dryness, eye redness, photophobia, lacrimation, foreign body sensation, and burning sensation, were graded by the patients themselves; higher scores represent more severe symptoms, with scores ranging from 0 to 5. We also used the OSDI to assess the subjective ocular symptoms of dry eye. The total OSDI was calculated using the following formula and ranges from 0 to 100: OSDI = (sum of scores for all questions answered × 100)/(total number of answered questions × 4) (14). Outcomes We defined the occurrence of chronic oGVHD on the basis of patients' ocular and systemic manifestations, referring to both the 2014 NIH chronic GVHD diagnostic criteria and the ICCGVHD criteria for chronic oGVHD. Diagnoses were confirmed independently by three experts at our center (5,18). Statistical methods and model establishment Univariate logistic regression was used in the model training group to analyze the factors related to chronic oGVHD. Three features with statistically significant odds ratios (ORs) were then identified through binary logistic regression (LR) with backward elimination (backward LR method). On this basis, we established a nomogram using the variables selected by the model.
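The variable selection described above was performed in SPSS and R; the following is a minimal sketch of an equivalent backward-elimination logistic regression in Python, assuming a training data frame with one row per eye and hypothetical column names for the candidate predictors and the chronic oGVHD outcome.

```python
# A minimal sketch of backward-elimination logistic regression, written with
# pandas/statsmodels rather than the SPSS/R workflow actually used in the study.
# Column names are hypothetical placeholders for the candidate predictors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

CANDIDATES = ["osdi", "cfs", "tbut", "schirmer", "conj_score", "tear_meniscus"]

def backward_logistic(df: pd.DataFrame, outcome: str = "cogvhd", alpha: float = 0.05):
    """Fit a logistic model and drop the least significant predictor until all p < alpha."""
    kept = list(CANDIDATES)
    while kept:
        X = sm.add_constant(df[kept])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit, kept                     # all remaining predictors are significant
        kept.remove(worst)                       # eliminate the weakest predictor and refit
    raise ValueError("no predictor reached significance")

# Usage (with a hypothetical training data frame `training_df`):
# fit, selected = backward_logistic(training_df)
# odds_ratios = np.exp(fit.params.drop("const"))   # e.g. CFS, Schirmer, conjunctival score
```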
The accuracy of the chronic oGVHD diagnostic score was assessed using the area under the receiver operating characteristic curve (AUC). We also used calibration curves to validate the generalizability of the chronic oGVHD diagnostic score. Statistical tests were done with R software (version 3.6.0) and SPSS (version 22.0). Statistical significance was set at two-sided p values less than 0.05. Results Two hundred and thirty-three oGVHD patients (413 eyes) were initially recruited, among whom 31 patients (52 eyes) were excluded for other ocular diseases and 22 patients (26 eyes) were excluded for lacking complete medical records. Eventually, a total of 180 patients (355 eyes) who underwent HSCT were enrolled in our study (Figure 1). The median age of the patients was 55.53 years (IQR: 37-63). The demographic and clinical characteristics of the study population are presented in Table 1. The patients were divided into development and validation cohorts according to the order of visit dates. The characteristics of the patients in the development cohort (n = 237) and the internal validation cohort (n = 118) were similar (Table 1). We established a nomogram based on the variables selected by the model (CFS, Schirmer's test score, and conjunctival score) to diagnose chronic oGVHD (Figure 3). In the development cohort, the AUC for the nomogram was 0.976 [95% CI (0.959, 0.992), p < 0.01] (Figure 4). Discussion oGVHD involves at least three important biological processes: lacrimal gland dysfunction, meibomian gland dysfunction, and corneal and conjunctival inflammatory infiltration (19-21). The pathogenesis of oGVHD can currently be summarized as a three-phase model. The initial stage is considered to be an inflammatory process mediated by T cells, and the subsequent stage is the result of the immune cascade (13,22). The host's immune regulatory response is insufficient to control early inflammation. As a result, chronic inflammation and immune disorders occur, resulting in glandular fibrosis and an ineffective tear film, which in turn lead to ocular surface damage (23). The medium-sized ducts of the lacrimal gland are preferentially targeted by T cells and other inflammatory cells in the initial stage (24). The ducts of the lacrimal and meibomian glands and the nasolacrimal duct are often blocked by immune-mediated fibrosis. Other areas that may be affected include the cornea, limbus, and conjunctiva. Confocal microscopy of patients with oGVHD showed increased infiltration of globular immune cells and dendritic cells around the basal nerve in the central cornea and limbal area, indicating infiltration of active immune cells into the avascular cornea in eyes with GVHD (25). The current internationally recognized diagnostic criteria for chronic oGVHD comprise the diagnostic criteria for chronic GVHD proposed by the NIH in 2005 and revised in 2014, and the diagnostic scoring criteria proposed by the ICCGVHD in 2013 (26). The NIH diagnostic criteria are simple and easy to implement, and the NIH score combined with the Schirmer's test shows > 90% sensitivity and specificity for the diagnosis of oGVHD (27), but these criteria remain challenging for several reasons, including the limited understanding of the pathophysiology and the lack of validated measurement tools and scoring systems.
Inamoto and colleagues also demonstrated that the Schirmer's test did not correlate well with changes in oGVHD severity (28). Past studies (29,30) have suggested that the ICCGVHD diagnostic criteria have good consistency and repeatability and perform better at differentiating oGVHD patients from non-oGVHD DED. There are numerous criteria for the diagnosis of ocular GVHD, with different criteria focusing on different indicators. Based on the BCPs for chronic oGVHD, our study examined 180 patients (355 eyes) after allo-HSCT, divided the eyes into a disease group and a non-disease group, and evaluated the clinical manifestations and clinical examination parameters of the patients. Our results showed that eye dryness, photophobia, foreign body sensation, eye redness, and burning sensation differed significantly between the patient group and the control group. These characteristics are generally similar to those mentioned in Shikari's study, with slight differences (burning, irritation, pain, redness, blurry vision, foreign body sensation, and photophobia) (31). There were also significant differences in OSDI score, CFS, TBUT, Schirmer's test score without anesthesia, conjunctival score, and tear meniscus height. This result confirms that the clinical manifestations of patients with oGVHD are a reliable indicator for the diagnosis of chronic oGVHD, which is consistent with the NIH diagnostic criteria. At the same time, we verified that OSDI score, CFS, TBUT, Schirmer's test score without anesthesia, conjunctival score, and tear meniscus height were statistically significant, further validating the ICCGVHD diagnostic criteria. We then selected these statistically significant indicators for further analysis. According to the results of the logistic regression analysis, we ultimately found three indicators with significant differences between the two groups: CFS [OR = 2.71 (1.92-3.81), p < 0.001], Schirmer's test score without anesthesia [OR = 0.83 (0.76-0.91), p < 0.001], and conjunctival score [OR = 1.96 (1.13-3.42), p = 0.031]. Based on these three indicators, we established a nomogram to diagnose chronic oGVHD (Figure 3). According to the nomogram, CFS and the Schirmer's test are the most effective in diagnosing chronic oGVHD. This result further validates the ICCGVHD diagnostic criteria and simplifies them. We also found that when CFS was greater than 3 points, almost all cases were diagnosed as chronic oGVHD. When CFS was less than 3 points, the conjunctival score and the Schirmer's test were needed to calculate the total score; when the total score was greater than 11, the probability of being diagnosed with chronic oGVHD was as high as 95%. In summary, our research results indirectly reflect the sensitivity and specificity of the 2014 NIH diagnostic criteria and the 2013 ICCGVHD diagnostic criteria, and through further analysis of clinical symptoms and clinical parameters, a more simplified diagnostic model was obtained. Compared with the NIH criteria, this diagnostic model adds objective ocular examination scoring, yet it is more concise than the ICCGVHD diagnostic criteria and facilitates the determination of chronic oGVHD by transplant clinicians and community ophthalmologists.
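For illustration, the sketch below shows how the three retained predictors translate into a predicted probability of chronic oGVHD through the underlying logistic model that the nomogram represents. The coefficients are the natural logarithms of the odds ratios reported above; the intercept is a hypothetical placeholder, since it is not reported here, so the computed probabilities are illustrative only.

```python
# A minimal sketch of how the nomogram's predictors map to a predicted probability
# of chronic oGVHD. The odds ratios are those reported above (CFS 2.71, Schirmer 0.83,
# conjunctival score 1.96); the intercept is a hypothetical placeholder, so the
# output is illustrative only.
import math

COEF = {"cfs": math.log(2.71), "schirmer": math.log(0.83), "conj_score": math.log(1.96)}
INTERCEPT = -2.0   # hypothetical value used only for illustration

def predicted_probability(cfs: float, schirmer: float, conj_score: float) -> float:
    """Logistic-regression probability of chronic oGVHD from the three predictors."""
    logit = (INTERCEPT + COEF["cfs"] * cfs
             + COEF["schirmer"] * schirmer
             + COEF["conj_score"] * conj_score)
    return 1.0 / (1.0 + math.exp(-logit))

# Example: CFS 4 points, Schirmer's test 3 mm, conjunctival score 1.
print(round(predicted_probability(4, 3, 1), 3))
```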
This diagnostic model was shown to have good sensitivity and specificity, helping ophthalmologists and hematologists rapidly diagnose chronic oGVHD. However, there are still some limitations to this study. A large-scale multicenter study is needed to verify the diagnostic model, determine the best combination of clinical indicators to maximize diagnostic sensitivity and specificity, and grade the severity of chronic oGVHD; this will be the direction of our subsequent research. Conclusion Our research results indirectly reflect the sensitivity and specificity of the 2014 NIH diagnostic criteria and the 2013 ICCGVHD diagnostic criteria, and through further analysis of clinical symptoms and clinical parameters, a more concise diagnostic model based on the ICCGVHD criteria was obtained. This diagnostic model was shown to have good sensitivity and specificity, helping ophthalmologists and hematologists rapidly diagnose chronic ocular GVHD. FIGURE 1 Flow diagram of patient selection. COGVHD, chronic ocular graft-versus-host disease. FIGURE 3 Nomogram and calibration curves for the diagnosis of chronic ocular GVHD. (A) Nomogram model for the diagnosis of chronic ocular GVHD. (B) Calibration curves for the nomogram in the development cohort. (C) Calibration curves for the nomogram in the validation cohort. CFS, corneal fluorescein staining; Sit, Schirmer's I test (mm); Conj, conjunctival score; COGVHD, chronic ocular graft-versus-host disease. Complaints at onset of illness in chronic ocular GVHD and non-chronic ocular GVHD cases (n = 237); COGVHD, chronic ocular graft-versus-host disease. TABLE 1 Demographic and clinical characteristics of patients after allo-HSCT. Allo-HSCT, allogeneic hematopoietic stem cell transplantation; ALL, acute lymphoblastic leukemia; AML, acute myeloid leukemia; CLL, chronic lymphocytic leukemia; CML, chronic myeloid leukemia; MM, multiple myeloma; MDS, myelodysplastic syndrome; AAA, acute aplastic anemia. a p value < 0.05 was considered statistically significant. TABLE 2 Diagnostic factors for chronic ocular GVHD in the development cohort: univariable and multivariable models. p value < 0.05 was considered statistically significant. In the multivariable model, the significant variables were selected using a backward LR procedure from three complaints (photophobia, foreign body sensation, and burning sensation) and six objective ocular examinations (OSDI score, CFS, TBUT, Schirmer's test score without anesthesia, conjunctival score, and tear meniscus height).
Modification of β-Defensin-2 by Dicarbonyls Methylglyoxal and Glyoxal Inhibits Antibacterial and Chemotactic Function In Vitro Background Beta-defensins (hBDs) provide antimicrobial and chemotactic defense against bacterial, viral, and fungal infections. Human β-defensin-2 (hBD-2) acts against gram-negative bacteria and chemoattracts immature dendritic cells, thus regulating innate and adaptive immunity. Immunosuppression due to hyperglycemia underlies chronic infection in Type 2 diabetes. Hyperglycemia also elevates production of the dicarbonyls methylglyoxal (MGO) and glyoxal (GO). Methods The effect of dicarbonyl on defensin peptide structure was tested by exposing recombinant hBD-2 (rhBD-2) to MGO or GO with subsequent analysis by MALDI-TOF MS and LC/MS/MS. Antimicrobial function of untreated rhBD-2 vs. rhBD-2 exposed to dicarbonyl against strains of both gram-negative and gram-positive bacteria in culture was determined by radial diffusion assay. The effect of dicarbonyl on rhBD-2 chemotactic function was determined by chemotaxis assay in CEM-SS cells. Results MGO or GO in vitro irreversibly adducts to the rhBD-2 peptide and significantly reduces its antimicrobial and chemotactic functions. Adducts derive from two arginine residues, Arg22 and Arg23 near the C-terminus, and the N-terminal glycine (Gly1). We show by radial diffusion testing on gram-negative E. coli and P. aeruginosa and gram-positive S. aureus, and by a chemotaxis assay for CEM-SS cells, that the antimicrobial activity and chemotactic function of rhBD-2 are significantly reduced by MGO. Conclusions Dicarbonyl modification of cationic antimicrobial peptides represents a potential link between hyperglycemia and the clinical manifestations of increased susceptibility to infection, protracted wound healing, and chronic inflammation in undiagnosed and uncontrolled Type 2 diabetes. Introduction Human β-defensin peptides (hBDs) are an evolutionarily conserved group of cationic, low molecular weight, unglycosylated peptides crucial to the antimicrobial and cell signaling functions of the innate immune system [1,2]. In mammals, including humans, the principal classes of the peptide are the α- and β-defensins. Members of both classes exhibit structural similarities: the presence of six cysteine residues forming three intramolecular disulfide bonds, and an unusually high number of arginine and lysine residues [3,4]. Whereas α-defensins are expressed by blood-borne cells such as neutrophils [5], the hBDs are secreted predominantly by integumentary, lung, urogenital, oral, and intestinal epithelium [4][5][6]. The hBD-2 peptide is inducible and is expressed by epithelium and epithelial-derived cells [7], although expression of the peptide by vascular endothelium associated with oral squamous cell carcinoma has recently been reported [8]. In addition to their antimicrobial function, hBDs are also important in the regulation of the innate and adaptive immune response and inflammation [9][10][11]. HBD-2 responds primarily to gram-negative bacteria and modulates inflammation in humans [12] by inducing cellular expression of both pro- and anti-inflammatory cytokines and chemokines, including IL-6, IL-10, MIP-3α, MCP-1, and RANTES, through activation of G protein-coupled CCR6 and PLC-dependent pathways [13].
Moreover, hBD-2 is instrumental in promoting immune cell migration and proliferation, angiogenesis, chemotaxis, and wound repair through phosphorylation of the epithelial cell growth factor receptor (EGFR), and signal transducer and activator of transcription (STAT) 1, and STAT3 [13,14]. Chronic hyperglycemia is most often associated with the onset of Type 2 diabetes mellitus, but reduced tissue utilization of glucose and resultant elevation of blood glucose can also occur with normal aging [15,16], chronic disease, such as Alzheimer's disease [17,18], and with chronic wounds [19]. A consequence of chronically elevated glucose is the increased production, and reduced degradation of dicarbonyl molecular species, including methylglyoxal (MGO) and glyoxal (GO) [20]. Although both are highly reactive α-oxoaldehydes that target arginine (Arg), lysine (Lys), and cysteine (Cys) residues of susceptible proteins, MGO is generally the more reactive [21,22]. These reactions however are selective, thus the presence of Arg, Lys or Cys residues within a protein does not necessarily lead to adduction [23]. All studied hBD peptides contain multiple Arg, Lys, and Cys residues. In hBDs the Cys residues form three intramolecular disulfide bonds considered fundamental to maintenance of hBD tertiary structure, and resistance to attack by proteases [24], while Arg and Lys residues are believed to contribute to disruption of the bacterial cell wall [25]. Dicarbonyl-induced modification of these residues through the formation of irreversible Advanced Glycation End products (AGEs) has the potential to significantly impair both antimicrobial and/or immunomodulatory functions of not only the hBDs, but other cationic antimicrobial peptides containing multiple arginine or lysine residues. In an earlier study on the effect of scratch-wounding and high glucose on Fusobacterium nucleatum-induced epithelial cell expression of hBD mRNA we observed by RT-qPCR that the addition of 30 mM glucose to cultured cells significantly reduced expression of hBD-2 mRNA (unpublished, S1 Fig). Our findings were subsequently confirmed by Lan et al. [26]. These initial observations led us to hypothesize that since dicarbonyl molecular species, such as MGO have been shown to form crosslinks between the guanine residue of template DNA and susceptible amino acid residues of DNA polymerase [27], and that MGO may also cause mRNA instability [28], perhaps dicarbonyls could also affect hBD function by direct modification of the peptide. We reasoned that since dicarbonyls exhibit high reactivity with Arg and Lys residues [29], and cationic antimicrobial peptides, including hBDs contain unusually high numbers of both residues there would be a higher likelihood that irreversible modification of these peptides would occur. In the present study recombinant hBD-2 (rhBD-2) was exposed to approximated physiological concentrations of MGO and GO and the peptide tested for antimicrobial activity and chemotactic function. We show that exposure of rhBD-2 to dicarbonyls significantly attenuates both antimicrobial and chemotactic function in vitro. Our findings suggest that under hyperglycemic conditions in vivo functionality of cationic antimicrobial peptides, in general, may be impaired as a result of carbonyl adduction and irreversible modification of susceptible amino acid residues. 
Our findings describe a previously unreported mechanism by which chronic hyperglycemia may increase susceptibility to chronic infection, and delayed wound repair in cases of uncontrolled or poorly controlled hyperglycemia. Materials and Methods Materials Recombinant hBD-2 (rhBD-2) (cat. no. 300-49) was purchased from PeproTech (Rocky Hill, NJ). Purity of the recombinant was 98% as determined by SDS-PAGE gel and HPLC analysis. Activity of the peptide was determined by its ability to attract immature dendritic cells when tested by the manufacturer within a concentration range of 10 to 100 ng/ml. Glyoxal (cat. no. 50660) was purchased from Sigma-Aldrich (St. Louis, MO). Methylglyoxal was a kind gift courtesy of Dr. Ram Nagaraj (Case Western Reserve University, Cleveland, OH). CEM-SS cells were obtained through the AIDS Research and Reference Reagent Program, Division of AIDS, NIAID, NIH: CEM-SS (Cat # 776) from Dr. Peter L. Nara [30,31,32]. This human (Caucasian) acute T4-lymphoblastoid leukemia cell line was initially derived by G.E. Foley et al. and biologically cloned by P.l. Nara et al, as cited. Methods Preparation of rhBD-2-dicarbonyl adducts for MALDI TOF MS and LC/MSMS Analysis. rhBD-2 (200 ng/10 μl) was incubated at 37°C with GO or MGO diluted in phosphatebuffered saline (pH 7.5) to a final concentration of 1, 10, or 100 μM. Recombinant hBD-2 (200 ng/10 μl) incubated with phosphate-buffered saline only was used for comparison of mass spectra obtained from samples exposed to the dicarbonyl molecular species. Incubation times were set at 2, 24, 48 and 72 hr. After incubation the mass spectra for each sample was determined by MALDI-TOF MS, and compared for presence of dicarbonyl-induced adducts. MALDI-TOF mass spectroscopy. Samples containing rhBD-2 protein was incubated with MGO or GO for 2, 24, 48 or 72 h, and analyzed by matrix-assisted laser desorption/ionizationtime of flight (MALDI-TOF) mass spectrometry (MS). To quench MGO and GO chemical reactions, each sample was purified by solid-phase extraction using C18 pack disposable pipette tips (ZipTip C18 Millipore, Co.). Extracted samples were then mixed with α-Cyano-4-hydroxycinnamic acid matrix solution (5mg/ml in 50% acetonitrile containing 0.1% TFA) at the protein to matrix ratio of 1:5 (v/v). MALDI-TOF MS analysis was performed on a prO-TOF2000 time-of-flight mass spectrometer (PerkinElmer Co., Boston, MA) equipped with a 337 nm nitrogen laser operating in the positive ion mode with an accelerating voltage of -16 kV, de-clustering potential of 30V, 20 Hz laser rates, and cooling and focusing gas flow rates of 190 and 212 ml/min, respectively. Spectra were acquired by averaging the scans of 500 laser shots to improve data quality and ion statistics. Mass spectra were calibrated externally using the singly protonated ions of angiotensin II and human adrenocorticotropic hormone fragment 7-38. Reduced and alkylated MGO-and GO-rhBD-2 incubates were analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to identify the specific site(s) of MGO and GO modification. Proteolysis and MS Analysis. Samples containing rhBD-2 protein and incubated with MGO or GO, were purified by solid-phase extraction using packed disposable ZipTip C18 pipette tips (Millipore, Billerica, MA) according to the manufacturer's protocol. Extracted samples were then dried and re-suspended in 25 mM ammonium bicarbonate buffer (pH 7.8). 
The protein samples then were reduced with 5 mM dithiothreitol at 56°C for 45 min, followed by alkylation with 10 mM iodoacetamide in the dark for 45 min to reduce disulfide bridges. Subsequently reduced and alkylated samples were subjected to proteolysis overnight at 37°C by modified trypsin (Promega, Madison, WI) at an enzyme-to-protein ratio of 1:20 wt/wt. The digest mixtures (approximately 400 ng) were loaded onto a 300 μm × 5 mm C18, PepMap reverse phase trapping column to preconcentrate and wash away excess salts using a nano HPLC Ulti-Mate-3000 (Dionex, Sunnyvale, CA) column switching technique. The reverse phase separation was performed on a 75 μm × 15 cm (3um, 100A) Acclaim C18 column (Dionex) using a linear gradient of 5-50% B over 60 min [Buffer A: 100% water/0.1% formic acid (FA); Buffer B: 80% water CAN/0.1% FA]. Proteolytic peptides eluting from the column were directed to an LTQ-FT mass spectrometer (Thermo Fisher Scientific, Fremont, CA) equipped with a nanospray ion source and with the needle voltage of 2.4 kV. All mass spectra were obtained from data-dependent experiments. MS and tandem MS spectra were acquired in the positive ion mode with a full scan MS recorded in the FT analyzer at resolution R of 100,000 followed by MS/MS of the eight most-intense peptide ions in the LTQ analyzer. The resulting MS2 data were searched against hBD-2 protein database using Mass Matrix software to identify all specific sites of modification [33]. In particular, MS2 spectra were searched for tryptic peptides of hBD-2 using mass accuracy values of 15 ppm and 0.8 Daltons for MS1 and MS2 scans respectively, with the allowed variable modifications including carbamidomethylation for cysteines, MGO and dehydrated MGO modifications for arginines, lysines and N-terminal amino acid, and three missed cleavage sites. In addition, all detected MS2 spectra for each site of modification were manually verified. The fraction of modified Arg 22 and Arg 23 (peptides 11-23 and 23-36, respectively) were calculated from the ratio of the area under ion signals for the modified peptides to the sum of the areas for the unmodified peptides and their modified products. Radial diffusion assay and determination of CFU and rhBD-2 bactericidal activity. Since MGO was found to be more reactive than GO in adducting to hBD only the effect of MGO on rhBD-2 antimicrobial function was tested. The agar-based radial diffusion assay described by Steinberg and Lehrer [34] was used to determine bactericidal and bacteriostatic function of the peptides. Bactericidal activity of rhBD-2 (0.5 μg/5 μl) was determined by assays performed on gram negative Escherichia coli (BL21 DE3, Invitrogen) and Pseudomonas aeruginosa (ATCC strain 27853). Bacteriostatic activity was determined against gram positive Staphylococcus aureus (NCTC strain 8325). Bacterial isolates in appropriate media were grown overnight to mid-log phase, diluted to 4 x 10 6 CFU/ml before further dilution and dispersion in a sodium phosphate-buffered trypticase soy broth-based low EEO agarose "underlay" (pH 7.4). A small diameter (2.5 mm) well for each sample was subsequently punched in the underlay gel, and a 5 μl aliquot of each sample applied to the gel. Samples contained 100 to 200 μg/ ml rhBD-2 previously incubated in 100, 50 or 25 μM MGO, or phosphate-buffered saline (control). Plates were incubated aerobically at 37°C for 3 hours before returning the plates to room temperature and applying the trypticase soy broth-based agarose "overlay". 
Plates were then incubated overnight at 37°C, and CFU within the zone of inhibition counted the following morning. Chemotaxis assay. Chemotaxis assays were performed using the ChemoTx System from NeuroProbe with modification (NeuroProbe, Gaithersburg, MD). Briefly, samples of rhBD-2 were prepared at 3 concentrations (100 ng/μl, 200 ng/μl and 400 ng/μl) selected following kinetic analysis of optimum chemotaxis for CEM-SS cells, then incubated for 72 hr at 37°C in either 10μM MGO or 0.0067M PBS, pH 7.5. CEM-SS cells were initially grown in RPMI media, then resuspended in "chemotaxis media" (serum-free HG-DMEM with 1% BSA). The chemotactic mix containing rhBD-2 at final concentrations of 0, 10, 20 and 40 μg/ml, with or without MGO, or 10 nM stromal cell-derived factor 1 (SDF-1) as positive control (shown previously to induce a chemotactic response in CEM-SS cells, unpublished) were added to lower wells (30 μl) of a ChemoTx 96-well chamber. CEM-SS cell suspension was loaded into top wells of the chamber for a final cell concentration of 1 x 10 6 cells/ml, and the chamber incubated for 2 hr in 5% CO 2 at 37°C. After incubation CyQuant cell dye (Life Technologies, Grand Island, NY) was injected into the lower sample wells, and developing fluorescence quantitated using a BioTek microtiter plate reader (BioTek, Winooski, VT). MALDI-TOF MS mass detection of MGO-and GO-induced adduct formation on the rhBD-2 peptide We used MALDI-TOF MS to determine if incubation of rhBD-2 with MGO or GO could induce mass changes in the peptide reflective of MGO-or GO-derived adducts. Untreated rhBD-2 incubated at 37°C in phosphate-buffered saline (PBS) for 72 h exhibited peaks at m/z of 4327.6 and 2164.5 ( Fig 1A). These peaks corresponded to singly (1+) and doubly (2+) protonated rhBD-2 ionic species. Incubation of rhBD-2 in 100 μM MGO for 72 h, however, resulted in the appearance of additional peaks that corresponded in mass to molecular species of MGO (Fig 1B). These mass changes were equivalent to the mass of rhBD-2 protein adducted by intact MGO (m/z 4399.6), a dehydrated species (m/z 4381.7), or by a combined intact + dehydrated molecular species (m/z 4453.9) of MGO ( Fig 1B, shaded area). Incubations less than 72 h and ranging from 2 h to 48 h also resulted in the appearance of additional peaks with increasing intensities at m/z of 4399.5, 4435.7 and 4453.6 (Fig 2a-2c). These masses corresponded to increases of +72 (intact), +108 (2 dehydrated), and +126 (1 dehydrated + 1 intact) representing MGO-derived adduction to rhBD-2 peptide. We observed a change in both peak profile and intensity when the molar concentration of MGO was reduced from 100 μM to 10 μM with incubation times of 24, 48 and 72 h (Fig 2d-2f). At 2 h incubation times, peaks corresponding to MGO-derived adducts were indistinguishable from background (data not shown). GO was far less reactive with the rhBD-2 peptide than MGO (S2 Fig). Nevertheless, mass increases of +40 (m/z 4367.7) and + 58 (m/z 4385.5), corresponding to dehydrated and intact GO-derived adducts were detected after incubation of the peptide to 100 μM GO for 48 and 72 h. MS/MS site identification of MGO modifications of hBD-2 To determine the site of MGO modification, rhBD-2 was treated with dithiothreitol and then with iodoacetamide to reduce the three intramolecular disulfide bonds. Samples were then digested with trypsin and analyzed by LC-MS/MS as described in Methods. 
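Before turning to the site-specific fragmentation results, the following minimal sketch reproduces the mass-shift arithmetic used to interpret the MALDI-TOF spectra above and the peak-area ratio used to estimate the fraction of modified Arg22 and Arg23. The adduct increments (+72/+54 Da for intact/dehydrated MGO, +58/+40 Da for intact/dehydrated GO) and the rhBD-2 value of m/z 4327.6 are taken from the text; the helper names and the example peak areas are hypothetical.

```python
# A minimal sketch of the mass-shift arithmetic and modification-fraction calculation
# described above. The adduct mass increments and the rhBD-2 [M+H]+ value are taken
# from the text; the function names and example peak areas are hypothetical.
RHBD2_MH = 4327.6                      # singly protonated rhBD-2 (m/z, from MALDI-TOF)
ADDUCT_SHIFTS = {"MGO": 72.0, "MGO_dehydrated": 54.0, "GO": 58.0, "GO_dehydrated": 40.0}

def expected_adduct_mz(base_mz: float, adducts: list[str]) -> float:
    """Expected m/z of a singly charged ion carrying the listed adducts."""
    return base_mz + sum(ADDUCT_SHIFTS[a] for a in adducts)

def modified_fraction(modified_areas: list[float], unmodified_area: float) -> float:
    """Fraction modified = sum(modified peak areas) / (unmodified + modified areas)."""
    total_modified = sum(modified_areas)
    return total_modified / (unmodified_area + total_modified)

# e.g. intact + dehydrated MGO on rhBD-2: 4327.6 + 72 + 54 = 4453.6 (observed ~4453.6-4453.9)
print(expected_adduct_mz(RHBD2_MH, ["MGO", "MGO_dehydrated"]))
print(modified_fraction([2.1e6, 0.7e6], 1.0e7))    # hypothetical peak areas -> ~0.22
```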
The ion signals at m/z 530.277 (2+), 467.556 (3+), 519.589 (3+), 526.951 (3+), and 566.294 (3+) correspond to MGO-modified forms of the tryptic peptides (Fig 3b), shown relative to the corresponding control sample spectra of unmodified peptide 23-36 (Fig 3a). In contrast, all the observed y-ions, including doubly protonated y13, were unchanged. A similar fragmentation pattern was observed for the triply protonated ion signal (at m/z 550.958) corresponding to peptide 23-36 with a mass shift of +72 Da. Furthermore, it was observed that all the b2-b7 and b10 ions were shifted by +72 Da (S3b Fig). Bactericidal and bacteriostatic activity of rhBD-2 is inhibited by MGO We show by radial diffusion assay [34] that bactericidal activity of rhBD-2 against gram-negative P. aeruginosa and E. coli is significantly reduced when either strain is grown in the presence of rhBD-2 exposed to MGO, from the highest concentration tested (100 μM) down to 25 μM, the lowest shown (Fig 4). In several experiments we exposed the peptide to 5 μM MGO and still observed loss of bactericidal activity (data not shown). Bacteriostatic activity of rhBD-2 against gram-positive S. aureus was also adversely affected, but the reduction in activity reached statistical significance only when this strain was grown in the presence of rhBD-2 incubated with 100 μM MGO. Bacterial viability and growth rate were unaffected by the presence of 100 μM MGO alone in the growth media (Fig 4, inset). Discussion Beta-defensin-2 is an inducible member of the β-defensin family of antimicrobial/immunomodulatory peptides. These peptides are a prominent component of the human innate and adaptive immune systems, not only through their ability to inhibit microbial invasion but also through their capacity to modulate the adaptive immune response [37]. We show that the antimicrobial function of rhBD-2 is severely compromised following in vitro exposure of the peptide to α-dicarbonyls at concentrations equivalent to tissue levels reportedly present under diabetic conditions [38]. Non-enzymatic glycation, or the Maillard reaction, is known to be a significant contributor to the onset of hyperglycemia-induced pathologies associated with diabetes, and perhaps to the chronic pathologies associated with aging and neurodegeneration [39]. Under physiological conditions, production of dicarbonyl molecular species, notably methylglyoxal (MGO) and glyoxal (GO), occurs slowly, yet adduction to susceptible arginine (Arg) and lysine (Lys) residues can and does occur in both short-lived intracellular and long-lived extracellular proteins [40]. The impact of glycation on protein function depends, in part, on the turnover rate of the protein and on the activity of the glyoxalase system responsible for degradation of unadducted dicarbonyl [41]. HBD peptides secreted by epithelial cells are generally resistant to degradation by proteases due to the presence of three intramolecular disulfide bonds, and appear to remain active for up to 4 days ([42], Ganz, personal communication). Chronic hyperglycemia associated with uncontrolled Type 2 diabetes or an age-related disease will promote increased production of α-dicarbonyl molecular species, including highly reactive MGO. Normally, tissue levels of dicarbonyl remain low, limited primarily by activity of the glyoxalase system. In this way, interaction between carbonyl and susceptible amino acid residues is largely prevented.
However, depletion of glutathione by ageing, chronic hyperglycemia or associated oxidative stress reduces the ability of glyoxylase 1, a glutathione-dependent enzyme, to effectively limit dicarbonyl level [43,44]. As a result, uncontrolled production of reversible Schiff's base and Amadori intermediates that ultimately can convert to irreversible AGEs is a likely contributor to modification of the hBD-2 peptide with aging and in uncontrolled Type 2 diabetes. Under these conditions then α-dicarbonyl molecular species represent the dominant participants in protein modification. Modification of insulin with a plasma half-life of 5 to 10 minutes, considerably shorter than the apparent half-life of hBD-2, has been reported [45], although it has been proposed that modification of the protein most likely occurs within the pancreatic beta cell, and not subsequent to release into plasma. We have shown that in vitro exposure of rhBD-2 to MGO at a concentration as low as 10 μM, and for incubations as short as 2 h, results in detectible modification of Arg residues at positions 22 and 23. Despite the verified presence in vivo of MGO-modified proteins in serum, and hBD-2 secretion in human tissues, detection of the structurally modified peptide in vivo is difficult, most likely due to rapid removal of the adducted protein by mechanisms that recognize the modification as a signal for protein removal from the cellular environment. One such mechanism involves the irreversible modification of a single arginine residue by MGO, as we report here, and activation of a degradation process by monocytic cells that involves receptor recognition of the MGO-induced arginine derivative Ndelta-(5-hydro-5-methyl-4-imidazolon-2-yl)ornithine [46]. There is also evidence that the N-terminal residue may contribute to protein degradation in short-lived proteins [47]. However we did not determine whether the N-terminal glycine (Gly 1 ) or its modification by MGO, as we observed, contributes to removal of rhBD-2. Studies conducted both in vitro, and in vivo, show that in the presence of glycating agents like MGO, the mere presence of Arg and/or Lys residues within a protein is not predictive of non-enzymatic glycation at these sites. This point is well illustrated by Jia et al. [23], who report that in vitro insulin is susceptible to glycation by MGO, but glucagon is not, even though glucagon is of similar mass and contains two Arg and a single Lys. Additional support for the selective nature of non-enzymatic glycation of proteins comes from studies of human [48], mouse and rat [49] plasma proteins obtained from aged populations. In both these studies glycation is limited to relatively few proteins. Relative quantification of the extent of both the +54 and the +72 Da modification from LC-MS data shows that of the two Arg residues present in rhBD-2 Arg 23 is slightly more reactive (21%) with MGO than Arg 22 under our in vitro study conditions, this despite the close structural proximity of the two Arg residues. We note that in the 3-dimensional structure (S4A Fig) of the hBD-2 peptide Arg 22 , Arg 23 and Gly 1 are each found at or near the protein surface. A graphic comparison of residue solvent accessibility for hBD-2 (PDB 1FD3) shows both Arg 22 and Arg 23 , as well as Gly 1 located in the outer ring of the spiral plot, indicating these residues are readily accessible to MGO (S4B Fig). 
Protein tertiary structure, and therefore accessibility of the glycating agent to susceptible residues is a primary determinant of the glycation process. This is clearly shown by Gao and Wang [50] who correlated selective modification of specific arginine residues within the hemoglobin molecule with solvent accessibility, as determined by the 'relative surface exposable area' calculated for each arginine within the native molecule. In rhBD-2 both arginine Arg 22 and Arg 23 , extending from the surface of the β1-β2 loop, are presumed to be equally solvent accessible. In native hBD-2 both residues are necessary to interactions between the peptide and prokaryotic membrane channel pores [51]. Our finding that adduction of both residues Arg 23 and Arg 22 contributes to reduced antimicrobial function of rhBD-2 against both gram negative (bactericidal activity) and gram positive bacteria (bacteriostatic activity) confirms the importance of these basic residues to antimicrobial function. The selective targeting of Arg residues by reactive carbonyl species, such as MGO [52] results in neutralization of the positively charged arginyl guanidino group [53] and the overall localized positive surface charge. Although we found that the N-terminal glycine (Gly 1 ) also was modified by MGO, this region of the peptide does not appear to be a significant contributor to rhBD-2 antimicrobial function [54]. Several model membranes have been proposed to explain the antibacterial effects of hBD peptides, but common to each is the proposed insertion of a portion of the peptide into the bacterial lipid bilayer and dispersion of cellular integrity [55]. As shown by solid-state NMR distance measurements the positively charged Arg residue assumes a stabilizing configuration within the membrane through complexation of the positively charged Arg with the negatively charged phosphate groups of bacterial membrane lipids [56]. We therefore propose that modification of the guanidino group of the Arg residues, especially Arg 23 by dicarbonyl contributes to reduction of positive surface charge, and destabilization of peptide-bacterial membrane lipid interaction. Relevant to our findings are those of Pazgier and colleagues [57] demonstrating the critical nature of residues in the N-terminal α-helical region to CCR6-mediated chemotactic activity. In the Pazgier study mutations within the N terminus resulted in significant loss of hBD-1 affinity for the G protein coupled CCR6 receptor. Thus, although modification of the Nterminal glycine (Gly 1 ) may not significantly influence antimicrobial function it might contribute to our observed reduction in rhBD-2 chemoattraction for CEM-SS cells. As pointed out by Pazgier et al. some residues located near the C terminus, such as Arg 29 in hBD-1 also appear to influence chemotactic activity. Thus the Arg 22 and/or Arg 23 residues in rhBD-2 are likely participants in chemotactic activity as well, perhaps by complexing with glyosaminoglycans and dimerization of hBD-2 to initiate binding to the CCR6 receptor [58]. Adduction of these residues by dicarbonyls would effectively prohibit this interaction. HBD-2 is one of several hBDs (1)(2)(3)(4) important to the effective function of the human innate and adaptive immune systems. Utilizing mass spectral analysis of a recombinant antimicrobial peptide we have demonstrated that hBD-2 is susceptible to function-altering modification of critical Arg and Gly residues by dicarbonyl molecular species. 
Not reported here are additional findings from ongoing studies indicating that modification of basic residues also occurs in other members of the hBD family, hBD-1 and hBD-3. In light of these findings we believe that, in addition to hBDs, other cationic antimicrobial peptides with multiple Arg residues, such as cathelicidin (LL-37), may be equally prone to modification by carbonyls. Although obtained in vitro, our study findings are highly relevant to the in vivo conditions that exist in diabetes and other chronic conditions, such as senescence and neurodegeneration, where increased tissue levels of carbonyl contribute to protein modification. We describe here a previously unreported mechanism by which individuals with diabetes and other chronic diseases presenting with altered glucose metabolism may be at increased risk of chronic infection and tissue injury due to loss of effective antimicrobial and immunomodulatory protection. The clinical impact of MGO modification on antimicrobial peptide function and the inflammatory response in vivo will depend predominantly upon the frequency (probability) of dicarbonyl-antimicrobial peptide interaction. In vivo, these interactions will be influenced by multiple factors, including the half-life of the peptide, the concentration of the dicarbonyl in proximity to hBD-2, and competition for the dicarbonyl by other proteins. The clinical relevance of our findings can only be determined by additional study in vivo. S4 Fig. The spiral view shows amino acid residues of hBD-2 (PDB ID 1FD3) in the order of their solvent accessibility (determined using the online server http://www.abren.net/cgi-bin/asaview/plot.cgi, Ahmed et al., 2004). Spiral plots are generated by sorting all residues by their relative solvent accessibility. The radius of the sphere representing each residue is proportional to the accessible surface area of that residue, thus enabling a visual estimate of more accessible residues. These residues are then arranged in the form of a spiral, such that the inner residues in this spiral represent buried residues and more exposed residues come nearer to the outer ring of the spiral. Gly1, Arg22, and Arg23 lie in the outer ring of the spiral.
What Is the Impact of Mass Timber Utilization on Climate and Forests? : As the need to address climate change grows more urgent, policymakers, businesses, and others are seeking innovative approaches to remove carbon dioxide emissions from the atmosphere and decarbonize hard-to-abate sectors. Forests can play a role in reducing atmospheric carbon. However, there is disagreement over whether forests are most effective in reducing carbon emissions when left alone versus managed for sustainable harvesting and wood product production. Cross-laminated timber is at the forefront of the mass timber movement, which is enabling designers, engineers, and other stakeholders to build taller wood buildings. Several recent studies have shown that substituting mass timber for steel and concrete in mid-rise buildings can reduce the emissions associated with manufacturing, transporting, and installing building materials by 13%-26.5%. However, the prospect of increased utilization of wood products as a climate solution also raises questions about the impact of increased demand for wood on forest carbon stocks, on forest condition, and on the provision of the many other critical social and environmental benefits that healthy forests can provide. A holistic assessment of the total climate impact of forest product demand across product substitution, carbon storage in materials, current and future forest carbon stock, and forest area and condition is challenging, but it is important to understand the impact of increased mass timber utilization on forests and climate, and therefore also which safeguards might be necessary to ensure positive outcomes. To assess the potential impacts, both positive and negative, of greater mass timber utilization on forest ecosystems and on emissions associated with the built environment, The Nature Conservancy (TNC) initiated a global mass timber impact assessment (GMTIA), a five-part, highly collaborative research program focused on understanding the potential benefits and risks of increased demand for mass timber products on forests and identifying appropriate safeguards to ensure positive outcomes. Introduction As the need to address climate change grows more urgent, policymakers, businesses, and others are seeking innovative approaches to remove carbon dioxide emissions from the atmosphere and decarbonize hard-to-abate sectors. Concrete and steel, construction materials whose combined production represents about 11 percent of annual global greenhouse gas emissions, present a particular challenge [1]. Global building stock, which is primarily composed of these materials, is projected to double over the next 40 years, effectively adding a built area the size of Paris to the planet every week through 2060 [2]. Aligning this projected surge in construction with the climate mitigation goals of the Paris Agreement is critical to a climate-stable future. Forests can play a role in reducing atmospheric carbon. However, there is disagreement over whether forests are most effective in reducing carbon emissions when left alone versus managed for sustainable harvesting and wood product production.
Timber framing and "post-and-beam" construction are traditional methods of constructing buildings.Historically, this type of construction has been limited to low-rise buildings such as single-family homes, smaller apartment buildings, and non-residential structures.More recently, there has been a growing interest in building more with wood.A new class of wood products (mass timber) has emerged, allowing wood buildings to be much taller (e.g., 8-18 stories), and thus mass timber has the potential to displace some steel and concrete building materials, which today have inherently higher embodied carbon and energy.Cross-laminated timber (CLT) is at the forefront of the mass timber movement, which is enabling designers, engineers, and other stakeholders to build taller wood buildings.CLT panels are made by laminating dimension lumber orthogonally in alternating layers.Panels generally made from CLT are lightweight, yet very strong, with good fire, seismic, and thermal performance [3,4]. Several recent studies have shown that substituting mass timber for steel and concrete in mid-rise buildings can reduce the emissions associated with manufacturing, transporting, and installing building materials by 13-26.5% [5][6][7].Other studies have quantified the amount of carbon stored in mass timber materials themselves, which persists for the useful life of the building and perhaps longer if materials are recovered, reused or repurposed [8]. However, the prospect of increased utilization of wood products as a climate solution also raises questions about the impact of increased demand for wood on forest carbon stocks, on forest condition, and on the provision of the many other critical social and environmental benefits that healthy forests can provide.Increased wood harvest for mass timber use can increase, decrease, or have a neutral impact on forest carbon stock, depending on the forest attributes and environmental factors, the harvest and management strategies, the spatial and temporal scale being viewed, the carbon pools being considered in the forest ecosystem, and indirect impacts on the wider wood product market [9,10].For example, increased demand for forest products through sustainable harvesting may expand forest carbon sinks by encouraging forest growth and regeneration over time [11,12].It can incentivize new tree planting and investment in forest management that can contribute to increased forest growth and inventory [13].Improved forest management may lower the risk of wildland fires in regions such as the western U.S. [14][15][16], which are increasing in intensity potentially reducing overall forest carbon stocks and threatening forests and communities.However, increased demand may also have negative impacts, if for example, unsustainable forest management is adopted, by altering harvest intensities or rotation length beyond sustainable levels.Increasing mass timber demand can potentially also have initial negative impacts on forest carbon stocks through increased production emissions and residues. 
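As a rough illustration of the carbon storage effect noted above, the sketch below estimates the CO2 held in a given volume of CLT from its dry density and carbon fraction; the density and carbon-fraction values are generic assumptions for softwood, not figures drawn from the cited studies.

```python
# A minimal sketch of how the carbon stored in mass timber elements is commonly
# estimated: dry wood mass x carbon fraction x 44/12. The density and carbon-fraction
# values are generic assumptions for illustration, not figures from the cited studies.
CO2_PER_C = 44.0 / 12.0          # molecular-weight ratio of CO2 to carbon

def stored_co2_tonnes(clt_volume_m3: float,
                      dry_density_kg_m3: float = 450.0,   # assumed softwood CLT density
                      carbon_fraction: float = 0.50) -> float:
    """Approximate CO2 (tonnes) stored in a given volume of CLT for its service life."""
    carbon_kg = clt_volume_m3 * dry_density_kg_m3 * carbon_fraction
    return carbon_kg * CO2_PER_C / 1000.0

# Example: ~1,000 m3 of CLT in a mid-rise building stores on the order of 800 t CO2.
print(round(stored_co2_tonnes(1000.0), 1))
```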
A holistic assessment of the total climate impact of forest product demand across product substitution, carbon storage in materials, current and future forest carbon stock, and forest area and condition is challenging.Several recent studies have tried to assess the total climate impact of changes in wood demand across the full value chain at regional or national levels, concluding that improved forest management and shifts to longer-lived wood product utilization would drive net climate benefits in Canada [17][18][19][20] and in selected sites across North America [21,22].Other researchers have concluded that utilization of long-lived wood products could drive net negative impacts on climate, excluding product substitution benefits [23,24]. For policymakers, developers, and others considering the use of mass timber to achieve climate and policy goals, this lack of clarity can be confusing.Additionally, use of mass timber is generally projected to increase due to general market forces.For these reasons, it is important to understand the impact of increased mass timber utilization on forests and climate, and therefore also on which safeguards might be necessary to ensure positive outcomes. Global Mass Timber Impact Assessment (GMTIA) To assess the potential impacts, both positive and negative, of greater mass timber utilization on forests ecosystems and emissions associated with the built environment, The Nature Conservancy (TNC) initiated a global mass timber impact assessment (GMTIA), a five-part, highly collaborative research program focused on understanding the potential benefits and risks of increased demand for mass timber products on forests and identifying appropriate safeguards to ensure positive outcomes. We selected five regions with high potential for mass timber utilization based on a range of criteria: we selected two regions in which mass timber has already achieved modest levels of adoption (Europe, which represented 60% of the global mass timber market in 2018 [25], and the USA, where 576 mass timber projects have either been built or are currently under construction [26].We chose additional regions where recent actions suggest that mass timber may play a role in future climate policies [27][28][29]; including one region in which global construction activity is projected to be concentrated through 2030 (China, which represents 24% of total projected global floor area expansion through 2016-2030 [2]); and one region that is home to significant areas of commercial softwood forests [30] and a well-established forest products manufacturing industry, but where mass timber markets remain nascent (the southern cone of South America).. The GMTIA is organized in five phases of work (Figure 1): i. Comparative life cycle assessments (LCAs) of functionally equivalent mass timber and conventional buildings in selected regions (Europe, China, Chile, and the US) to estimate embodied carbon and carbon storage of mass timber utilization at the individual building level for representative buildings using designs that are locally appropriate to each region, but functionally equivalent across regions.As with most LCAs, phase 1 of the GMTIA does not consider impacts on forest carbon stocks, which are explicitly addressed in phase 4.These LCAs also do not consider end-of-life treatment.ii. 
Regional demand assessments to extend the results of individual building LCAs to estimate embodied carbon, carbon storage, and changes in wood demand at varying levels of mass timber adoption (conservative, optimistic, or extremely high adoption levels) in new construction in each of the selected regions. iii. Global trade modelling using a variant of the Global Forest Products Model to estimate how changes in demand for forest products associated with increased penetration of mass timber in each region will directly and indirectly impact global forest product trade flows (e.g., if 90% of new buildings in region X are built with mass timber, where will that timber be supplied from, and will other trade flows be displaced?). iv. Forest impact assessments to evaluate the spatial-temporal impact of mass timber harvests on forest composition, structure and carbon stocks in forest ecosystems associated with different predicted mass timber demand scenarios as indicated by the regional demand assessment and the global trade modelling in Phases ii and iii. v. Integration of the results of Phases 1-4 to estimate the total impact on climate and forests of different levels of mass timber utilization in the selected regions and the identification of potential risks and of conditions needed to reduce potential negative impacts. Results of all phases will also be communicated via academic articles and policy recommendations. In July 2018, The Nature Conservancy convened a collaborative multi-disciplinary group of forest ecologists, conservation practitioners, academics, economists, and lifecycle analysts to design a comprehensive approach to understand the total impact of greater mass timber utilization on forests and climate. We convened over 20 collaborators and partners, bringing in a wide array of knowledge and expertise on the complex issues that need to be considered to assess the impacts of increased demand for mass timber (see the acknowledgements section for a full list of collaborators). The remainder of this article briefly discusses the theoretical basis for this research, which occurs in five phases. The initial three phases of this research make up many papers within this Special Issue of Sustainability.
Theoretical Basis

The climate impacts of concrete and steel are typically calculated as the emissions associated with the extraction, processing, manufacturing, transportation, installation, use, maintenance, and disposal of the products (often referred to as embodied emissions), for which traditional LCA is well suited [31,32]. However, because of all the ecosystem services forests can provide, understanding the full climate impacts of forest product utilization requires consideration of a wider range of factors, not all of which are captured in typical LCA methodologies:

a. Forest carbon stock changes: Changes in demand for mass timber may drive market-level changes that increase or decrease forest carbon stocks, forest area, or both, as described above. Demand changes may also simply drive product shifts or changes in utilization rates, and thereby have no detectable impact on forest carbon stocks. These impacts are likely to vary based on geography, forest composition and structure, existing forest management practices, the magnitude of demand changes, forest tenure, forest plans and ownership, timescale, and a variety of other factors.

b. Forest health and climate change: Demand changes will also lead to changes in forest health (potentially positive or negative). A changing climate will also affect the health of the forest, especially as natural disturbances such as wildfire, insects, and disease increase in intensity. The implications of climate change for forest dynamics ultimately affect the amount of forest products that are produced.

c. Embodied carbon: Embodied carbon for construction products refers to all greenhouse gas (GHG) emissions associated with extracting, processing, manufacturing, transporting, and installing construction materials [33]. The harvest, transportation, and production of mass timber may have higher or lower emissions than alternative construction materials, thereby generating a negative or positive climate impact, depending upon the process and energy mix of their manufacture, transportation distances, and the emissions associated with the materials for which they are a substitute.

d. Carbon storage in wood products: Carbon storage occurs differently based on the type of harvested wood product. Forest products may store carbon for extended periods of time (as in the case of wooden furniture, and mass timber in buildings), emit carbon immediately (as in the case of bioenergy), or store carbon for an intermediate period dependent on recycling (for example, short-term storage in paper products). Temporary carbon storage, however, is not accounted for in traditional LCA frameworks, which tend to treat emissions as equivalent regardless of when in the life cycle they occur.

e. End-of-Life (EoL): There are many possible waste scenarios for building materials, all of which vary depending on the material; local, state, and country regulations; and existing deconstruction standards. Wood products have the potential to be (1) re-used (e.g., wooden boards salvaged from one building for use in another), (2) substituted for energy (e.g., wooden boards salvaged from a building and processed/burned to produce bioenergy), or (3) landfilled. Materials that are fully reused can have climate benefits, while wood products in a landfill decay slowly and may produce methane, a very potent greenhouse gas [34].
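To make the interaction of these factors concrete, a toy aggregation is sketched below in Python. Every number, the sign convention, and the function name are hypothetical placeholders chosen purely for illustration; they are not GMTIA inputs, methods, or results.

```python
# Illustrative only: toy aggregation of factors (a)-(e) above.
# All numbers are hypothetical placeholders, not GMTIA results.

def net_climate_impact(
    embodied_timber_tCO2e: float,      # (c) embodied emissions of the timber building
    embodied_baseline_tCO2e: float,    # (c) embodied emissions of the concrete/steel baseline
    stored_in_product_tCO2e: float,    # (d) carbon stored in wood products over the study period
    forest_stock_change_tCO2e: float,  # (a)/(b) net change in forest carbon stocks (+ = gain)
    end_of_life_tCO2e: float,          # (e) emissions at end of life (e.g., landfill methane)
) -> float:
    """A negative return value means a net climate benefit relative to the baseline."""
    substitution = embodied_timber_tCO2e - embodied_baseline_tCO2e
    return substitution - stored_in_product_tCO2e - forest_stock_change_tCO2e + end_of_life_tCO2e

# Hypothetical example for a single mid-rise building:
print(net_climate_impact(
    embodied_timber_tCO2e=900.0,
    embodied_baseline_tCO2e=1200.0,
    stored_in_product_tCO2e=400.0,
    forest_stock_change_tCO2e=-50.0,   # small net loss of forest carbon assumed
    end_of_life_tCO2e=60.0,
))  # -> -590.0 tCO2e, i.e. a net benefit under these made-up inputs
```

The point of the sketch is only that the sign and magnitude of the total depend on all five factors together, which is why none of them can be evaluated in isolation.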
Estimating the total impact of increased wood product utilization on climate change, and the potential impacts on forests, requires an understanding of the above-mentioned factors and their complex interactions with one another. Little, and often differing, information is currently available to inform decision makers regarding (i) the potential scale of climate impacts associated with greater forest products utilization, (ii) the risks associated with increased wood products demand on forest degradation or deforestation, (iii) the potential benefits of increased wood products demand on reforestation or other increases in forest carbon stock, (iv) factors or conditions that enhance positive impacts or reduce negative impacts on forests and climate, (v) the role of market mechanisms in mediating forest impacts, and (vi) measures that might be taken to maximize benefits, minimize risks, and safeguard against undesirable outcomes [35].

While the GMTIA attempts a comprehensive LCA assessment comparing manufacturing emissions among functionally equivalent buildings, it does not consider the operational emissions (the emissions associated with operating and maintaining the building over its useful life) of different building types, due to a lack of readily available data and tools to achieve the comparison [32]. Operational emissions represent as much as 28% of global energy-related CO2 emissions (the main source of emissions in whole-building LCA studies) [36]; as such, we recommend the development of tools and data collection to assess potential operational differences.

Discussion and Conclusions

Partial results of the first three phases of the GMTIA appear in this Special Issue of Sustainability. This Special Issue includes studies that present the results of the comparable LCAs for functionally equivalent buildings of different heights and from different regions; estimates from four important international wood-producing regions of the impacts of wood product demand when moderate to high levels of mass timber adoption are considered; and estimates of the impacts of these demand changes on global prices, production, consumption, and trade of forest products.

Phases 4 and 5 of the GMTIA, which will generate the results perhaps most critical to decision makers and society at large, necessarily build on the results of the first three phases presented here. Phase 4, the impact assessment of demand changes on forest composition, structure, and carbon stocks, and Phase 5, the integration of Phases 1-4 to estimate the total impact of mass timber demand changes on forests and climate and identify pathways to mitigate negative consequences for forests, people, and climate, are already underway. These results are expected in early 2023.
This work represents an important first step toward understanding the full breadth of impacts that increased mass timber utilization could have on forests and climate mitigation around the globe. The series of projects detailed in this Special Issue have been collaboratively designed to answer pressing questions in the discussion of mass timber impacts, both negative and positive. That said, the science, marketing, and application of mass timber are actively expanding fields, and the GMTIA is not able to provide the final word on these topics. Rather, we hope to initiate the development of key research and safeguarding standards that can guide policymakers, commercial interests, and land managers and owners in making sound decisions in how they approach elements of the broader mass timber conversation. It is critical to our success that the methods and standards we develop continue to evolve in response to the best available science, policy, and case studies. This project forms part of a broader portfolio of work and has been developed with a wide range of academic institutions, Non-Governmental Organizations (NGOs), and consultancies. To further collaborative engagement, The Nature Conservancy would like to hear from additional voices on these issues and encourages those with an interest in collaborating in this on-going work to contact the corresponding author.

Figure 1. Five phases of the global mass timber impact assessment (GMTIA).
v3-fos-license
2020-11-19T09:13:34.829Z
2020-11-18T00:00:00.000
227067251
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-020-77119-6.pdf", "pdf_hash": "b4f805fbfdc557de02131c4c4e7249ddd8891084", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43643", "s2fieldsofstudy": [ "Medicine" ], "sha1": "f373f8b32c3a51e7e3f64d500ac7cf2f0fd5643a", "year": 2020 }
pes2o/s2orc
Potential miRNA biomarkers for the diagnosis and prognosis of esophageal cancer detected by a novel absolute quantitative RT-qPCR method

miRNAs are expected to become potential biomarkers in the diagnosis and prognosis of esophageal cancer (EC). Through a series of screening steps, miR-34a-5p, miR-148a-3p and miR-181a-5p were selected as EC-associated miRNAs. Based on the AllGlo probe, a novel absolute quantitative RT-qPCR method with high sensitivity, specificity and accuracy was established for detecting miRNAs. The clinical significance of these 3 miRNAs was then explored with 213 patients (166 cases with EC and 47 cases with benign diseases) and 170 normal controls. Compared with normal controls, the level of miR-34a-5p increased while miR-148a-3p and miR-181a-5p decreased in EC and benign patients (P < 0.001), and the level of miR-181a-5p in early EC patients was significantly lower (P < 0.001). According to logistic regression analysis, combined detection of miR-34a-5p, miR-148a-3p and Cyfra21-1 provided the highest diagnostic efficiency of 85.07%, with sensitivity and specificity reaching 85.45% and 84.71%. Compared with preoperative samples, the level of miR-34a-5p decreased while miR-148a-3p and miR-181a-5p increased in postoperative samples (P < 0.001). Collectively, this newly developed absolute quantitative RT-qPCR method exhibits high application value in detecting miRNAs; miR-34a-5p, miR-148a-3p and miR-181a-5p may serve as potential biomarkers in the diagnosis and prognosis of EC, and miR-181a-5p could probably serve as a new biomarker for early EC.

Establishment and evaluation of the novel absolute quantitative RT-qPCR method based on the AllGlo probe for detecting miRNAs

Using validated specific primers and the designed probe sequence, an absolute miRNA quantitative RT-qPCR detection method based on the AllGlo probe was constructed. The AllGlo probe amplification products were sequenced, showing that all the miRNA PCR amplification products were specific. Compared with the SYBR Green method, the AllGlo probe detection method had a CT value 1-2 cycles lower, indicating higher sensitivity. Standard curves (R² > 0.99) were established for quantitative detection of the three miRNAs (miR-34a-5p, miR-148a-3p and miR-181a-5p). Moreover, both intra-assay and inter-assay variabilities were less than 5% (Table 2), indicating that the established detection method had good repeatability, stability and precision. Five different concentrations of standards were tested using a double-blind method to determine the correctness; the absolute deviations of the three miRNAs were all within acceptable ranges (≤ ± 0.4 log10) (Table 3). The correlation coefficients (R²) of the linear equations of the three miRNAs were all > 0.99, indicating that the detection method had a good linear relationship in the range of 5.6 × 10³ to 5.6 × 10¹⁰ copies/μL (Fig. 1). The LODs of miR-34a, miR-148a and miR-181a were 1906, 6202 and 3332 copies/μL, respectively. The eight high-concentration standards and the negative samples were interspersed and tested; the negative specimens showed no amplification curve, indicating that the detection was not affected by high-concentration samples.

Table 1. Results of candidate miRNAs in discovery cohort. CT: cycle threshold. *P < 0.05; ***P < 0.001.
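For orientation only, the sketch below shows how an absolute standard curve of this kind is typically used: log10 copy number is regressed against Ct, an unknown Ct is interpolated back to copies/μL, and a coefficient of variation is computed for repeatability. The dilution points, Ct values and helper names are invented for the example and are not the values reported in Fig. 1 or Tables 2-3.

```python
# Minimal sketch of absolute quantification from a standard curve, plus a CV check.
# The dilution series, Ct values and the unknown Ct below are invented for
# illustration; they are not the values reported for the three miRNAs.
import numpy as np

# Serial dilutions of a synthetic miRNA standard (copies/uL) and measured Ct values.
copies = np.array([5.6e3, 5.6e4, 5.6e5, 5.6e6, 5.6e7, 5.6e8, 5.6e9, 5.6e10])
ct     = np.array([33.1, 29.8, 26.4, 23.1, 19.7, 16.4, 13.0, 9.7])

# Standard curve: Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
r2 = np.corrcoef(np.log10(copies), ct)[0, 1] ** 2
efficiency = 10 ** (-1.0 / slope) - 1.0          # amplification efficiency implied by the slope

def ct_to_copies(ct_value: float) -> float:
    """Interpolate an unknown sample's Ct back to copies/uL with the fitted curve."""
    return 10 ** ((ct_value - intercept) / slope)

def cv_percent(replicates) -> float:
    """Coefficient of variation, CV = SD / mean * 100%."""
    return float(np.std(replicates, ddof=1) / np.mean(replicates) * 100)

print(f"slope = {slope:.2f}, R^2 = {r2:.3f}, efficiency = {efficiency:.1%}")
print(f"Ct 21.5 -> {ct_to_copies(21.5):.3g} copies/uL")

reps = [24.10, 24.05, 24.20, 24.12]              # replicate Ct values of one positive control
print(f"intra-assay CV = {cv_percent(reps):.2f}% (acceptance criterion: < 5%)")
```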
The methodological evaluation results proved that this absolute miRNA quantitative RT-qPCR detection method has high application value in scientific research and clinical practice.

miR-34a-5p, miR-148a-3p and miR-181a-5p could be used to identify EC and benign esophageal diseases

The three differentially expressed plasma miRNAs were examined by AllGlo RT-qPCR in the validation tests with the plasma samples of 166 EC patients, 47 patients with benign esophageal diseases and 170 normal controls. According to the examination results, the differences in the expression levels of miR-34a-5p, miR-148a-3p and miR-181a-5p among EC patients, benign esophageal disease patients and normal controls were statistically significant (P < 0.001). Compared with normal controls, the level of miR-34a-5p increased, while the levels of miR-148a-3p and miR-181a-5p decreased in EC patients (Fig. 2).

Figure 2. Expression of miR-34a-5p, miR-148a-3p and miR-181a-5p in EC patients, benign esophageal disease patients and normal controls; all data shown as log10 copies/μL (*P < 0.05, **P < 0.01, ***P < 0.001).

Evaluation of the clinical diagnostic value of the three miRNAs for EC

To evaluate the diagnostic efficiency of the three miRNAs, ROC curve analysis was applied to find the appropriate cut-off value for each. As the plasma levels of CEA and Cyfra21-1 are commonly used as auxiliary diagnostic markers for EC, the performance of the three miRNAs was compared with CEA and Cyfra21-1. ROC curve analysis showed that, in distinguishing EC patients from normal controls, the areas under the curves (AUC) of miR-34a-5p, miR-148a-3p and miR-181a-5p were 0.8213, 0.8079 and 0.7814, respectively (Fig. 3A-C), while those of CEA and Cyfra21-1 were 0.6172 and 0.7609, respectively (Fig. 3D,E). The optimal cut-off values of miR-34a-5p, miR-148a-3p, miR-181a-5p, CEA and Cyfra21-1 were 6.461, 9.394, 6.330, 4.60 ng/mL and 3.39 ng/mL. At the optimal cut-off values, the sensitivity and specificity of miR-34a-5p were 76.53% and 83.53%, those of miR-148a-3p were 82.53% and 64.71%, and those of miR-181a-5p were 85.54% and 61.76%, while the sensitivity and specificity of CEA were 15.66% and 99.41%, and those of Cyfra21-1 were 50.30% and 89.94%, indicating that the sensitivities of the three miRNAs were much higher than those of CEA and Cyfra21-1. Thus, plasma miR-34a-5p, miR-148a-3p and miR-181a-5p could be complemented by the plasma levels of CEA and Cyfra21-1 for the auxiliary diagnosis of EC. These findings validated the performance of miR-34a-5p, miR-148a-3p and miR-181a-5p as plasma markers for EC diagnosis. In order to obtain higher diagnostic efficiency, the diagnostic efficiency of different combinations of the three miRNAs with CEA and Cyfra21-1 was examined by logistic regression analysis. As shown in Table 4, the diagnostic efficiencies of the combinations were all higher than that of each marker used alone.
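As a generic illustration of the ROC-based cut-off selection used for the single markers above, the following sketch computes an AUC and a Youden-index cut-off from invented expression values. The actual cut-offs quoted in the text (e.g., 6.461 for miR-34a-5p) come from the study data, not from this example.

```python
# Generic single-marker ROC analysis with Youden-index cutoff selection.
# The expression values below are invented for illustration only.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# log10 copies/uL of one miRNA; label 1 = EC patient, 0 = normal control
values = np.array([7.2, 6.9, 7.5, 6.1, 6.4, 7.8, 6.0, 6.6, 7.1, 6.3])
labels = np.array([1,   1,   1,   0,   0,   1,   0,   0,   1,   0])

auc = roc_auc_score(labels, values)
fpr, tpr, thresholds = roc_curve(labels, values)

j = tpr - fpr                         # Youden index J = sensitivity + specificity - 1
best = np.argmax(j)
print(f"AUC = {auc:.3f}")
print(f"optimal cutoff = {thresholds[best]:.3f} "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```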
In the 166 patients with EC, according to logistic regression analysis of the 3 miRNAs together with CEA and Cyfra21-1, the regression coefficient of miR-181a-5p was −0.306 (P = 0.627 > 0.05) in binary logistic regression (forward selection), and the weight of CEA was so low that both of them were excluded. Based on economic benefit, the model of the panel of miR-34a-5p, miR-148a-3p and Cyfra21-1, which had the highest diagnostic efficiency (85.07%), was chosen, and a mathematical diagnostic model was obtained through logistic regression: Y = 2.774*miR-34a-5p − 5.536*miR-148a-3p + 0.881*Cyfra21-1. ROC curve analysis showed that the AUC of the panel of miR-34a-5p, miR-148a-3p and Cyfra21-1 was 0.9196, with sensitivity and specificity reaching 85.45% and 84.71% (Fig. 3F). In the 67 patients with early EC, logistic regression analysis of the combination of miR-34a-5p, miR-148a-3p and Cyfra21-1 also showed that the combined model had a higher diagnostic efficiency (Table 4) than those of the other panels. All of the above results indicated that combined detection of the panel of miR-34a-5p, miR-148a-3p and Cyfra21-1 in plasma provided a higher diagnostic efficiency, thereby further improving the accuracy of diagnosis.

miR-181a-5p could serve for the diagnosis of early EC

To determine whether these three miRNAs could be used as tumor markers in the development and progression of EC, the correlations between the expression levels of the miRNAs and the clinical pathological features of the patients were analyzed. No obvious differences were observed when EC patients were stratified by sex, age or other clinical features, whereas the expression levels of miR-181a-5p differed between patients with different TNM stages of EC (P = 0.0113, Table 5). In order to clarify whether the expression level of miR-181a-5p could be used for the diagnosis of early EC, we compared the expression levels of miR-181a-5p between early EC patients (156 cases; 10 EC patients with unclear TNM stage had been excluded from the total of 166 EC patients) and normal controls. The expression level of miR-181a-5p in early EC patients was significantly lower than that of normal controls (P < 0.001, Fig. 4A). According to the ROC curve analysis of the diagnostic efficiency of miR-181a-5p in early EC patients, the AUC was 0.7457, and the diagnostic sensitivity and specificity were 85.07% and 62.94% at the optimal cut-off value of 6.330 (Fig. 4B). Among the 67 patients with early EC, 36 patients that were missed by Cyfra21-1 alone (Fig. 4C) and 49 patients that were missed by CEA alone (Fig. 4D) were identified by miR-181a-5p. In the process of exploring the ability of miR-181a-5p to distinguish early ECs from normal controls, we found that in serial testing of miR-181a-5p and CEA, the specificity increased dramatically to 100%, while the sensitivity dropped to 11.1%. In parallel testing of miR-181a-5p and CEA, the sensitivity and specificity of the combination were 86.57% and 61.76%, similar to miR-181a-5p alone. Consistently, serial testing of miR-181a-5p and Cyfra21-1 increased specificity while reducing sensitivity, and parallel testing of miR-181a-5p and Cyfra21-1 failed to improve diagnostic efficiency. Indeed, the sensitivity of miR-181a-5p in the diagnosis of early EC was much higher than those of the conventional tumor markers (CEA and Cyfra21-1).
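The contrast between serial and parallel testing noted above follows the familiar combination rules for two tests. The sketch below applies these rules under an independence assumption, using single-marker sensitivities and specificities quoted earlier in the text; because real markers are correlated, the outputs are only indicative and will not exactly reproduce the measured combined values.

```python
# Serial vs. parallel combination of two diagnostic tests, assuming independence.
# Inputs are single-marker sensitivities/specificities quoted earlier; real markers
# are correlated, so these formula-based outputs are only indicative.

def serial(se1, sp1, se2, sp2):
    """Call positive only if BOTH tests are positive."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

def parallel(se1, sp1, se2, sp2):
    """Call positive if EITHER test is positive."""
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

mir181a = (0.8507, 0.6294)       # sensitivity, specificity of miR-181a-5p for early EC
cea     = (0.1566, 0.9941)       # sensitivity, specificity of CEA (EC vs. controls)

for name, combine in (("serial", serial), ("parallel", parallel)):
    se, sp = combine(*mir181a, *cea)
    print(f"{name} miR-181a-5p + CEA: sensitivity {se:.1%}, specificity {sp:.1%}")
```

Under independence this gives roughly 13% sensitivity / 99.8% specificity for serial testing and 87% / 63% for parallel testing, close to (but not identical with) the values measured from the patient data above.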
Our results provided evidence that the expression level of miR-181a-5p in plasma could be used to distinguish early EC patients from normal controls with clinically satisfactory sensitivity, and it might be a new biomarker for early EC.

Evaluation of miR-34a-5p, miR-148a-3p and miR-181a-5p as prognostic biomarkers for EC patients after surgery

In order to evaluate whether these three miRNAs could be used as prognostic biomarkers, their levels in preoperative and postoperative plasma samples of 80 EC patients who underwent esophagectomy were examined. The results showed that, compared with the preoperative samples, the level of miR-34a-5p significantly decreased, while the levels of miR-148a-3p and miR-181a-5p significantly increased in the postoperative samples (P < 0.001, Fig. 5). These results suggested that the plasma levels of miR-34a-5p, miR-148a-3p and miR-181a-5p might be valuable predictors of postoperative prognosis for EC patients.

Discussion

miRNAs have recently emerged as a novel class of gene expression regulators. Studies have shown that miRNAs are stable in serum and that their expression levels are related to tumor types and development stages 18,19. Thus, circulating miRNAs may be a novel kind of potential biomarker for the early diagnosis and clinical evaluation of EC patients. However, a mature miRNA is a short RNA of 21-25 nt, which is easily interfered with by homologous sequences and homologous miRNA sequences during the detection process. Also, many miRNAs are of low abundance in the circulation and difficult to detect accurately 17. At present, PCR detection methods for miRNAs mainly include the SYBR Green method and probe-based methods. The SYBR Green method has a lower cost, but due to methodological restrictions, its specificity is poor. As for the most commonly used TaqMan probe, it has a long sequence and high experimental cost, which makes it difficult to design and adopt 20. These limitations make current PCR detection of miRNAs difficult to promote clinically. The sensitivity and amplification efficiency of miRNA detection are very important, and the stability and repeatability of the detection method have a great impact on the results; without these, even if a detection method is economically feasible, it is difficult for it to become a routine examination method in clinical practice. Currently, the AllGlo probe, the latest generation of quantitative fluorescent probe, has been applied in detecting H7N7, HPV and acute respiratory infection-associated viruses. The AllGlo probe has higher specificity and sensitivity than common methods, as well as satisfactory cost effectiveness 30-32. Since the AllGlo probe is shorter than other probes, the fluorophore increases the Tm value of the probe by 8-10 °C during the PCR reaction, making it more suitable for the detection of small-fragment miRNAs. In the detection process with the AllGlo probe, as long as there is a base mismatch in the amplification reaction, no fluorescent signal is produced, thereby greatly reducing non-specific fluorescent signals and effectively resolving the interference of miRNA precursors and homologous sequences. The advantages of this assay system overcome the problems of miRNA detection and will promote miRNAs as novel tumor biomarkers in clinical diagnosis. In addition, the PCR method can also be applied to gene detection.
This study was the first to design an AllGlo-probe-based absolute quantitative RT-qPCR assay to identify and quantify miRNAs, thus solving the above problems. Our method not only overcame the restrictions of miRNA detection in plasma, but was also quantitative and convenient. The overall performance evaluation of this method proved that the detection method was stable, accurate and sensitive, with no contamination or cross-influence, and that it has significant application value in scientific research and clinical diagnosis. Compared with SYBR Green qPCR, the AllGlo qPCR method had a higher sensitivity and a wider linear range (10³-10¹⁰ copies/μL), meaning that we could easily detect miRNAs expressed at low abundance in the circulation. The levels of miRNAs in some body fluids such as urine, cerebrospinal fluid and exosomes are much lower than those in serum or plasma; however, studies have shown that they have great significance in the process of cancer development and some other diseases 33-35. The established system was initially used to explore the diagnostic efficiency of plasma miRNAs with an expanded sample size, which also provided evidence of the reliability of the method. We found that the expression level of miR-34a-5p increased, whereas the expression levels of miR-148a-3p and miR-181a-5p decreased, in the plasma of EC patients. The results indicated that the plasma levels of miR-34a-5p, miR-148a-3p and miR-181a-5p could serve as biomarkers for EC diagnosis. The difference in miRNA expression levels between benign diseases and EC may be related to the regulation of EC development by miRNAs. According to the studies of Wang et al. and Han et al. 25,36, miR-34a-5p could inhibit proliferation, migration, invasion and epithelial-mesenchymal transition in esophageal squamous cell carcinoma by targeting lymphoid enhancer-binding factor 1 and suppressing the Hippo-YAP1/TAZ signaling pathway, and the lncRNA CRNDE could promote colorectal cancer cell proliferation and chemoresistance via miR-181a-5p-mediated regulation of Wnt/β-catenin signaling. Perhaps this is why the expression of miR-34a-5p was upregulated and the expression of miR-181a-5p was dysregulated in benign disease compared to EC patients, and there may also be some other regulatory mechanisms. Our results also showed that the combination of the three miRNAs could be used as a more comprehensive indicator for tumor detection compared to Cyfra21-1 and CEA.

Figure 4. (A) Expression of miR-181a-5p in normal controls and in EC patients at different stages (I + II, III + IV), showing that the level of miR-181a-5p in early EC patients was significantly lower than that of normal volunteers. (B) ROC curve analysis of miR-181a-5p in distinguishing early EC patients from normal controls, demonstrating that miR-181a-5p possessed good diagnostic efficiency. (C,D) Two-parameter classification in detecting early stages of EC: among 67 patients with early EC, 36 patients missed by Cyfra21-1 alone (C) and 49 patients missed by CEA alone (D) were identified by miR-181a-5p. The cut-off values of miR-181a-5p, CEA and Cyfra21-1 were 6.330, 5.5 ng/mL and 3.39 ng/mL. *P < 0.05, **P < 0.01, ***P < 0.001.

Figure 5. Changes in the levels of miR-34a-5p, miR-148a-3p and miR-181a-5p after surgery (pre-operation vs. post-operation). All data shown as log10 copies/μL. EC: esophageal cancer; ***P < 0.001.
In particular, we observed significant differences in the expression of miR-181a-5p between EC patients at different stages of development and normal controls. The expression level of miR-181a-5p was lower in early EC patients than in normal controls, with a sensitivity of 85.07%, indicating that miR-181a-5p could be used as a biomarker for the early diagnosis of EC. In order to obtain the optimal diagnostic performance, we combined clinical indicators (CEA, Cyfra21-1) with these three miRNAs to construct a diagnostic mathematical model. After the indicators were combined and analyzed by logistic regression, the mathematical formula was constructed according to the different weights in the diagnostic process, and its diagnostic AUC was up to 0.9196, with sensitivity and specificity up to 85.45% and 84.71%, respectively. Similar to the Roman index, this mathematical formula has strong practicality: with it we can evaluate the risk of EC based on the examination results of miR-34a-5p, miR-148a-3p and Cyfra21-1, reducing the false negative rate and thus improving the diagnostic efficiency for EC. In this study, compared with the preoperative samples, the level of miR-34a-5p significantly decreased, while the levels of miR-148a-3p and miR-181a-5p significantly increased in the postoperative samples (P < 0.001, Fig. 5). These results suggested that the plasma levels of miR-34a-5p, miR-148a-3p and miR-181a-5p might be valuable predictors of postoperative prognosis for EC patients. As for the mechanism of the change in miRNA expression before and after surgery, it is a relatively complicated process. Since the expression level of a miRNA is closely related to tumor proliferation, migration, invasion and epithelial-mesenchymal transition processes 36, when the tumor tissue of EC patients was removed by surgery, the growth of the tumor essentially stagnated, and the inhibitory effect of miR-34a-5p was also reduced. Through this study, we found that the expression level of miR-34a-5p was positively correlated, to a certain extent, with the development of tumor tissue. In addition, recent studies have also shown that miR-34a-5p plays an important role in the immune system, especially in the chemotherapy process of patients with malignant tumors. The study by Ebrahimiyan et al. 37 showed that altered expression of survivin, regulated by miRNAs such as miR-34a-5p, may result in apoptosis resistance and autoreactivity in lymphocytes from patients and have important roles in systemic sclerosis pathogenicity. The study by Zuo et al. 38 demonstrated that miR-34a-5p negatively regulated the expression of PD-L1 by targeting its 3′-untranslated region and that the miR-34a-5p/PD-L1 axis regulated cis-diamminedichloroplatinum (DDP) chemoresistance of ovarian cancer cells. Also, Luo et al. 39 discovered that TP73-AS1 contributed to proliferation, migration and DDP resistance but inhibited apoptosis of non-small cell lung cancer cells by upregulating TRIM29 and sponging miR-34a-5p. These studies have directly or indirectly shown that miR-34a-5p has an important regulatory role in the function of the body's immune system. As for the association between miR-34a-5p and smoking, it is not very clear at present. The study by Sui et al. 40 showed that non-smoking lung adenocarcinoma patients, compared to smokers, had different characteristics in terms of somatic mutations, gene and miRNA expression, and the microenvironment, indicating a diverse mechanism of oncogenesis.
In our study, probably because the sample size was relatively small, the association between miR-34a-5p and smoking showed a statistical difference that was not very obvious. Regarding the relationship between miR-34a-5p and smoking, more research may be needed. In summary, the novel absolute quantitative RT-qPCR method based on AllGlo probes designed to detect miRNAs possesses the advantages of high stability, accuracy and sensitivity. It has great application value in scientific research and clinical diagnosis. Meanwhile, by using this newly developed method, we identified that miR-34a-5p, miR-148a-3p and miR-181a-5p may serve as novel noninvasive biomarkers for EC diagnosis and prognosis; in particular, miR-181a-5p could probably be used as a new biomarker for early EC. However, this study is still in its infancy, requiring more types of samples and more kinds of miRNAs to ultimately optimize the detection system and method. We will continue to expand the sample size for EC, especially patients' postoperative samples and samples collected during postoperative treatment. Simultaneously, information on the treatment, prognosis, and survival of EC patients will be collected in order to further study the specific roles of these miRNAs in the prognosis of EC. Additionally, we will conduct follow-up studies to determine whether the plasma levels of these three miRNAs can predict recurrence/metastasis of EC.

Verification of the sensitivity and specificity of the absolute quantitative RT-qPCR

The target miRNAs, used as templates, were synthesized by GENEJUE (Xiamen, China) and diluted with RNase-free water. The PCR reaction was performed on the ABI 7500 to calculate the amplification efficiency and the standard curve equation. In order to evaluate the precision of the AllGlo RT-qPCR established in this study, positive controls at three concentrations, used as templates, were tested for 5 consecutive days, four times a day. According to document EP9-A2, the intra-assay CV, between-day CV and total CV were calculated with the equation CV = standard deviation/mean × 100%. A CV value < 5% is required. In order to check the correctness of the system, five different concentrations of standards were tested by a double-blind method, and the mean, standard deviation and bias were calculated (EP9-A2). The high-concentration standard was serially diluted to eight different concentrations, each sample was measured twice, and the linear range and linear correlation coefficient were then calculated, with R² > 0.95 required (EP6-A). In order to determine the limit of detection (LOD), the middle-concentration standard was diluted to the limit of the range and used as template in PCR, and the diluted standards were repeatedly measured 20 times to determine the LOD of this method. In order to determine the contamination carry-over rate of the detection method, a total of eight high-value standards and negative specimens were interspersed.

Samples and clinical pathological data collection

The plasma samples were collected in the Center of Clinical Laboratory of Zhongshan Hospital Affiliated to Xiamen University from April 2016 to February 2018. A total of 213 patients who were diagnosed with esophageal diseases (166 cases with EC and 47 cases with benign diseases) and 170 normal controls matched for age and gender were recruited in this study. All the patients were pathologically diagnosed using surgical specimens or biopsies.
Preoperative plasma samples were collected before esophagectomy, and postoperative plasma samples were collected in the second week after esophagectomy, from 80 of the 166 EC patients. None of the normal controls had a prior history of any major illness. The clinicopathological characteristics of all the patients and controls are presented in Table 4. Tumor stages and differentiation levels were determined using the TNM staging classification system published by the American Joint Committee on Cancer (AJCC) in 2009. Before any treatment, a 2 mL venous blood sample anticoagulated with sodium citrate was collected from each participant and immediately centrifuged to obtain the plasma, which was then stored at −80 °C. This study was approved by the Ethics Committee of Zhongshan Hospital Affiliated to Xiamen University, and written informed consent was provided by all the participants.

Verification and validation of the selected miRNAs as biomarkers

The abundance of the miRNAs was detected by the absolute quantification RT-qPCR method based on the AllGlo probe designed above. The concentrations of CEA and Cyfra21-1 in plasma were detected by the Roche Cobas e601 system based on the principle of electrochemiluminescence. The cut-off point of CEA is 4.6 ng/mL and its detection limit is 0.20 ng/mL, with a CV < 5%. The cut-off point of Cyfra21-1 is 3.39 ng/mL and its detection limit is 0.10 ng/mL, with a CV < 5%. Samples were detected blindly and in random order by trained clinical laboratory technicians before interpretation.

Statistical analysis

The nonparametric Mann-Whitney U test was performed to compare miRNA expression between the cancer patients and the normal controls, and the Kruskal-Wallis test was used for comparisons among more than two groups. The Wilcoxon signed-rank test was used to compare the relative expression between pre- and post-operation samples. The Mann-Whitney U test and the Kruskal-Wallis test were used to evaluate the correlations between the miRNA expression results and the clinicopathological parameters. A two-tailed P-value of < 0.05 was considered statistically significant. ROC curves were used to analyze the diagnostic sensitivity, specificity and diagnostic efficiency. The best diagnostic efficiency was judged by the Youden index (sensitivity + specificity − 1).
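A minimal sketch of the group comparisons described in this section, using generic SciPy calls on invented log10 copy numbers; it is meant only to illustrate the unpaired (Mann-Whitney U) and paired (Wilcoxon signed-rank) tests named above, not to reproduce the study's results.

```python
# Sketch of the group comparisons described above, on invented log10 copy numbers.
# Mann-Whitney U: EC patients vs normal controls (unpaired).
# Wilcoxon signed-rank: pre- vs post-operative samples from the same patients (paired).
from scipy import stats

ec_patients     = [7.2, 7.5, 6.9, 7.8, 7.1, 7.4]     # e.g. miR-34a-5p, log10 copies/uL
normal_controls = [6.3, 6.1, 6.6, 6.4, 6.0, 6.5]

u_stat, p_unpaired = stats.mannwhitneyu(ec_patients, normal_controls,
                                        alternative="two-sided")

pre_op  = [7.4, 7.1, 7.6, 7.3, 7.0]                   # same patients before surgery
post_op = [6.8, 6.5, 7.0, 6.7, 6.4]                   # and in the second week after surgery

w_stat, p_paired = stats.wilcoxon(pre_op, post_op)

print(f"Mann-Whitney U p = {p_unpaired:.4f}; Wilcoxon signed-rank p = {p_paired:.4f} "
      f"(two-tailed P < 0.05 considered significant)")
```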
v3-fos-license
2019-02-28T00:09:44.051Z
2019-02-25T00:00:00.000
104335632
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.orientjchem.org/pdf/vol35no1/OJC_Vol35_No1_p_399-403.pdf", "pdf_hash": "760502b5687c720b9fcf49426e70f86feb012022", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43644", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "760502b5687c720b9fcf49426e70f86feb012022", "year": 2019 }
pes2o/s2orc
Heterogeneous Zeolite-Based Catalyst for Esterification of α-Pinene to α-Terpinyl Acetate

The purpose of this study is to determine the most effective type of heterogeneous catalyst, among natural zeolite (ZA), Zr-natural zeolite (Zr/ZA) and zeolite Y (H/ZY), for the esterification of α-pinene. α-Terpinyl acetate was successfully synthesized from α-pinene and acetic anhydride over these heterogeneous catalysts. The esterification reaction was carried out with varying reaction time, temperature and zeolite catalyst. The most effective catalyst for the synthesis of α-terpinyl acetate is H/ZY, giving a yield of 52.83% at 40 °C after 4 h, with a selectivity of 61.38%. The results showed that effective separation of the catalyst could contribute to developing a new strategy for the synthesis of α-terpinyl acetate.

Keywords: Zeolite, α-pinene, Terpinyl Acetate, Esterification.

The esterification reaction of α-pinene has been carried out using homogeneous as well as heterogeneous catalysts. Heterogeneous catalysts have been considered as an alternative [12]; they include metal oxides, zeolites, and active metals [13]. The mechanism of esterification catalyzed by heterogeneous catalysts is still debated [14-16]. We have also proposed a similar intermediate in the esterification reaction using natural zeolite. Several previous studies have addressed terpinyl acetate synthesis by the α-pinene esterification reaction: α-pinene has been converted to terpinyl acetate with a zeolite H-beta catalyst, giving a terpinyl acetate concentration of 29% [8]; Liu et al. (2013) carried out terpinyl acetate synthesis with catalysts from ionic solutions, obtaining a terpinyl acetate concentration of 30.8% [6]; and terpinyl acetate has been synthesized with lipase catalysts [17-19], yielding as much as 40.3% terpinyl acetate [5]. In this study, the esterification of α-pinene to α-terpinyl acetate using heterogeneous catalysts, namely natural zeolite (H/ZA), Zr-natural zeolite (Zr/ZA) and zeolite Y (H/ZY), was carried out to determine the most effective type of catalyst.

EXPERIMENTAL

The materials used in this study were turpentine oil from Central Java, natural zeolite from Malang, and zeolite Y from Sigma Aldrich. Other chemicals, such as anhydrous sodium sulfate (Na2SO4), zirconium(IV) chloride (ZrCl4), and acetic anhydride, were purchased from Merck. All solid materials were used directly after drying, without further purification.

The natural zeolite was soaked in 1% hydrofluoric acid (HF) solution for 30 min, washed with demineralized water, and dried in an oven at 120 °C for 3 hours. The natural zeolite was then soaked in hydrochloric acid (HCl) for 30 min at 50 °C while stirring with a magnetic stirrer. The zeolite was washed with demineralized water until the chloride ion (Cl−) disappeared and then soaked in 1 N ammonium chloride (NH4Cl) solution. The natural zeolite was impregnated with 10% (w/w) of zirconium (Zr) metal. The Zr/ZA and zeolite Y catalysts were calcined at 500 °C for 4 hours.

Characterization of the catalysts included crystallinity analysis by X-ray diffraction (XRD, Shimadzu 6000), surface morphology by scanning electron microscopy (SEM, Phenom), and acidity by the gravimetric method, examined with Fourier transform infrared spectroscopy (FTIR, Perkin Elmer, Version 10.4.00).
The synthesis of α-terpinyl acetate was carried out in a batch reactor with a magnetic stirrer. First, 1 g of α-pinene, 10 mL of acetic anhydride, 10 mL of dichloromethane, and 5 mL of distilled water were introduced into the reactor, followed by 0.5 g of catalyst. The reaction mixture was continuously stirred during the reaction using a magnetic stirrer at a temperature of 40 °C. To optimize α-terpinyl acetate formation, experiments were conducted to study factors such as reaction time and type of catalyst. The reaction mixture was then separated from the catalyst by centrifugation for 10 min at 350 rpm. The reaction products were analyzed by gas chromatography-mass spectrometry (GC-MS).

RESULTS AND DISCUSSION

The Zr/ZA catalyst was prepared by impregnating Zr metal. The Zr is impregnated in ionic form, as Zr⁴⁺ cations that exchange with the H⁺ cations in H/ZA. Activation of zeolite Y was done by calcination at 550 °C for 4 hours. Activation aims to enable the acid sites on the zeolite, remove organic substances, and eliminate gaseous products still entrapped in the zeolite.

Characterization of catalyst crystallinity was performed using XRD. The diffractograms of the H/ZA, Zr/ZA, and H/ZY catalysts are presented in Figure 1. Based on the XRD analysis of H/ZY, H/ZA, and Zr/ZA, the diffraction angle positions (2θ) and the interplanar spacings describe the types of crystals present. Sharp intensities at 2θ = 9.77°, 21.45°, 25.63°, and 27.49° indicate the mordenite characteristic [20]. The peaks at 2θ = 9.84°, 21.34°, 25.87°, and 27.21° in the natural zeolite sample from Malang indicated the mordenite group as the dominant phase.

The total acid sites are based on the data in Table 1. The H/ZY sample has the highest total acidity. The Zr/ZA catalyst has a higher acidity than H/ZA. This is due to the introduction of zirconium metal into the catalyst. Zirconium has d orbitals that are not fully filled, so it can effectively accept an electron pair from a basic adsorbate; the acid sites contributed by the zirconium metal are Lewis acid sites [21]. The H/ZY catalyst has a higher acidity value than the H/ZA and Zr/ZA catalysts, so the H/ZY catalyst is more effective in the esterification reaction.

Analysis of the morphology of the catalyst surface was performed using an SEM instrument to observe the H/ZA, Zr/ZA, and H/ZY catalysts microscopically (Fig. 3). The images indicate that the zeolite surface is coated by Zr⁴⁺. The H/ZA catalyst has an irregular morphology, like hollow porous rocks. Based on the XRD results, the natural zeolite from Malang has the characteristics of needle-shaped mordenite crystals, but the needle shape is not visible for H/ZA, whereas the Zr-zeolite catalyst morphology does appear to show the natural form of mordenite crystals resembling needles. Based on the SEM results at 10000x magnification, the H/ZY catalyst has a uniform morphology in the form of granules resembling irregular cubes.

The acidity of each adsorbent and catalyst depends on the nature of the adsorbent and catalyst surface, which can carry Lewis acid or Brønsted acid sites. The Brønsted acid sites (1640 cm⁻¹) and Lewis acid sites (1400 cm⁻¹) in the zeolite can be distinguished as seen in Fig. 2.
The presence of Lewis acid sites appears as a peak in the region of 1450 cm⁻¹, whereas Brønsted acid sites appear as peaks in the range of 1550 to 1640 cm⁻¹. A weak peak at 788 cm⁻¹ appears in the spectrum of the modified zeolite [10,22]. The greatest concentration of the α-terpinyl acetate product in the esterification reaction was obtained from the reaction using the H/ZY catalyst. The H/ZY catalyst has a higher acidity value than the ZA and Zr/ZA catalysts, so the H/ZY catalyst carries out the esterification reaction more effectively. The Zr/ZA catalyst has a higher acidity value than the ZA catalyst, but the level of α-terpinyl acetate produced in the esterification of α-pinene is lower; this could be because the Zr metal particles are too large, closing the zeolite pores and ultimately inhibiting the action of the catalyst in α-terpinyl acetate formation.

The esterification of α-pinene with temperature variations of 30, 40 and 60 °C, a reaction time of 3 h, and a pinene to acetic anhydride ratio of 1:15 yielded α-terpinyl acetate at 0.18, 21.40, and 14.72%, with conversions of 4.99, 74.13 and 52.44%, respectively. The effect of reaction temperature on α-pinene esterification is shown in Table 2. When the temperature was low, the yield of α-terpinyl acetate was low. As the temperature was increased, the yield of α-terpinyl acetate increased accordingly. When the temperature was high, the yield of α-terpinyl acetate tended to decrease. The results showed that the yield of α-terpinyl acetate first increased and then decreased with increasing temperature.

Figure 4 shows the effect of the various heterogeneous catalysts on the reaction, including all minor products. At a reaction time of 4 h, α-terpinyl acetate reached its highest value of 52.83%.

As shown in Fig. 5, once the carbonium ion is produced by protonation of α-pinene, there is a competition between ring-enlarging (II, III, IV) and ring-opening reactions (VI). In the ring-opening reaction, the esterification product is α-terpinyl acetate. The esterification of α-pinene proceeds in situ: a hydration reaction with the added distilled water first forms α-terpineol, and α-terpineol then reacts with acetic anhydride to form α-terpinyl acetate (XI), together with by-products such as bornyl acetate (V) and fenchyl acetate (VIII). A similar behavior was observed by Liu et al. (2013) [6] and Lu et al. (2008) [9].

CONCLUSION

The synthesis of α-terpinyl acetate can be accomplished by the in situ esterification reaction of α-pinene with heterogeneous zeolite-based catalysts. The most effective catalyst for the synthesis of α-terpinyl acetate is H/ZY, with the largest product yield of 52.83% at 4 h and a selectivity of 61.38%.
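Assuming the usual definitions (yield = conversion × selectivity, all relative to the α-pinene fed), the reported numbers can be related as in the short sketch below; the selectivities for the temperature series and the conversion implied for the best H/ZY run are inferences from the stated figures, not values quoted in the text.

```python
# Relating conversion, selectivity and yield under the usual definitions:
#   yield = conversion * selectivity  (all as fractions of a-pinene fed)
# The selectivities below and the implied conversion of the H/ZY run are
# inferred from the reported figures; they are not quoted in the text.

def selectivity(yield_pct: float, conversion_pct: float) -> float:
    return 100.0 * yield_pct / conversion_pct

def implied_conversion(yield_pct: float, selectivity_pct: float) -> float:
    return 100.0 * yield_pct / selectivity_pct

# Temperature series reported above (3 h reaction time):
print(f"30 degC: selectivity ~ {selectivity(0.18, 4.99):.1f}%")
print(f"40 degC: selectivity ~ {selectivity(21.40, 74.13):.1f}%")
print(f"60 degC: selectivity ~ {selectivity(14.72, 52.44):.1f}%")

# Best run (H/ZY, 4 h): yield 52.83%, selectivity 61.38%
print(f"implied a-pinene conversion ~ {implied_conversion(52.83, 61.38):.1f}%")
```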
v3-fos-license
2018-12-02T20:30:56.207Z
2018-11-29T00:00:00.000
54169073
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.jbc.org/content/294/5/1437.full.pdf", "pdf_hash": "b26b31f0dba18e7308f2288c29b2bf82b64dc91c", "pdf_src": "Highwire", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43645", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "c27461fa52483dc08fdf551c5a7f06c262e21491", "year": 2018 }
pes2o/s2orc
A positive feedback mechanism ensures proper assembly of the functional inner centromere during mitosis in human cells

The inner centromere region of a mitotic chromosome critically regulates sister chromatid cohesion and kinetochore–microtubule attachments. However, the molecular mechanism underlying inner centromere assembly remains elusive. Here, using CRISPR/Cas9-based gene editing in HeLa cells, we disrupted the interaction of Shugoshin 1 (Sgo1) with histone H2A phosphorylated on Thr-120 (H2ApT120) to selectively release Sgo1 from mitotic centromeres. Interestingly, cells expressing the H2ApT120-binding defective mutant of Sgo1 have an elevated rate of chromosome missegregation accompanied by weakened centromeric cohesion and decreased centromere accumulation of the chromosomal passenger complex (CPC), an integral part of the inner centromere and a key player in the correction of erroneous kinetochore–microtubule attachments. When artificially tethered to centromeres, a Sgo1 mutant defective in binding protein phosphatase 2A (PP2A) is not able to support proper centromeric cohesion and CPC accumulation, indicating that the Sgo1–PP2A interaction is essential for the integrity of mitotic centromeres. We further provide evidence indicating that Sgo1 protects centromeric cohesin to create a binding site for the histone H3–associated protein kinase Haspin, which not only inhibits the cohesin release factor Wapl and thereby strengthens centromeric cohesion but also phosphorylates histone H3 at Thr-3 to position CPC at inner centromeres. Taken together, our findings reveal a positive feedback–based mechanism that ensures proper assembly of the functional inner centromere during mitosis. They further suggest a causal link between centromeric cohesion defects and chromosomal instability in cancer cells.

Error-free chromosome segregation in mitosis requires timely resolution of sister chromatid cohesion and correct attachment of kinetochores to spindle microtubules. The centromere is a highly specialized chromatin region where sister chromatids are held together and the kinetochore is assembled during mitosis. At the intersection of the inter-kinetochore (inter-KT) axis and the inter-sister chromatid axis is the inner centromere region. By acting as a platform to recruit various proteins, the inner centromere plays a key role in the regulation of sister chromatid cohesion and kinetochore-microtubule (KT-MT) attachments (1). Impaired integrity of inner centromeres causes chromosome missegregation, leading to chromosomal instability (CIN), which is a hallmark of cancer cells and may contribute to tumorigenesis (2). However, at the molecular level, how the functional inner centromere is assembled remains largely elusive.

Sgo1 is an important cohesin protector that predominantly localizes to centromeres in mitosis (11, 22-24). Sgo1 localizes to inner centromeres in a stepwise manner (Fig. 7) (25). First, through binding histone H2A, which is phosphorylated at threonine 120 (H2ApT120) by the outer kinetochore-localized kinase Bub1 (26-29), Sgo1 is recruited to two KT-proximal centromere regions under the inner layer of kinetochores. In a second step, Sgo1 moves to inner centromeres, where it binds cohesin in a manner that is strongly enhanced by Cyclin-dependent kinase 1 (Cdk1) phosphorylation of Sgo1 (19, 30).
Previous studies have reported that Sgo1 collaborates with protein phosphatase 2A (PP2A) to antagonize phosphorylation of SA2 and Sororin, thereby preventing Wapl-dependent cohesin removal from chromosomes (13, 30-34). Exogenous expression of a Sgo1 mutant defective in binding PP2A can hardly prevent premature sister chromatid separation induced by RNAi-mediated depletion of endogenous Sgo1 in human cells (32). However, it is controversial whether Bub1-dependent centromere localization of Sgo1 plays a role in maintaining centromeric cohesion in mammals (23, 24, 26, 28, 35-37). Sgo1 also promotes centromeric accumulation of the chromosomal passenger complex (CPC), an integral part of the inner centromere and a key regulator of KT-MT attachments that consists of Aurora B kinase and the regulatory subunits Survivin, Borealin, and INCENP (38). Although it has been proposed that Sgo1 can bring the CPC to inner centromeres through direct interaction with Borealin that has been phosphorylated by Cdk1 (39), the mechanism by which Sgo1 targets CPC to centromeres remains incompletely addressed in mammalian cells (1, 41). In this study, we show that selective delocalization of Sgo1 from mitotic centromeres by disrupting the H2ApT120-Sgo1 interaction results in loosened centromeric cohesion, which is accompanied by decreased centromeric localization of CPC and an elevated rate of chromosome missegregation. We further reveal the molecular mechanism by which Sgo1 and Haspin cooperate to allow proper assembly of the functional inner centromere and ensure high-fidelity chromosome segregation.

Loss of centromeric Sgo1 weakens cohesion at mitotic centromeres

Sgo1 directly binds H2ApT120 through a conserved SGO motif in its C-terminal region (27, 29). Mutation of a conserved basic residue, lysine 492 to alanine (K492A), in its SGO motif prevents exogenously expressed Sgo1 from binding H2ApT120 and localizing to mitotic centromeres (26, 32). To study the role of Sgo1 at centromeres, we set out to make the K492A mutation in endogenous Sgo1 by CRISPR/Cas9-mediated genome editing in HeLa cells (50). We obtained two clones, 3-9 and 3-1, in which the K492A mutation was confirmed by genomic DNA sequencing (Fig. 1A). Unless otherwise stated, we used clone 3-9 as the Sgo1-K492A mutant cells for the following studies. Immunoblotting of lysates from mitotic cells arrested with the microtubule destabilizer nocodazole showed that the Sgo1-K492A mutant protein was expressed at a level comparable with that of WT Sgo1 in control HeLa cells (Fig. 1B). Immunofluorescence microscopy demonstrated that the Sgo1-K492A mutant failed to localize at mitotic centromeres, whereas H2ApT120 remained unaffected (Fig. 1C). Inspection of chromosome spreads prepared from nocodazole-arrested mitotic cells showed that the Sgo1-K492A mutant displayed diffuse signals on chromosome arms (Fig. S1A), likely because of its capability of binding cohesin (19, 30). We also noticed that sister chromatids in the Sgo1-K492A mutant cells remained paired after 3-h treatment with nocodazole, indicating that loss of centromeric Sgo1 does not prevent the establishment of sister chromatid cohesion. Moreover, the percentage of cells with partly closed chromosome arms was higher in Sgo1-K492A cells than in control HeLa cells (Fig. 1D), which is in line with the impaired chromosome arm resolution in Bub1-depleted or -inhibited cells (23, 24, 51). We next examined whether Sgo1-K492A cells have defects in sister chromatid cohesion.
We found that Sgo1-K492A cells were strongly impaired in maintaining chromosome alignment on the metaphase plate during the sustained metaphase arrest induced by MG132 (Fig. 1, E and F), a proteasome inhibitor that prevents degradation of Cyclin B and Securin and therefore inhibits Separase activation. Inspection of chromosome spreads prepared from MG132-arrested mitotic cells revealed a strong increase in premature sister chromatid separation in Sgo1-K492A cells (Fig. 1, G and H). After 8-h treatment with MG132, the percentage of cells with cohesion loss increased from 1.2% in control HeLa cells to 23.2%-29.0% in Sgo1-K492A cells. Moreover, the percentage of cells with mild premature sister chromatid separation was also obviously higher in Sgo1-K492A cells (33.9%-42.0%) than in control HeLa cells (14.3%). These cohesion defects resemble an accelerated "cohesion fatigue" phenotype (52-55). We then measured the inter-KT distance on chromosome spreads prepared from cells that were arrested in mitosis with 3-h treatment with nocodazole. Interestingly, the inter-KT distances of mitotic chromosome spreads were at least 17.3% further apart in Sgo1-K492A cells than in control HeLa cells (Fig. 1I and Fig. S1B), indicative of weakened centromeric cohesion. Thus, H2ApT120-mediated centromeric localization of Sgo1 is required for the maintenance of proper sister chromatid cohesion at mitotic centromeres.

Loss of centromeric Sgo1 leads to defective mitosis progression and chromosome congression

We next investigated the effect of loss of centromeric Sgo1 on cell proliferation and mitosis progression. We found that, under unperturbed conditions, the Sgo1-K492A mutation did not obviously affect cell proliferation (Fig. S2A). Interestingly, compared with that of control HeLa cells, the proliferation of Sgo1-K492A cells was more sensitive to clinically relevant low doses of paclitaxel, a microtubule poison widely used as a classic chemotherapy drug (56). This result is in line with a previous study showing that inhibition of Bub1 kinase activity by small-molecule inhibitors caused remarkable impairment of chromosome segregation and cell proliferation upon treatment with low doses of paclitaxel (51). Time-lapse live imaging of cells stably expressing histone H2B fused to GFP (H2B-GFP) showed that, during unperturbed mitosis, the duration of mitosis in Sgo1-K492A cells (38.7 min, n = 126) was only mildly longer than that in control HeLa cells (34.8 min, n = 115). Interestingly, there were strong mitosis progression defects in Sgo1-K492A cells during the recovery from mitotic arrest induced by nocodazole treatment for 10 h (Fig. 2, A and B, and Movies S1-S4). Following chromosome biorientation, most (91%) control HeLa cells underwent anaphase onset at 62.7 ± 3.2 min, on average, after nocodazole washout. In contrast, Sgo1-K492A cells showed strong mitotic arrest with complex chromosome behaviors that could be classified into two categories. Although 16.5% of Sgo1-K492A cells (type I) behaved like control HeLa cells, the remaining 83.5% of cells (type II) were defective in chromosome congression and underwent strikingly prolonged mitosis. These type II cells either died in mitosis or partitioned chromosomes into two or more masses and aberrantly exited mitosis without anaphase onset at 482.1 ± 35.8 min, on average, after nocodazole washout. We further monitored chromosome behavior when cells entered mitosis in the presence of MG132.
We found that 3% and 18.2% of control HeLa cells and Sgo1-K492A cells, respectively, were not able to achieve metaphase chromosome alignment (Fig. S2B). Moreover, among cells that were able to achieve metaphase chromosome alignment and remained alive, Sgo1-K492A cells began to exhibit irreversible chromosome scattering from the metaphase plate much earlier than control HeLa cells (Fig. 2, C and D), which is in line with their centromeric cohesion defects. Thus, selective delocalization of Sgo1 from mitotic centromeres causes mitosis progression defects, particularly in achieving and maintaining chromosome alignment on the metaphase plate.

Figure 1 (legend, in part): For clone 3-9, the genomic DNA PCR fragments were subcloned and sequenced; all 20 bacterial colonies showed the desired Sgo1-K492A mutation. For control HeLa cells and clone 3-1, the genomic DNA PCR fragments were sequenced directly. The sgRNA target DNA sequence preceding a 5′-NGG protospacer adjacent motif is shown; multiple silent mutations were introduced into the repair template to prevent sgRNA targeting. B, HeLa cells and the indicated Sgo1-K492A mutant clones were treated with 0.33 μM nocodazole for 12 h, and mitotic cell lysates were immunoblotted with the indicated antibodies. C, HeLa cells and the indicated Sgo1-K492A clones were immunostained with the anti-human centromere autoantibody (ACA) and antibodies for Sgo1 and H2ApT120. D, HeLa and Sgo1-K492A cells were treated with 0.1 μM nocodazole for 3 h; mitotic chromosome spreads were stained with anti-human centromere autoantibody and DAPI, and chromosome morphology was classified and quantified in around 200 cells (means and ranges shown, n = 2). E and F, HeLa and Sgo1-K492A clones were exposed to MG132, fixed at the indicated time points for CENP-C and DNA staining, and quantified in around 100 cells (E); example images are shown (F). G and H, HeLa and Sgo1-K492A clones were exposed to MG132 for 8 h.

Loss of centromeric Sgo1 causes defects in correcting erroneous KT-MT attachments and accumulating CPC at mitotic centromeres

Inspection of paraformaldehyde (PFA)-fixed asynchronous Sgo1-K492A cells demonstrated an increased rate (8.3%-10.7%) of lagging chromosomes relative to control HeLa cells (3.7%) (Fig. 3, A and B). Lagging chromosomes are a hallmark of CIN, arising from persistent errors in KT-MT attachments (57). To determine whether Sgo1-K492A cells are defective in correcting KT-MT attachment errors, we performed S-trityl-L-cysteine (STLC) release assays (58). STLC is a kinesin-5/Eg5 inhibitor that prevents centrosome separation during mitotic entry, resulting in the formation of monopolar spindles with erroneously attached chromosomes (59). We treated cells with STLC to accumulate monopolar mitoses and then released them into MG132 to allow bipolar spindle formation and chromosome alignment. Examination of fixed cells showed that Sgo1-K492A cells were impaired in aligning chromosomes on the metaphase plate (Fig. 3C and Fig. S3). We further used live imaging to monitor chromosome alignment and segregation when cells were released from transient mitotic arrest induced by STLC treatment for 5 h. We found that most control HeLa cells underwent metaphase chromosome biorientation, followed by subsequent anaphase onset at 96.3 ± 3.2 min, on average, after STLC washout. In contrast, 34.7% of Sgo1-K492A cells were defective in chromosome congression and underwent prolonged mitotic duration
3, D and E, and Movies S5-S7), reminiscent of the "type II" cells observed upon release from nocodazole (Fig. 2, A and B). Moreover, upon STLC release, the percentage of anaphase cells with lagging chromosomes increased from 5.2% in HeLa cells to 9.9%-14.7% in Sgo1-K492A cells (Fig. 3F). These results suggest that Sgo1-K492A cells are defective in correcting erroneous KT-MT attachments. Aurora B kinase accumulates at mitotic inner centromeres and plays an important role in promoting chromosome biorientation, mainly by phosphorylating kinetochore substrates to release improperly attached microtubules (60, 61). Immunofluorescence microscopy demonstrated that, compared with control HeLa cells, Aurora B was less concentrated at centromeres in Sgo1-K492A cells and rather displayed diffuse signals along the length of chromosomes (Fig. 3G). The ratio of the intensity of Aurora B versus CENP-C, a component protein of the constitutive centromere-associated network at inner kinetochores, was reduced by 32.7%-33.8% in Sgo1-K492A cells (Fig. 3H). By measuring the relative intensity of Aurora B staining at centromeres and on arms, we found that Aurora B was 40-50% less enriched at centromeres in Sgo1-K492A cells (Fig. 3I). Thus, Sgo1-K492A cells are defective in accumulating Aurora B at mitotic centromeres, which might account for the impaired error correction efficiency.

The Sgo1-PP2A interaction is required to protect cohesion and localize CPC at mitotic centromeres

We exogenously expressed Sgo1 C-terminally fused to GFP (Sgo1-GFP) in Sgo1-K492A cells. As expected, Sgo1-GFP mainly localized to mitotic centromeres and largely restored proper inter-KT distance and centromeric localization of Aurora B, whereas the Sgo1-K492A-GFP mutant failed to do so (Fig. 4, A-C, and Fig. S4, A and B). These results validate the specificity of the centromere defects in Sgo1-K492A cells. We next examined whether the interactions with cohesin and PP2A are important for Sgo1 function at mitotic centromeres. Previous studies showed that mutation of threonine 346 to alanine (T346A) in the cohesin-binding region (residues 313-353) does not affect the H2ApT120-Sgo1 interaction but perturbs Sgo1 binding to the Scc1-SA2 interface and prevents Sgo1 from localizing to the inner centromere (19, 26, 30). Moreover, mutation of asparagine 61 to isoleucine (N61I) in the N-terminal coiled-coil region perturbs Sgo1 binding to PP2A and prevents Sgo1 from localizing to mitotic centromeres (32, 62, 63). To obtain equal levels of the various Sgo1 proteins at the same location in the centromere region, we expressed Sgo1 as a fusion protein with the centromeric targeting domain of CENP-B (CB in short where necessary) (28, 62), which binds a 17-bp CENP-B box motif within the α-satellite repeats of human centromeres (64-66). As expected, we found that expression of CB-Sgo1-GFP restored the proper inter-KT distance and centromeric localization of Aurora B in Sgo1-K492A cells (Fig. 4, D-F, and Fig. S4, C and D). Similar results were observed for CB-Sgo1-T346A-GFP as well as the CB-Sgo1-Δ313-353-GFP mutant lacking the cohesin-binding region. In contrast, CB-Sgo1-N61I-GFP was largely impaired in doing so. Similarly, CB-Sgo1-K492A-GFP, but not CB-Sgo1-N61I/K492A-GFP, was able to support proper localization of CPC at centromeres (Fig. S4, E-H).
Thus, when tethered to centromeres, the interaction with PP2A, but not with H2ApT120 or cohesin, is required for Sgo1 to support proper cohesion and CPC localization at centromeres of Sgo1-K492A cells.

The Sgo1-PP2A interaction is required to concentrate H3pT3 at mitotic centromeres

We then investigated how the Sgo1-K492A mutation causes delocalization of CPC from mitotic centromeres. Immunofluorescence microscopy demonstrated that H3pT3 was enriched at the inner centromere in control HeLa cells, where it colocalized with Aurora B (Fig. 5, A-C), consistent with H3pT3 serving as the nucleosomal docking site for CPC (47-49). Interestingly, in Sgo1-K492A cells, the ratio of the intensity of H3pT3 versus CENP-C was reduced by 50%-60.8%, whereas that of H3pT3 at centromeres versus on arms was reduced by 62.8%-64.5%. Moreover, exogenous expression of Sgo1-GFP, but not Sgo1-K492A-GFP, restored centromeric H3pT3 in Sgo1-K492A cells (Fig. S5, A-C), validating the specificity of the phenotype. These data suggest that the defect in accumulating H3pT3 at mitotic centromeres might account for the reduced centromeric localization of Aurora B in Sgo1-K492A cells. Indeed, expression of Haspin as an enhanced GFP (EGFP) and CENP-B fusion protein (EGFP-CB-Haspin) in Sgo1-K492A cells effectively restored centromeric accumulation of H3pT3 (Fig. S5, D and E) and Aurora B (Fig. S5, F and G), as well as proper inter-KT distance (Fig. S5H).

The centromeric level of cohesin correlates with centromere accumulation of H3pT3 and CPC

Our results suggest that the centromeric cohesion defects in Sgo1-K492A cells might account for the reduced accumulation of H3pT3 and CPC at mitotic centromeres. To test this speculation, we sought to compromise the strength of centromeric cohesion by other approaches. Vertebrate cells express two paralogs of the SA protein, SA1 and SA2 (68, 69), which are redundantly required for sister chromatid cohesion and cell proliferation (70). A previous study showed that although SA1 is required for telomere and arm cohesion, SA2 is required for centromeric cohesion (71). In line with this, we recently showed that SA2 depletion by RNAi weakens centromeric cohesion, whereas sister chromatid cohesion remains not obviously compromised (44). We further found that centromeric accumulation of H3pT3 and Aurora B in SA2-depleted cells was significantly reduced (Fig. 6, A-D). Partial depletion of Scc1 by RNAi also strongly reduced centromeric H3pT3 in HeLa cells (Fig. S6, A-C). Consistently, a previous study showed that accumulation of H3pT3 and Aurora B at the inner centromere is clearly reduced in mouse embryonic fibroblast cells prepared from Pds5B knockout mice (72). These results suggest a causal link between weakened centromeric cohesion and decreased H3pT3 and CPC at mitotic centromeres. Conversely, we examined whether centromeric H3pT3 and CPC can be enhanced when cohesin is artificially tethered to centromeres. We found that expression of Scc1 as a CENP-B fusion protein (CB-Scc1-GFP) efficiently accumulated H3pT3 at the CENP-B loci in centromeres (Fig. 6, E-G), indicating recruitment of Haspin by Scc1. Moreover, CB-Scc1-GFP effectively recruited Aurora B to centromeres, presumably through H3pT3 (Fig. 6, H-J). We also noticed that tethering SA2 to centromeres as a CB-fusion protein (CB-SA2-GFP) only marginally increased centromeric H3pT3 (Fig. S6, D-F), suggesting that SA2 is not directly involved in the recruitment of Haspin.
Taken together, these results demonstrate the requirement for centromeric cohesin in the enrichment of H3pT3 and CPC at mitotic centromeres.

Discussion

Structural and functional integrity of the inner centromere region is critical for coordination of sister chromatid cohesion and KT-MT attachment during mitosis. The role for Sgo1 in the assembly of the inner centromere has been largely unclear (73), partly because of the massive loss of sister chromatid cohesion upon Sgo1 depletion by the commonly used RNAi technology (11, 22-24). In this study, we used CRISPR/Cas9-based genome editing to generate a mutant of endogenous Sgo1 in HeLa cells that cannot bind phosphorylated histone H2A, thereby disrupting its centromere localization. This mutant Sgo1 was employed as a tool to assess the function of centromeric Sgo1 in inner centromere assembly. In contrast to the suggestion that Bub1-dependent localization of Sgo1 to centromeres during mitosis is not required to maintain cohesion (35), we find that loss of H2ApT120-dependent centromeric Sgo1 results in weakened centromeric cohesion. Our data support a direct role of Bub1 in protecting centromeric cohesion by generating H2ApT120 rather than by activating the spindle checkpoint (36). Our finding of the failure in supporting proper centromeric cohesion by centromere targeting of the PP2A-binding-deficient Sgo1 mutant is in line with the role of Sgo1-associated PP2A in cohesion protection (13, 30-34). When exogenous Sgo1 is artificially tethered to centromeres, its interaction with cohesin appears dispensable for the maintenance of proper centromeric cohesion in cells in which endogenous Sgo1 is delocalized from centromeres. However, in the physiological state, the cohesin-bound pool of Sgo1 renders PP2A in close proximity to the cohesin complex at inner centromeres and would promote centromeric cohesion (19, 26, 29, 30).

We did not observe a strong effect of loss of H2ApT120-dependent Sgo1 localization at centromeres on mitosis progression in an unperturbed situation, except for an increased frequency of chromosome missegregation. This seems to be in line with what was observed in HeLa cells treated with small-molecule inhibitors of Bub1 kinase (51), as well as in a mouse mutant that lacks Bub1 kinase activity (28). Why, then, does loss of centromeric Sgo1 not cause strong cohesion loss during unperturbed mitosis? During unperturbed mitosis, chromosome arm cohesion, which is mediated by the residual cohesin resistant to the prophase pathway of cohesin removal, persists throughout metaphase and is sufficient to maintain sister chromatid cohesion (9). Thus, local dissociation of cohesin from centromeres may not be sufficient to cause global cohesion loss in unperturbed mitosis as long as chromosome arm cohesion is not compromised. This may explain the difference in the severity of cohesion loss between Sgo1 deletion and centromeric Sgo1 delocalization. Strikingly, Sgo1-K492A cells are strongly defective in tolerating prolonged mitotic arrest. We reason that prolonged mitotic arrest allows time for further removal of cohesin from chromosome arms, rendering the strength of centromeric cohesion more critical to resist the sustained spindle pulling force and maintain sister chromatid cohesion until anaphase onset.
Although Sgo1 may interact with Borealin to directly target CPC to centromeres (39), our data indicate that, by protecting centromeric cohesin to provide a binding site for the histone H3 kinase Haspin, Sgo1 also indirectly positions CPC at the inner centromere to facilitate correction of erroneous KT-MT attachments. We previously showed that Haspin associates with the cohesin complex by binding Pds5B, thereby protecting centromeric cohesin to retain Sgo1 at inner centromeres (44). Taken together, we propose a model in which Bub1-mediated H2ApT120 enables centromeric localization of Sgo1, which not only protects centromeric cohesin, largely through PP2A-mediated dephosphorylation of SA2 and Sororin (30, 31), but also triggers a positive feedback loop in which Sgo1 promotes cohesin-mediated centromeric localization of Haspin to further enhance the strength of centromere cohesion and ensure Sgo1 localization at inner centromeres (Fig. 7). Thus, assembly of the functional inner centromere requires a positive feedback network involving Sgo1 and Haspin that accumulates cohesin and CPC at the centromere to guarantee precise chromosome segregation in mitosis. This study not only reveals the molecular mechanism by which two histone marks, H2ApT120 and H3pT3, cooperate to establish the inner centromere (47) but also provides important insight into the complexity of the centromere signaling network that coordinates various dynamic processes of mitosis (1).

Defects that moderately impair chromosome segregation may allow cancer cells with CIN to become established (74, 75). Compromised sister chromatid cohesion is proposed to be involved in CIN in cancer cells, but the underlying molecular mechanism is not clear (76). Our results suggest a causal link between weakened centromeric cohesion and reduced accumulation of H3pT3-CPC at mitotic centromeres. This link may account for the close correlation between compromised sister chromatid cohesion and increased chromosome segregation errors in cancer cells (40). Our data may also explain why SA2-deficient cells display decreased centromeric Aurora B, increased KT-MT attachment stability, and an elevated rate of chromosome missegregation (77). Recent studies of the cancer genome identified recurrent mutations in cohesin subunits and regulators in a wide range of human cancers (70, 78-80), including the most frequently mutated cohesin subunit, SA2 (81, 82). It will be interesting to investigate, in the future, whether the cohesin-dependent Haspin-H3pT3-CPC pathway is widely impaired in chromosomally unstable cancer cells with sister chromatid cohesion defects.

Cell culture, plasmids, siRNAs, transfection, and drug treatments

All cells were cultured in Dulbecco's modified Eagle's medium supplemented with 1% penicillin/streptomycin and 10% fetal bovine serum (Gibco) and maintained at 37°C with 5% CO₂. Cells stably expressing H2B-GFP were isolated and maintained in 3.0 and 2.0 μg/ml blasticidin (Sigma), respectively. To measure the effect of paclitaxel treatment on cell proliferation, ≈4 × 10⁴ HeLa cells or Sgo1-K492A cells were plated on 6-well plates (Falcon) and cultured in DMSO or 1-4 nM paclitaxel for 7 days. Surviving cells were digested with trypsin (Life Technologies) and resuspended in culture medium, and then the number of surviving cells was analyzed using an automated cell counter (Thermo Fisher).
To make pBos-CENP-B-GFP, the H2B fragment in pBos-H2B-GFP (Clontech) was replaced with the KpnI/BamHI-digested PCR fragments encoding the centromere-targeting domain (residues 1-163) of CENP-B. The plasmid for Sgo1-GFP was constructed similarly. To make the pBos-CB-Sgo1/Scc1/PP2A_Cα-GFP constructs, the PCR fragments encoding Sgo1, Scc1, and PP2A_Cα were subcloned into the BamHI site of pBos-CENP-B-GFP. To make pEGFP-CB-Haspin, the CENP-B fragment (residues 1-163) was also inserted into the BglII/HindIII sites of pEGFP-Haspin. To make pMyc-PP2A_Cα, the PCR-amplified DNA fragments of PP2A_Cα were first subcloned into the pDONR201 vector using Gateway Technology (Invitrogen), and then the corresponding fragment in the entry vector was transferred into a Gateway-compatible destination vector that harbors an N-terminal Myc tag. All point mutations were introduced with the QuikChange II XL site-directed mutagenesis kit (Agilent Technologies). All plasmids were sequenced to verify the desired mutations and the absence of unintended mutations. The SA2 siRNA (5′-CCGAAUGAAUGGUCAUCACdTdT-3′), Scc1 siRNAs (#1, 5′-AUACCUUCUUGCAGACUGUdTdT-3′; #2, 5′-GCACUACUACUUCUAACCUdTdT-3′), and control siRNA were ordered from RiboBio. Plasmid and siRNA transfections were done with FuGENE 6 (Promega) and Oligofectamine or Lipofectamine RNAiMAX (Invitrogen), respectively. Cells were arrested in S phase or at the G1/S boundary by single or double thymidine (2 mM, Calbiochem) treatment, respectively, or in a prometaphase-like state with 0.1-3.3 μM nocodazole (Selleckchem). Other drugs used in this study were STLC (5 μM, Tocris Bioscience), MG132 (10 μM, Sigma), and paclitaxel (MedChem Express). Mitotic cells were collected by selective detachment with "shake-off."

Figure 7 (legend). Sgo1 is recruited to KT-proximal centromeres by Bub1-generated H2ApT120 and is then driven to inner centromeres where it binds cohesin. Largely through antagonizing the phosphorylation of cohesin and Sororin, Sgo1-bound PP2A protects centromeric cohesin to provide a binding site for Haspin, which not only feeds back to further protect cohesin and retain Sgo1 at the inner centromere but also recruits the CPC through H3pT3.

CRISPR/Cas9-mediated editing of the Sgo1 gene in HeLa cells

Single-guide RNA (sgRNA) for the human Sgo1 gene was ordered as oligonucleotides, annealed, and cloned into the dual Cas9 and sgRNA expression vector pX330 (Addgene, 42230) with BbsI sites. To make the K492A mutation in endogenous Sgo1, the pX330 plasmid encoding Cas9 and an sgRNA targeting a sequence (5′-TTACAGGAAACTGAGAAGAG-3′) close to Lys-492 of Sgo1 was cotransfected into HeLa cells with a single-stranded oligodeoxynucleotide as the homology-directed repair template. After 24-h incubation, the cells were treated with the DNA ligase IV inhibitor Scr7 (5 μM, Selleckchem) for another 24 h to increase the efficiency of HDR-mediated genome editing. Then the cells were split individually to make clonal cell lines, with selection using 1.0 μg/ml puromycin for 3 days. Individual clones with an undetectable centromeric Sgo1 immunofluorescence signal were isolated. The genomic DNA fragments were PCR-amplified and sequenced to confirm the gene disruption (for clone 3-1). Alternatively, the PCR products were subcloned into pBluescript II (−) with an EcoRV site and transformed into competent Escherichia coli cells (DH5α), and then 20 positive bacterial colonies were sequenced (for clone 3-9).
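As a side note, the statement that the silent mutations abolish sgRNA targeting can be checked mechanically. The sketch below is not part of the original study; it assumes the protospacer quoted above and the repair oligodeoxynucleotide listed in the next paragraph are available as plain strings.

```python
# Hypothetical helper (not from the paper): confirm that neither the protospacer nor
# its reverse complement remains in the HDR repair template after silent mutagenesis.

def reverse_complement(seq):
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(seq.upper()))

def donor_is_sgrna_resistant(protospacer, donor):
    """True if the donor no longer contains the sgRNA target on either strand."""
    donor = donor.upper()
    protospacer = protospacer.upper()
    return protospacer not in donor and reverse_complement(protospacer) not in donor

sgrna_target = "TTACAGGAAACTGAGAAGAG"   # protospacer near Lys-492 (from the Methods)
# donor_ssodn = "AATTGGTGTGTTTTACC..."  # full repair oligo as given in the next paragraph
# print(donor_is_sgrna_resistant(sgrna_target, donor_ssodn))
```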
The PCR primers were as follows: forward, 5′-ACACCACCTGAAACTCAGCAGT-3′; reverse, 5′-AGGTTTAGGCAGCATAAGAAATCG-3′. The sgRNA-resistant single-stranded oligodeoxynucleotide with the K492A mutation was ordered from Integrated DNA Technologies (5′-AATTGGTGTGTTTTACCATAACTTGGTAGGGAAGAGTAAGTTAATATTGGGATGCTTACATTATGCCTGAGATCTCTTTTTACTCTTACAGGgcACTccGgAGgGGaGACCCTTTTACAGATTTGTGTTTTTTGAATTCTCCTATTTTCAAGCAGAAAAAGGATTTGAGACGTTCTAAAAAAAGTATGAA-3′). Multiple silent mutations (shown in lowercase) in the sgRNA target sequence and the protospacer-adjacent motif were introduced into the repair template to prevent sgRNA targeting.

Fluorescence microscopy, time-lapse live-cell imaging, and statistical analysis

Cells cultured on coverslips were fixed with 2% PFA in PBS for 10 min, followed by extraction with 0.5% Triton X-100 in PBS for 5 min, or fixed with 2% PFA for 10 min and then extracted with 1% Triton X-100 for 10 min. To produce chromosome spreads, mitotic cells obtained by selective detachment were incubated in 75 mM KCl for 10 min. After attachment to glass coverslips by Cytospin (Cytospin 4, Thermo Scientific) at 1500 rpm for 5 min, chromosome spreads were fixed with 2% PFA in PBS for 10 min, followed by extraction with 0.5% Triton X-100 in PBS for 5 min, or pre-extracted with 0.3% Triton X-100 in PBS for 5 min, followed by fixation with 4% PFA in PBS for 20 min. Fixed cells and chromosome spreads were stained with primary antibodies for 1-2 h and secondary antibodies for 1 h, all in 3% BSA in PBS with 0.5% Triton X-100 and at room temperature. DNA was stained for 10 min with DAPI. Fluorescence microscopy was carried out at room temperature using a Nikon Eclipse Ni microscope with a Plan Apo Fluor ×60 oil (numerical aperture 1.4) objective lens and a Clara charge-coupled device camera (Andor Technology). The inter-KT distance was measured using the inner kinetochore marker CENP-C on over 20 kinetochores per cell in at least 20 cells. Distance was determined by drawing a line from the outer edge of one kinetochore to the outer edge of its sister kinetochore. The length of the line was calculated using the NIS-Elements BR imaging software (Nikon). Quantification of fluorescence intensity was carried out with ImageJ (National Institutes of Health) using images obtained with identical illumination settings. Briefly, on chromosome spreads, the average pixel intensity of H3pT3, Aurora B, or CENP-C staining at centromeres, defined as circular regions including paired centromeres, or on chromosome arms (except for CENP-C) was determined using ImageJ. After background correction, the ratio of centromeric H3pT3/CENP-C, centromeric Aurora B/CENP-C, centromeric H3pT3/arm H3pT3, or centromeric Aurora B/arm Aurora B intensity was calculated for each centromere. Time-lapse live-cell imaging was carried out with the GE DV Elite Applied Precision DeltaVision system (GE Healthcare) equipped with an Olympus ×40 (NA 1.35) UApo/340 oil objective, an API Custom Scientific CMOS camera, and Resolve3D softWoRx imaging software. Cells expressing H2B-GFP were plated in four-chamber glass-bottomed 35-mm dishes (Cellvis) coated with poly-D-lysine and filmed in a climate-controlled and humidified environment (37°C and 5% CO₂). Images were captured every 2 min (Movies S1-S6; Fig. 2, A and B, and Fig. 3, D and E) or every 5 min (Fig. 2, C and D). The acquired images were processed using Adobe Photoshop and Adobe Illustrator.
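A minimal sketch of the ratio calculation described above; the actual measurements were made in ImageJ, and the function name and example intensities here are illustrative only.

```python
# Background-corrected intensity ratio for one centromere region of interest (ROI).
# 'signal' and 'reference' are mean pixel intensities (e.g., Aurora B and CENP-C);
# the '_bg' values are mean intensities of nearby background ROIs.

def background_corrected_ratio(signal, signal_bg, reference, reference_bg):
    corrected_signal = signal - signal_bg
    corrected_reference = reference - reference_bg
    if corrected_reference <= 0:
        raise ValueError("reference intensity must exceed its background")
    return corrected_signal / corrected_reference

# Example with hypothetical values for a single centromere.
ratio = background_corrected_ratio(signal=1250.0, signal_bg=200.0,
                                   reference=900.0, reference_bg=150.0)
print(f"Aurora B / CENP-C ratio: {ratio:.2f}")
```

The same function applies to the centromere-versus-arm ratios, with the arm measurement supplied as the reference.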
Statistical analyses were performed with a two-tailed unpaired Student's t test in GraphPad Prism 6. A p value of less than 0.05 was considered significant.
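For readers reproducing the comparisons outside Prism, an equivalent two-tailed unpaired Student's t test can be run in Python; the per-cell values below are hypothetical.

```python
from scipy import stats

control = [1.02, 0.98, 1.10, 0.95, 1.05]   # e.g., per-cell ratios in control HeLa cells
mutant = [0.70, 0.65, 0.80, 0.72, 0.68]    # e.g., per-cell ratios in Sgo1-K492A cells

# Unpaired, two-tailed Student's t test (equal variances assumed by default).
t_stat, p_value = stats.ttest_ind(control, mutant)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant" if p_value < 0.05 else "not significant")
```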
v3-fos-license
2018-12-16T03:49:02.855Z
2016-10-04T00:00:00.000
54983353
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1088/1748-9326/11/10/105002", "pdf_hash": "4d5b3b1ca82f90a84e762c916c2d1d8bc6d96639", "pdf_src": "IOP", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43647", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "b55415e6a9f4bd37151f3b0c9a859119be5a762d", "year": 2016 }
pes2o/s2orc
Energy and protein feed-to-food conversion efficiencies in the US and potential food security gains from dietary changes

Feeding a growing population while minimizing environmental degradation is a global challenge requiring thoroughly rethinking food production and consumption. Dietary choices control food availability and natural resource demands. In particular, reducing or avoiding consumption of low production efficiency animal-based products can spare resources that can then yield more food. In quantifying the potential food gains of specific dietary shifts, most earlier research focused on calories, with less attention to other important nutrients, notably protein. Moreover, despite the well-known environmental burdens of livestock, only a handful of national level feed-to-food conversion efficiency estimates of dairy, beef, poultry, pork, and eggs exist. Yet such high level estimates are essential for reducing diet related environmental impacts and identifying optimal food gain paths. Here we quantify caloric and protein conversion efficiencies for US livestock categories. We then use these efficiencies to calculate the food availability gains expected from replacing beef in the US diet with poultry, a more efficient meat, and a plant-based alternative. Averaged over all categories, caloric and protein efficiencies are 7%–8%. At 3% in both metrics, beef is by far the least efficient. We find that reallocating the agricultural land used for beef feed to poultry feed production can meet the caloric and protein demands of ≈120 and ≈140 million additional people consuming the mean American diet, respectively, roughly 40% of the current US population.

Introduction

The combination of ongoing population rise and the increasing demand for animal-based products places a severe strain on world natural resources (Smil 2002, Steinfeld et al 2006, Galloway et al 2007, Wirsenius et al 2010, Bonhommeau et al 2013). Estimates suggest that global meat demand would roughly double over the period 2000-2050 (Pelletier and Tyedmers 2010, Alexandratos and Bruinsma 2012, Pradhan et al 2013, Herrero et al 2015). Earlier analyses of food supply chains (Steinfeld et al 2006, Godfray et al 2010, Foley et al 2011, Herrero et al 2015) identified inefficiency hotspots that lend themselves to such mitigation measures as improving yield (through genetics and agricultural practices), increasing energy, nutrient and water use efficiencies, or eliminating waste. Others focused on the environmental performance of specific products, for example animal-derived ones (de Vries and de Boer 2010, 2011, 2014, Thoma et al 2013). A complementary body of work (Pimentel and Pimentel 2003, Eshel and Martin 2006, Eshel et al 2010, Hedenus et al 2014, Tilman and Clark 2014, Springmann et al 2016) quantifies the environmental performance of food consumption and dietary patterns, highlighting the large environmental impacts dietary choices can have. Key to estimating expected outcomes of potential dietary shifts is quantifying the amount of extra food that would become available by reallocating resources currently used for feed production to producing human food (Godfray et al 2010, Foley et al 2011, Cassidy et al 2013, Pradhan et al 2013, West et al 2014, Peters et al 2016). One notable effort (Foley et al 2011, Cassidy et al 2013) suggested that global reallocation to direct human consumption of both feed and biofuel crops can sustain four billion additional people.
Yet, most cultivated feed (corn, hay, silage) is human inedible and characterized by yields well above those of human edible crops. Moreover, most previous efforts focused on calories (Cassidy et al 2013, Pradhan et al 2013), while other key dimensions of human diet such as protein adequacy are equally important. Here we quantify efficiencies of caloric and protein fluxes in US livestock production. We answer such questions as: How much feed must enter the livestock production stream to obtain a set amount of edible end product calories? What is the composition of these feed calories in the current US system? Where along the production stream do most losses occur? We provide the analysis in terms of both protein and calories and use them to explore the food availability impacts of a dietary change within the animal portion (excluding fish) of the American food system, using the dietary shift potential method as described below. While dietary changes entail changes in resource allocation and emissions (Hedenus et al 2014, Tilman and Clark 2014, Eshel et al 2016, Springmann et al 2016), here we highlight the food availability gains that can be realized by substituting the least efficient food item, beef, with the most efficient nutritionally similar food item, poultry. Because beef and poultry are the least and most efficient livestock derived meats respectively, this substitution marks the upper bound on food gains achievable by any dietary change within the meat portion of the mean American diet (MAD). In this study we focus on substitution of these individual items, and plan to explore the substitution of full diets elsewhere. As a yardstick with which to compare our results, we also present the potential food availability gains associated with replacing beef with a fully plant-based alternative.

Methods and data

The parameters used in calculating the caloric and protein Sankey flow diagrams (figures 1 and 2) are based on Eshel et al (2015, 2014) and references and sources therein. Feed composition used in figures 1 and 2 is derived from NRC data (National Research Council 1982, 2000). For this work, the MAD is the actual diet of the average American over 2000-2010 (United States Department of Agriculture ERS 2015), with approximate daily loss-adjusted consumption of 2500 kcal and 70 g protein per capita (see SI and supplementary data for additional details).

Calculating the dietary shift potential

The dietary shift potential, the number of additional people that can be sustained on a given cropland acreage as part of a dietary shift, is

ΔP_a→b = P_US (l_a − l_b) / l_MAD,    (1)

where the left-hand side (ΔP_a→b) is the number of additional people that can be fed on land spared by the replacement of food item a with food item b. P_US ≈ 300 million denotes the 2000-2010 mean US population; l_a and l_b denote the annual per capita land area for producing a set number of calories of foods a and b. This definition readily generalizes to protein based replacements, and/or to substitution of whole diets rather than specific food items. To derive the mean per capita land requirement of the MAD, l_MAD, we calculate the land needs of each of the non-negligible plant and animal based items the MAD comprises. We convert a given per capita plant item mass to the needed land by dividing the consumed item mass by its corresponding national mean loss adjusted yield. The land needs of the full MAD are simply the sum of these needs over all items (see supplementary data).
The per capita cropland requirements of the animal based MAD categories (e.g., l_poultry, l_beef) are based on Eshel et al (2014, 2015). The modest land needs of poultry mean that replacing beef with an amount of poultry that is caloric- or protein-equivalent spares land that can sustain additional people on a MAD. We denote by c_item the kcal (person yr)⁻¹ consumption of any MAD item. The set number of calories (or protein) consumed in the MAD differs between beef and poultry, and thus for the calculation of substituting beef with poultry we multiply the per capita land area of poultry by c_beef/c_poultry, the per capita caloric (or protein) beef:poultry consumption ratio in the MAD, which is 1.2 for calories and 0.6 for protein. Using equation (1), the caloric dietary shift potential of beef is

ΔP_beef→poultry = P_US (l_beef − (c_beef/c_poultry) l_poultry) / l_MAD.

For the beef replacement calculation, the resultant post-replacement calories (light orange arrows in figure 3(a)) comprise (1) the poultry calories that replace the MAD beef calories, plus (2) calories that the spared lands can yield if allocated to the production of a MAD-like diet for additional people (national feed land supporting beef minus the land needed to produce the replacement poultry). The MAD calories that the spared land can sustain are calculated by multiplying the spared land area by the mean caloric yield of the full MAD with poultry replacing beef, ≈1700 Mcal (ac yr)⁻¹. The national annual calories due to substituting beef for poultry are therefore

C_beef→poultry = 365 P_US c_beef + 365 P_US (l_beef − (c_beef/c_poultry) l_poultry) c_MAD / l_MAD,    (2)

where c and l are the per capita daily caloric consumption and annual land requirements of poultry, beef or the full MAD, respectively. The first and second terms on the right-hand side of equation (2) are terms (1) and (2) of the above explanation, respectively. To derive the difference between the above replacement calories and the replaced beef calories (percentages in figure 3), we subtract the original national consumed beef calories, 365 P_US c_beef, from the above equation. The difference between replacement and replaced caloric fluxes is

ΔC = 365 c_MAD [P_US (l_beef − (c_beef/c_poultry) l_poultry) / l_MAD].    (3)

As noted above, the quotient on the right-hand side gives the number of extra people that can be fed, reported in figure 3. An analogous calculation replacing calories with protein mass yields the protein dietary shift potential shown in figure 3(b). The current calculation of the dietary shift potential also enables calculating the food availability gains associated with any partial replacement. Figure S2 depicts the relation between the dietary shift potential (additional people that can be fed a full MAD diet) and the percentage of national beef calories (from MAD) replaced with poultry.

The choice of poultry as the considered substitute

We use poultry as the replacement food in our food availability calculations for several reasons. First, US poultry consumption has been rising in recent decades, often substituting for beef (Daniel et al 2011), suggesting it can serve as a plausible replacement. In addition, poultry incur the least environmental burden among the major meat categories, and thus the beef-to-poultry calculation marks an upper bound on the food gains achievable within the meat portion of the MAD. Plant-based diets can also serve as a viable replacement for animal products, and confer larger mean environmental (Eshel et al 2014, 2016) and food availability gains (Godfray et al 2010). Recognizing that the majority of the population will not easily become exclusive plant eaters, here we choose to present the less radical and perhaps more practical scenario of replacing the environmentally most costly beef with the more resource efficient poultry.
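To make the bookkeeping in the equations above concrete, the short sketch below evaluates the calorie-conserving beef-to-poultry shift. Only P_US, l_MAD, the beef:poultry caloric consumption ratio and the MAD caloric yield are taken from the text; the per capita feed-land requirements of beef and poultry are hypothetical placeholders chosen only to roughly reproduce the reported ≈120 million figure, not values from the paper.

```python
# Illustrative evaluation of the dietary shift potential (equations 1-3); not the
# authors' actual data pipeline.

P_US = 300e6          # mean 2000-2010 US population (from the text)
l_MAD = 0.5           # acres per person per year for the full MAD (from the text)
c_ratio_cal = 1.2     # per capita caloric beef:poultry consumption ratio in the MAD (from the text)
yield_MAD = 1700      # Mcal per acre per year of the poultry-modified MAD (from the text)

l_beef = 0.24         # hypothetical: per capita cropland supporting MAD beef (acres/yr)
l_poultry = 0.03      # hypothetical: per capita cropland supporting MAD poultry (acres/yr)

# Land spared per capita by a calorie-conserving substitution, and the resulting
# dietary shift potential (additional people fed a poultry-modified MAD).
spared_land_per_capita = l_beef - c_ratio_cal * l_poultry
extra_people = P_US * spared_land_per_capita / l_MAD

# Caloric flux gained on the spared land (the difference between replacement and
# replaced calories in equation (3)).
extra_Mcal_per_year = P_US * spared_land_per_capita * yield_MAD

print(f"Dietary shift potential: {extra_people / 1e6:.0f} million additional people")
print(f"Extra food energy: {extra_Mcal_per_year:.2e} Mcal per year")
```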
We also augment this calculation with a plant-based alternative diet as a substitute. Finally, poultry stands out in its high kcal g⁻¹ and g protein g⁻¹ values and its desirable nutritional profile. Per calorie, it can deliver more protein than beef while delivering as much or more of the other essential micronutrients (figure S1). While it is tricky to compare the protein quality of beef and poultry, we can use the biological value (modified essential amino acid index and chemical score index; Ihekoronye 1988) and the protein digestible corrected amino acid score, the protein indicator of choice of the FAO. Within inevitable variability, the protein quality of poultry is similar to that of beef using both metrics (Sarwar 1987, Ihekoronye 1988, López et al 2006, Barrón-Hoyos et al 2013). While the FAO has recently introduced an updated protein quality score (DIAAS, digestible indispensable amino acid score) (FAO Food and Nutrition Paper No. 92 2011), to our knowledge no reliable DIAAS data comparing beef and poultry exist.

Results

The efficiency and performance of the animal portion of the American food system is presented in table 1 (see detailed calculations in supplementary files), highlighting a dichotomy between beef and the other animal categories, consistent with earlier environmental burden estimates (Eshel et al 2014). The calorie flow within the US from feed to livestock to human food is presented in figure 1. From left to right are primary inputs (concentrated feed, processed roughage and pasture) feeding the five secondary producer livestock categories, transformed into human consumed calories. We report energy fluxes in Pcal = 10¹² kcal, roughly the annual caloric needs of a million persons. Annually, ≈1200 Pcal of feed from all sources (or ≈800 Pcal when pasture and byproducts are excluded) become 83 Pcal of loss adjusted animal based human food. This is about 7% overall caloric conversion efficiency. The overall efficiency value arises from weighting the widely varied category specific efficiencies, from 3% for beef to 17% for eggs and dairy, by the average US consumption (rightmost part of figure 1). Concentrate feed consumption, such as maize, is distributed among pork, poultry, beef and dairy, while processed roughage and pasture (50% of total calories) feed almost exclusively beef. The concentrated feed category depicted in figure 1 also includes byproducts. We note that because detailed information on the distribution of byproducts as feed for the different animal categories is lacking, we cannot remove them from the feed to food efficiency calculation. Yet, our analysis shows that for the years 2000-2010 the contribution of byproducts to the total feed calories (and protein) was less than 10% (see SI spreadsheet), and so their effect on the values is quantitatively small. The results reported in all figures are corrected for import-export imbalances, such that the presented values refer to the feed used to produce the animal-derived food domestically consumed in the US (i.e., excluding feed used for livestock to be exported, and including imported feed, albeit quite minor in the US context). While calories are widely used to quantify food system performance, protein, which is often invoked as the key nutritional asset of meat, offers an important complementary dimension (Tessari et al 2016). The flow of protein in the American livestock production system, which supplies ≈45 g protein person⁻¹ d⁻¹ to the MAD, is shown in figure 2.
Overall, 63 Mt (1 Mt=10 9 kg) feed protein per year are converted by US livestock into 4.7 Mt of loss-adjusted edible animal based protein. This represents an overall weightedmean feed-to-food protein conversion efficiency of 8% for the livestock sector. Protein conversion efficiencies by individual livestock categories span an ≈11-fold range, more than twice the corresponding range for calories, from 31% for eggs to 3% for beef (see SI for more details). By isolating visually and numerically the contributions from pasture, which are derived from land that is unfit for production of most other foods, figures 1 and 2 quantify expected impacts of dietary shifts. Of those, we choose to focus on substituting beef with poultry. Because these are the most and least resource intensive meats, this substitution constitutes an upper bound estimate on food gains achievable by any meat-tomeat shift. Lending further support to the beef-topoultry substitution choice, poultry is relatively nutritionally desirable (see the methods section and figure S1), and-judging by its ubiquity in the MAD-palatable to many Americans. We quantify the dietary shift potential (a term we favor over the earlier diet gap Foley et al 2011), the number of additional people a given cropland acreage can sustain if differently reallocated as part of a dietary shift. While here we estimate the dietary shift potential of the beef-to-poultry substitution, the methodology generalizes to any substitution (see methods section for further information and equations). The beef-topoultry dietary shift potentials are premised on reallocating the cropland acreage currently used for producing feed for US beef (excluding pastureland) to producing feed for additional poultry production. Subtracting from beef's high quality land requirements those of poultry gives the spared land that becomes available for feeding additional people. Dividing this spared acreage by the per capita land requirements of the MAD diet (modifying the latter for the considered substitution) yields the number of additional people sustained by the dietary substitution. We calculate the dietary shift potential for beef (as defined above and in the methods section) by quantifying the land needed for producing calorie-and protein-equivalent poultry substitution, and their differences from the land beef currently requires. We derive the number of additional people this land can sustain by dividing the areal difference thus found by the per capita land demands of the whole modified MAD, ≈0.5 acres (≈2×10 3 m 2 ) per year. Evaluating this substitution, and taking note of full supply chain losses, we obtain the overall dietary shift potential of beef to poultry on a caloric basis to be ≈120 million people (≈40% of current US population; figure 3, panel (a)). That is, if the (non-pasture) land that yields the feed US beef currently consume was used for producing feed for poultry instead, and the added poultry production was chosen so as to yield exactly the number of calories the replaced beef currently delivers, a certain acreage would be spared, because of poultry's lower land requirements. If, in addition, that spared land was used for growing a variety of products with the same relative abundance as in the full MAD (but with poultry replacing beef), the resultant human edible calories would have risen to six times the replaced beef calories ( figure 3, panel (a)). 
For a protein-conserving dietary shift (figure 3, panel (b)), the dietary shift potential is estimated at ≈140 million additional people (consuming ≈70 g protein person⁻¹ d⁻¹ as in the full MAD). As the protein quality of poultry and beef are similar (see the methods section and references therein), this substitution entails no protein quality sacrifices. As a benchmark with which to compare the beef to poultry results, we next consider the substitution of beef with a plant based alternative, based on the methodology developed in Eshel et al (2016). In that study, we derive plant based calorie- and protein-conserving beef replacements. We consider combinations of 65 leading plant items consumed by the average American that minimize land requirements, with the mass of each plant item set to 15 g d⁻¹ to ensure dietary diversity. We find that these legume-dominated plant-based diets substitute beef with a dietary shift potential of ≈190 million individuals.

Discussion

In this study we quantify the caloric and protein cascade through the US livestock system from feed to consumed human food. Overall, <10% of feed calories or protein ultimately become consumed meat, milk or egg calories, consistent with mean or upper bound values of conversion efficiency estimates of individual animal categories (Herrero et al 2015). Our results combine biologically governed trophic cascade inefficiency with such human effects as consumer preferences (e.g., using some animal carcass portions while discarding others) or leaky supply chains, which are shared also by plant items. As conversion efficiencies reflect resource efficiencies (Herrero et al 2015), these results mirror our earlier ones quantifying the environmental performance of the US livestock system, highlighting the disproportionate impact of beef (Eshel et al 2014, 2015). Building on and enhancing earlier studies that considered direct human consumption of feed calories (Cassidy et al 2013, West et al 2014), our results quantify possible US calorie and protein availability gains that can be achieved by reallocating high quality land currently used for feed production for beef into producing the same amount of calories and protein from poultry; any extra land remaining is used to produce the MAD (only with poultry replacing beef). Using caloric and protein needs, we estimate 120 and 140 million additional sustained individuals, respectively. This potential production increase can serve as food collateral in the face of uncertain food supply (e.g. climate change), or be exported to where food supply is limited. When envisioning scenarios that result in only partial substitution of poultry for beef, the current calculation also enables deducing the food gains associated with substituting only a certain percentage of national beef calories with poultry (see figure S2). Our purpose here is not to endorse poultry consumption, nor can our results be construed as such. Rather, the results simply illustrate the significant food availability gains associated with the rather modest and tractable dietary shift of substituting beef with less inefficient animal based alternatives. Substitution of other food items with other nutritionally similar animal food items is also plausible (e.g., pork for beef), yet the food gains expected from such replacements are considerably lower (see supplementary data).
Substitution of beef with non-meat animal based products (dairy and eggs) is possible on a caloric or protein basis (see supplementary data), yet given their dissimilar nutritional profile, a more elaborate methodology is required to construct and analyze such a shift (Tessari et al 2016). The dietary shift potential of replacing beef with a plant based alternative (dominated by legumes) (Eshel et al 2016) amounts to ≈190 million additional people. Thus, while plant based alternatives offer the largest food availability gains, poultry is not far behind. We note that the substitution of beef for either poultry or plants also entails vast reductions in demand for pastureland. The effects of dietary shifts on the demand for agricultural inputs (such as fertilizer or water) for the production of food on the land spared from growing feed for beef require further investigation. This paper offers a system wide view of feed to food production in the US, and introduces the dietary shift potential as a method for quantifying the possible food availability gains various dietary shifts confer. Building on this work, future work can quantify the dietary shift potential of full diets (e.g. Peters et al 2016), enhance the realism of various considered dietary shifts, and better integrate nutritional considerations, micronutrients in particular, in the assessment of expected outcomes.
v3-fos-license
2020-10-28T19:19:55.482Z
2018-12-01T00:00:00.000
239569193
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://rsci.mosuljournals.com/article_159366_8bac66b453f4219bebdcb0482ba9fbc7.pdf", "pdf_hash": "c78481cd85f889ddab8dc62dd236ee1e51bc9a0e", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43648", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "dfcb5b753a0afecef8ad5e043847037d91b85f11", "year": 2018 }
pes2o/s2orc
Investigation of some Carboxylic Acids and Phenolic Compounds of Ailanthus altissima Leaves and their Effect on Italian Cupressus Seedlings Root Rot Fungi

The study was carried out to separate and identify some carboxylic acids, such as aspartic, citric, tartaric, propionic, ascorbic, maleic, fumaric and adipic acid, as well as phenol, resorcinol, hydroquinone, quercetin, p-hydroxybenzoic acid, benzoic acid and gallic acid, from Ailanthus altissima, determined by High Performance Liquid Chromatography (HPLC). The major component of the carboxylic acids was ascorbic acid (86.38%). The phenolic compounds were also studied, and the results showed that p-hydroxybenzoic acid (41.99%) was present in the highest amount. Isolation from Cupressus seedlings infected with root rot disease revealed the fungi F. solani, F. oxysporum, F. chlamydosporium and Rhizoctonia solani; the isolation ratio was 41% as a maximum value for F. solani, then F. oxysporum (27%), F. chlamydosporium (12%), which was the minimum value, and Rhizoctonia solani (20%). Bioassay results of Ailanthus altissima leaves extract showed an increasing inhibition ratio of fungal growth with increasing leaf extract concentration; F. oxysporum and F. chlamydosporium had the highest degree of growth inhibition (100%) for the two fungi at the 4% extract concentration, followed by the fungus F. solani and Rhizoctonia solani with (84, 78, 57)%, respectively, and Rhizoctonia solani had the minimum inhibition. The results for average fungal growth treated with several concentrations of Ailanthus leaves extract on PDA media showed inhibition of growth with increasing extract concentration; F. chlamydosporium had a minimum mean growth of 7 mm at the 1% extract concentration, whereas the two species of Fusarium showed maximum inhibition of average growth, with zero values at the 4% extract concentration, except Rhizoctonia solani, which had a different value (19.67 mm) at the same concentration.

INTRODUCTION

The plant Ailanthus altissima, commonly known as "tree of heaven" or "smoke tree", belongs to the family Simaroubaceae (Weekar et al., 2017). A. altissima is a fast-growing deciduous tree which is native to Asia. It was introduced into Europe (1751) and into the United States (1784), into the Eastern states by a Philadelphian gardener and into the western states by Chinese immigrants who used it for medicinal purposes. However, the tree was originally indigenous to China, but today it grows in the wild and is cultivated in tropical and subtropical eastern Asia, Northern Europe and North America (Mastelić and Jerković, 2002). A. altissima is used in traditional medicine as a bitter aromatic drug in the treatment of colds and gastric diseases. The plant is known to have antimalarial activity due to the presence of active chemical constituents such as indole alkaloids, lipids, fatty acids, phenolic derivatives and volatile compounds from leaves (Raja et al., 2017). Monocarboxylic acids such as formic acid, acetic acid or propionic acid are fundamental materials in the chemical industry. Carboxylic acids are most widely used in the field of food and beverages as acidulants and also in the pharmaceutical and chemical industries. Carboxylic acids are characterized by the chemical structure R-COOH. In this acid form, they are fully hydrocarbon soluble. Only those organic acids with a carbon number of five or less exhibit water solubility.
However, an important characteristic of organic acids is that alkali metal salts of these compounds are readily soluble in water and insoluble in hydrocarbon media (Sushil and Badu, 2014). Phenolic compounds are a group of secondary metabolites that are widespread in the plant kingdom. They are characterized by diverse chemical structures and numerous pharmacological properties. Their molecules contain two functional groups: a carboxyl group and a phenolic hydroxyl group (Carolina et al., 2014). Because of their chemical structure, they can be divided into derivatives of cinnamic acid and benzoic acid, which differ from each other in the number of hydroxyl groups and the placement of methoxyl substitution. Other pharmacological effects of phenolic acids are: antipyretic, antibacterial, anti-inflammatory, antifungal, anthelmintic, cholagogic and immune stimulant (Itoh, 2010). Root rot is one of the most common and widespread fungal diseases worldwide, attacking tree roots and causing high losses in nurseries (Agrios, 1987). The importance of root rot diseases is evident from the many studies carried out worldwide, as compared with other diseases (leaf, fruit and stem diseases) (Mohamed, 1994). Cupressus seedling root rot is an example of this kind of disease. As nursery soils are unsterilized, the fungi Macrophomina phaseolina, Fusarium and Rhizoctonia solani attack Pinus brutia, Cupressus and Casuarina, causing root rot disease in these seedlings (Mohammed, 1987), and the fungus F. equiseti (Corda) Sacc., Macrophomina phaseolina Tassi and seven isolates of Rhizoctonia solani Kuhn were isolated from the roots of forest tree seedlings (Ali, 2007). The aim of this study was the identification of some carboxylic acids and phenolic compounds by HPLC, and the evaluation of their effect on Italian cupressus seedling root rot fungi.

MATERIALS AND METHODS

Plant Material
Leaves of A. altissima were collected from healthy trees (10 years old) before flowering in … 2017 at the University of Mosul, Iraq, dried at room temperature (25 ± 2°C) for one week, and kept in the dark until use.

Soxhlet Extraction
Soxhlet extraction was carried out with a standard apparatus for 6-8 h, using 150 g of dried leaves with 350 ml of hexane to de-fat the material, depending on the method of (Harborn, 1973).

Extraction of Carboxylic Acids
The leaves of A. altissima L. (150 g) were re-extracted with 250 ml of methanol using a magnetic stirrer for 72 h at 60°C. The mixture was filtered and completed to 10 ml in a volumetric flask with methanol (Grand et al., 1988). The extract was filtered and evaporated under vacuum in a rotary evaporator at 65°C to 20 ml. The analysis was performed using HPLC (Shimadzu 20A) with a C18 column (5 μm, 150 mm × 4.6 mm) thermostatted at 30°C; the mobile phase was 40 mM Na₂SO₄. The pH was adjusted to 2.68 with methanesulfonic acid, with a flow rate of 1.0 ml min⁻¹, UV detection at 210 nm and an injection volume of 5 μL (Dionex, 2004).

Extraction of Phenolic Compounds
After the extraction with hexane, the leaves of A. altissima (150 g) were re-extracted with 350 ml of absolute ethanol using a soxhlet apparatus for 72 h at 78°C. The extract was filtered and evaporated for acid hydrolysis with 1 N HCl for 1 h in water at 100°C, and then the mixture was separated using ethyl acetate; when it was added to the solution, two layers formed, and the ethyl acetate layer was kept for further analysis. The compounds contained in the ethyl acetate layer were identified by HPLC (Shimadzu LC-2010A, Japan).
The column was C18 (4.6 × 240 mm) at a flow rate of 1.5 ml min⁻¹, with a mobile phase consisting of acetonitrile:water (80:20, v/v) and UV detection at 280 nm (Al-Tkey, 2012). Measurements were made in the laboratory of Baghdad University, College of Education for Girls.

Identification of Compounds
Carboxylic acids and phenolic compounds were identified by comparing the retention times of the samples with those of the known standards (Fig. 1, 3). The quantities of organic acids and phenolic compounds were estimated from the peak areas, injecting known amounts of the standards (Plein and Covdet, 2010).

Isolation and Root Rot Fungi Identification
Isolation from Italian cupressus seedlings infected with root rot disease was carried out according to Agnihorti (1971), guided by the disease symptoms appearing on the vegetative parts. Samples of cupressus roots infected by root rot disease were transferred to the forest diseases laboratory, Forestry Dept., College of Agriculture and Forestry. The diseased samples were washed under running water for 30 min, and small pieces of the infected seedling roots, especially the infected zones, were cut into pieces of about 0.5 cm, then sterilized with 1% sodium hypochlorite for 3 min. The pieces were removed from the sterilization solution, washed with distilled water, dried between two sterile filter papers, and then plated on sterilized petri dishes containing Potato Dextrose Agar (PDA) medium supplied with the antibiotic streptomycin sulphate at 50 mg/L to prevent bacterial growth. The pieces were distributed at an average of 4 pieces per petri dish and incubated at 25 ± 2°C for 5-7 days. Purification of the isolated fungi was carried out in test tubes containing Potato Dextrose Agar slants as a nutrient medium for use in subsequent experiments. Fungi were identified to the genus level using the keys of (Barnett and Hunter, 2006) and to the species level using the keys of (Booth, 1971), and isolation ratios were then calculated.

Bioassay of Ailanthus Extract
Alcoholic extract of Ailanthus leaves at concentrations of 1%, 2% and 4% was obtained from a stock solution containing phenolic compounds, following the method of Alyahya (2003), with DMSO (dimethyl sulfoxide) as the solvent for the dry extract of Ailanthus leaves. These concentrations were added to sterile petri dishes and mixed with sterile PDA. Inoculation of the isolated fungi was made on petri dishes (5.5 cm diameter) with disks of the isolated fungi (4 mm diameter) taken from the edge of actively growing colonies, with three replicates for each extract concentration. Results were calculated by measuring the mean of two orthogonal diameters of fungal growth (Alyahya, 2003).
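The peak-area quantification described above amounts to single-point external-standard calibration. The following is a minimal sketch, assuming a linear detector response through zero; the actual integration was performed by the instrument software, and the numbers are hypothetical.

```python
# Single-point external-standard quantification from HPLC peak areas (illustrative).

def concentration_from_peak(area_sample, area_standard, conc_standard_mg_per_L):
    """Estimate analyte concentration assuming response is proportional to amount."""
    return conc_standard_mg_per_L * area_sample / area_standard

# Example: a 50 mg/L ascorbic acid standard giving a peak area of 1.8e6 counts,
# and a sample peak of 3.1e6 counts at the same retention time.
conc = concentration_from_peak(area_sample=3.1e6, area_standard=1.8e6,
                               conc_standard_mg_per_L=50.0)
print(f"Estimated concentration: {conc:.1f} mg/L")
```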
They can be determined in low concentrations with a great sensitivity. The procedure of organic acids extracts is simple and rapid These results were in agreement with the findings of (Pindla et al., 2012) and (Meinhart et al., 2012). Identification of Phenolic Compounds Previous studies have reported abundant Phenolic compounds in A.altimissa, such as (Rutin, Quercetin, Luteolin, Apigenia, Gallic acid, Chlorogenic acid, Epicatechia (Ferdaous et al., 2013). Phenolic compounds play an important role for normal growth in plant development, as well as defense against infection and injury, the presence of phenolics in injured plants may have an important effect on oxidative stability and microbial safety 25* (Plein and Cevdet, 2010), (Table 2) and Fig. (4) showed the maximum quantities (%) and retention time Rt (min) of 6 standard and sample of A.altissima identified in ethanolic extract by HPLC-Technique. All phenolic compounds were identified according to their retention time and spectral characteristics against those of standards. Results confirm a variation in phenolic content of plant extracts, the highest amount of P-hydroxybenzoic Gallic acid, Quercetin appeared in extract (Shakir et al., 2018) showed that the A.altissima leaves contain many phenolic compounds such as gallic acid, Quercetin, Rutin, chlorogenic acid, Luteolin, And a result of (Raja et al., 2017) who demonstrated that the quantitiative analysis showed the presence of callic acid, coumarin, Quercetin in A.altissima leaves in unsound quantities. Isolation and Root Rot Fungi Identification: Isolation results from infected cupressus seedling by root rot disease showed appearance of the fungus F. solani, F. oxysproum, F. chlamydosporium and Rhizoctonia solani in different ratios (Table 3), the isolation ratios were 41% a maximum value for F. solani, then F. oxyspoum (27%), F. chlamydosporium (12%) which was minimum value and Rhizoctonia solani (20%) . Identification of fungi were according to universal classification keys to genus level ( Barnett and Hunter, 2006) and to species level (Booth, 1971). Bioassay of Ailanthus Leaves Extract Average growth inhibition ratio calculation: The results of bioassay for Ailanthus altissima leaves extract showed (Table 4) It was increasing in inhibition ratio of fungi growth with increasing of leaves extract concentrations, F.oxysporum and F.chlamydosporium had the highest degrees of growth inhibition (100) % for the two fungus at (4%) extracts concentrations, then followed by the fungus F.solani and Rhizoctonia sp. were (84,78,57) % respectively and Rhizoctonia solani. had minimum of inhibition. Ethanolic leaves extract was anti fungal due to the glycoside compound and phenolic compounds which was anti fungus, the results of identification for extracts compounds by High Performace Liquid Chromotography, previous results were supported with (Ratha et al., 2003) about their using the ethanol extracts of Ailanthus altissima leaves for stopping the growth of fungus Aspargillus niger, Penicillium,‫د‬ Aspargillus flavus, Aspargillus fumigant. Diameter Growth Average of Isolated Fungi Calculation The results of average fungus growth treated with several concentrations of ailanthus leaves extracts with PDA media showed inhibition growth ( Table 5) with increasing of extracts concentration, F. chlamydospoum had minimum mean growth 7mm at 1% extract conc. whereas the two species of Fusarium showed maximum inhibition for average growth ( zero values at 4% ) of extract conc. 
except Rhizoctonia solani, which showed a different value (19.67 mm) at the same concentration. Based on these results, the alcoholic extract of Ailanthus leaves at 4% concentration can be used as a natural material to control the root rot disease of cupressus seedlings, especially that caused by the Fusarium species.
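The growth-inhibition percentages reported in Tables 4 and 5 follow directly from the measured colony diameters. The short sketch below is a minimal illustration, assuming the common formula inhibition (%) = (control - treated) / control x 100 applied to the mean of the two orthogonal diameters; the function names and example values are hypothetical and are not taken from the original protocol.

```python
def mean_diameter(d1_mm: float, d2_mm: float) -> float:
    """Mean of the two orthogonal colony diameters (mm)."""
    return (d1_mm + d2_mm) / 2.0


def inhibition_percent(control_mm: float, treated_mm: float) -> float:
    """Assumed inhibition formula: (control - treated) / control * 100."""
    return (control_mm - treated_mm) / control_mm * 100.0


# Hypothetical example: a colony growing to 55 mm on plain PDA and to 12 mm on
# PDA amended with 4% leaf extract would give roughly 78% growth inhibition.
control = mean_diameter(54.0, 56.0)
treated = mean_diameter(11.0, 13.0)
print(f"growth inhibition: {inhibition_percent(control, treated):.1f} %")
```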
v3-fos-license
2021-10-07T06:17:17.352Z
2021-10-06T00:00:00.000
238411487
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jgeb.springeropen.com/track/pdf/10.1186/s43141-021-00254-8", "pdf_hash": "3225e1e5b3b21b60cc4ae150045f034e0cdb2ebf", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43649", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "f6d3415398ee829762cefd6f91bc56a9e6a50c8b", "year": 2021 }
pes2o/s2orc
Improved chromium tolerance of Medicago sativa by plant growth-promoting rhizobacteria (PGPR) Background Soil pollution by heavy metals increases the bioavailability of metals like hexavalent chromium (Cr (VI)), subsequently limiting plant growth and reducing the efficiency of phytoremediation. Plant growth-promoting rhizobacteria (PGPR) have substantial potential to enhance plant growth as well as plant tolerance to metal stress. The aim of this research was to investigate Cr (VI) phytoremediation enhancement by PGPR. Results The results showed that the 27 rhizobacterial isolates studied were confirmed as Cr (VI)-resistant PGPR, by using classical biochemical tests (phosphate solubilization, nitrogen fixation, indole acetic acid, exopolysaccharides, hydrogen cyanide, siderophores, ammonia, cellulase, pectinase, and chitinase production) and showed variable levels of Cr (VI) resistance (300–600 mg/L). The best four selected Cr (VI)-resistant PGPR (NT15, NT19, NT20, and NT27) retained most of the PGP traits in the presence of 100–200 mg/L concentrations of Cr (VI). The inoculation of Medicago sativa with any of these four isolates improved the shoot and root dry weight. The NT27 isolate identified using 16S rDNA gene sequence analyses as a strain of Pseudomonas sp. was most effective in terms of plant growth promotion and stress level decrease. It increased shoot and root dry weights of M. sativa by 97.6 and 95.4%, respectively, in the presence of Cr (VI) when compared to non-inoculated control plants. It also greatly increased chlorophyll content and decreased the levels of stress markers, malondialdehyde, hydrogen peroxide, and proline. The results of the effect of Pseudomonas sp. on Cr content and bioaccumulation factor (BAF) of the shoots and roots of M. sativa plants showed the increase of plant biomass concomitantly with the increase of Cr root concentration in inoculated plants. This would lead to a higher potential of Cr (VI) phytostabilization. Conclusions This study demonstrates that the association M. sativa-Pseudomonas sp. may be an efficient biological system for the bioremediation of Cr (VI)-contaminated soils. Background The intensive urbanization and civilization of society are responsible for the prominent increase of rapid industrial development and the spread of metals in soils. Metal soil contamination is recognized as one of the biggest environmental concerns worldwide and constitutes a permanent threat to ecosystems, agricultural sustainability, and human health [1]. The agricultural sector suffers horribly from the increase over time of metal pollution, such as lead (Pb), cadmium (Cd), chromium (Cr), mercury (Hg), and Arsenic (As) causing a significant decrease in plant growth and crop yield [2]. Heavy metals are also used in various terrestrial chemical fungicides and fertilizers, wastewater irrigation, and sewage sludge causing heavy metal contamination of water resources and agricultural soils [2,3]. Cr is one of the most polluting heavy metals that is commonly used in the production of electroplating, stainless steel, textile dyeing, and in the leather industry, mainly in chrome tanning of skins [4,5]. Among the different types of Cr forms (Cr +6 , Cr +5 , Cr +4 , Cr +3 , Cr +2 , Cr +1 , Cr 0 , Cr −1 , and Cr −2 ), the most stable are Cr (VI) and Cr (III). The excessive accumulation of Cr (VI) in the soil causes enormous problems for plant growth and crop productivity [6]. 
A higher intake of Cr (VI) slows down seedling development, germination process, and root growth [7][8][9]. The interference of Cr (VI) with nutrient uptake, such as phosphorus, within the intracellular membrane structures and photosynthesis, increases plant phytotoxicity. This is due to lipid peroxidation through reactive oxygen species (ROS) and modification of antioxidant activities [9,10]. Cr crossing the plasma membrane oxidizes proteins and nucleic acid through the production of reactive oxygen species (ROS) due to its strong oxidizing nature, such as radicals, O 2− , OH − , and H 2 O 2 [7,11]. Higher accumulation of Cr (VI) in plant tissues can affect the chlorophyll content, transpiration process, transport of electrons, CO 2 fixation, photophosphorylation, photosynthetic enzyme activity, and stomatal conductance, which leads to a significant reduction of the photosynthetic rate [12][13][14]. Several efforts have been made to develop technologies useful for extracting and removing toxic heavy metals from water and soil, such as chemical oxidation or reduction, filtration, chemical precipitation, ion exchange, and electrochemical treatment [15]. However, these processes adversely affect the environment and the health of soil, plants, and humans. Also, when the concentration of heavy metals is low, these techniques are mostly ineffective and expensive [16]. Therefore, in this context, using eco-friendly approaches like plant growthpromoting rhizobacteria (PGPR)-assisted phytoremediation could be one of the best-suited choices to improve crop productivity and to alleviate heavy metals problems [17][18][19][20][21]. Metal hyperaccumulating plants have garnered considerable attention nowadays. Medicago sativa (alfalfa) for example is considered as an excellent fodder legume plant due to its high biomass productivity and its low susceptibility to environmental stresses like salinity and drought [22,23]. It is also proposed as a promising material for metal phytoextraction [24,25]. Numerous reports have investigated the use of PGPR to reduce efficiently Cr (VI) bioavailability and lower the Cr absorption by the plants. The main mechanisms of Cr (VI) bioremediation are biosorption (sorption of Cr (VI) by microbes and biological-based materials) and biotransformation (which convert more mobile and toxic Cr (VI) to non-toxic form Cr (III)) [26][27][28]. The interconnection between plants and rhizospheric microbes plays a vital role in the enhancement of phytoremediation efficacy via a mechanism called "bioassisted phytoremediation" [29]. PGPR resistant to heavy metals have the potential to relieve heavy metal stress by improving plant development. The PGPR can similarly improve the growth and resistance of plants to Cr (VI) through mechanism of biocontrol and growth promotion. It includes phytohormones stimulation, decreased stress-induced ethylene production by synthesized enzyme ACC (1-aminocyclopropane-1-carboxylate) deaminase; production of antioxidant enzymes to scavenge ROS; production of ammonia, HCN, and siderophores; phosphate solubilization; nitrogen fixation; and bacterial secretion of extracellular polymeric substances (EPS) [28,[30][31][32]. Such PGPR with multiple properties of Cr resistance combined with plant growth promotion may be more essential for phytoextraction and plant growth. Thus, the present study was aimed at the isolation of Cr-resistant PGPR and the evaluation of their performance under Cr stress. 
Hence, pot experiments were designed to analyze the effect of selected Cr (VI)-resistant PGPR interaction with M. sativa species to alleviate Cr stress and to enhance Cr (VI) bioremediation. Bacteria isolation The bacteria were isolated from rhizospheric soil of various plants (alfalfa, wheat, barley) from an agricultural area (33°56′ N, 5°13′ W, 499 m altitude) in the Fez region, Morocco. The root system was removed along with the bulk soil from 0 to 20 cm depth, and the rhizosphere soil was recovered, placed in sterile plastic bags, transported to the laboratory on an ice pack, and kept at 4°C until ready to be processed. The isolation of PGPR was accomplished on the basis of phosphate solubilization, which represents a substantial PGP trait. Briefly, 5 g rhizosphere soil was mixed into 45 mL distilled water. Further serial dilutions (10 −7 ) were prepared from soil solution (10 −1 ) with 0.9 mL distilled water [33]. An aliquot (0.1 mL) from each dilution was used to inoculate National Botanical Research Institute's phosphate growth (NBRIP) agar plates (10 g L −1 D-glucose, 5 g L −1 Ca 3 (PO 4 ), 5 MgCl 2 6H 2 O, 0.25 g L −1 MgSO 4 H 2 O, 0.2 g L −1 KCl, 0.1 g L −1 (NH 4 ) 2 SO 4 , 15 g L −1 agar, pH 7) that was incubated at 28°C for 5 days [34]. The halo zones around bacterial colonies and colony morphology were used to select bacterial isolates. PGP traits characterization and Cr (VI) tolerance PGP traits characterization Twenty-seven bacterial isolates maintained their Psolubilization ability after three successive subcultures on the NBRIP agar medium. Their colony diameter and halo zones were recorded as described by Islam et al. [35], and their ability to solubilize inorganic phosphate was estimated as phosphate solubilization index (PSI): PSI = the ratio of the total diameter (colony + halo zone)/the colony diameter. IAA production by the isolates was quantitatively estimated: 5 mL of LB Broth supplemented with L-tryptophan (1 g/L) and incubated at 28 ± 2°C for 120 h with continuous shaking at 120 rpm. After centrifugation (10,000g for 15 min) of bacterial culture, 1 mL of the supernatant was mixed with 2 mL of Salkowski's reagent (1.2 g FeCl 3 6H 2 O in 100 mL of H 2 SO 4 7.9 M) and incubated at room temperature for 20 min. Optical density was measured against the standard curve (serial dilutions of a solution of IAA 50 mg/mL in the LB medium) using a UV spectrophotometer at 535 nm [36]. A qualitative assay of siderophores secretion by the isolated bacteria was assessed using blue agar plates containing Chrome azurol S (CAS) (Sigma-Aldrich) with the methods prescribed by Schwyn and Neilands [37]. The positive reaction was revealed by the appearance of an orange zone around the colony, signaling siderophore production. HCN production was determined following the Lorck [38] method. Bacterial isolates were inoculated into Lauria-Bertani plates supplied with 4.4 g/L of glycine. Sterilized filter papers (Whatman N°.1) were mounted on the top of each plate after soaking in picric acid solution (0.5% of picric acid with 2% of sodium carbonate) and incubated for 5 days at 28 ± 2°C. The shift in the color of the filter paper from yellow to orange-red specified HCN production by bacteria. Ammonia production was checked for the isolated bacteria on peptone water following the Cappuccino and Sherman [39] method. Bacterial isolates were inoculated into peptone water (10 mL) and incubated for 48 h at 30 ± 2°C. Then, Nessler's reagent (500 μL) was transferred to each tube. 
The shift in color of the media (development of brown to yellow color) indicated ammonia production. Nitrogen (N 2 ) fixation experience was executed, in a malate nitrogen-free mineral medium with modifications g/L (5 g malic acid, 15 g Agar, 0.5 g K 2 HPO 4 , 4 g KOH, 0.02 g CaCl 2 , 0.1 g NaCl, 0. [40]. The inoculated media were incubated at 28 ± 2°C for 3 days. Nitrogen fixation activity was regarded as positive through shifting in color from pale green to blue. The production of EPS was tested on the modified RCV-sucrose medium [41] (yeast extract 0.1 g/L, sucrose 30 g/L, agar 15 g/L). The plates were inoculated with fresh bacterial cultures and then incubated for 5 days at 28°C. The formation of the bacterial gel colonies on the culture medium indicates the production of EPS. Cr (VI) resistance of the bacterial isolates The resistance of the isolates to Cr (VI) was assessed using the dilution plate process with a determination of the minimum inhibitory concentrations (MIC) for each bacterial isolate. For this purpose, the bacterial isolates were cultured in Petri dishes containing LB agar medium supplemented with Cr (VI) (K 2 Cr 2 O 7 ) at concentrations from 0 to 1000 mg/L. The Cr solution was filter sterilized before being added to the agar medium. After 48 h of incubation at 30°C, the minimum inhibitory concentration (MIC) was determined as the lowest concentration at which no viable colony-forming units (CFU) were observed [45]. Effects of Cr (VI) on the PGP traits of the selected bacteria Four bacterial isolates were selected on the basis of PGP traits and Cr (VI) resistance and tested for their ability to maintain PGP characteristics under Cr (VI) stress. The LB medium was supplemented with varying concentrations of Cr (VI) (100, 150, and 200 mg/L), and the PGP proprieties (P solubilization, N 2 fixation, IAA, NH 3 , HCN, cellulase, pectinase, and chitinase production) were evaluated as described above. Plant growth assay of M. sativa and tolerance to Cr (VI) exposure Experimental design Experiments were conducted in plastic pots containing soil collected from agricultural land in the Fez region. The soil of the experiments (pH 8.1, organic matter 12.93 g/kg, available phosphate 13.25 mg/kg, and available N 0.73 g/kg) was artificially contaminated with an aqueous solution of Cr (VI) (K 2 Cr 2 O 7 ), to have a concentration of 10 mg of Cr (VI) per kilogram of soil. Bacterial inoculums of each of the four selected bacteria were provided in LB medium and incubated for 24 h at 28 ± 2°C. The cells after centrifugation (6000g for 20 min) were washed twice with sterile saline solution and resuspended in sterile saline solution and then diluted with sterilized water to achieve an optical density of 0.6 corresponding to 10 8 CFU/mL. Alfalfa seeds were surface-sterilized and germinated in soft agar plates 0.7% (w/v) water-agar. Plantlets were transplanted in the culture devices (500 g of soil into a plastic cup (10 × 9 × 20 cm)), with 3 plants per pot (3 pots for each treatment). Then, 3 mL of PGPR inoculum (DO 10 8 CFU/mL) were added to each pot (1 mL per plant). The pot culture experiment was arranged in randomized design containing four treatments: (i) absence of bacteria and Cr (VI) (negative control), (ii) absence of bacteria and presence of Cr (VI) (positive control), (iii) presence of bacteria and absence of Cr (VI), and (iv) presence of bacteria and Cr (VI). Two days later, the pots were inoculated with 2 mL of a suspension of each bacterial culture (10 8 CFU/mL). 
Two milliliters of saline solution was added to the uninoculated plants. Pots were positioned in a greenhouse (approximately 16 h photoperiod, 26-30°C day and 18-22°C night) and watered regularly. Five weeks later in the budding stage (from this stage through early flower is usually ideal to harvest high-quality alfalfa), plants were harvested and washed with deionized water, then divided into roots and shoots. The biomass yield was estimated after oven drying at 65°C until constant weight. Plant analyses Chlorophyll content For the assessment of leaf chlorophyll content, Moran and Porath's [46] methodology was followed. The M. sativa leaves (50 mg) were homogenized with acetone (80%), and the extract was centrifuged for 5 min (9000g at 4°C). Then, the optical density was measured at 646.8 nm and 663.2 nm. The total chlorophyll was determined using the following equation: [(7.15 × A 663.2 ) + (18.71 × A 646.8 )] V/M, where V is the final volume of the filtrate and M is the fresh weight of the leaf. It was expressed as mg/g fresh weight of leaf tissue. The chlorophyll a/b ratios were also calculated. Proline content For the assessment of proline content of the leaves, Bates et al.'s [47] methodology was followed. Plant material (0.5 g) was mashed in 10 mL of aqueous sulfosalicylic acid 3%. Then, 2 mL of filtrate was mixed with 2 mL of ninhydrine and 2 mL of glacial acetic acid. After incubation for 1 h at 100°C, the reaction was stopped in an ice bath and 4 mL of toluene was added to each tube. Then, the optical density was measured at 525 nm. Free proline per gram of fresh weight was calculated as follows: [(μg proline/mL × mL toluene)/115.5 μg/μmole]/ [(g sample)/5] = μmoles proline/g. Hydrogen peroxide content For the assessment of hydrogen peroxide (H 2 O 2 ) content of the leaves, Jana and Choudhuri's [48] methodology was followed with some modifications. One gram of leaves was homogenized with 0.1% trichloroacetic acid (TCA) (15 mL) and centrifuged for 20 min at 6000g. The supernatant (0.5 mL) was added to 10 mM phosphate buffer pH = 7 (0.5 mL) and 1 mM KI (1 mL). Then, the optical density was measured at 390 nm. From a standard curve prepared using known H 2 O 2 concentrations, the sum of H 2 O 2 was determined and expressed as mM/ g fresh weight of leaf tissue. Malondialdehyde content To determine the malondialdehyde (MDA) content in plant leaves, Heath and Packer's [49] methodology was adopted. Briefly, 0.2 g of leaves was homogenized with 5 mL of (0.5% 2-thiobarbituric acid (TBA) and 20% TCA) solution, and 1 miL of alcoholic extract was added to 1 mL of 20% TCA to prepare the control. The mixture was heated for 30 min at 95°C, cooled at room temperature, and centrifuged (5000g for 10 min at 25°C). Optical density was measured at 532 nm and 600 nm. Effect of bacterial inoculation on plant phytoremediation potential This study was focused on plants inoculated by the bacterial isolate NT27 that showed interesting performance in terms of PGP traits under Cr (VI) stress, plant growth, and tolerance to Cr (VI). The plant's phytoremediation potential was assessed by analyzing Cruptake by root and shoot tissues of plants grown as described above. Approximately 200 mg of powdered plant tissue was digested after 24 h of drying at 70°C [50]. Then, using the inductively coupled plasma atomic emission spectrometer (ICP-AES) (Jobin Yvon), total Cr content was determined in root and shoot tissues. To estimate the metal uptake in plant sections, the bioaccumulation factor (BAF) was determined. 
It provides an index of a plant's ability to absorb a specific metal relative to its medium concentration [51]. Molecular identification The selected isolate NT27 was characterized by a molecular identification approach using the universal primers fD1 (50 AGA GTT TGA TCC TGG CTC AG 30) and rD2 (50 ACG GCT ACC TTG TTA CGA CTT 30) [52]. Bacterial DNA extraction and fragment of rDNA amplification were realized as described by Tirry et al. [53]. The sequences obtained were checked and extracted by Mega X (version 10.0.5). Related sequences were obtained from the GenBank database, National Center for Biotechnology Information (NCBI), using the BLAST analysis, and then accession number was obtained after submission to the NCBI GenBank database. Sequences were aligned to their nearest neighbors using the MUSCLE program [54], and then a phylogenetic tree was constructed using the MEGA-X program [55]. Statistical analysis In order to determine the significant differences among treatments, the data collected were submitted to ANOVA analysis by using the Minitab 18 software. All the values were compared using Tukey's method at p ≤ 0.05. PGP traits and Cr (VI) resistance of the bacterial isolates In the present work, 27 bacterial isolates were isolated from the plant rhizosphere based on the solubilization of inorganic phosphate. These isolates showed different PGP traits (phosphate solubilization, nitrogen fixation, IAA, HCN, siderophores, ammonia, EPS, and hydrolytic enzyme production), with different degrees of tolerance to Cr (VI) ( Table 1). Four isolates NT15, NT19, NT20, and NT27 were selected on the basis of their Cr (VI) tolerance and their PGP characteristics. They showed high resistance to Cr (VI) concentrations up to 600 mg/L. They also showed interesting (PGP) traits, for example, higher values of PSI (3.6) and IAA (572.27 μg/mL) were recorded by NT15 and NT20 isolates, respectively. Furthermore, the selected isolates showed other PGP traits like N 2 fixation, EPS, NH 3 , HCN, siderophores, cellulase, pectinase, and chitinase production ( Table 1). PGP traits of the selected bacteria under Cr (VI) stress The ability of the selected isolates to maintain different PGP traits in the presence of Cr (VI) at concentrations ranging from 100 to 200 mg/L is presented in Table 2. The results show that for the strain NT15, the IAA production decreased by 16.64%, 27.8%, and 76.2%, compared to the control at 100, 150, and 200 mg/L of Cr (VI), respectively. Decreases of 14.4% and 18.3% for the phosphate solubilization index compared to the control were observed at 100 and 150 mg of Cr (VI), respectively, followed by total inhibition at 200 mg/L. The nitrogen fixation was maintained until 150 mg whereas ammonia production was maintained at all concentrations of Cr (VI). HCN and chitinase production was inhibited at all concentrations of Cr (VI). For the isolate NT19, decreases of 19.76%, 29%, and 64.45% of the IAA production, compared to the control, were obtained at 100, 150, and 200 mg/L of Cr (VI), respectively. Decreases of 20% and 28.8% in the P solubilization index, compared to the control, were observed at 100 and 150 mg/L of Cr (VI), respectively, followed by total inhibition at 200 mg/L of Cr (VI). At all concentrations studied of Cr (VI), the isolate was able to maintain NH 3 and cellulase production but was unable to fix nitrogen and to produce pectinase and chitinase. 
For the isolate NT20, decreases of 36.87%, 55.8%, and 79.4% in IAA production, compared to the control, were observed at 100, 150, and 200 mg/L of Cr (VI), respectively. Decreases of 23% and 65.4% in phosphate solubilization index, relative to the control, were observed at 100 and 150 mg/L, respectively, and a total inhibition was obtained at 200 mg/L of Cr (VI). At all concentrations of Cr (VI), the isolate retained its ability to fix nitrogen, ammonia, and cellulase production whereas HCN and chitinase productions were inhibited. Pectinase production was inhibited at the concentration of 200 mg/L of Cr (VI). For the isolate NT27, decreases of 25.75%, 46.69%, and 70.67% in IAA production, compared to the control, were observed at 100, 150, and 200 mg/L of Cr (VI), respectively. Decreases of 43.53%, 56.1%, and 62.74% in the phosphate solubilization index, compared to the control, were observed at 100, 150, and 200 mg/L of Cr (VI), respectively. Ammonia production and nitrogen fixation were maintained at all concentrations of Cr (VI). The strain was able to produce HCN and pectinase till the concentrations of 100 and 150 mg/L, respectively, whereas its ability to produce chitinase was lost at all concentrations of Cr (VI). Effect of bacterial inoculation on the tolerance of M. sativa to Cr (VI) stress Plant growth The effect of the four selected isolates (NT15, NT19, NT20, and NT27) on alfalfa plant growth was studied in the presence of 10 mg/L of Cr (VI) (Fig. 1a, b) Chlorophyll content and antioxidant system Our results showed that the treatment of the plants with Cr (VI) caused a reduction in the total chlorophyll content and chlorophyll a/b ratio by 34% and 62.75%, respectively, in comparison with unstressed plants (Fig. 2a). In the absence of Cr (VI), all isolates increased the total chlorophyll content and chlorophyll a/b ratio of M. sativa leaves, with a maximum increase of total chlorophyll of 12.6%, observed in the plants inoculated with NT15 compared to (Fig.2). In the presence of Cr (VI), all the isolates significantly (p < 0.05) lowered the proline content in the shoots of alfalfa plants, with a maximum reduction of 63% recorded in the plants inoculated with the isolate NT27 compared to the uninoculated plants. In control plants, the decrease in the level of proline in the plants inoculated by the four isolates was not significant (Fig. 2b). Inoculation with the isolates decreased MDA values with a maximum of 42.4% observed in the plants inoculated with NT27, compared to the uninoculated plants. No significant effect of bacterial inoculation was observed in the case of non-stressed plants (Fig. 2c). With respect to H 2 O 2 content, a significant increase (55.86%) was observed in uninoculated plants in response to Cr (VI) stress. However, bacterial inoculation significantly reduced the accumulation of H 2 O 2 by 51.73%, 49.1%, 42.2%, and 59.35% in plants inoculated by NT15, NT19, NT20, and NT27, respectively. In the absence of Cr (VI) stress, inoculation of plants also reduced significantly the accumulation of H 2 O 2 in plant tissues, with a maximum decrease of 54.5% observed in plants inoculated with NT27, compared to the control (Fig. 2d). Effect of bacterial inoculation on metal uptake by plants The total Cr uptake in the shoots and roots of M. sativa after 45 days is shown in Table 3. Data showed that the roots accumulated more Cr than shoots in both inoculated and uninoculated M. sativa plants. 
Bioaugmentation with the NT27 isolate significantly (p < 0.05) enhanced the root uptake of Cr and increased the bioaccumulation factor (BAF) by 49.03% as compared to uninoculated and uncontaminated control, while no significant difference was noticed in shoot Cr contents. Identification of the bacterial isolate The 16S rRNA sequencing results identified the selected bacterial isolate NT27 as Pseudomonas sp. (GenBank: MT337487.1) which showed similarities of 99.38% with Pseudomonas sp. FL40 (DQ091247.1). Representative species of closely related taxa, analyzed using the neighbor-joining (NJ) algorithm, formed a Pseudomonas cluster consisted of the isolate NT27, Pseudomonas sp. strain NTE1, Pseudomonas sp. PCWCW2, P. Table 3 Effect of Pseudomonas sp. (NT27) on Cr content (μg/g) and bioaccumulation factor (BAF) of the shoots and roots of alfalfa grown on contaminated soils with Cr (VI). Values with different letters are significantly different (p < 0.05) Treatment Chromium uptake (μg/g of dry weight) BAF Discussion Cr is considered among the most toxic heavy metals because of its higher electronegativity [56,57]. The widespread of Cr participates in the deterioration of agricultural soils on a regular basis [58,59]. The present study was performed to isolate, screen, and characterize Cr (VI)-resistant PGPR and to determine their effects on growth and Cr (VI) toxicity tolerance of M. sativa plants. Our results showed that the 27 bacterial isolates studied showed various PGP properties (P solubilization, N 2 fixation, IAA, EPS, HCN, siderophores, NH 3 , cellulase, pectinase, and chitinase production) and variable levels of Cr (VI) resistance (300-600 mg/L). Four bacterial isolates (NT15, NT19, NT20, and NT27) were selected for showing an ability to resist up to 600 mg/L of Cr (VI) concentration along with maintenance of high production of IAA, HCN, cellulase, pectinase, chitinase, and NH 3 ; P solubilization; and nitrogen fixation in the presence of Cr (VI) concentrations ranging between 100 and 200 mg/L. The production of several metabolites showed a gradual decline when the concentration of Cr (VI) increases, indicating that, under stressful conditions, bacterial cells were actively involved in stress management than in other metabolic processes [60]. The detrimental effects of Cr (VI) on plant growth obtained in this study were also reported in previous works [18,20,[61][62][63]. Chen et al. [64] reported that 20 mg of Cr (VI) per kilogram in soil can significantly reduce plant dry weight and root length of wheat. Also, Barcelo and Poschenrieder [65] suggested that a high accumulation of Cr (VI) in the roots and shoots restricts cell division, which limits their elongation. After inoculation with the PGPR isolates, plant growth improved in Cr (VI)-treated M. sativa plants (showing similar values to uncontaminated plants both in roots and shoots) (Fig. 1) [63] demonstrated that Cr (VI)-tolerant PGPR strains "Agrobacterium fabrum and Leclercia adecarboxylata" and "Klebsiella sp. CPSB4 and Enterobacter sp. CPSB49" respectively enhanced the growth of maize (Zea mays) and Helianthus annuus (L.) cultivated under Cr (VI) stress. Other studies have shown a positive effect of PGPR on plant growth in the presence of other heavy metals such as Cd [68], Cu and Cd [69], and Pb [70]. Rhizobacteria that promote plant growth can increase plant development and performance indirectly by reducing the toxic effects of metals or directly by producing phytohormones and growth factors [71,72]. 
Indeed, PGP traits are successfully involved in promoting plant growth and attenuating the degree of toxicity in plants exposed to metal stress [72]. The high concentration of heavy metals in the soil affects plant growth because it interferes with the uptake of nutrients such as phosphorus as suggested by Halstead et al. [73]. However, this deficiency can be compensated by the ability of PGPR to solubilize phosphates which plays an important role in improving the uptake of minerals such as P by plants in metal-contaminated soils [74]. Also, the production of phytohormones by PGPR has been shown to play a key role in plantbacteria interactions and plant growth in heavy metalcontaminated soils [75]. For instance, the stimulation of plant growth observed under Pb stress after inoculation with P. fluorescens has been attributed to the production of IAA [76]. Furthermore, microbial communities in the rhizospheric zone could play an important role in metal stress avoidance through secreting extracellular polymeric substances (EPS) such as polysaccharides, lipopolysaccharides, and proteins, possessing an anionic functional group that helps remove metals from the rhizosphere through the process of biosorption [28,77]. The EPS produced by some microorganisms induce the formation of biofilms in response to the exposure to toxic heavy metals. Biofilm formation helps detoxify heavy metals by enhancing the tolerance capacity of microbial cells or by converting toxic metal ions into non-toxic forms [78]. PGPR are also characterized by the production of siderophores, which can stimulate plant growth directly under iron limitation, for example [79], or indirectly by forming stable complexes with heavy metals such as Zn, Al, Cu, and Pb and helping plants to alleviate metal stresses [80]. Indeed, siderophores have a variety of chemical structures; they have atoms rich in electrons such as electron donor atoms of oxygen or nitrogen that can bind to metal cations [81,82]. Hannauer et al. [83] and Hernlem et al. [84] conducted a study with 16 different metals and concluded that the siderophores pyoverdin and pyochelin produced by P. aeruginosa are able to chelate all of these metals. In addition, siderophores secreted by PGPR can decrease free radical formation, which helps protect microbial auxins from degradation to promote plant growth [85]. Thus, in the present study, it is likely that the observed positive effect of the PGPR on plants grown under Cr (VI) might be primarily attributed to their PGP characteristics. On the other hand, our results showed that the treatment of the plants with Cr (VI) caused a reduction in the total chlorophyll content in comparison with unstressed plants (Fig. 2a). These results are in agreement with other research reporting that the chlorophyll content decreased consistently with increasing Cr (VI) concentration [67,86]. The alteration of chlorophyll content due to the Cr (VI) effect may be due to the inhibition of enzymes responsible for chlorophyll biosynthesis as suggested by Karthik et al. [66]. Cr (VI) toxicity inhibits photosynthesis by increasing H 2 O 2 accumulation, superoxide production, and lipid peroxidation [87]. A higher Cr (VI) input disrupts the ultrastructure of the chloroplast and restricts the electron transport chain. Restriction of the electron transport chain diverts electrons from the PSI (electron donor side) to Cr (VI), which considerably decreases the photosynthesis rate [88,89]. 
Upon inoculation by PGPR, the total chlorophyll levels increased under Cr stress. This could be due to the improvement of its synthesis or to the slowing down of the process of its degradation [66]. The improvement in chlorophyll content following inoculation could also be due to the reduction of Cr (VI) to non-toxic Cr (III) and/or to different PGP traits of these bacteria. Siderophores through the chelation reaction are known to reduce iron deficiency induced by heavy metals and thus help plants to synthesize photosynthetic compounds such as heme and chlorophyll [90,91]. Furthermore, the enzymatic activities, phytohormones, siderophores, and organic acids of PGPR are all responsible for the reduction of toxic Cr (VI) to non-toxic Cr (III) [92][93][94]. Our results showed also that Cr (VI) stress amplifies the accumulation of proline in M. sativa plants. The increased proline content in plants has previously been identified as an adaptive response to environmental stresses [95,96]. Proline helps plants deal with stressrelated toxicity by controlling osmotic balance, detoxifying reactive oxygen species (ROS), stabilizing antioxidant enzymes, modulating gene expression, and activating multiple detoxification pathways [97,98]. The inoculation with PGPR resulted in a substantial decrease of proline content in M. sativa plants, indicating that inoculated plants were less affected by Cr (VI) stress than uninoculated plants. Similar findings were found by Islam et al. [99] who reported that the level of proline in the corn plant exposed to Cr (VI) stress was significantly higher (1.08 times) than the uncontaminated plants and that inoculation with a PGPR strain reduced the proline concentration by 30%. A similar result was obtained by Karthik et al. [66], with a decrease of proline accumulation by 84.56% and 44% in the case of the association of P. vulgaris with two Cellulosimicrobium strains AR6 and AR8, respectively, under Cr (VI) toxicity. Cr stress increased MDA content in M. sativa plants. MDA is a product of lipid peroxidation of the cell membrane system. MDA reacts with free amino acid groups, causing cell damage due to inter-and intramolecular reticulation of proteins [99]. The elevated MDA indicates an oxidative stress, and this may be one of the mechanisms by which Cr (VI) toxicity manifests in plant tissues. The accumulation of MDA is reported also in the [102]. The four isolates studied in the present work (NT15, NT19, NT20, and NT27) showed a high tolerance to Cr (VI) and a high production of substances that promote plant growth in the presence of Cr (VI), demonstrating their potential to contribute to beneficial plant-microbe interactions in soils contaminated by heavy metals. More specifically, the NT27 isolate showed significant resistance to Cr (VI), characteristics of promoting plant growth and a capacity to enhance the tolerance of M. sativa to Cr (VI). This isolate was identified as a strain of Pseudomonas sp. by 16S rDNA sequence analysis. Its effect on Cr (VI) content and bioaccumulation factor (BAF) of the shoots and roots of M. sativa plants was significantly (p < 0.05) higher in comparison with uninoculated and uncontaminated control. Several works reported the increased metal concentrations in tissues of inoculated plants by the Pseudomonas genus. For instance, Kamran et al. [103] and Ma et al. [104] observed that P. putida and Pseudomonas sp. A3R3 increased the Cr (VI) and Ni uptake in Eruca sativa and Alyssum serpyllifolium plants, respectively. 
For other bacterial geniuses, Din et al. [32] and Tirry et al. [105] noticed an increased Cr (VI) accumulation in Sesbania sesban and M. sativa by the addition of B. xiamenensis PM14 and Cellulosimicrobium sp., respectively. Nevertheless, for certain cases, it has been documented that inoculating plants with metal-resistant bacteria reduced metal uptake and increased plant biomass [106], which can be explained by the metal immobilization in the rhizosphere. In fact, several authors recorded lower Cr (VI) accumulation in bacterial inoculated plants due to the bacterial immobilization of Cr (VI) through several mechanisms, including reduction, adsorption, accumulation, and production of cell surface-related polysaccharides and proteins [107,108]. In the present study, it is outstanding that Cr accumulation by roots was more significant than by shoots following NT27 inoculation of M. sativa. This is probably due to Cr (VI) reduction to Cr (III), which would have favored the immobilization of Cr (VI) in the rhizosphere and its phytostabilization in the plant roots. The phytoremediation ability of M. sativa plants seems to be largely favored by the strain of PGPR involved. Conclusions We can conclude from our results that the inoculation of M. sativa species by PGPR overcoming the negative effects of Cr (VI) stress and increased the plant growth rate and the content of chlorophyll. It also greatly decreased the levels of stress markers, malondialdehyde, hydrogen peroxide, and proline. The bacterial strains NT15, NT19, NT20, and NT27 exhibited high tolerance to Cr (VI) and produced substances favoring the growth of plants, both in normal and under Cr (VI) stress conditions, demonstrating their potential to contribute to beneficial plant-microorganism interactions in soils contaminated by metals. This study provides clear evidence of the response of bacterial strains in the rhizosphere to Cr (VI) and the enhancement of M. sativa growth and antioxidant system under stress by Cr (VI). The results showed also that an enhanced Cr (VI) phytoremediation of M. sativa can be achieved by Pseudomonas sp. inoculation. Therefore, inoculation of these bacterial strains from the rhizosphere might be a good choice for application in microorganism-assisted phytoremediation approaches for the remediation of heavy metal-contaminated soils. These strains can also act as a lasting factor in the phytostabilization of Cr (VI) and a control of its entry into the food chain.
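The simple quantitative indices used in the methods above are plain arithmetic and can be collected in a few helper functions. The sketch below restates the phosphate solubilization index, the total-chlorophyll equation, and the proline formula as quoted in the text, plus the bioaccumulation factor; the BAF formula (tissue concentration divided by medium concentration) is an assumption consistent with its description here, and all names and example values are illustrative rather than measurements from this study.

```python
def psi(colony_diameter_mm: float, halo_zone_mm: float) -> float:
    """Phosphate solubilization index = (colony + halo zone) / colony diameter."""
    return (colony_diameter_mm + halo_zone_mm) / colony_diameter_mm


def total_chlorophyll_mg_per_g(a663_2: float, a646_8: float,
                               volume_ml: float, fresh_weight_g: float) -> float:
    """Total chlorophyll (mg/g FW) = [(7.15 * A663.2) + (18.71 * A646.8)] * V / M."""
    return (7.15 * a663_2 + 18.71 * a646_8) * volume_ml / fresh_weight_g


def proline_umol_per_g(ug_proline_per_ml: float, toluene_ml: float, sample_g: float) -> float:
    """Proline (umol/g FW) = [(ug/mL * mL toluene) / 115.5] / (g sample / 5)."""
    return (ug_proline_per_ml * toluene_ml / 115.5) / (sample_g / 5.0)


def bioaccumulation_factor(tissue_ug_per_g: float, soil_ug_per_g: float) -> float:
    """Assumed BAF = metal concentration in plant tissue / concentration in the growth medium."""
    return tissue_ug_per_g / soil_ug_per_g


# Illustrative values only:
print(psi(5.0, 13.0))                                   # -> 3.6
print(total_chlorophyll_mg_per_g(0.52, 0.21, 10.0, 0.05))
print(proline_umol_per_g(8.0, 4.0, 0.5))
print(bioaccumulation_factor(6.0, 10.0))                # soil spiked at 10 mg Cr/kg = 10 ug/g
```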
v3-fos-license
2019-04-22T13:06:36.749Z
2017-11-01T00:00:00.000
126117352
{ "extfieldsofstudy": [ "Environmental Science" ], "oa_license": "CC0", "oa_status": "GREEN", "oa_url": "https://dugi-doc.udg.edu/bitstream/10256/14345/1/AM-ThinBorderBetween.pdf", "pdf_hash": "c7f02f5d743d1d9b1e9da05c0463c000d1a7ecc0", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43652", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "abc32bcf5749d8942f7112a1100b0aede86ee256", "year": 2017 }
pes2o/s2orc
The thin border between cloud and aerosol: sensitivity of several ground

Cloud and aerosol are two manifestations of what is essentially the same physical phenomenon: a suspension of particles in the air. The differences between the two come from the different composition (e.g., much higher amount of condensed water in particles constituting a cloud) and/or particle size, and also from the different number of such particles (10-10,000 particles per cubic centimeter depending on conditions). However, there exist situations in which the distinction is far from obvious, and even when broken or scattered clouds are present in the sky, the borders between cloud/not cloud are not always well defined, a transition area that has been coined as the "twilight zone". The current paper presents a discussion on the definition of cloud and aerosol, the need for distinguishing or for considering the continuum between the two, and suggests a quantification of the importance and frequency of such ambiguous situations, founded on several ground-based observing techniques. Specifically, sensitivity analyses are applied on sky camera images and broadband and spectral radiometric measurements taken at Girona (Spain) and Boulder (Co, USA). Results indicate that, at these sites, in more than 5% of the daytime hours the sky may be considered cloudless (but containing aerosols) or cloudy (with some kind of optically thin clouds) depending on the observing system and the thresholds applied. Similarly, at least 10% of the time the extension of scattered or broken clouds into clear areas is problematic to establish, and depends on where the limit is put between cloud and aerosol. These findings are relevant to both technical approaches for cloud screening and sky cover categorization algorithms and radiative transfer studies, given the different effect of clouds and aerosols (and the different treatment in models) on the Earth's radiation balance.

Consequently, fundamental questions remain: What is the limit of visibility from which a suspension of droplets must be considered cloud? Should this limit be set for an "average" human eye, or can it be objectively established for some instrument as in Dupont et al. (2008)? Or is it even reasonable to consider such a limit given that the aerosol/cloud particle

In general the distinction between a cloudy and a cloudless sky, and the separation between cloud and aerosol, is appropriate for attribution studies and modeling radiative effects of different climate forcing mechanisms, but imposing this classification may be unnecessary (or inconvenient) in relation to new and advanced methods of observation and measurement. If so, the distinction could also be unnecessary in radiative transfer models, or in future

For example, Charlson et al. (2007) highlighted the importance that has been given to the separation between the "cloud" and "clear" regimes in various fields of study including the radiative forcing by clouds and the quantification of direct effects and indirect radiative forcing by aerosols. The paper questioned the separation between the two regimes, and suggested the desirability of treating the phenomenon as a continuum. Similarly, Koren et al. (2007) described a transition zone ("twilight" zone) around the cloud in which the optical properties are close to those of the cloud itself.
The authors estimated that an appreciable fraction (between 30 and 60%) of the part of the globe at any given time considered free of clouds could correspond to that area of transition, a fact that could have important climate implications. The question of the climatic importance of clouds that are considered "small" in

cloud field as an area that includes detectable clouds and twilight zone, and found that the cloud field fraction could be as large as 97% in an area where the detectable cloud fraction is 53%. In the cited works, several methodologies were used: spectral radiometry from the surface in the visible and near infrared, satellite measurements, and modeling. Also long-wave spectral radiometry is being used for the purpose of studying the properties of thin clouds and the transition region (Hirsch et al., 2014, 2012).

The goal of the current paper is to quantify the importance and frequency of situations where ambiguity between clouds and aerosol occurs; in other words, situations where the suspension of particles depends on subjective definition to be classified as either cloud or aerosol. These

radiation measurements, and spectral measurements. Two sites are considered: Girona (Spain), and Boulder (Co, USA).

used to take images of the sky during daylight hours, at 1 minute time steps. The camera is a conventional digital CCD camera, provided with a fish-eye (i.e. >180º field-of-view) lens and mounted on a sun tracker, in such a way that a black sphere projects its shadow on the lens, blocking the direct sun from entering the camera. In the current research, one year (2014) of data and observations from each of these instruments will be analyzed.

The two locations are middle-latitude, Northern Hemisphere sites. However, they hold some geographic and climatic differences that make pertinent the use of data from both sites in the current research. First, Girona is at low altitude and close to the sea, while Boulder is at high altitude and thousands of km away from the closest coast. Therefore, climate in Boulder is much more continental, in the sense that warmer summers and colder winters are likely; more important here is that the atmosphere above Boulder is in general drier and cleaner, so

Raw measurements and observations from the above instruments need to be processed in order to obtain quantitative or qualitative information about the sky condition, clouds and/or aerosol. In all processing and algorithms used (and explained below) decisions must be taken to distinguish between clear sky (either clean or with a certain aerosol load) and clouds, or between clouds and aerosol. These decisions usually take the form of thresholds, which are somewhat subjectively selected after some tuning procedure. Sometimes, the human intervention is obvious, for example when deciding which sky images are considered as cloudless references. In the next paragraphs, we will explain the standard methods applied to raw data, and will describe the sensitivity analyses that we have performed on them to reach our goal.

Broadband solar radiation measurements at high temporal resolution (< 5 minutes) can be used to infer the sky conditions.
In this regard, after some initial attempts (Calbó et al., 2001;

the measurement is the basis of all methods: the underlying assumption is that clouds make solar radiation (either broadband or spectral) more variable in time than aerosols. Here we will use the methodology as presented by Michalsky et al. (2010), which consists of two filters applied consecutively on a moving time window of a given width (10 minutes in the original paper). The first, coarser filter takes the difference between each adjacent measurement, and also calculates the maximum minus the minimum OD in the window. If all differences are less than a given threshold, and if the range of measured OD within the time window is less than another threshold, then the points pass the first filter. The second, more stringent filter scales the allowed variability according to the magnitude of the OD, which is estimated by applying a low-pass filter on the series. Thus, the absolute value of the largest difference between adjacent data must be less than a given fraction of the estimated OD at the midpoint of the sample window, and the range must be less than another fraction of the same estimate. The values of the four thresholds were 0.02 and 0.03 (absolute differences of OD at 550 nm) and 10 and 20% respectively in the original paper. In the present study, we will change these four values, and also the time window where differences and ranges are calculated, to assess their effect regarding "transition" cases. The final result of the MFRSR cloud screening is every sample tagged as "good" or "bad," meaning that can be representative of aerosols or not. In the current paper, we will assume that samples labeled as "bad" correspond to the presence of some kind of clouds.

As mentioned above, images of the whole sky are becoming more ubiquitous both in atmospheric research and in solar energy management applications. Automatically captured sky images allow a continuous (many such cameras take images every minute or even more

indicating that when such high values of diffuse radiation are in principle set to correspond to clear sky, other tests for clear-sky detection filter out these cases anyway. It should be noted that even with the lower threshold, the diffuse irradiance allowed as "clear sky" is well above the Rayleigh limit, i.e., a certain amount of scattering particles larger than molecular is always allowed. A summary of results is presented in Table 1. There are almost 11,000 minutes identified as clear when the higher threshold is used but labeled as not clear when the lower threshold is applied. This means that almost 5% of the daylight hours (specifically, of

The difference in mean fsc when data are processed with one or the other threshold is 0.023 (0.022) in Girona (Boulder). This difference might not seem very large, but, as we will show below, it is produced by larger differences for some particular conditions. Thus, differences

Table 1), for scattered to broken cloud conditions, the average difference is 0.044 (0.046), which is more than 10% of the average fsc of about 0.4 at both sites. Logically, since RadFlux uses the difference between measured and estimated clear-sky diffuse as the basis for fsc estimation, estimated fsc tends to be lower when Max_Diff is greater. In absolute value, differences tend to be greater for lower fsc (see Figure 1 corresponding to Girona data).
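The two consecutive filters of the MFRSR cloud screening described above translate directly into a moving-window test on the optical-depth series. The sketch below is a simplified reading of that procedure, using the thresholds quoted from the original paper (0.02 and 0.03 for absolute OD differences and ranges, 10% and 20% of the smoothed OD for the second filter) over a 10-minute window; the exact window handling and smoothing in the operational code may differ.

```python
import numpy as np

def cloud_screen(od, window=30, d1=0.02, r1=0.03, d2=0.10, r2=0.20):
    """Flag samples of an optical-depth series as 'good' (aerosol-like) or not.

    od      : 1-D array of OD at 550 nm, regularly sampled (e.g. 20 s steps -> 30 samples per 10 min)
    d1, r1  : coarse filter, max adjacent difference and max range (absolute OD)
    d2, r2  : fine filter, same quantities expressed as fractions of the smoothed OD
    """
    od = np.asarray(od, dtype=float)
    smooth = np.convolve(od, np.ones(window) / window, mode="same")  # crude low-pass estimate
    good = np.zeros(od.size, dtype=bool)
    for i in range(od.size - window):
        seg = od[i:i + window]
        diffs = np.abs(np.diff(seg))
        rng = seg.max() - seg.min()
        mid = smooth[i + window // 2]          # estimated OD at the window midpoint
        coarse = diffs.max() < d1 and rng < r1
        fine = diffs.max() < d2 * mid and rng < r2 * mid
        if coarse and fine:
            good[i:i + window] = True
    return good  # samples left False are treated as cloud-affected ('bad')
```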
The data points highlighted in red in Fig. 6 are all those that have passed the cloud screening filter (i.e., labeled as "good"). Initially (Fig. 6a)

and lower thresholds to produce a more "strict" filter. When the former is applied, almost 58% of points are considered aerosol (Fig. 5b), but when the latter is used, less than 19% of the points pass the filter (Fig. 5c). With very few exceptions, all points with OD > 1.0 and most points with negative AE (and OD > 0.1) do not pass the cloud screening even with the relaxed filter.

The numbers in Table 3 allow an estimation of the frequency of transition cases between cloud and "pure" aerosol. We start with about 420,000 instantaneous measurements for

The discussion above concerns results from Girona. When the same analysis is applied to measurements from Boulder, the numbers obviously change, but not the main result of a large percentage of cases in the transition zone. We started with 610,000 instantaneous measurements (note that 20-sec resolution was used in Boulder for the whole year) from which about 158,000 (25%) were not processed by the MFRSR, due to thick clouds occulting the Sun. Then we applied the three cloud screenings (default, relaxed, strict) to the rest of the samples (452,000, see Table 3). About 242,000 additional points were labeled as "bad" (i.e.

More important than these aggregate numbers is to look at the particular cases with large (or small) cloud fraction changes when the threshold is changed. In this sense, Table 4 shows the differences in thin cloud fraction between the original processing and new processing using

where the change in the threshold produces a large change in the thin cloud fraction, because a large part of that image is made up of what seems very thin clouds. In the example of Fig. 9d, the large differences seem to be related to a relatively high atmospheric aerosol load. For the case of the increased threshold, the thin cloud fraction estimate in a little more than 80% of images decreases by less than 0.10 (Table 4). This includes a) some situations of cloudless skies, b) situations with scattered to broken cloudiness but with a low amount of thin clouds (in these two cases, the increase of the threshold of course makes it impossible to get lower cloud fractions), and c) situations of overcast skies with thick clouds, that present much higher values of the red to blue ratio (or that are set as cloudy because of very low light intensity, i.e., very thick clouds). Again, this result confirms that the method is quite robust, and also that almost 20% of time a change in the threshold produces a change in the thin cloud fraction of more than 0.1. In Figures 9a-f the result of the cloud identification with the higher threshold is also displayed, and we can see the moderate effect on cases of Fig. 9c and 9d, corresponding to clouds (or aerosols) with not well defined limits and that are mainly visible when they are in front of the Sun due to their forward scattering characteristics. The greatest effect of changing the threshold is found in situations such as those presented in Fig. 9e and 9f, where the hazy atmosphere (involving cumulus clouds formation) is too problematic to be classified as "clear" or "cloudy" with thin or even opaque clouds by the method applied to TSI raw images.
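The threshold sensitivity just discussed rests on a per-pixel red-to-blue ratio test applied to the sky images. The fragment below is a generic illustration of such a test, not the actual TSI processing chain; the threshold value, the masking of non-sky pixels, and the array layout are all assumptions introduced for the example.

```python
import numpy as np

def thin_cloud_fraction(rgb_image, rb_threshold=0.8):
    """Fraction of valid sky pixels whose red/blue ratio exceeds an assumed threshold.

    rgb_image    : H x W x 3 array with channels ordered (R, G, B)
    rb_threshold : assumed red-to-blue ratio above which a pixel is called cloudy
    """
    r = rgb_image[..., 0].astype(float)
    b = rgb_image[..., 2].astype(float)
    valid = b > 0                       # ignore black (masked) pixels, e.g. the shadow band
    ratio = np.zeros_like(r)
    ratio[valid] = r[valid] / b[valid]
    cloudy = (ratio > rb_threshold) & valid
    return cloudy.sum() / valid.sum()

# Lowering rb_threshold moves ambiguous, hazy pixels into the cloud class and raises the
# estimated thin-cloud fraction, which is the sensitivity explored in the text.
```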
It should be noted that in the cases where the effect of changing the threshold is small (Fig. 9a and 9b), the optical depth as measured by the

0.2, which is the value that Dupont et al. (2008) found as the limit related to the more common differentiation between cloud and not cloud.

5. Discussion, summary and conclusion

We have presented observations from three ground based, passive systems, that are intended to detect clouds and aerosols in the atmosphere. Indeed the three systems share one characteristic, which is that they are sensitive to the solar radiation flux once it has been modified (affected) by the presence of suspended particles in the air (of course, solar radiation flux is also affected by atmospheric gases). Thus, sky cameras "map" radiation coming from the whole sky dome and record this radiation in three color channels (red,

In these latter studies, a spatial approach was considered, i.e., they accounted for the extension of this zone in a snapshot of the sky. Our study, however, combines this approach (for sky camera images and partially for broadband hemispheric solar radiation measurements) with a temporal approach, that is accounting how often the atmosphere presents a state that cannot be distinctly categorized as cloud or as aerosol (for broadband hemispheric radiation measurements and also for MFRSR, Sun pointing, measurements). Therefore, our numbers correspond mainly to temporal frequencies, are limited to two particular sites, and are quite conservative, but if we discard the overcast conditions, the relative frequency of the transition cases increases to more than 15% of the remaining cases (this number is estimated by dividing the above overall value of 10% by the frequency of non-overcast cases, which is about 70% at the two involved sites).

Our results support the argument that clouds and aerosol are two extreme manifestations of the same physical phenomenon, which is a suspension of particles in the atmosphere. This
v3-fos-license
2023-07-11T18:15:40.653Z
2023-10-01T00:00:00.000
253162746
{ "extfieldsofstudy": [], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://ijece.iaescore.com/index.php/IJECE/article/download/30100/16875", "pdf_hash": "ebe3ee610bcd32bf2ce40dbc0871ee0b883381da", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43653", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "11b1a6447652327722a8dbbadd6bdc3e8bffa019", "year": 2023 }
pes2o/s2orc
IPv6 flood attack detection based on epsilon greedy optimized Q learning in single board computer ABSTRACT INTRODUCTION Network intrusions are a part of global network connectivity that occur everywhere and may cause problems to all connected devices.From year to year, network intrusions are still growing on many sides such as attack patterns and protocols.Before internet protocol version 6 (IPv6) was introduced, internet protocol version 4 (IPv4) was often used as a medium to attack a network.But in a recent article, many hackers started targeting IPv6-connected devices with flooding attacks [1].Similar to the IPv4 protocol, the IPv6 network protocol also has weaknesses in terms of security.This protocol could be used for flooding attacks with different levels of risk [2], [3]. There are many types of intrusion that exist, one of them is known as denial of service.This kind of intrusion floods spams data in massive numbers to shut down internet of things devices such as IP Cameras or other types of monitoring devices [4].Since those devices have lower processing capability, it was easier for intrusions from outside to knock down the devices [5]- [7].Another factor that increases the risk of the device is the unattended mechanism that allows devices to work without human interference [8], [9].Thus, the security enhancement for Internet of Things devices became a trend and a challenge for future research [10].Many articles of research focus on IPv6 mitigation on the internet of things (IoT).The development of IPv6 intrusion detection started from signatures-based until machine learning-based detection.Signaturebased intrusion detection is easier to implement on the internet of things since high processing power is not required.The devices only need to read the rules and compare the characteristic of the data to determine the detection.This kind of detection has high accuracy since the model only needs to check the existing signature with the data [11], [12].However, there is a problem detecting a complex attack.The signature-based detection is not well-suited for complex detection, so machine learning is invented to overcome the problem [13], [14].Machine learning-based detection usually uses a feature classification to detect intrusion into the network [15]- [17].For example, IPv6 intrusion detection for router advertisement flooding has an accuracy of up to 98.55%.This model can detect IPv6-based intrusion effectively with a machine learning algorithm [18].Even though machine learning-based detection has high detection accuracy, the implementation on the Internet of Things device was not feasible due to its limitations [19].Besides that, the supervised learning method in machine learning only guesses the correct answer based on the trained model.Hence to improve its accuracy, the researchers must be involved.Incapability to improve its accuracy without human interference is the main weakness of the supervised learning method in the previous articles of the research [20]. 
Because of this reason, this study tries a different approach to developing intrusion detection with epsilon greedy optimized Q learning.Unlike the supervised learning method, this method not only guesses the correct answer but also improves itself in the shape of reward feedback [20].Q learning itself is a reinforcement learning method that makes an internet of things device an agent to learn the characteristics of the intrusions in the IPv6 network.Hence, the device can determine whether the data is an intrusion or neutral.This paper stated the contributions of the study with the following statements.We proposed a reinforcement learningbased flooding attack detection model based on the IPv6 package pattern.Unlike the current state-of-the-art model that cannot improve its accuracy, the proposed model use epsilon greedy optimized Q learning-based as the self-improving detection model. THE PROPOSED METHOD 2.1. Data gathering In this section, we explain the proposed method used to solve the problem that exists in the previous studies.The proposed method consists of several parts such as the data sample for training and testing, the algorithm, the environment of the agent, and the agent itself.To build an agent that is capable to detect IPv6-based attacks, this study gathered several intrusion characteristics by capturing live.The setup topology for data gathering is illustrated in the Figure 1. Figure 1.Simple IPv6 network topology to capture data Figure 1 is the process for gathering the required data.To obtain both neutral and intrusion data, this study used two computers that connected via a wireless network to simulate the flooding attacks.One computer is given a role as an attacker and equipped with The Hacker's Choices tools (THC-IPv6) to flood the victim with IPv6-based data.The used tools in this experiment consist of denial6, flood_unreach6, thcsyn6, nping, and fping [21].Meanwhile, the target computer is an internet of things processing board called Raspberry Pi that is equipped with Wireshark to collect flood data.This study gathered two types of data that consist of neutral and intrusion data.Each type also consists of two different protocols such as transmission control protocol (TCP) and internet control message protocol version 6 (ICMPv6).The tools used to gather TCP-based data such as thc-syn6 (as intrusion sample) and nping (as neutral sample).Meanwhile, denial6 (as intrusion sample) and fping (as neutral sample) are used to gather ICMPv6-based data. 
Table 1 explains the used flooding tools from the THC-IPv6 toolkit in the experiment.The attacks consist of two protocols, ICMPv6 and TCP where each protocol has five different toolsets used as the dataset generator.To generate the ICMPv6 dataset, we use fping to generate normal ICMPv6.Meanwhile, we use denial6 from the THC-IPv6 toolkit with two different packet generation switches (one for hop-by-hop, and one for large unknown option); flood-unreach6 for flooding the target with unreachable packets; rsmurf6 to smurf the target (as part of distributed denial-of-service act).For the TCP data, we use nping to generate normal TCP packets, thc-syn6 with four different switches to attack the target.The first switch for thcsyn6 will generate TCP-synchronize (TCP-SYN) packets, the second switch generates TCP-acknowledge (TCP-ACK) packets, the third switch generates TCP-SYN-ACK packets, and the last one generates hop-by-hop router alert TCP packets.However, the obtained data from the data-gathering phase contains many unneeded fields.Since not all fields are required, this study pre-processes the raw dataset into a finer dataset that contains required fields.In some cases, there is value similarity inside a dataset, this study decided to choose at least one field to be uniquely allowed in the dataset.Table 2 contains the used fields in the dataset: According to Table 2, the data fields were taken from the header and the payload of the data.In the header part, there are, source address; destination address; protocol; length; and payload length are used as unique fields.The header fields between TCP and ICMPv6 are the same since both protocols already have unique values.The ICMPv6 payload part consists of type and data.Mean-while TCP payload part consists of window size and flags.The last field labelled detection contains a manually assigned Boolean value to indicate whether the data is an intrusion or not. After completing the labelling process, this study continues the next portioning data process.This study decides on a fair 50:50 portion for both intrusion and neutral packets (not based on the tools used).Making this fair portioning will help prevent the agent to turn one side only.Besides data row size, this study also portioned the data rows according to the protocol and its data type.Table 3 explains the portioning of the data rows according to the protocol and its data type.The ICMPv6 protocol has 1,000 rows of fping data, 250 rows of denial6 test case1 data, 250 rows of denial 6 test case 2 data, 250 rows of flood_unreach6 data, and 250 rows of rsmurf6 data.Hence the ICMPv6 has 1,000 neutral and 1,000 intrusion data.Meanwhile, the TCP protocol has 1,000 rows of nping data, 250 rows of thcsyn6 without option data, 250 rows of thcsyn6 ACK data, 250 rows of thc-syn6 SYN-ACK data, and 250 rows of thcsyn6 hop-by-hop data.Similar to the ICMPv6 protocol, TCP has 1,000 neutral and 1,000 intrusion data.Adding various intrusion types to the dataset will increase the agent's knowledge about the intrusions.All dataset that contains neutral and intrusion data is stored inside a CSV file for easier access.Before the training process starts, the agent load and split the dataset into 70:30 stratified training and test data.The stratifying process during data split has the purpose to balance the number of rows in each data field.At the end of dataset pre-processing, this study obtained 2,800 rows of training data and 1,200 rows of test data randomized. 
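The 70:30 stratified split described above can be illustrated with a short Python sketch. scikit-learn's train_test_split is used here only as an assumed tool; the CSV file name and the use of the 'detection' label column from Table 2 are illustrative, not taken from the authors' code.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the labelled packet dataset (field names follow Table 2; file name assumed).
dataset = pd.read_csv("ipv6_packets.csv")

# 70:30 stratified split so both classes stay balanced in each part.
train_df, test_df = train_test_split(
    dataset,
    test_size=0.30,
    stratify=dataset["detection"],   # Boolean intrusion label
    random_state=42,                 # assumed seed, only for reproducibility
    shuffle=True,
)
print(len(train_df), len(test_df))   # e.g. 2,800 and 1,200 rows for a 4,000-row dataset
```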
Environment design After the data pre-processing phase is complete, this study designed an environment for the agent to learn the data. However, the environment used for intrusion detection is different from publicly available environments. The problem lies in the numbering system that the environment uses. The publicly available environments use rational numbers as their states. Thus, the dataset is not compatible with the current environment. To solve this problem, this study changed the numbering system in the environment to the whole number system. Besides changing the number system, this study also used a number conversion based on a truncated decimal representation of the SHA-1 checksum hash to change any value inside the dataset into a unique number. Figure 2 illustrates the process of the number conversion of the dataset. Figure 2 contains a method to convert any data type into unique numbers, starting by encoding each data item into a UTF-8 string. The next step is to get the hash of the string with the SHA-1 algorithm and turn it into hexadecimal through a digest. The decimal value can be obtained by decimal conversion of the hexadecimal hash. However, the result of the conversion is too long for the agent to store. Hence, the result from the previous process is truncated to ten digits. This number is unique and useful to distinguish between intrusion A and B. By using this method, the environment will accept the truncated decimal data.

The next process is to configure the reward mechanism in the environment. The reward is a feedback mechanism that reinforcement learning uses to optimize the agent's decision mechanism. The calculation of the reward inside the environment uses IF-based rules by matching the detection indicator inside the dataset with the action taken by the agent. From this point, the environment can raise four different detection indicators. Table 4 contains the reward calculation and detection indicators. According to Table 4, the agent will receive positive rewards if the agent determines the correct action and value (true positive (TP) and true negative (TN)). In the study, the environment will return ten points if the agent determines correctly. Meanwhile, the agent will receive negative rewards if the agent determines the incorrect action or value (false positive (FP) and false negative (FN)). The environment will return minus five points and decrease the accumulated rewards. The accumulated rewards are usable as action decision factors and for performance evaluation at a later stage. The agent will determine the next action according to the accumulated rewards. Besides that, high reward accumulation means that the detection has good accuracy.
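To make the two mechanisms above concrete, the following is a minimal Python sketch of the state-encoding and reward steps as described: a field value is UTF-8 encoded, hashed with SHA-1, converted to decimal, and truncated to ten digits, and the reward is +10 for a correct decision (TP/TN) and -5 for an incorrect one (FP/FN). The function names and the exact truncation (keeping the leading ten digits) are illustrative assumptions, not the authors' code.

```python
import hashlib

def field_to_state(value) -> int:
    """Encode any field value as a whole number via a truncated decimal
    SHA-1 digest (the first ten decimal digits are kept in this sketch;
    the paper only states that the result is truncated to ten digits)."""
    digest_hex = hashlib.sha1(str(value).encode("utf-8")).hexdigest()
    return int(str(int(digest_hex, 16))[:10])

def reward(label_is_intrusion: bool, action_is_intrusion: bool) -> int:
    """IF-based reward rule from Table 4: +10 for TP/TN, -5 for FP/FN."""
    return 10 if label_is_intrusion == action_is_intrusion else -5

# Example: encode a source address and score one (incorrect) decision
state = field_to_state("fe80::1")
print(state, reward(label_is_intrusion=True, action_is_intrusion=False))
```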
Q learning agent The last step of the environment design phase is to build the agent and its interaction with the environment. The agent is the main system that determines whether a packet is an intrusion or not in this study. However, the agent needs to interact with the environment to determine the correct action for each data row in the dataset. The interaction between agent and environment will produce a state and a reward. The process of the interaction for agent training and testing is illustrated in Figure 3. According to Figure 3, the interaction starts with the agent loading the pre-processed training data and the environment. After the loading process is complete, the agent chooses the action between a random action and the Q table with the assistance of the epsilon greedy method [22], [23]. The formula used for epsilon greedy is shown in (1):

A_t = argmax_a Q_t(a) with probability 1 − ε, or a random action with probability ε, (1)

where A_t is the action taken by the agent, Q is the Q table, P(A_t) is the probability of the action taken, t is the time or step, and ε is the value of the probability. Since this method uses probability, the agent will receive an action either from the maximum policy in the table or a random action. Theoretically, forcing the agent to use the maximum policy inside the Q table more often can increase the accuracy. It means that the learning model needs to explore first and then exploit the result to achieve the best performance [24], [25]. At this point, the agent already has the dataset and the action. The agent inputs a data row and an action into the environment and lets the environment calculate the reward. The environment returns the reward and the state after the process is complete. The agent receives the state and the reward and evaluates the learning process with the Q learning algorithm, whose update rule is shown in (2):

Q(S_t, A_t) ← Q(S_t, A_t) + α [R_{t+1} + γ max_a Q(S_{t+1}, a) − Q(S_t, A_t)], (2)

This formula consists of several elements such as the reward (R), state (S) and action (A). Besides that, this formula also counts t as the time step or episode, α as the agent's learning rate, and γ as the reward discount rate. These variables are also known as hyperparameters that may affect the learning process if changed. This function is known as the Q-function or action-value function, with which the agent can determine the specific action to take when exploiting the Q table. Since this algorithm is an off-policy algorithm, the agent cannot decide between exploration and exploitation explicitly. The agent stores the evaluation result in the coordinates of the Q table, with the state on the X-axis and the action on the Y-axis. The agent will repeat these steps until the specified number of episodes in the training phase is reached. Meanwhile, during the testing phase, the agent only needs to do it once, with the Q table as the action source.
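The loop described above can be sketched in a few lines of Python. This is a minimal, illustrative implementation of tabular Q learning with epsilon-greedy action selection, not the authors' code; the state/action encoding, the hyperparameter values, and the dictionary-based Q table are assumptions made only for the example.

```python
import random
from collections import defaultdict

ACTIONS = [0, 1]                       # 0 = neutral, 1 = intrusion
q_table = defaultdict(lambda: [0.0, 0.0])
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # assumed hyperparameter values

def choose_action(state: int) -> int:
    """Epsilon-greedy, Eq. (1): exploit the best action with prob. 1 - epsilon."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)                        # explore
    return max(ACTIONS, key=lambda a: q_table[state][a])     # exploit

def update(state: int, action: int, reward: float, next_state: int) -> None:
    """Tabular Q-learning update, Eq. (2)."""
    best_next = max(q_table[next_state])
    q_table[state][action] += alpha * (
        reward + gamma * best_next - q_table[state][action]
    )
```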
METHOD This section explains the agent's performance evaluations, following four experiments with different scenarios. Each scenario has a different epsilon size to evaluate the performance of intrusion detection. With these scenarios, this study can understand the relationship between exploration and exploitation and the detection results. Table 5 explains the experiment's scenarios. According to Table 5, this study uses four different scenarios to evaluate the learning capability of the agent. Each scenario has a different epsilon configuration but the same training and test episodes. The epsilon configuration in the table starts from the best-policy action (0.1) up to pure random action (1.0). Since the range is quite wide, this study only chooses the most significant epsilon values (0.1, 0.5, 0.9, 1.0). In terms of training and testing epochs, this study executes the training for ten episodes starting from one. This experiment uses a shorter period since the dataset contains repeated data and is sufficient to train the agent. Meanwhile, the testing epoch is only one episode. This phase forces the agent to use the best action available in the table to test the accuracy of the detection. To obtain the accuracy of detection, this study uses a confusion matrix to populate the detection results. With the help of the confusion matrix, this study can calculate the accuracy of the detection. Thus, this study can better understand how the agent learns IPv6-based intrusions [26]. Hence, the equation for the accuracy is given in (3):

Accuracy = (TP + TN) / (TP + TN + FP + FN), (3)

where TP and TN are the two indicators showing that the agent chooses the correct action for the packet. The sum of these two indicators is divided by the sum of all indicators (TP, TN, FP, and FN). The result of the division is the accuracy of the detection. A higher value indicates that the agent has a good detection of the intrusions.

Besides the accuracy benchmark, this study also uses a reward graph to evaluate how well the agent performs. If the confusion matrix focuses on how good the detection is, then the reward graph shows how well the agent chooses the correct action for each data row. This type of evaluation is not feasible in supervised learning since the algorithm does not use an agent to do the learning process. As the control for the accuracy evaluation, this study compares machine learning-based intrusion detection with the agent. Using a reward graph as the evaluation aspect, this study can compare the agents' performance in picking the correct action for each data row. If the agent chooses the correct action, then the accumulated rewards will increase. But if the agent incorrectly chooses the action, the accumulated rewards will decrease. Also, if the agent has the maximum accumulated rewards, it means that it can correctly determine all the test data. The last aspect of the evaluation is the processing performance of the internet of things device. Since this study uses a Raspberry Pi as the target device, this study also needs to gather performance evidence during the whole process. By benchmarking the performance of the agent inside the Raspberry Pi device, this study can understand the impact of reinforcement learning on an IoT device. In this aspect, the research gathers the CPU and memory usage during the training and testing phases.
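As a small illustration of the two evaluation aspects described here, the snippet below computes the accuracy of Eq. (3) from the confusion-matrix counts and tracks the accumulated reward over one test pass. It is a hedged sketch: the agent/environment interface (best_action, step) and the counter bookkeeping are illustrative assumptions, not the authors' code.

```python
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Eq. (3): correct decisions over all decisions."""
    return (tp + tn) / (tp + tn + fp + fn)

def evaluate(agent, env, test_rows):
    """Run one test episode and return (accuracy, accumulated reward)."""
    counts = {"tp": 0, "tn": 0, "fp": 0, "fn": 0}
    total_reward = 0
    for row in test_rows:
        action = agent.best_action(row.state)        # exploit the Q table only
        reward, label = env.step(row, action)        # assumed interface
        total_reward += reward
        key = ("t" if action == label else "f") + ("p" if action == 1 else "n")
        counts[key] += 1
    return accuracy(**counts), total_reward
```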
RESULTS AND DISCUSSION This section elucidates the result of the agent's evaluation.The results contained a detection summary, agent's accuracy and reward graphs, accuracy comparison with different algorithms, and performance benchmark.The first evaluation is the detection results during experiments, the data stored in a shape of a table with TP, TN, FP, and FN indicators.The second evaluation is the agent's performance with accuracy and reward graph.The third evaluation was the comparison result between other classification algorithms.The last one was the hardware performance benchmark for epsilon greedy optimized Q learning for low-end devices like single board computer (SBC). According to Table 6, agent 0.1 correctly determines the test data during the experiment.This agent did not have a value larger than 0 in false positive and false negative indicators.Unlike agent 0.1, the other agents did not have zero results in their results.This table showed that a higher epsilon value can lower the result in true indicators.Hence, the accuracy of the detections should be lower.To prove this statement, this study calculated Table 6 results into accuracy.Then, compare the accuracy and the rewards side by side for each agent with different epsilon.Figure 4 shows the comparison result between each agent.According to Figure 4, the agent with epsilon 0.1 has the highest accuracy and rewards.With average detection accuracy up to 98% and average rewards of 11,500, this agent outperformed the rest of the agents.Meanwhile, the result of each agent was: the agent epsilon 0.5 in the second place with the accuracy reached 83% and accumulated reward up to 8,850, in the third place is the agent epsilon 0.9 with accuracy reached 68% and reward of 6,262, and the last place is the agent without learning reached accuracy up to 50% and reward 2,974. The next step for the evaluation is to compare with the control model from another machine learning algorithm.Using the published article as the main reference for comparison, this study put the result of the comparison side by side.The cited references were using a similar tool to generate the intrusions, but different in the terms of detection model.Figure 5 showed the comparison between this agent's accuracy with the model from the article [27].Based on the result of Figure 5, this study compared several algorithms like support vector machine (SVM), naive Bayes (NB), decision tree (DT), k-nearest neighbor (KNN), neural network (NN), and epsilon greedy optimized Q learning (EG-QL).Compared to other machine learning models, the proposed Q learning agent has the highest accuracy of 98%.This means that the proposed model has the best performance compared to other models.Followed by NN with 81.57%, KNN with 81.57%, DT with 80.79%, naïve Bayes with 80.54%, and SVM up to 78.78% The last aspect of the evaluation is the performance benchmark for the Raspberry Pi device.In this part, this study split the performance benchmark into two parts: CPU and memory usage.The CPU usage result of the agents are illustrated in Figure 6. 
Figure 6.Performance benchmark on an SBC According to Figure 6, all agents utilized more than 99% to process the data in the training and test phases.The process of the agent is in a single processor, so there are three more processors available for the operating system to use.If calculated roughly, the agent only used 25% of all processors available in the Raspberry Pi.Hence, the process itself will not disturb the whole system.According to the result, most agents have similar memory usage except agent 1.0.The dataset inside the agent caused the high memory usage in each agent.Besides that, the agent also stored the learning policy (Q Table ) inside the agent.Thus, storing the learning policy also increased the memory usage in every agent (Agent 0.1, 0.5, and 0.9). The last part of this section discusses the result of the agents' evaluation and comparison.The discussion covers the accuracy of the detection agents, the impact of the dataset on training and test processes, and the performance benchmark in the Raspberry Pi device.In the detection accuracy evaluation phase, this study compared Q learning agents with each other and the previously available models.The first comparison found that the best agent has the highest accuracy compared to other agents.In this case, the agent with epsilon configured to 0.1 has the best accuracy up to 98%.The agent can reach the top accuracy because the agent used the best policy more often than random action space.Using the best policy as the main source of action can give the agent more proper choice than depending on the randomization.Thus, the agent can reach maximum accuracy faster than other agents.Reward evaluation determines how well the agent detects the intrusion.Similar to the accuracy test, a higher reward is always preferable to others.In this case, the agent with epsilon 0.1 has the highest reward with 11,550.Followed by agent 0.5 with 8,850 rewards, agent 0.9 with 6,262 rewards, and agent 1.0 with 2,974 rewards.Agent 1.0 in the evaluation phase has the lowest rewards since the agent relies on randomness to detect the intrusions. The second comparison was the top agent with other machine learning algorithms.According to the second comparison's result, the epsilon greedy optimized Q learning agent has the highest accuracy.Then, followed by NN, KNN, DT, naive Bayes, and SVM.There are several reasons why the agent has the highest accuracy compared to other models.One of the reasons is also related to the dataset used in the training and test phases.The dataset used to teach the agents consists of two components: neutral and intrusion data. 
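Returning to the performance benchmark discussed at the start of this section, resource figures of this kind can be collected on the Raspberry Pi with a small sampling helper. The sketch below uses the psutil library, which is an assumption: the paper does not state which tool was used to record CPU and memory usage.

```python
import os
import time
import psutil

def sample_usage(interval: float = 1.0, samples: int = 10) -> None:
    """Print per-process CPU (%) and memory (%) usage at fixed intervals."""
    proc = psutil.Process(os.getpid())
    for _ in range(samples):
        cpu = proc.cpu_percent(interval=interval)   # 100% corresponds to one full core
        mem = proc.memory_percent()                 # share of total RAM
        print(f"cpu={cpu:.1f}%  mem={mem:.2f}%")

if __name__ == "__main__":
    sample_usage()
```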
No matter what type of attack is inside the dataset, the number is the most important factor. A balanced number can prevent the agent from siding with the heavier class after the training process. To do that, this study used a stratified data split process to make sure the data is balanced. The next factor is the test data used in the testing process. Since the agent learned everything in the training process, the agent already has the best policy for each test data row. However, if new unknown data is added to the test data, the accuracy could decrease. The last discussion is the performance benchmark on the Raspberry Pi device. The purpose of the evaluation is to test the agent's feasibility on an embedded device. According to the performance benchmark, the agents used more than 99% of a single processor to run the whole process. From a single-processor point of view, this is bad practice and it is not feasible to implement the agent in a real situation. If the agent is installed on a device with multiple processors, the system still has three more processors available. The last performance benchmark is memory usage. This aspect evaluated the memory usage of the agents during the whole process. The usage of each agent is affected by the dataset used. It means that the more data used in the agent, the more memory will be used. Agent 1.0 is an exception because the agent did not split the data into training and testing data. Thus, it did not increase memory usage. In terms of the feasibility of memory usage, all the agents can run normally inside the Raspberry Pi without hindering the operating system. It can be concluded that the proposed algorithm and its agent can correctly determine whether the packet data is an intrusion or not. Compared to control models from a previously published article, the proposed agent has the best accuracy among the models. Besides that, the agent has lower system requirements and is feasible on an internet of things device.

CONCLUSION Network security is a vital aspect of this modern era. Since many devices are connected to the internet, security protection is a serious concern. One technology that depends on network connectivity is IoT. The IoT device is connected to the internet and exposed to the invisible risk of attack. Besides that, the use of IPv6 as the communication protocol also poses an additional risk to the devices. To mitigate this problem, this study proposed an intrusion detection system using reinforcement learning. According to the evaluation results, the Q learning detection agent 0.1 outperformed the other agents in accuracy and rewards. With up to 98% accuracy and 11,550 rewards, agent 0.1 has the highest accuracy compared to the other agents. Compared to control models from the published article, the current agent is still in first place. The current agent has an accuracy of up to 98%, followed by NN with 81.57%, KNN at 81.57%, DT at 80.79%, NB at 80.54%, and SVM up to 78.78%. Besides accuracy, the agent is also evaluated with a performance benchmark to test its feasibility. According to the performance benchmark, the agent has the highest CPU usage with more than 99% and memory usage up to 9.96%. However, on multi-processor devices, this is not a big problem. Hence, the agent is feasible to be installed on Raspberry Pi devices.
Figure 2. Data to decimal number conversion method
Figure 3. Agent algorithm with epsilon greedy and Q learning
Figure 4. Accuracy and reward comparison between agents
Table 1. Attacks performed during data gathering
Table 2. The packet characteristics for learning target
Table 4. Reward calculation and detection indicators
Table 5. Experiment scenario setup for evaluation
Table 6. Q learning agent's average detection results
v3-fos-license
2021-07-25T06:17:03.547Z
2021-07-01T00:00:00.000
236210582
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1424-8220/21/14/4640/pdf", "pdf_hash": "f1b297ec2406d3e504093ad9bcf1f52e6c281101", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43654", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "sha1": "94c656af814a4046db202f381aeef73808cca58a", "year": 2021 }
pes2o/s2orc
Modeling and Imaging of Ultrasonic Array Inspection of Side Drilled Holes in Layered Anisotropic Media There has been an increase in the use of ultrasonic arrays for the detection of defects in composite structures used in the aerospace industry. The response of a defect embedded in such a medium is influenced by the inherent anisotropy of the bounding medium and the layering of the bounding medium and hence poses challenges for the interpretation of the full matrix capture (FMC) results. Modeling techniques can be used to understand and simulate the effect of the structure and the defect on the received signals. Existing modeling techniques, such as finite element methods (FEM), finite difference time domain (FDTD), and analytical solutions, are computationally inefficient or are singularly used for structures with complex geometries. In this paper, we develop a novel model based on the Gaussian-based recursive stiffness matrix approach to model the scattering from a side-drilled hole embedded in an anisotropic layered medium. The paper provides a novel method to calculate the transmission and reflection coefficients of plane waves traveling from a layered anisotropic medium into a semi-infinite anisotropic medium by combining the transfer matrix and stiffness matrix methods. The novelty of the paper is the developed model using Gaussian beams to simulate the scattering from a Side Drilled Hole (SDH) embedded in a multilayered composite laminate, which can be used in both immersion and contact setups. We describe a method to combine the scattering from defects with the model to simulate the response of a layered structure and to simulate the full matrix capture (FMC) signals that are received from an SDH embedded in a layered medium. The model-assisted correction total focusing method (MAC-TFM) imaging is used to image both the simulated and experimental results. The proposed method has been validated for both isotropic and anisotropic media by a qualitative and quantitative comparison with experimentally determined signals. The method proposed in this paper is modular, computationally inexpensive, and is in good agreement with experimentally determined signals, and it enables us to understand the effects of various parameters on the scattering of a defect embedded in a layered anisotropic medium. Introduction In recent years, there has been an increase in the usage of composites in aircraft structures, such as for the Boeing 787 and the Airbus A350 [1]. A variety of methods are used to test such structures, but ultrasonic inspection is the most widely used due to its sensitivity to defects present in composite structures, its ability to localize such defects, and its speed of detection [2,3]. There has been a rapid increase in the ultrasonic phased array testing of composite structures used in the aerospace industry [4]. The testing of such structures is complicated due to the inherent anisotropy of composites and the presence of multiple layers. The purpose of non-destructive inspection of structures is to detect and locate defects that are present in them, such as cracks, voids, delaminations, and disbonds [2,[5][6][7]. The output ultrasonic signal from defects in such structures is influenced by the anisotropy and layering present in them. Hence, computational models are required that take the anisotropy, layering, and the response of the defect into account. A variety of approaches to simulate the array signals from multilayered materials have been reported in the literature. 
These approaches include applying ray methods to a homogenized layered structure [8], using hybrid ray-finite difference time domain (FDTD) methods [9], multi-Gaussian beams [10,11], or using plane wave models to calculate the reflection or transmission of the waves in the bounding media [12][13][14][15]. These approaches are either computationally expensive when used for layered materials, singular when interacting with curved interfaces, or do not reflect the real-world situation of bounded beams. Anand et al. [16] used Gaussian beams due to their computational efficiency and non-singularity when interacting with curved interfaces, combined with the recursive stiffness matrix method, which enables the response of the layered structure to be taken into account, when modeling the bounded beam interaction between phased arrays and multilayered media. One of the preliminary assumptions of the model was that the multilayered laminate is bounded by a semi-infinite water layer, allowing only longitudinal waves to impinge onto the composite laminate and thereby reducing the number of unknowns [17]. To simulate the scattering response of defects embedded within the laminate, traditionally, finite element methods (FEM) [18] have been used, which are computationally expensive as the inspection frequency, number of elements used in the array, and number of layers in the laminate increase. For the simulation of defects of simple shapes such as side-drilled holes, an analytical model is computationally less expensive [19]. To the best of the authors' knowledge, the use of Gaussian beams to simulate such an interaction with defects embedded in a multilayered material does not exist. The model developed by Anand et al. [16] is based on the assumption of water bounding layers, which is invalid as the defect is surrounded by a homogeneous isotropic or anisotropic elastic medium. To address this limitation, in this paper, we provide a method to calculate the reflection and transmission coefficients for a multilayered laminate bounded by a semi-infinite anisotropic medium. For a layered structure such as a quasi-isotropic carbon fiber-reinforced plastic (CFRP) laminate, which has a repeated set of layers of different orientations, the lower bounded medium of the embedded defect can be modeled as an equivalent homogeneous anisotropic medium, as the dominating signal is the scattering from the defect and the reflections from the plies below it can be neglected [20]. Defects such as side-drilled holes (SDH) are commonly used as reference defects for ultrasonic phased array testing [21]. This is because SDH also gives rise to the various wave-defect interactions, such as scattering, creeping waves, change in wave mode, etc., which can be observed with commonly encountered defects in metallic and composite structures. In this paper, we develop a model to simulate the scattering from an SDH that is embedded in a layered CFRP laminate. The model simulates the received full matrix capture (FMC) signals from the scattering of an SDH, and the modified total focusing method (TFM) algorithm is used to image the defect from FMC signals generated from simulations and experimentally. The novelty of this paper is that it provides an analytical modeling technique to model and simulate the responses of defects that are embedded in layered anisotropic structures such as composite structures. The analytical model takes into account the various effects of anisotropy and layering on the received signal. 
The analytical model is computationally inexpensive. The paper also provides a model-assisted correction total focusing method imaging algorithm to image defects in anisotropic structures. The next section provides the background theory used for modeling the scattering from an SDH in a layered anisotropic medium. Background Theory The following sections give a brief description of the stiffness matrix and transfermatrix methods. An understanding of the transfer matrix method is required as it forms the basis for the matrix formulation of the transmission/reflection of plane waves from a layered medium from/into a generally anisotropic semi-infinite medium. It is then followed by the theory for Gaussian beam modeling of transducers and for calculating the equivalent homogeneous properties of a layered medium. This section ends with the theory of scattering from an SDH. Transfer and Stiffness Matrix Method for Multilayer Wave Propagation We consider a multilayered CFRP laminate as seen in Figure 1. The laminate is composed of N number of layers, which are homogeneous and anisotropic. The layers are of thickness h and are of infinite extent in the plane of the lamina (x − y). The laminate is bounded by an upper and lower bounding medium, m = 0 and m = N + 1, respectively. A plane wave strikes the top surface of the laminate at an incident angle of θ with respect to the z axis. The projection of the wave vector on the x-y plane is denoted by ϕ. For ease and simplicity, the local coordinates are denoted by the numbers 1, 2, and 3, respectively and hence the coordinates are x 1 , x 2 , and x 3 . Hence, the displacement of a plane wave in a layer is given by Equation (1): where k is the wavenumber vector consisting of k 1 , k 2 , and k 3 components, ω is the angular frequency, and t is the time. The wavenumber components k 1 and k 2 , which lie in the plane of the laminate, remain unchanged, due to Snell's law, whereas the wavenumber component k 3 undergoes a change. The wavenumber component k 3 can be calculated using the Christoffel equation [22] as shown below: where c ijkl is the stiffness tensor, ρ is the density of the material, δ il is the Kronecker delta, d l is the polarization vector component for different wave modes-quasi-longitudinal, quasi-shear horizontal, and quasi-shear vertical-and i, j, k, l consist of values 1, 2, 3 corresponding to the three axes x, y, z. k 3 can be obtained by solving the Christoffel equation shown in Equation (2) and will consist of two solutions for each propagating wave mode. One solution corresponds to the downward traveling wave in the layer and the other corresponds to the upward traveling wave, denoted by '+' and '−', respectively. The wave modes are represented by p, with values 1, 2, and 3 for quasi-shear horizontal, quasi-shear vertical, and quasi-longitudinal waves, respectively. Hence, the plane wave displacement in the layer m can be calculated as shown below [23] by considering the wave amplitudes and the wavenumber components: where a m,p+/− are the wave amplitudes of the downward and upward traveling waves of mode p in the layer m. The x 3 coordinates are the local coordinates of the layer m. The relationship between the stress and displacement in the layer is given by The next step is to calculate the transfer matrix for the layer m. 
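The bodies of Equations (1) and (2) are not reproduced in the extracted text above. For reference, the standard forms they correspond to, written with the symbols defined there (A and d_i are the wave amplitude and polarization introduced alongside Equations (2) and (3)), are given below. These are the textbook harmonic plane-wave ansatz and the Christoffel condition, offered as an assumed reconstruction rather than a verbatim copy of the paper's equations.

```latex
% Plane-wave displacement in a layer, Eq. (1) (standard form, assumed)
u_i = A\, d_i \, \exp\!\left[\mathrm{i}\left(k_1 x_1 + k_2 x_2 + k_3 x_3 - \omega t\right)\right]

% Christoffel equation used to solve for k_3, Eq. (2) (standard form, assumed)
\left( c_{ijkl}\, k_j k_k - \rho\, \omega^2 \delta_{il} \right) d_l = 0
```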
Substituting Equation (3) into Equation (4) and rearranging the displacement and stress at the top surface of the layer m gives Equation (5): and at the bottom surface of layer m by Equation (6) where u m and σ m are the displacement and stress matrices for layer m, A m± are the amplitudes of the upward traveling waves in layer m, h m is the thickness of the layer m, F is the matrix consisting of force vectors f ± of the three propagating modes of the wave in Equation (7), D is a matrix consisting of the polarization vectors as shown in Equation (8), and H is a diagonal matrix in which the propagators are distributed along the diagonal with the other elements of matrix being zero, as shown in Equation (9). In Equations (7)-(9), subscripts 1, 2, and 3 correspond to the different wave modes. In the transfer matrix method, we rearrange to obtain where the stress and displacement on the top of layer m are related to the stress and displacement at the bottom of layer m by a transfer matrix B m . In order to define the transfer matrix of the entire structure, continuity of stress and displacement constraints are applied at each layer interface. Hence, the transfer matrix B for the entire structure can be determined: Similarly, in the stiffness matrix method, we obtain an equation where the stresses on the top and bottom of layer m are related to the displacement at the top and bottom of layer m by a stiffness matrix S m . where Similar to the method used to determine B, the stiffness matrix S relating the stress in the upper semi-infinite bounding layer and the lower semi-infinite bounding layer to the respective displacements can also be determined by applying continuity of stress and displacement constraints: The next section shows the theoretical fundamentals of multi-Gaussian beams. Modeling of the Transducer Gaussian Beams Multi-Gaussian beams [24] can be used to model the radiation from phased array transducers by superimposition of Gaussian beams with different Wen and Breazeale coefficients. Hence, the velocity at a distance x 1 is calculated as shown below: where X represents the coordinates between the e th transmitting element and the receiving element, c is the wave velocity, x 3 is the distance traveled along the z axis in Figure 1, d is the polarization vector, and o and q have values ranging from 1 to 10, which correspond to the ten Wen and Breazeale coefficients. In the above equations, k is the wavenumber in the direction of propagation of the wave, and a 1 and a 2 are the width and length of the rectangular transducer, respectively. A o , A q , B o , B q are the Wen and Breazzle coefficients [25]. Hence, at the face of the transducer, where x 3 = 0, the velocity distribution is given below: The velocity distribution in the wavenumber-frequency domain can be calculated as given below: The velocity distribution obtained in the wavenumber-frequency domain will be used in Section 3.2 to calculate the received signal from the scattering from an SDH. Equivalent Homogeneous Anisotropic Properties of a Thick Laminate When a layered composite laminate such as CFRP with repeated layers is tested at lower frequencies, i.e., longer wavelengths, where the thickness of the plies is less than the wavelength of the wave, the reflections from the ply interfaces are negligible and have no effect on the propagation of the wave [26]. 
In such a scenario, the laminate can be considered to have equivalent homogenous properties, which can be used for calculating the group velocity of the laminate and for imaging purposes, etc. Many methods have been investigated to calculate the equivalent homogeneous properties. For this paper, the method described by Sun and Li [20] is chosen as it gives explicit relations to find the homogeneous anisotropic properties. Classical laminate theory is used for characterizing thin laminates [27]. For thick laminates, higher-order plate theories are used, which are more mathematically complex [28]. In thick laminates with periodic stacking layers, where the characteristic length of deformation of the laminate is larger than the periodicity, the non-homogeneous properties over each typical cell can be replaced by effective properties [29]. Thus, each cell of a laminate can be represented as a homogeneous anisotropic solid. Sun and Li considered a thick laminate consisting of repeated sub-laminates, where the thickness of the sub-laminates was small compared to the thickness of the entire laminate. The sub-laminate was then evaluated using constant stress and strain assumptions, and the effective homogeneous properties of the entire laminate were calculated. The SDH is assumed to be embedded in an anisotropic homogeneous medium as the specular reflection from the defect after the wave has traveled through the upper layered medium needs to be calculated and hence the embedding medium is considered homogeneous. The explicit expressions to calculate the effective homogeneous properties are given below, where C is the stiffness tensor in Voigt notation [20]: where h m is the thickness of the ply and These effective homogenized anisotropic elastic constants are then used in Section 3.1 for calculating the transmission coefficient from a layered medium into homogenized anisotropic media in which the SDH is embedded. These effective elastic constants are also used in Sections 3.2 and 3.3 to calculate the group velocity, which is used to calculate the scattering of the SDH and also used to calculate the angle-dependent velocity for the TFM algorithm. The next section describes the method to calculate the scattering from a side-drilled hole. Scattering Coefficient of a SDH Side-drilled holes are the reference reflectors, which are used in ultrasonic nondestructive testing [30]. As SDHs have a simple geometry, the exact scattering from the SDH can be calculated using the method of separation of variables [31]. The Kirchoff approximation could also be used to describe the scattering from an SDH, but it is a far-field and highfrequency approximation, where the size of the SDH is much larger than the wavelength of the inspecting wave. Many defects of importance, such as voids, porosity, etc., are smaller than the incident wavelength and hence Kirchoff scattering is not a good choice in such cases. Hence, for this study, the scattering coefficient is evaluated using the method of separation of variables, which is given in the below equations, where A scatt (ω) is the dimensionless scattering coefficient obtained by solving the scattering integral using the method of separation of variables, which is possible as the scatterer has a simple geometrical shape. 
where H is the Hankel function and i = 0, 1 corresponds to the order of the Hankel function, L is the length of the SDH, θ is the angle between the angle of incidence and angle of scattering, b is the radius of the SDH, δ is the Kronecker delta, and e r is the unit vector of the receiving transducer. For the pulse echo response of an SDH embedded in anisotropic materials, Huang suggested that the scattering of the SDH is the same as that of an SDH embedded in an isotropic medium for a particular angle of incidence [32]. Hence, for a particular angle of incidence, we consider the equivalent homogeneous anisotropic medium as isotropic and calculate the properties at this particular angle of propagation. The calculated scattering coefficient will be used in Section 3.2 to calculate the received signal from the SDH. Development of a Model to Facilitate Scattering of SDH in a Layered Anisotropic Medium This section provides the steps required to develop a model to simulate the scattering from an SDH that is embedded in a layered anisotropic medium and to post-process the FMC signals using model-assisted corrected TFM. Reflection and Transmission Coefficients of Layered Structure Bounded by Anisotropic Media In this section, we derive the equations for the reflection and transmission coefficients for a layered medium bounded by semi-infinite anisotropic media. The reflection and transmission coefficients are derived by combining the transfer matrix method and the stiffness matrix method. Consider the upper semi-infinite layer 0 as shown in Figure 1, where A reflected is the amplitude of the wave reflected from the layered structure, and A incident is the amplitude of the downward moving incident wave. Then, from Equation (5), we obtain H can be removed from the above equation as it controls the decay of the wave of complex wavenumbers in a finite thickness of the material and, as we are interested in only the semi-infinite layer, there is no decay due to this term. After matrix manipulation of Equation (41), we obtain the below equation: At the N th layer, where m = N before the lower semi-infinite anisotropic medium, we have the following equation, as shown in Equation (6): There is no reflected upward traveling wave in the lower semi-infinite medium due to no reflection boundary being present, so Equation (44) can then be written as We also know the stiffness matrix formulation given in Equation (15) as We can rewrite and solve the above equations in terms of the incident, transmitted, and reflected amplitudes and the stiffness matrix as shown below: For simplicity, the amplitude of the incident wave is taken as unity and the above equation can then be solved to calculate the amplitude of the reflected and transmitted wave, which are the reflection and transmission coefficients, respectively, of the upper and lower bounding layers. Calculation of the Scattering from SDH Embedded in the Medium We use the bounded beam approach to calculate the received signal from an SDH as the SDH scattering has been calculated in the frequency-space domain and not in the frequency-wavenumber domain. In the bounded beam approach, the signal from the transmitting element to the scatterer, the signal received by the receiving element, and the scattering response of the SDH in the frequency-space domain are multiplied as shown below [33] in the frequency domain to produce the output signal, which is dimensionless. 
T is the transmission coefficient of the plane waves traveling from layered media into homogeneous equivalent anisotropic media, β is the system function [16,21], v t is the acoustic wavefield at the face of the transmitting transducer, and v r is the acoustic wavefield at the face of the receiving transducer calculated from the previous section. A is the scattering magnitude of the SDH calculated using Equation (37). The received time domain signal from the scatterer is then calculated using Equation (55) gives the received FMC signal for scattering from a defect embedded in a medium. Equation (55) is used to generate the FMC data, which are used by the imaging algorithm to image the defect and the scattering from the defect. The next section gives an explanation of the total focusing method. TFM Imaging The total focusing method is considered the gold standard of imaging algorithms [34,35]. It is a delay and sum algorithm that uses the entire full matrix capture data. The TFM algorithm generates an image by synthetically focusing on every pixel in the image domain, as given in the below equation: where I is the intensity of the image at the point x,z, c is the velocity of the wave in the medium, and V t,r is the received signal for a transmitter receiver pair. For anisotropic media, the velocity c is calculated using the Christoffel equation [17], which varies as per the angle of propagation. Hence, the varying group velocity in an anisotropic material is taken into consideration, which differentiates the model-assisted corrected TFM from the isotropic TFM. Quantitative Comparison of the Images The TFM image formed using experimental data is different from the one formed using the simulated data as the experimental TFM image additionally contains the scattering from the SDH and interaction of the signals with the layers below the SDH and backwall. In defect detection, the signal from the defect is important; hence, in order to compare the experimental and simulated images, and also for comparison between different simulated images, the SNR of the defect should be considered. In this context, the SNR is defined as the ratio of the peak amplitude of the scatterer to the noise in the image around the scatterer. The reverberations from the layers, and the signals from the laminated structure, are considered noise as they affect the scattering amplitude of the scatterer. In this case, the SNR is given by Equation (57): The SNR for the simulated image can be calculated in the following steps: 1. Simulate the response from the embedded scatterer and calculate the peak amplitude of the scatterer. 2. Simulate the response of the laminate without the scatterer and calculate the root mean square of the amplitudes of the signal in a chosen region around the scatterer, which is the "noise" of the image. 3. Use Equation (57) to calculate the SNR of the SDH. The same procedure is carried out for the experimental TFM image, wherein the laminate FMC signals are processed before and after the SDH has been drilled into the laminate. The next section presents the results for each step and the final image, which was simulated using TFM. Simulation and Results This section contains the simulation and experimental results for both homogeneous isotropic and anisotropic multilayered materials, followed by the Discussion section. The hardware used to acquire the experimental signals was the FI Toolbox from Diagnostic Sonar. The phased array transducers were from Olympus. 
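Referring back to the TFM and SNR definitions above (Equations (56) and (57)), a minimal delay-and-sum implementation can be sketched as follows. This is a generic, constant-velocity sketch in Python rather than the authors' MATLAB code: the angle-dependent group velocity of the model-assisted corrected TFM is omitted, the array geometry and sampling are assumed, and the dB conversion in the SNR helper (20 log10 of the amplitude ratio) is also an assumption.

```python
import numpy as np

def tfm_image(fmc, t, elem_x, xs, zs, c):
    """Delay-and-sum TFM (Eq. (56)) for a linear array on the surface z = 0.

    fmc    : array (n_el, n_el, n_t) of time-domain FMC signals
    t      : array (n_t,) of sample times [s]
    elem_x : array (n_el,) of element x-positions [m]
    xs, zs : 1-D arrays defining the image grid [m]
    c      : assumed constant wave speed [m/s] (isotropic simplification)
    """
    n_el = elem_x.size
    image = np.zeros((zs.size, xs.size))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            d = np.hypot(elem_x - x, z)           # element-to-pixel distances
            tof = d[:, None] + d[None, :]         # round-trip path per tx/rx pair
            idx = np.clip(np.searchsorted(t, tof / c), 0, t.size - 1)
            # magnitude of the summed signals; an envelope (Hilbert transform)
            # is often used instead of the raw traces
            image[iz, ix] = np.abs(
                fmc[np.arange(n_el)[:, None], np.arange(n_el)[None, :], idx].sum()
            )
    return image

def snr_db(peak_amplitude, noise_region):
    """Eq. (57): peak of the scatterer over the RMS noise around it, in dB."""
    return 20.0 * np.log10(peak_amplitude / np.sqrt(np.mean(noise_region ** 2)))
```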
The FI toolbox was the acquisition module, which acquired the signals and the signals were post-processed in MATLAB®. All computational work encompassing the modeling and implementation of the imaging algorithm was carried out using MATLAB 2017®. The specifications of the transducers are shown in Table 1. For simulation and experimental purposes, we considered an 80 mm aluminum block (Olympus EP1000-PABLOCK-1) as shown in Figure 2a. For simplicity, only 1 SDH of diameter 1.5 mm at a depth of 28 mm was considered for simulation and experimental validation purposes. A CFRP laminate that was quasi-isotropic and 19 mm thick with (0/45/-45/90) layup was considered for simulation and experimental purposes and is shown in Figure 2b. There were 169 layers of UniDirectional CFRP prepreg of 110 µm thickness in the laminate, with a layer of epoxy resin with an approximate thickness of 5 µm between them. The laminate was manufactured from Toray TC380 unidirectional prepreg in an epoxy resin system. Manufacturing was carried out using autoclave curing. The SDH in the CFRP laminate was at a depth of 12 mm from the surface of the laminate and was manufactured by drilling. As the size of the SDH was relatively small and the SDH had a length of 20 mm, it was assumed that the delamination caused by drilling was minimal. For the purpose of the simulation, the layers containing the SDH and those below it were homogenized. The aluminum and CFRP lamina properties were the same, as shown in Table 2. Calculation of Equivalent Homogeneous Properties By substituting the lamina properties into Equations (21)- (35), we obtain the equivalent homogeneous anisotropic properties as shown Table 3. The properties in Table 3 were then used to calculate the transmission coefficient of the plane waves into the semi-infinite anisotropic medium and were also used to calculate the group velocity in the medium. In the next section, the simulation and experimental results for an SDH embedded in an isotropic medium are presented. SDH Embedded in Aluminum Inspected by a 2.25 and 5 MHz Array In this section, we present the simulation and experimental results of the scattering from an SDH embedded in an isotropic medium. Simulation and experimental FMC signals were generated for an isotropic material so as to prove the validity of the developed model for both isotropic and anisotropic materials. The images of an SDH embedded in an isotropic material were also generated to allow visual comparison of the differences between scattering in isotropic and anisotropic embedding media. Figure 3a,b show the nondimensional scattering magnitude of the 1.5 mm diameter SDH embedded in aluminum when inspected by waves with a center frequency of 2.25 and 5 MHz, respectively. It was observed that as the frequency increased, the magnitude of the scattering amplitude also increased. It was also observed that as the wavelength of the inspecting wave increased as compared to the size of the SDH, the scattering became less directional. In Figure 4a,b, we present the simulated and experimentally obtained TFM image of SDH embedded in aluminum. The aluminum block shown in Figure 2a was used to obtain the FMC signals experimentally. Figure 4a shows the TFM image generated from the FMC signals obtained from the simulation, whereas Figure 4b shows the TFM image generated for the FMC signals obtained experimentally. 
Figure 5a,b show the scattering of an SDH of diameter 1.5 mm embedded in aluminum at a depth of 28 mm inspected by an ultrasonic array with a frequency of 5 MHz. Figure 5a shows the TFM image generated from the FMC signals obtained from the simulation, whereas Figure 5b shows the TFM image generated for the FMC signals obtained experimentally. It can be seen from Figure 4a that the location and size of the SDH were accurate when the frequency of 2.25 MHz was used. The simulation results agreed qualitatively with the experimental results. When the SDH was inspected by the 5 MHz array, as shown in Figure 5, the SDH seemed to be spread over a large area. In the case of the 5 MHz array, the simulated and the experimental images were in good agreement. To enable a quantitative analysis between the simulation and experimental results, the SNR values of the SDH are provided in Table 4. The SNR values were calculated using the equation and procedure described in Section 3.4. It could be observed that the error between the SNR values was within the range of +/− 8 dB between the simulated and experimental values. In the next section, the simulation and experimental results for an SDH embedded in an equivalent anisotropic medium are presented. In Figure 6, it can be observed that as the angle of incidence increased, the scattering amplitude decreased and the directionality of the scattering was reduced. As observed in the case of the isotropic medium, as the frequency increased, the scattering magnitude increased. Figure 7 shows the TFM image generated from the simulated FMC signals for the laminate without an SDH and from the SDH embedded in an equivalent homogeneous anisotropic medium. Figure 7a shows the TFM image of the CFRP laminate without the SDH. The FMC signals were simulated using the model developed in a previous paper [16]. This image is the noise image as it shows the structural reverberations and internal reflections from the layers in the laminate, which contributed to the noise generated in the FMC signals. Figure 7b shows the TFM image of the scattering from the SDH generated from the FMC signals, which were simulated using Equation (55). In Figure 7a, the image of the CFRP laminate without the SDH, we can observe the internal reflections and reverberations from the layer interfaces. Figure 7b shows the scattering from the SDH embedded in a CFRP laminate. We can see that the SDH image is not exactly circular, it is spread across a diameter of 3 mm, and there is also lower-magnitude scattering around the SDH. To compare the images generated with the simulated FMC signals to the image generated using the experimentally obtained signals, we combined the signals obtained for Figure 7a,b to create Figure 8a. Figure 8a shows the image generated from the simulated FMC signals from the SDH and laminate and Figure 8b shows the image generated from the experimentally obtained FMC signals. It can be observed that Figure 8a is in quite good agreement with Figure 8b, with the noise seen in Figure 7b contributing to the noise in the composite image. Figure 8b shows more noise than Figure 8a and the source of the noise could be the manufacturing process, including the varying thickness of plies and epoxy after manufacture, which is difficult to account for in a simulation. In Figures 9 and 10, we present the results of the simulation carried out using Array 2 with a central frequency of 5 MHz. 
In Figure 9a, we can see the internal reflections and reverberations of the plies, which are more pronounced than those in Figure 7a. Figure 9b shows the SDH at a depth of 12 mm. As in the isotropic case, the SDH appears to be spread over a large area, with noise at the edges of the SDH. Figure 10a shows the image generated from the simulated FMC signals from the SDH and laminate, and Figure 10b shows the image generated from the experimentally obtained FMC signals. It can be observed that Figure 10a is in good agreement with Figure 10b, with the noise seen in Figure 10b contributing to the noise in the composite image. We can observe more noise and artifacts in Figure 10b, which could be due to manufacturing inconsistencies. To enable a quantitative comparison between the experimental and the simulation results, Table 5 shows the SNR of the simulation and experiment. An error in the range of 14 dB to 18 dB can be observed from Table 5. The error is higher in the case of the CFRP, for various reasons, such as the absence of the effect of layering below the SDH and the backwall reflections on the amplitude of the SDH signal. As the SDH is of a small diameter, the effects of the layers just below the SDH and the layers in which the SDH is embedded on the received amplitude are higher than in the simulations. It can also be seen that defects during manufacture also influence the signal from the SDH, which cannot be included beforehand in the simulation. Discussion Figures 4 and 5 show the comparison between the simulated and the experimental TFM images of SDH embedded in an aluminum block. It was observed that when the size of the SDH was larger than the wavelength of the inspecting wave, the SDH appeared to spread over a larger area, as in the case of the 5 MHz array. This is because the scattering at these higher frequencies was of a higher magnitude, and the decrease in the scattering magnitude with the scattered angle was less, as observed in Figure 3. A quantitative comparison of the SNR also led to the conclusion that the simulation images for defects in isotropic media agreed well with the experimental images. Next, Figures 8 and 10 present a comparison between the simulated and experimental TFM images of SDH embedded in a CFRP laminate. Here, the layer in which the SDH is embedded and the layers below it were modeled as a semi-infinite anisotropic region using the equivalent homogeneous anisotropic properties given in Table 3. As in the case of an isotropic embedding medium, the SDH at 2.25 MHz showed good agreement between the simulated and experimental image. More noise was visible around the SDH. This noise was due to the anisotropic velocity in different directions and also due to the creeping wave [36]. As the pitch between the elements was 1 mm and the array was a 64-element array, the angles of incidences were large, and the theoretical group velocities, as shown in Figure 11, along these angles were large, leading to the noise accompanying the scattering signal. The group velocity was calculated using the expression given in Equation (58), where u_p is the group velocity, c_p is the phase velocity, c_ijkl is the elastic constant, p is the polarization direction, and n is the unit vector in the direction of propagation of the wave:

u_pi = c_ijkl n_k p_l p_k / (ρ c_p) (58)

Figure 11. Group velocity of the longitudinal wave for different angles of propagation.

Figure 9 shows the scattering from an SDH when inspected with Array 2, which had a central frequency of 5 MHz.
Less noise was observed around the edges as compared to the TFM image using a 2.25 MHz array. This was due to the fact that, because of the smaller pitch and lower number of elements in the array, the maximum angle of propagation was confined to less than 40 • and hence the variation in the group velocity was not very large. In CFRP, the image of the SDH was elliptical due to the various effects of the diffraction of the layers from above, the inspecting wavelength as compared to the size of the SDH, and the anisotropic velocity. The simulation provided a good tool to determine which frequencies need to be used to inspect a certain material, SDH size, location, etc. We observed that the difference in the SNR values between the simulation and experimental images was larger for CFRP as compared to aluminum. One of the reasons for this is that the SDH was assumed to be embedded in a homogeneous medium and the layers beneath it were not taken into account in the simulation. The layers below the SDH will also contribute to the noise in the image and influence the magnitude of the SDH, hence reducing the SNR in the experimental TFM image. Conclusions This paper proposes a modeling technique based on the Gaussian beam and the recursive stiffness matrix method to simulate the scattering from an SDH embedded in a CFRP laminate. The simulation requires the integration of different modules to simulate the scattering of an SDH. A novel method is implemented to calculate the transmission and reflection coefficients from layered media into a semi-infinite anisotropic medium by combining the transfer matrix and recursive stiffness matrix approaches. The modeling technique takes into consideration the diffraction, anisotropic velocity, and inspection frequency effects while simulating the scattering from the SDH embedded in a layered medium. The simulation and the experimental results are in good agreement, which was observed qualitatively using TFM to image the FMC signals and also quantitatively by comparing the SNR values for both isotropic and anisotropic samples. To the best of the authors' knowledge, there are no analytical models that can be used both in immersion and contact setups based on multi-Gaussian beams and the stiffness matrix method to simulate the scattering from SDHs. Hence, this paper provides a model that can be used both in immersion and contact setups and is both computationally inexpensive and accurate. Future work would include a full quantitative comparison, with a well-defined sample that has been validated using CT, and the modeling and validation of different defects, such as porosity and delaminations in plane and curved composite structures.
Analysis of the Retrogradation Processes in Potato Starches Blended with Non-Starchy Polysaccharide Hydrocolloids by LF NMR

The molecular dynamics of pastes of two normal potato starches and one waxy potato starch, and of their binary mixtures with either arabic, guar or xanthan gum, were determined. Spin–lattice and spin–spin relaxation times were measured with a 1H NMR pulse spectrometer. Then, the mean correlation time describing the rotational mobility of the water molecules was calculated and analyzed. The measurements were taken after 2 h, and then after 1, 10, 30 and 90 days of storage at 5 °C. It was found that the susceptibility of potato starch pastes to retrogradation was controlled, first of all, by the content of amylose. Amylose favored retrogradation. In the initial period of storage, the length of the amylose chains played an essential role in promoting retrogradation. The non-starchy polysaccharide hydrocolloids applied for blending the potato starches influenced the molecular properties of water in the pastes, particularly during long-term retrogradation. Generally, these components retarded that process.

Retrogradation influences the texture, stability, quality, digestibility and functionality of starch pastes and starch-containing products [15]. Usually its results are considered negative [16], but sometimes, for instance in low-energy food production, retrogradation is considered beneficial as it delivers resistant starch (RS) [17][18][19]. Retrogradation proceeds in two steps, a short-term and a long-term one. In the short-term step, amylose crystallizes; that step begins within the first few hours after starch gelation and can last up to two days. The long-term retrogradation is associated with the recrystallization of the outer branches of amylopectin. Compared to the retrogradation of amylose, the latter is considerably slower [20,21]. Changes in retrogradation were also observed in chemically modified starches [17,22]. Recently, a number of papers have been published on blends of starches with various non-starchy polysaccharides, particularly with natural and synthetic gums. They have attracted attention as novel materials for food technology, for instance as texturizing agents, packaging foils and other biodegradable materials [20,[23][24][25][26][27]. Their applicability depends, among others, on their stability against retrogradation. It was shown that the retrogradation rate and degree are sensitive to various hydrocolloids [28][29][30]. The progress of starch retrogradation can be monitored with several techniques, among which NMR spectrometry has proven useful. 13C CP/MAS NMR was used when retrogradation in the solid state was investigated [31]. Considerable attention was paid to the fate of the water molecules on retrogradation. In such cases, the spin-lattice and spin-spin relaxation times were observed in 17O NMR [32] and 1H NMR [33][34][35] studies. In the latter case, the studies relied on the rule that systems with a considerable mobility of molecules are characterized by longer spin-spin relaxation times. A decrease in the mobility of such molecules is reflected by a shortening of those relaxation times. As the viscosity of such systems increases, the rate of the spin-spin relaxation increases, that is, the spin-spin relaxation time decreases. It is generally accepted that the spin-lattice, T1, and spin-spin, T2, relaxation times qualitatively and quantitatively describe the binding of water in the system [36].
Molecular dynamics can be determined based on the average correlation time, τc, which is a microscopic parameter [37]. Relaxation times in biological systems, including pastes and gels, provide information on the mode of water binding to macromolecular polymeric systems and on the reorientation of the water molecules evoked by the large polymeric molecules. The aim of the study was to analyze the short- and long-term retrogradation processes in starch and starch–hydrocolloid mixtures from a molecular point of view. Materials Normal potato starches NPS1 and NPS2 and waxy potato starch WPS, whose amylose contents were estimated according to the Morrison method, were used. Sample Preparation The study was performed on 5 w% starch paste samples or on samples composed of 4.8 w% starch with 0.2 w% hydrocolloid added. The sols of starch and of their blends with the particular hydrocolloids were heated with gentle stirring for 30 min at 90 °C. The resulting hot pastes (0.2 cm3) were transferred into measurement vials, closed with parafilm and allowed to cool to room temperature. The thermally equilibrated samples in the measurement vials were then cooled to 5 °C in a refrigerator. Relaxation Time Measurements Relaxation times were measured after 2 h, and then after 1, 10, 30 and 90 days of storage at 5 °C. The measurements of the spin-lattice (T1) and spin-spin (T2) relaxation times were conducted with a PS15T pulse 1H NMR spectrometer (ELLAB, Poznań, Poland) at 15 MHz, equipped with an integral temperature control system. Prior to the experiments, the samples placed in the spectrometer were allowed to reach 20 °C. The inversion-recovery (π–t–π/2) pulse sequence [36] was applied for the measurements of the T1 relaxation times. The distances (t) between the RF pulses were changed within the range from 100 to 1000 ms and the repetition time was 20 s. Each time, 32 FID signals and 119 points from each FID signal were collected. Calculations of the spin-lattice relaxation time values were performed with the assistance of the CracSpin software [39]. That software calculates relaxation parameters from experimental data using a "spin grouping" approach. Marquardt's method of minimization was applied for fitting multiexponential decays. The accuracy of the relaxation parameters was estimated and the standard deviations were given. The time dependence of the FID signal amplitude for the employed pulse sequence was described by the following formula:

M_z(t) = M_0 [1 − 2 exp(−t/T_1)]   (1)

where M_z(t) is the actual magnetisation value, M_0 is the equilibrium magnetisation value, t is the distance between pulses and T_1 is the spin-lattice relaxation time. Measurements of the T2 spin-spin relaxation times were taken using the pulse train of the Carr-Purcell-Meiboom-Gill spin echoes (π/2–τ/2–(π)_n) [36]. The distance (t) between the π pulses amounted to 2 ms. The repetition time was 15 s. The number of spin echoes (n) amounted to 100. Three accumulation signals were employed. To calculate the spin-spin relaxation time values, the echo amplitudes were fitted to Eq. (2):

M_x,y(t) = M_0 exp(−t/T_2)   (2)

where M_x,y(t) is the echo amplitude, M_0 is the equilibrium amplitude, t is the distance between the π pulses and T_2 is the spin-spin relaxation time. The calculations were performed with dedicated software using a non-linear least-squares algorithm. The accuracy of the relaxation parameters was estimated with the standard deviations.
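In practice, T1 and T2 are obtained by fitting Eqs. (1) and (2) to the measured amplitudes. The following sketch shows a single-exponential fit with SciPy on synthetic data; it is only an illustration of the fitting step, not the CracSpin "spin grouping" procedure used by the authors, and the numerical values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Eq. (1): inversion-recovery magnetisation regrowth
def inv_recovery(t, M0, T1):
    return M0 * (1.0 - 2.0 * np.exp(-t / T1))

# Eq. (2): CPMG echo-amplitude decay
def cpmg_decay(t, M0, T2):
    return M0 * np.exp(-t / T2)

rng = np.random.default_rng(0)

# hypothetical inversion-recovery data: delays of 100-1000 ms, true T1 = 700 ms
t1_delays = np.linspace(100, 1000, 10)                    # ms
mz = inv_recovery(t1_delays, 100.0, 700.0) + rng.normal(0, 1, 10)
(M0_fit, T1_fit), _ = curve_fit(inv_recovery, t1_delays, mz, p0=(90.0, 500.0))

# hypothetical CPMG data: 100 echoes spaced 2 ms apart, true T2 = 60 ms
echo_times = 2.0 * np.arange(1, 101)                      # ms
mxy = cpmg_decay(echo_times, 100.0, 60.0) + rng.normal(0, 1, 100)
(M0_fit2, T2_fit), _ = curve_fit(cpmg_decay, echo_times, mxy, p0=(90.0, 40.0))

print(f"T1 ≈ {T1_fit:.0f} ms, T2 ≈ {T2_fit:.1f} ms")
```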
Results and Discussion Table 1 collects the spin-lattice, T1, and spin-spin, T2, relaxation times for the 5% (g/g) starch pastes without and with the admixture of non-starchy polysaccharide hydrocolloids, measured after 2 h and after 1, 10, 30 and 90 days of storage. For all NPS pastes, regardless of the period of their storage in the refrigerator (up to 90 days), only one spin-lattice (T1) and one spin-spin (T2) relaxation time could be observed. This is characteristic of biopolymer pastes [35,40]. It suggested that, in spite of the progressing retrogradation of the starches, there was a fast chemical exchange; that is, within the time necessary for the energy transfer from a spin to its environment, as well as to another spin, water molecules could migrate from one water fraction to another. Already after the 2 h storage, some differences in the spin-lattice (T1) and spin-spin (T2) relaxation times for the particular NPSs could be noted (Table 1), suggesting some differences in the formation of the paste networks on cooling. After 2 h, the NPS1 paste showed higher T1 and T2 than those for NPS2. That observation rationalized the assumption that, after 2 h of storage and in spite of its higher amylose content, the NPS1 paste was less rigid than the pastes of NPS2 and WPS and, in consequence, the water molecules in the NPS1 paste could rotate more freely [41]. On storing the NPS2 and WPS pastes for 2 h, the relaxation times, particularly T1, changed only slightly. The NPS2 paste showed the lowest T1, whereas the WPS paste had the lowest T2. Thus, one could assume that T2 decreased with the amylose content of the starch (Table 1). The relationships between T1 for the particular pastes under study were close to the values of the relative degree of crystallinity calculated by diffractometry. The calculations for the pastes after 2 h of storage gave DoC_w declining from NPS1 to NPS2 [42]. The observed relaxation times were typical of those collected when low-frequency electromagnetic waves were applied [41,43,44]. These values varied with the storage time of the pastes. Within the first day, T1 for the NPS1 pastes declined, suggesting a reduced mobility of the water molecules due to a reconstruction of the paste network, whereas over the same storage period T1 for NPS2 and WPS increased. NPS1 contains the most amylose, and this biopolymer bound water molecules within this short storage period. Moreover, both T1 and T2 measured at that storage period for WPS were considerably lower than those for NPS2, pointing to different network structures of the two pastes. This illustrates the principal role of amylose in network formation. These changes in T1, dependent on the starch variety, followed the changes in DoC_w for those starches [42]. The relaxation times changed significantly on extension of the storage period to 90 days. The NPS1 pastes showed the longest relaxation times compared to the pastes of the other starches. It suggested that the NPS1 paste formed structures richer in the free water fraction, that is, structures in which particular water molecules are surrounded by coats of other water molecules [45]. The small decline in T1 was not accompanied by any change in T2, which could mean that, on prolonged storage of the gel formed within the first day, the dynamics of the water molecules did not change anymore. This was not the case for the NPS2 pastes. On their storage for up to 90 days, T1 and T2 declined, perhaps due to a loss of free water resulting from the limited mobility caused by the binding of water in the paste network.
In the WPS pastes, the relaxation times changed within the 30 days of storage and then stabilized, providing evidence for the stabilization of the paste network structure. Comparison of the relaxation times after 90 days of storage led to the conclusion that in every paste water was evacuated from the polymer network nodes, forming the fraction of free water. Additionally, the large differences in the relaxation times for the particular pastes pointed to differences in the modes of interaction between water and starch and between the starch chains. Thus, one could state that the long-term changes in the particular pastes differed from one another. The variation of the relaxation times provided a quantitative analysis of the changes resulting from water binding and from the dynamics of the water molecules. The relaxation times collected by low-field NMR allow an insight into the rotational modes of the water molecules in the system. In the binary pastes of NPS1 with non-starchy hydrocolloids, the mobility of the water molecules was substantially limited. Simultaneously, quantitative changes in the free and bound water molecules could be observed solely in the NPS1–XG pastes (Fig. 2). In this case, a monotonic increase of the mean correlation times was observed. It suggested that a considerable number of water molecules participated in the network formation. The shorter relaxation times, compared to those in the pastes free of the non-starchy hydrocolloids, likely resulted from the involvement of hydrogen bonds in building the nodes of the paste networks. The binary NPS2–non-starchy hydrocolloid systems showed higher T1 values than those for the pure NPS2 pastes. T2 for the binary paste of that starch with GG also increased, whereas it remained unchanged in the case of the corresponding gels with XG and AG. That finding might be evidence of weak starch–GG interactions, as well as of a higher content of free water in that binary system compared to the NPS2 paste. In the binary pastes of that starch with XG and AG, the dynamic properties of water in the paste did not change. In the WPS binary pastes, both relaxation times measured after 2 h of storage increased compared to the pastes free of hydrocolloids. This delivered an argument that WPS did not interact with those hydrocolloids (Table 1). The changes of T1 and T2 on storage of the NPS1–XG binary pastes were relatively small. This could be rationalized in terms of a high stability of the network and its resistance to retrogradation. In contrast to the behavior of T1 and T2 in these binary pastes, these parameters for the pastes with the two remaining hydrocolloids changed considerably on prolonged storage. T1 for the binary pastes of NPS1 with GG and AG decreased, pointing to a decreasing amount of free water in the pastes over the prolonged time of their storage. The NPS1 binary paste with AG held more such water than the NPS1–GG paste. The changes of T2, reflecting changes in molecular dynamics, showed that after 1 day of storage of the binary pastes of NPS1 with GG and AG the mobility of the water molecules was significantly reduced. After 30 days of storage, T2 rose as a consequence of an increase in the mobility of the water molecules, that is, of increased syneresis. After the 90 days of storage, in the NPS1 binary paste with GG the dynamics of the water molecules remained constant, whereas in the corresponding binary paste with AG the limitation of the mobility of the water molecules progressed. Thus, one could state that XG limited the retrogradation of starch most efficiently and AG was least efficient in that respect.
T1 and T2 of the binary pastes of NPS2 with non-starchy hydrocolloids declined on storage, showing that the water was successively arrested inside the paste network. A significant shortening of both relaxation times was observed just after 30 days, which meant that in that period retrogradation progressed efficiently. These changes were least dynamic in the binary paste with XG, indicating that this paste was the most stable on storage, that is, the most resistant to retrogradation. In the paste of NPS2 with AG, T1 rose up to the 90th day as a result of the change in the proportion of free and bound water. On storage, the relevant T1 and T2 declined monotonically with the storage time. As in the binary NPS2 pastes, these parameters for the WPS–XG binary pastes changed to the least extent. Likely, that behavior was associated with the binding of water by XG rather than with retarding retrogradation. The rate of the relaxation processes in biological systems is controlled to a great extent by molecular mobility. Depending on the environment of the water in the biopolymer network that is formed, its binding may involve either hydrogen or ionic bonds. Such binding allows free rotation of the water molecules around such bonds. It is also known that in such structures some water molecules are arrested in the nodes of the polymeric network, which significantly limits their dynamics. In biological systems such as biopolymeric pastes, including starch pastes of concentrations up to approximately 15%, usually one component of both relaxation times can be observed. It means that a fast chemical exchange takes place between the water molecules bound to the macromolecule and the free water. In such cases, a mean correlation time can be determined. That parameter allows the assessment of the possibility of free rotation of the water molecules and of their limited dynamics in the polymeric networks [46]. Mean correlation times, τc, can be derived from the relaxation times T1 and T2 using the system of the BPP equations (Eqs. 3 and 4) [47,48]. The BPP relaxation rates 1/T1 and 1/T2 refer to a pair of proton spins. The time-dependent changes of the mean correlation times in the analyzed starch pastes are presented in Fig. 1. The mean correlation times for the stored samples showed that the studied pastes differed from one another in molecular dynamics. The NPS1 pastes were characterized by the shortest mean correlation times, indicating that in these pastes the water molecules had the best opportunity for rotation and that these pastes contained the most free water. It could be associated with the highest amylose content in that starch, which facilitated the retrogradation of those pastes. Moreover, in the stored pastes the mean correlation times declined, pointing to a successive release of the water molecules from the network. NPS2 formed polymeric structures for which the mean correlation times significantly increased from the 10th day of storage. Such behavior could result from the obscuring of molecular movements by the formation of stable solid-state structures. In the WPS pastes, the dynamics of the water molecules remained stable in time, pointing to the stability of the relevant molecular systems. Insight into the mean correlation times for the binary NPS1–hydrocolloid systems (Fig. 2) revealed that after 1 day of storage the admixture of non-starchy hydrocolloids increased that parameter with respect to that for the paste of NPS1 free of the admixture. It could point to the binding of water in the relevant pastes in the initial stage of the formation of the networks.
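Assuming the classical BPP form for a like-spin proton pair, which is the form Eqs. (3) and (4) are expected to take, the mean correlation time can be obtained numerically from a measured T1/T2 pair by eliminating the dipolar constant K. The sketch below is illustrative only; the form of K, the bracket of the root search and the example relaxation times are assumptions rather than values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Standard BPP expressions for a like-spin (1H-1H) pair; K lumps the dipolar
# coupling constant, omega0 is the Larmor angular frequency.
def bpp_rates(tau_c, omega0, K):
    J1 = tau_c / (1.0 + (omega0 * tau_c) ** 2)
    J2 = tau_c / (1.0 + (2.0 * omega0 * tau_c) ** 2)
    R1 = K * (J1 + 4.0 * J2)                              # 1/T1
    R2 = 0.5 * K * (3.0 * tau_c + 5.0 * J1 + 2.0 * J2)    # 1/T2
    return R1, R2

def mean_correlation_time(T1, T2, omega0):
    """Solve the BPP pair of equations for tau_c given measured T1 and T2 (s)."""
    # K cancels in the ratio, so match the measured T1/T2 = R2/R1
    def ratio_residual(log_tau):
        R1, R2 = bpp_rates(10.0 ** log_tau, omega0, 1.0)
        return R2 / R1 - T1 / T2
    log_tau = brentq(ratio_residual, -12.0, -3.0)         # search 1 ps .. 1 ms
    return 10.0 ** log_tau

omega0 = 2.0 * np.pi * 15e6                               # 15 MHz spectrometer
print(mean_correlation_time(T1=0.7, T2=0.06, omega0=omega0))   # hypothetical T1, T2
```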
After 30 days the mean correlation times in the pastes rose above those found for the paste free of hydrocolloids. It could mean that every hydrocolloid inhibited removal of the water molecules from the polymeric structures limiting their mobility. Such phenomenon could result from the formation of more compact structures within the pastes which resembled solid state structures. In that system, XG limited rotation of the water molecules in time whereas AG substantially decreased direct values of the mean correlation time. It suggested that compared to the paste of NPS1 free of that hydrocolloid, AG added fluidity to the paste decreasing its viscosity. In binary NPS2 pastes with hydrocolloids ( Fig. 3) differences of the mean correlation time and associated with them differences of molecular dynamics of the water molecules in pastes could be observed just after the 30 day cool storing. After 90 days pastes of NPS2 with AG had the highest mean correlation time among those with XG, GG and free of hydrocolloid. Thus, the paste with AG bound most water. After 90 days, binary systems with XG and GG formed pastes of similar dynamics of the water molecules. However, it should be emphasized that on the prolonged storage, XG stabilized molecular movements to a highest extent. Blending WPS with hydrocolloids influenced the mean correlation time of the corresponding pastes to a various extent (Fig. 4). In the WPS binary pastes with GG the mean correlation time rose fast with the storage time pointing to a successive building structures of the solid state character. Pastes of WPS with XG behaved similarly, however, after the 90 day storage their mean correlation time decreased providing an evidence for a weakening the structure caused by progressing retrogradation. Moreover, one could suppose that after 30 days, in the pastes with GG and after 90 days in the pastes with XG and GG their structure was stronger than that of the WPS paste free of hydrocolloids. The mean relaxation time for the paste of WPS with AG was always lower than that for the paste free of hydrocolloid suggesting that it was more fluidal and disposed with lower viscosity. Conclusion Susceptibility of pastes of potato starch to retrogradation was controlled, first of all, by the content of amylose. Amylose favored retrogradation. On the initial period of storing the length of the amylose chains played an essential role promoting retrogradation. Apart from NPS1 containing most amylose also WPS containing amylose of lower molecular weight readily retrograded. Non-starchy polysaccharide hydrocolloids applied for blending potato starches influenced molecular properties of water in the pastes, particularly on long-term retrogradation. Admixture of Arabic gum to both normal potato starches inhibited rotation of the water molecules within first 10 days of storing, however, on the prolonged storage the water molecules within those systems reached the highest mobility. Admixture of that hydrocolloid to waxy potato starch resulted in an essential reduction of the mean correlation time what was associated with an intensification of the mobility of the water molecules due to considerable interactions of that hydrocolloid with short amylopectin chains. In the blends of waxy starch with guar and xanthan gums inhibition of rotational movements of the water molecules was observed after 10 days of storage. 
Since in hydrocolloid free pastes of waxy starch mobility of water did not change in time the result might suggest that both hydrocolloids bound water. In case of both normal potato starches xanthan gum determined long-term changes of the mobility of the water molecules and that effect was observed in the system of the highest, 30%, content of amylose. In such system after the 30 days storing arresting water molecules in the paste structure played a principal role in limiting their rotation. That effect was not observed in the pastes of lower amylose content because the way of binding water in starch significantly limited rotation of the water molecules.
Cosmetic Detection Framework for Face and Iris Biometrics : Cosmetics pose challenges to the recognition performance of face and iris biometric systems due to its ability to alter natural facial and iris patterns. Facial makeup and iris contact lens are considered to be commonly applied cosmetics for the face and iris in this study. The present work aims to present a novel solution for the detection of cosmetics in both face and iris biometrics by the fusion of texture, shape and color descriptors of images. The proposed cosmetic detection scheme combines the microtexton information from the local primitives of texture descriptors with the color spaces achieved from overlapped blocks in order to achieve better detection of spots, flat areas, edges, edge ends, curves, appearance and colors. The proposed cosmetic detection scheme was applied to the YMU YouTube makeup database (YMD) facial makeup database and IIIT-Delhi Contact Lens iris database. The results demonstrate that the proposed cosmetic detection scheme is significantly improved compared to the other schemes implemented in this study. Introduction The recognition performance of face and iris modalities has been considered as two promising biometric traits over the past decade [1][2][3][4][5][6][7][8][9][10][11]. However, the presence of cosmetics has posed challenges related to the performance degradation of face and iris biometrics [12,13]. This study considered the effect of facial makeups and iris contact lenses on the recognition performance of biometric systems. Generally, facial makeups and iris contact lenses are considered as two popular types of cosmetics that are publicly acceptable in several parts of the worlds. Makeups affect the color, shape, texture and format of face images. There are three main categories of makeups that can be applied on faces: light makeup, medium makeup and heavy makeup [14]. On the other hand, the iris contact lenses are divided into two categories: transparent (soft) and color cosmetic (texture) contact lenses. The use of contact lenses, especially textural lenses, alters the texture, appearance and color of iris patterns [15]. Therefore, designing an efficient method to detect facial makeups and contact lenses in face and iris images would benefit face-iris biometric recognition systems in terms of security and recognition performance. The impact of makeup and contact lens on face and iris images has been discussed in some studies. The authors of a previous study [12] demonstrated the reduced performance of face recognition schemes in the presence of makeup. An automatic facial makeup detection method was presented in another study [16], which captured the local and global information of facial images. In reference [17], the authors applied the Canonical Correlation Analysis (CCA) to learn the meta subspaces for maximizing the correlation of feature vectors belonging to the same individual. The authors of a previous study [18] proposed that the correlation between several facial parts can be learnt using the Partial Least Square (PLS) to improve the verification performance in presence of cosmetics. On the other hand, for iris recognition with contact lenses, a gray-level co-occurrence matrix was proposed for training a support vector machine, which will ultimately improve the classification rate of the cosmetic detection method [19]. The impact of the contact lenses on the iris recognition performance has been analyzed in a previous study [15] using two different datasets. 
In reference [20], a multimodal biometric system using both irises was applied to investigate the effect of soft and texture contact lenses on the recognition performance of both unimodal and multimodal systems. In reference [21], three different techniques based on iris-textons and the co-occurrence matrix were proposed for detecting texture contact lenses, measuring the iris edge sharpness and characterizing the texture of irises. The authors of a previous study [22] detected texture lenses using Gaussian-smoothed and Scale-Invariant Feature Transform (SIFT)-weighted Local Binary Patterns. In this study, an efficient scheme is proposed to detect makeup and texture contact lens in face and iris images by utilizing both the global and local information of the modalities. The proposed scheme fuses color-, shape-and texture-based features extracted from the face and/or iris with cosmetics, before the Support Vector Machine (SVM) [23] was applied to detect face-iris cosmetic in the input images. We proposed the extraction of the texture and shape characteristics of facial and iris modalities using a multi-scale local-global technique to collect the microtexton information of local primitives efficiently along with the global features with makeup and texture contact lenses. Therefore, the Log-Gabor transform (L-Gabor) shape descriptor is applied in this study to produce a set of Gabor filters and consequently, the microtexton information of the global and local primitives is extracted using an Overlapped Local Binary Pattern (Ov-LBP). Additionally, in order to collect the color-based information of images with cosmetics, the present work computes the overlapped color moments of the face and iris images to detect cosmetics. Therefore, the current work is the first common scheme that was applied for both the face and iris traits with makeup and texture lens, which fuses the advantages of color, shape and texture patterns to efficiently detect spots, flat areas, edges, edge ends, curves and colors. Indeed, the main contribution of this work is related to the results obtained from extracting and preserving the detailed pattern information of both modalities and utilizing them to decide the presence and/or absence of cosmetics for face and iris biometrics. In the other words, the original contribution of this work can be summarized as the proposal of a multi-modal cosmetic detection system for both face and iris biometrics according to their shapes, color and textures. Therefore, the proposed biometric detection scheme can be applied in any unimodal and/or multimodal face-iris recognition system to improve the security and recognition performance of the system. The proposed cosmetic detection scheme is evaluated on the YMU [12,24] facial makeup database and IIIT-Delhi Contact Lens iris [13] database. The proposed face-iris cosmetic detection scheme is presented and compared with the existing facial and texture lens detection methods in this work using the Classification Rate (CR) and Total Error Rate (TER). The rest of this paper is organized as follows. Section 2 describes the facial makeup and iris contact lens approaches applied in this study. The explanation of proposed face-iris cosmetic detection scheme is presented in Section 3, while Section 4 concentrates on cosmetic databases and experimental results. Finally, Section 5 draws some conclusions. 
Cosmetic Detection for Face-Iris Biometrics Although the main objective of using contact lenses is to correct individual eyesight as an alternative to spectacles/glasses, they can be also used for cosmetic purposes. In general, the use of contact lenses, especially textural lenses, alters the texture and color of iris images and leads to confusion in the natural iris patterns [15]. In addition, facial makeup can be relevant to the aesthetics of an individual face and affects the texture, color and shape of face images [16], resulting in reduced performance of face recognition systems. Therefore, introducing a robust detection scheme is needed for both face and iris biometrics in order to increase the reliability and security of face-iris recognition systems. In this study, we attempted to design a common detection scheme for face-iris biometrics with cosmetics based on their color, shape and texture information. In fact, the color, shape and texture characteristics of both face and iris traits can be affected if an individual uses makeup and contact lens. Therefore, the aim is to utilize the information obtained using these factors and combine them in a scheme to detect makeup and contact lens for both the face and iris. The Local Binary Pattern (LBP) [25] feature extraction method is considered to be a powerful micro-pattern descriptor to analyze the texture of facial and/or iris images. In order to detect the presence of makeup/contact lens, LBP and numerous variants of LBP has been applied in several studies [15,16,22,24] as successful texture-based approaches. In this work, we aim to use multi-scale Overlapped Local Binary Pattern (multiS-Ov-LBP) technique to collect microtexton information of local primitives from facial makeup and iris contact lens images. Recently, it was shown that extracting detailed texture information from the irises of uniform LBP patterns provides more representative histograms, which is better than analyzing the texture patterns [26]. Uniformity also plays a major role in characterizing the micro-features of facial makeups [16]. On the other hand, the investigation of different combinations of LBP operators utilized the extraction of more micro-texture details to discriminate real and fake face images [27]. Therefore, the focus of this study for discriminating makeup and non-makeup images is on multi-scale LBP operators, including LBP u2 8,1 , LBP u2 8,2 and LBP u2 16,2 . In fact, each operator extracts the histogram of a whole image globally, before the concatenation of the histograms provides a feature set of length 361 (59 + 59 + 243) bin. The extraction of detailed micro-texture information of local primitives using overlapped blocks leads to better recognition and detection of spots, flat areas, edges, edge ends, curves, appearance and colors [16,26], which are considered as important factors for cosmetic applications of face and iris biometrics. Therefore, we intend to apply the overlapped blocks of multi-scale LBP operators to extract more detailed local primitives for cosmetic detection. In order to obtain the local bin histograms of each operator, the images are divided into 3 × 3 overlapping regions with an overlapping size of 15 pixels. The concatenation of three operators leads to 3249 (9 × 59 + 9 × 59 + 9 × 243) bin histograms for one image. Due to the high dimensionality of the features produced by this method, we applied principal component analysis (PCA) and linear discriminant analysis (LDA) to reduce the dimensionality of the feature sets. 
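As a concrete illustration of the texture part of this pipeline, the following sketch computes the 361-bin global histogram and the 3249-bin overlapped local histogram using scikit-image's non-rotation-invariant uniform LBP. It is a minimal reading of the description above: the full scheme applies these descriptors to each of the 40 Log-Gabor outputs before PCA/LDA, the 60 × 60 input size is taken from the preprocessing step, and the helper names and normalisation are our assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

# the three operators used in the paper: LBP^{u2}_{8,1}, LBP^{u2}_{8,2}, LBP^{u2}_{16,2}
OPERATORS = [(8, 1, 59), (8, 2, 59), (16, 2, 243)]

def global_lbp_histogram(gray):
    """361-bin global feature vector of one grayscale image."""
    feats = []
    for P, R, n_bins in OPERATORS:
        codes = local_binary_pattern(gray, P, R, method="nri_uniform")
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        feats.append(hist)
    return np.concatenate(feats)            # length 59 + 59 + 243 = 361

def overlapping_blocks(gray, grid=3, overlap=15):
    """Yield the 3 x 3 overlapping regions (15-pixel overlap) of a 60 x 60 image."""
    h, w = gray.shape
    bh = (h + (grid - 1) * overlap) // grid
    bw = (w + (grid - 1) * overlap) // grid
    for r in range(grid):
        for c in range(grid):
            y, x = r * (bh - overlap), c * (bw - overlap)
            yield gray[y:y + bh, x:x + bw]

def local_lbp_histogram(gray):
    """3249-bin local feature vector from the 9 overlapping blocks."""
    return np.concatenate([global_lbp_histogram(b) for b in overlapping_blocks(gray)])
```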
The proposed scheme combines the global and local features extracted using the LBP texture descriptor to improve the robustness of scheme for cosmetic detection. Additionally, in order to analyze changes in shape due to cosmetics for the face and iris biometrics, the Log-Gabor transform (L-Gabor) [28] was applied to reflect the frequency response of images more realistically. This study considers five different scales and eight orientations to produce the Log-Gabor transform, with a down-sampling rate of six based on several trial results. The final size of the L-Gabor transformed image was reduced to 40 × 80 for all 40 image outputs. This work considered the computation of color-based features using the overlapped color moments within the overlapped blocks of images. Our experimental result section demonstrated a high cosmetic detection rate for both face and iris modalities with overlapped color moments compared to non-overlapping blocks. To extract the color moments from the entire image, we divided the images into 3 × 3 overlapping regions with an overlapping size of 15 pixels. For each block, the mean, standard deviation and skewness of pixels were calculated as the first, second and third order moment, resulting in 81 color feature sets. Proposed Face-Iris Anti-Cosmetic Scheme Our proposed cosmetic detection framework combines local and global information extracted from face and/or iris images ( Figure 1). The framework improves the cosmetic detection rate of system by combining the local and global information of each modality. The detailed steps applied to design the proposed cosmetic detection scheme for face and iris biometrics are as follows: Step 1 The image preprocessing step is carried out on all face and iris images separately to detect, scale and localize the face and irises. After this, the images are cropped, aligned and resized to dimensions of 60 × 60 prior to our cosmetic detection experiments. These undergo the histogram equalization method and mean variance normalization. Step 2 All face and/or iris images are used as inputs for the L-Gabor algorithm for analyzing changes in shape, which produces 40 image outputs. Each one of these 40 output images is considered separately in the local and global feature extraction steps to exploit the features. Step 3 The global feature extraction step extracts the histogram of a whole image globally for all 40 output images of one individual using three different operators (LBP u2 8,1 , LBP u2 8,2 , LBP u2 16,2 ) in the multi-scale manner. After this, the concatenation of histograms is considered as the global microtexton information of textures, which is presented in Equation (1): where GFV presents the extracted Global Feature Vector; and S and O describes scales and orientations employed to produce the Log-Gabor transform (five scales and eight orientations). On the other hand, the proposed pipeline simultaneously extracts details of the micro-texture information of local primitives using the overlapped blocks through multi-scale operators of LBP. After this, the concatenation of overlapped histograms is used to improve cosmetic detection. Additionally, the local feature extraction step extracts the color moments of images through overlapping regions. Subsequently, the result features are concatenated to produce the color feature sets according to Equation (2). 
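A minimal sketch of the overlapped colour moments used in the local feature extraction step (the colour part of Equation (2)) is given below. The block partition is assumed to mirror the 3 × 3 overlapping LBP regions with a 15-pixel overlap, and the per-channel ordering of the moments is an illustrative choice rather than the authors' exact implementation.

```python
import numpy as np
from scipy.stats import skew

def color_moments(rgb, grid=3, overlap=15):
    """81-dim overlapped colour-moment vector: 9 blocks x 3 channels x 3 moments."""
    h, w, _ = rgb.shape
    bh = (h + (grid - 1) * overlap) // grid
    bw = (w + (grid - 1) * overlap) // grid
    feats = []
    for r in range(grid):
        for c in range(grid):
            y, x = r * (bh - overlap), c * (bw - overlap)
            block = rgb[y:y + bh, x:x + bw].reshape(-1, 3).astype(float)
            for ch in range(3):
                pix = block[:, ch]
                # 1st, 2nd and 3rd order moments of the channel within the block
                feats += [pix.mean(), pix.std(), skew(pix)]
    return np.asarray(feats)                # length 9 * 3 * 3 = 81
```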
It should be stated that the global and local feature extraction steps are separately applied to all 40 output images produced using the L-Gabor texture descriptor of Step 2: where LFV represents the extracted Local Feature Vector; S and O describe the scales and orientations employed to produce the Log-Gabor transform; M and N are the sizes of the overlapping windows used to divide the images (3 × 3); and ρ, σ and γ specify the extracted mean, standard deviation and skewness feature vectors of the overlapped color moments. Step 4 The scheme concatenates the features achieved using the local and global feature extraction steps into a high-dimensional feature set according to Equation (3): where CFVF represents the Concatenated Feature Vector Fusion. Step 5 In order to reduce the dimensions of the concatenated features in the global step, the local step and the concatenated feature set of Step 4, the proposed scheme employs PCA and LDA to obtain appropriate feature subsets of the face and/or iris by eliminating irrelevant and redundant information. Step 6 The classification is conducted using the SVM classifier to detect cosmetics in 4 different ways for all 40 output images of individuals: in the global feature extraction step, the local feature extraction step (histogram concatenation and color feature vector) and the global + local concatenation step. Step 7 The last step fuses all decisions achieved using the SVM classifier through the majority voting [29] decision-level fusion technique. For one individual, 160 different decisions (40 × 4) are fused to make the final decision for the detection of makeup in face/iris images. The majority voting combines all 160 decisions obtained by the SVM classifiers to produce a final fused decision. In the majority voting technique, usually all classifiers provide an identical confidence in classifying a set of objects via voting. This technique will output the label that receives the majority of the votes. The prediction of each classifier is counted as one vote for the predicted class. At the end of the voting process, the class that received the highest number of votes wins [29]. Experimental Results and Databases This section concentrates on the experimental analysis of the proposed cosmetic detection scheme for face and iris biometrics. The robustness of the proposed scheme against the presence of makeup and contact lens is tested using the experiments. The YouTube makeup database (YMD) introduced by Dantcheva et al.
[12,24] is used to evaluate the performance of our proposed pipeline for face makeup images. The database contains 151 Caucasian female subjects, with four samples per subject (two samples before makeup and two samples after the makeup) that vary from subtle to heavy degree of makeup. This study considers 600 images of 150 individuals to perform the experiments, including 300 makeup and 300 non-makeup images. The IIIT-Delhi Contact Lens (IIITD CL) [13] iris database includes 6570 iris images of 101 subjects captured from both eyes variations of lens, including no lens, transparent lens and colored texture lens, which was captured using two iris sensors. In order to evaluate the robustness of the proposed method for iris contact lenses, we construct a dataset containing 100 individuals with six samples from left and right irises randomly. These six samples include two samples without lens, two samples with transparent lens and two samples with colored texture lens. In order to validate the performance of proposed scheme with cosmetics, the whole databases of face and iris are divided into two equal sets as represented in Table 1. In general, 75 individuals are used to construct the training dataset of the face, 50 individuals to construct the training dataset of iris, while the rest of the individuals are used in the test dataset. It should be stated that the individuals used to construct the training dataset are different to the individuals used in the testing dataset for both biometrics. This study reports the performance of the proposed cosmetic detector scheme and implemented methods in terms of the Classification Rate (CR) and Total Error Rate (TER). The classification rate is the percentage of correct classified cosmetic and non-cosmetic images, while the total error rate is the sum of FAR and FRR, which is equal to twice the value of EER in one biometric system [30,31]. The first set of experiments examines the robustness of proposed cosmetic detector and other color, shape and texture descriptors implemented in this study, such as LBP, L-Gabor, color moments and their global and local variations. Table 2 analyzes the performance of face biometrics in the presence of cosmetics, while SVM is used as the classifier. The best classification and total error rates belong to the proposed scheme detection scheme, which were 91.67% and 3.18%, respectively. On the other hand, the investigation of results in Table 2 shows that applying the color moment extractors leads to a classification rate improvement of 5.67% and 9.67% compared to the L-Gabor shape descriptor and LBP texture descriptor. However, the overlapping results caused better improvement in terms of classification rate and total error rate for color moments and the LBP texture descriptor. Our analysis in Table 2 demonstrates that utilizing the microtexton information of local overlapped regions of the multi-scale LBP texture descriptor along with the fused overlapped color moments can improve the classification and total error rates of makeup detection for face images by 86.00% and 5.16%. Generally, the fusion of local and global overlapped features of shape, color and texture of face images in an efficient system, such as the proposed system, can improve the detection and classification rate for make-up. We also conducted the experiments to detect the iris cosmetics of transparent and color texture contact lenses separately in Table 3. 
The analysis in Table 3 demonstrates the superiority of the proposed method for cosmetic detection specifically in the presence of color cosmetic contact lenses for both classification and total error rates. The classification and total error rate of the proposed scheme in the transparent contact lens dataset is 71.50% and 8.83%, respectively. In the color cosmetic contact lens dataset, these values increase to 78.50% and 6.80%. The combination of the microtexton information of the local overlapped regions of the multi-scale LBP texture descriptors with fused overlapped color moments improved the cosmetic detection of soft and texture images as shown in Table 3. However, the most interesting result is obtained when using the color moment descriptors. Color moments achieved better detection rates for texture cosmetic lenses compared to L-Gabor shape and LBP texture descriptors. However, in the transparent dataset, L-Gabor and LBP obtained higher detection rates. Additionally, the overlapping and multi-scale LBP improves the classification for both transparent and texture contact iris lenses. In order to show the effectiveness of our proposed cosmetic detection scheme, we prepared a comparison with the state-of-the-art approaches using the datasets constructed in this study in terms of classification rate, with the results shown in Table 4. [17] 54.34 50.50 51.50 CCA + SVM [17] 63.00 60.00 57.00 CCA + PLS +SVM [32] 66.67 60.00 61.50 LBP + Gabor +GIST + EOH + Color-Moments + SVM [16] 87.00 58.50 63.00 LGBP + HOG + SVM [33] 91 As shown in Table 4, the best classification rate is obtained using the proposed scheme for face and iris cosmetics with improvements of 0.33%, 0.83% and 2.50% compared to the best classification rates of state-of-the-art techniques. As described above, all the experiments in Table 4 have been conducted on the same datasets used for evaluating the proposed method and therefore, the results depend on the conditions that exist in these datasets. In order to classify the images using the SVM classifier, this study applied the Radial Basis Function (RBF) kernel function by iterative trials. The regularization constant and kernel width of RBF function (C and γ) have been set to 1 and 2, respectively, during the experiments. The number of eigenvectors used for the projection of images to reduce dimensions is set to L − 1, where L is the number of individuals in each dataset. MATLAB R2009a on a 64-bit windows operating system with Intel Core i5-5200U CPU at 2.20 GHz and 4.00 GB RAM is used to implement and perform the experiments. Conclusions In this paper, we present a novel cosmetic detection scheme for detecting makeup and contact lenses. The proposed scheme fuses color-, shape-and texture-based features extracted from the face and/or iris with cosmetics, before classification is conducted using a SVM classifier. In general, a multi-scale local-global technique is used in this study to efficiently collect the microtexton information of global and local primitives from faces and/or irises with makeup and contact lenses. Therefore, we applied the L-Gabor shape descriptor in this paper to produce a set of Gabor filters and consequently, the microtexton information of global and local primitives is extracted using Ov-LBP. Additionally, in order to collect the color-based information of images with cosmetics, the present work computes the overlapped color moments of face and iris images using the proposed scheme. 
This present work provides the first common scheme applied for both face and iris traits with makeup and texture/soft lenses, which fuses the advantages of color, shape and texture patterns to efficiently detect spots, flat areas, edges, edge ends, curves and colors. The experimental results of the proposed scheme demonstrated the robustness of our biometric system compared to the state-of-the-art methods implemented in this study. The proposed scheme obtained classification rates of 91.67% for facial makeup detection in addition to 71.50% and 78.50% for the detection of transparent and color cosmetic contact lenses, respectively. Author Contributions: The authors performed the experiments and analyzed the results together. Introduction, cosmetic detection algorithms and proposed scheme sections have been written by Omid Sharifi; while experimental results and conclusion sections have been written by Maryam Eskandari. Conflicts of Interest: The authors declare no conflict of interest.
Shear Behavior Models of Steel Fiber Reinforced Concrete Beams Modifying Softened Truss Model Approaches Recognizing that steel fibers can supplement the brittle tensile characteristics of concrete, many studies have been conducted on the shear performance of steel fiber reinforced concrete (SFRC) members. However, previous studies were mostly focused on the shear strength and proposed empirical shear strength equations based on their experimental results. Thus, this study attempts to estimate the strains and stresses in steel fibers by considering the detailed characteristics of steel fibers in SFRC members, from which more accurate estimation on the shear behavior and strength of SFRC members is possible, and the failure mode of steel fibers can be also identified. Four shear behavior models for SFRC members have been proposed, which have been modified from the softened truss models for reinforced concrete members, and they can estimate the contribution of steel fibers to the total shear strength of the SFRC member. The performances of all the models proposed in this study were also evaluated by a large number of test results. The contribution of steel fibers to the shear strength varied from 5% to 50% according to their amount, and the most optimized volume fraction of steel fibers was estimated as 1%–1.5%, in terms of shear performance. Introduction Fiber-reinforced concretes (FRCs) are made with various types of fiber materials, such as steel, carbon, nylon, and polypropylene, which are generally known to have enhanced tensile performance and crack control capability compared to conventional concrete [1][2][3][4][5][6][7]. In particular, it has been reported that steel fibers have an excellent effect on the enhancement of the shear behavior [1][2][3][4][5], and thus, many studies have been conducted on the shear performance of steel-fiber-reinforced concrete (SFRC) members. Most of the previous studies, however, proposed shear strength equations that were empirical based on their experimental results [8][9][10][11][12][13][14], which cannot estimate shear behavior along the loading history of the members, i.e., they cannot provide the shear strains or stresses of the members at a loading stage, except for the ultimate strength. In addition, there are only few shear behavior models for SFRC members, and they mostly modified the tensile stress-strain relationship of concrete to fit for SFRC members. Although they are able to estimate the shear behavior of SFRC members, they cannot identify the strains and stresses in steel fibers, which make it difficult to assess the enhancement of shear performance in detail according to the properties of steel fibers. In this study, therefore, steel fibers were modeled as independent reinforcing materials in the analytical models, and the shape, length, and volume fraction of the steel fibers were reflected in evaluating the shear behavior and strength of SFRC beams. The shear strength models proposed in this study are the smeared crack models that were modified from the softened truss models (STM), which can predict the shear behavior of SFRC members relatively fast, compared to the discrete crack model, by defining the steel fibers on the average that are randomly distributed in concrete without any constant direction. The accuracy of the proposed models was also examined by 85 specimens that were carefully collected from previous studies and by comparison to the shear strength equations proposed by other researchers [9][10][11][12]. 
In addition, since the proposed models can estimate the stresses in steel fibers, an attempt was also made to evaluate the effectiveness of the steel fibers as a shear reinforcing material by assessing the contribution of the steel fibers to the total shear resistance of SFRC beams. Shear Strength Models In the 1960s, Romualdi and Mandel [15] reported on the tensile strength enhancement of concrete by steel fibers, and Batson et al. [16] presented the shear strength enhancement of SFRC beams based on the experimental tests on 102 SFRC beams with the key variables of shear span ratio and volume fraction of steel fibers. Later Swamy and Bahia [17] reported that the shear strength was enhanced due to the steel fibers that deliver the tensile forces at the crack surface in the SFRC beams without shear reinforcement. Sharma [9] performed the experimental study on SFRC beams with the hooked-types of steel fibers, and based on the experiment results, proposed the shear strength (ν u ) equation for the SFRC beams in a relatively simple form, as follows: (1) has been used since ACI Committee 544 adopted it in 1988 [1]. Narayanan and Darwish [10] conducted the experiments on SFRC beams, with the primary variables of the splitting tensile strength (f sp ); shear span ratio (a/d); tensile reinforcement ratio (ρ); fiber coefficient (F 1 ) and bond strength of steel fibers (τ); and proposed the shear strength (ν u ) equations for SFRC beams, as follows: where e is a non-dimensional coefficient considering the arch action, which is 1 for the shear span ratio of greater than 2.8, and 2.8 d/a for the shear span ratio of less than 2.8. In addition, F 1 is a fiber coefficient that equals to, (l f /d f )V f α where l f , d f , and V f are the length, diameter, and volume fraction of steel fibers, respectively; and α is a bonding coefficient, which is 1.0 for hooked-type fibers, 0.75 for corrugated fibers, and 0.5 for straight fibers. Ashour et al. [8] performed the tests on high-strength SFRC beams, having the compressive strengths of greater than 90 MPa, and proposed the following shear strength (ν u ) equation for the SFRC beams with high-strength concrete: which is a modified form of the shear strength equation for reinforced concrete (RC) beams presented in the ACI318 [18]. In addition, Ashour et al. [8] also proposed the shear strength (ν u ) equations for SFRC members by modifying the Zsutty's equation [19] for RC beams, as follows: which consider the shear span ratio (a/d); tensile reinforcement ratio (ρ s ); fiber coefficient (F 1 ); and compressive strength ( c f  ). In Equation (5), ν b is an additional shear resistance by steel fibers in the deep SFRC members, which was recommended as 1.7(l f /d f )· V f ·ρ f based on the Swamy et al.'s research [20]. Kwak et al. [11] also conducted the experimental study on the SFRC beams, having the compressive strengths of greater than 60 MPa and mixed with hooked-type steel fibers, and proposed the shear strength (ν u ) equation of the SFRC members by adding the term for the contribution of steel fibers into the Zutty's [19] shear strength equation, as follows: Oh et al. [12] tested the SFRC beams reinforced by angles in tension, instead of reinforcing bars, and proposed the shear strength (ν u ) equation, as follows: where e is a non-dimensional coefficient considering the arch action, which is 1 for the shear span ratio of greater than 2.5, and 2.5d/a for the shear span ratio of less than 2.5. 
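Two ingredients shared by several of the empirical expressions above are fully specified in the text: the fibre factor F1 and the arch-action coefficient e. A small sketch of both is given below; the example fibre dimensions and shear span ratio are hypothetical, and the complete shear-strength expressions themselves are not implemented here.

```python
def fiber_factor(lf, df, Vf, fiber_type="hooked"):
    """Fibre factor F1 = (lf / df) * Vf * alpha used by several SFRC shear equations.

    lf, df : fibre length and diameter (same units)
    Vf     : fibre volume fraction (e.g. 0.01 for 1%)
    """
    alpha = {"hooked": 1.0, "corrugated": 0.75, "straight": 0.5}[fiber_type]
    return (lf / df) * Vf * alpha

def arch_action_coefficient(a_over_d, limit=2.8):
    """Non-dimensional coefficient e: 1 for slender beams, limit * (d/a) otherwise.

    limit = 2.8 follows Narayanan and Darwish; Oh et al. use 2.5 instead.
    """
    return 1.0 if a_over_d > limit else limit / a_over_d

# hypothetical example: 30 mm hooked fibres, 0.5 mm diameter, 1% volume fraction, a/d = 2.0
F1 = fiber_factor(lf=30.0, df=0.5, Vf=0.01)        # = 0.60
e = arch_action_coefficient(2.0)                    # = 1.40
print(F1, e)
```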
The shear strength equations for SFRC members mentioned here [9][10][11][12] differ slightly from one another, but they were all derived empirically from test results, and most include the tensile strength (or compressive strength) of concrete, the fiber volume fraction, the tensile reinforcement ratio, and the shear span ratio as the key influencing parameters. In addition, they have very simplified forms, which makes them easy to apply but can limit their prediction accuracy (refer to Table 2 and Figure 4 in Chapter 4). Dinh et al. [13] proposed a theoretical model for estimating the shear strength of SFRC members, in which the shear resistance is calculated as the sum of the contributions of the concrete in the compression zone and the steel fibers in the tension zone. Note that their strength model is not examined in this paper because its theoretical background is quite different from that of the STM-based models on which the authors focus. Shear Behavior Models Compared to the many equations for the shear strength of SFRC members based on experimental test results, there are only a few studies on shear behavior models of SFRC members based on analytical research. As shown in Figure 1a,b, Tan et al. [21] modified the compression and tension curves of concrete for the rotating angle softened truss model (RA-STM) [22], taking into account the increase in compressive ductility and the tension stiffening effect provided by steel fibers. In other words, their analysis model reflects the effects of steel fibers on the shear behavior of the members through the material curves of SFRC, which is a common modeling approach for composite materials, and it in fact provided good accuracy. It has, however, the disadvantages that it cannot estimate the stresses or strains in the steel fibers, cannot simulate their residual bond stress or pullout failure, and cannot account for the effect of the fiber volume fraction. Later, Tan et al. [23] proposed a shear behavior prediction model that modified the concrete tensile stress-strain relationship of the modified compression field theory (MCFT) [24], as shown in Figure 1c, in which the volume fraction of steel fibers was considered in the tension stiffening effect. As this model was established with insufficient experimental data, it is uncertain whether the volume fraction of steel fibers was properly considered, and other characteristics of the steel fibers, such as their shape and length, were not taken into account. As mentioned, the shear behavior models for SFRC members proposed so far use the stress-strain material curves of SFRC to account for the effect of steel fibers. Thus, they have difficulty considering the characteristics of steel fibers in detail and cannot consider the failure modes of steel fibers [10,11,25], which often leads to an overestimation of the member ductility. Thus, this study proposed shear behavior models based on the softened truss models (STM) [22,[26][27][28][29][30][31][32], which can estimate the contribution of steel fibers to the shear resistance by modeling them as independent tensile elements and can simulate their pullout failure modes by reflecting the bond strengths of the steel fibers. Figure 1. Material curves modified for SFRC: (a) compressive and (b) tensile stress-strain relationships for RA-STM modified by Tan et al. [21]; (c) tensile stress-strain relationship for the modified compression field theory (MCFT) modified by Tan et al. [23].
Modified Shear Behavior Models Based on the Softened Truss Models The shear behavior models of SFRC members proposed in this study are based on four softened truss models, which are summarized here. Rotating Angle Softened Truss Model (RA-STM) RA-STM [22,26] is a shear behavior model in which the concrete compression softening and the tension stiffening effect are considered. Since this model is a rotated angle model, wherein the crack angles vary depending on the stress state under the assumption that crack angles are consistent with principal stress angles, the shear stress-strain relationship at the crack is not required. Thus, it is the most simple analysis method for estimating the shear strength and behavior among the four models presented here. Table A1 in Appendix shows the equilibrium, compatibility, and constitutive equations used in RA-STM. As shown in Equation A-1, the horizontal stress, longitudinal stress, and the shear stress can be derived by rotating the stresses in the principal stress direction (d − r direction) to the direction of l − t by the principal stress angle (α), as shown in Figure 2a,b. In addition, the compatibility Equation A-2 can be derived using Mohr's strain circle, as shown in Figure 2c. As for the constitutive equations [33,34], Equation A-3, which considers the compression softening effect, was used for the compressive stress-strain relationship of concrete, and Equation A-4, which reflects the tension stiffening effect, was used for the tensile stress-strain relationship. Equation A-5 was used as the constitutive equations of the longitudinal and shear reinforcements, which considers the hardening phenomenon after the yielding and also the earlier yielding point in a steel bar embedded in concrete compared to the bare bars. Fixed Angle Softened Truss Model (FA-STM) As it was assumed, in RA-STM, that the crack direction coincides with the principal stress direction, it was impossible to theoretically consider the shear resistance mechanism at the crack surface, i.e., the aggregate interlock. FA-STM was proposed to solve out such a contradiction in RA-STM. As shown in Figure 2d,e, the shear stresses at the crack surface were considered by fixing the initial crack angle caused by external forces, and the equilibrium equations in FA-STM were derived as shown in Equation A-6 in Appendix. The compatibility equations are also shown in Equation A-7. The constitutive equations of the steel reinforcement and the tensile stress-strain relationship of the concrete are identical to those in RA-STM, but the compressive stress-strain relationship of the concrete was modified to include the reinforcement capacity ratio (η) in the softened coefficient (ζ) as shown in Equation A-3(a and d,f). The analysis has the following stages. First, before the crack occurs, assume that the crack angle α 2 by external force is fixed in 2-1 direction. Then, the principal stress angle α of the d − r direction is determined from the principal stress and the shear stress after cracking, the strains are calculated using the compatibility equations, and the calculated strains are substituted into the constitutive equations to determine the corresponding stresses and the forces. The shear strength can be calculated by iterating the calculation process until the determined forces satisfy the equilibrium condition. In this study, the Zhu et al.'s [35] model was used, which is a modified version of the Pang and Hsu's model [28] that requires more iteration process. 
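The solution procedure shared by these truss models, assume a strain state, evaluate the constitutive laws, and iterate until the resulting stresses satisfy equilibrium, can be sketched as a simple residual-driven loop. The sketch below is purely illustrative: the constitutive functions and parameter values are placeholders of our own choosing, not the actual STM equations listed in the Appendix.

```python
# Illustrative skeleton of the trial-and-error procedure used in the softened
# truss models: assume a strain, evaluate constitutive laws, and iterate until
# the internal stresses balance the applied stress.  The constitutive
# functions and numbers below are placeholders, NOT the actual STM equations.

from typing import Callable

def solve_by_bisection(residual: Callable[[float], float],
                       lo: float, hi: float,
                       tol: float = 1e-8, max_iter: int = 200) -> float:
    """Find x in [lo, hi] with residual(x) ~ 0 (a sign change is assumed)."""
    r_lo = residual(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        r_mid = residual(mid)
        if abs(r_mid) < tol or (hi - lo) < tol:
            return mid
        if r_lo * r_mid <= 0.0:
            hi = mid                      # root lies in the lower half
        else:
            lo, r_lo = mid, r_mid         # root lies in the upper half
    return 0.5 * (lo + hi)

# Placeholder constitutive laws (illustrative only).
def concrete_tension(eps_r: float) -> float:
    f_cr, eps_cr = 2.0, 8.0e-5            # MPa and cracking strain (assumed)
    if eps_r <= eps_cr:
        return eps_r / eps_cr * f_cr      # linear before cracking
    return f_cr * (eps_cr / eps_r) ** 0.4 # tension stiffening after cracking

def steel_stress(eps: float, E_s: float = 200_000.0, f_y: float = 400.0) -> float:
    return min(E_s * eps, f_y)            # simple elastic-plastic law

def equilibrium_residual(eps_r: float, applied_sigma: float, rho: float) -> float:
    """Internal stress (concrete tension + smeared steel) minus applied stress."""
    return concrete_tension(eps_r) + rho * steel_stress(eps_r) - applied_sigma

eps_r = solve_by_bisection(lambda e: equilibrium_residual(e, 3.0, 0.01), 1e-6, 0.05)
print(f"strain satisfying equilibrium: {eps_r:.5f}")
```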
Smeared Membrane Model (SMM) The Poisson effect could not be considered in the STMs mentioned above because they were based on the uniaxial strains of concrete. Thus, Hsu and Zhu [36,37] derived the Hsu/Zhu ratio, which is essentially a Poisson ratio, through panel experiments, and they implemented it in SMM [30]. SMM is capable of providing more realistic strains by considering the Poisson effect in the strain compatibility condition. Equation A-14 in the strain compatibility condition gives the equivalent strains in the uniaxial directions, considering the Poisson effect through the Hsu/Zhu ratio. The constitutive equations are the same as those in FA-STM, but the shear stress-strain relationship at the crack surface was simplified using the rational shear modulus proposed by Zhu et al. [35]. Transformation Angle Truss Model (TATM) Although the shear stresses at the crack surface seemed to be considered conceptually in FA-STM by fixing the crack angle, most analyses by FA-STM actually assumed that the stresses at the crack surface are the same as the principal stresses. Therefore, its application is limited because the difference between the normal stresses (1-2) on the crack surface, as shown in Figure 2e, and the principal stresses (d − r), as shown in Figure 2f, increases as the difference between the crack angle and the principal stress angle (β) becomes greater. In addition, the constitutive equations in FA-STM were derived from panel test results in which the range of the reinforcement capacity ratio was 0.2 < η < 0.5. Thus, it cannot be applied in cases where the reinforcement capacity ratio is below 0.2, which is often the case in practice. Also, the flexural moment cannot be considered in FA-STM. Thus, Kim and Lee [27,31,32] proposed TATM by modifying FA-STM; in TATM, as shown in Figure 2g, the principal stresses and strains are obtained by rotating the stresses and strains at the crack surface by β, and the equilibrium equations and compatibility conditions in the l − t coordinate system are derived by rotating them again by α. This process requires the shear stress-strain relationship at the crack, for which the equation proposed by Li et al. [38] was used, as shown in the first term of Equation A-13(a). In cases where axial forces are applied, the equation of Yoshikawa et al. [39], also shown in Equation A-13(a), was superimposed. In addition, in order to consider the flexural moment effect, the steel ratio required to resist the flexure was subtracted, and the remaining reinforcement ratio was assumed to resist the shear. Proposed Model: Softened Truss Model with Steel Fibers (STM-SF) In this study, steel fibers are considered as independent reinforcing materials, and it is assumed that a certain number of steel fibers, distributed randomly according to the fiber volume fraction, resist the tensile stress perpendicular to the crack surface, as shown in Figure 3a [4]. In addition, the steel fibers are assumed to show full composite behavior with the concrete before their pull-out occurs, from which the strains of the steel fibers can be considered to be the same as the average strains of the concrete at the same location. As shown in Figure 3b, the tensile resistances of the steel fibers are added to the equilibrium conditions of the softened truss models in the normal direction.
Thus, the additional term contributed by the steel fibers to the equilibrium equations in the l − t direction can be derived by rotating the stress of the steel fibers at the crack surface by the crack angle (α_2), giving σ_lf = σ_f·sin²α_2, with the transverse and shear components obtained in the same way. The stress-strain relationship of the steel fibers can be expressed assuming elastic-plastic behavior, where σ_f is the stress of the steel fibers, f_yf is the yield strength, E_f is the elastic modulus, for which 200 GPa can be used [40], and ε_1 is the tensile strain at the crack surface. The tensile force resisted by the steel fibers (T_f) can be calculated by multiplying the number of steel fibers on the crack plane (n) by their tensile stress (σ_f) and their cross-sectional area (A_f). Romualdi et al. [15] proposed an expression for the number of steel fibers on the crack surface per unit area (n_w) considering the orientation of the steel fibers, which was adopted in this study; here V_f is the volume fraction of the steel fibers, and λ is the directional coefficient that accounts for the orientation of the steel fibers, for which 0.41 is used in this study as recommended by Romualdi et al. [15]. Then, the number of steel fibers on the crack surface (n) given in Equations (14) and (15) is substituted into Equation (13), where A_fp is the average surface area of the steel fibers over which the bond stress is developed, and the maximum bond stress (τ_max) can then be calculated; here τ_u is the bond strength of hooked-type fibers, for which 6.8 MPa is used in this study as proposed by Lim et al. [40], and d_f is the shape factor of the steel fibers, for which Narayanan and Darwish [10] proposed 1.0 for hooked-type fibers, 0.75 for crimp-type fibers, and 0.5 for straight-type fibers. Therefore, the ultimate bond strength of the steel fibers (σ_fp) in an average sense, considering their shapes and the corresponding maximum bond stress (τ_max), can be summarized accordingly. Because the steel fibers are randomly distributed and typically short compared to the member size, the embedded lengths (l_b) of the steel fibers at cracking cannot be determined accurately. Accordingly, as shown in Figure 3c, it is assumed that one-fourth of the fiber length is the average bond length, and Equation (19) can be modified on this basis. The equilibrium equations for SFRC members, including the tensile resistance of the steel fibers, can then be expressed with this fiber term included. The compatibility equations and constitutive relationships of the materials are used as in each softened truss model, as shown in the Appendix. In addition, the SFRC member is considered to reach its maximum strength either when the pullout failure of the steel fibers occurs or when the principal compressive strain (ε_d) reaches the maximum strain of the concrete (ζε_0). Evaluation of the Proposed Models To evaluate the shear behavior models proposed in this study, shear test results of SFRC beams were collected from the literature [2,8,10,16,25,[41][42][43][44], as shown in Table 1. Of the 132 specimens collected, those that had flexural failures or that were deep beams with a shear span-to-depth ratio (a/d) of 2.5 or less were excluded, and thus a total of 85 shear specimens was used in this study.
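In the evaluation that follows, model accuracy is summarized by the mean, standard deviation (SD), and coefficient of variation (COV) of the strength ratios ν_test/ν_analysis. These statistics are straightforward to reproduce; the short sketch below uses made-up ratios, not the collected specimen data, and the function name is ours.

```python
# Computing the accuracy statistics used in Table 2 and Figures 4-5: mean,
# standard deviation, and coefficient of variation (COV) of the strength
# ratios v_test / v_analysis.  The ratios below are placeholders only.

import statistics

def accuracy_stats(strength_ratios):
    mean = statistics.mean(strength_ratios)
    sd = statistics.stdev(strength_ratios)   # sample standard deviation
    cov = sd / mean                          # coefficient of variation
    return mean, sd, cov

ratios = [0.92, 1.05, 1.18, 0.87, 1.11, 0.96, 1.02]   # placeholder values
mean, sd, cov = accuracy_stats(ratios)
print(f"mean = {mean:.2f}, SD = {sd:.2f}, COV = {cov:.2f}")
```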
The steel fiber volume fraction of the collected specimens ranged from 0.22% to 2.0%, and the size of the steel fibers used in the specimens ranged widely, from small fibers with a length of 25.4 mm and a diameter of 0.25 mm to large fibers with a length of 60 mm and a diameter of 0.8 mm. In addition, the steel fibers included straight, crimped, and hooked types. The concrete compressive strengths (f′_c) also ranged widely, from 20.6 to 93.8 MPa, covering both normal-strength and high-strength concrete. None of the specimens used for the evaluation had shear reinforcement, and the tensile steel ratio (ρ_s) ranged from 1.1% to 5.7%. Figure 4 shows the analysis results of the shear strength equations presented in Equations (1), (2), (5), and (6), which are also summarized in Table 2 together with the other analysis results. In Figure 4a-d, the vertical axis represents the ratio of the test results to the analysis results (ν_test/ν_analysis), and the horizontal axis represents the fiber volume fraction. The mean, standard deviation (SD), and coefficient of variation (COV) of the ν_test/ν_analysis values are also presented in each graph. The equation proposed by Sharma [9], which has been adopted by ACI Committee 544 [1], and the one recently proposed by Oh et al. [12] showed relatively good accuracy, with low COVs of 0.26 and 0.25, respectively. The equations proposed by Narayanan and Darwish [10] and Kwak et al. [11], however, showed a large scatter, especially for the specimens cast with normal-strength concrete. Figure 5 shows the analysis results of the softened truss models with steel fibers (STM-SF) proposed in this study, which are also summarized in Table 2 together with the other analysis results. Note that, while Figure 5 plots the ν_test/ν_analysis values versus the fiber volume fraction, it also gives the data ranges in terms of the compressive strength and the shear span-to-depth ratio, as indicated at the bottom of the graphs. As shown in Figure 5a, the modified RA-STM with steel fibers provided a mean of 1.11 and a COV of 0.30, a relatively larger scatter compared to the other STM-SF analysis models. This model tended to overestimate the specimens with high-strength concrete and was relatively inaccurate for the specimens with low steel fiber volume fractions. The principal stress angle is assumed to be identical to the crack angle in RA-STM, but the difference between them becomes larger in specimens with a low steel fiber volume fraction [45], which leads to an underestimation of the tensile resistance of the steel fibers on the crack surface in such cases. The modified FA-STM with steel fibers showed a relatively high accuracy, with a mean of 0.87 and a COV of 0.18, as shown in Figure 5b, and there was no bias in the ν_test/ν_analysis values. However, this model tended to overestimate, in particular, the specimens with a high shear span-to-depth ratio, which seems to be because FA-STM cannot consider the flexural moment effects. Figure 4. Analysis results of the shear strength equations of (a) Sharma [9]; (b) Narayanan and Darwish [10]; (c) Kwak et al. [11]; (d) Oh et al. [12]. The modified TATM with steel fibers, as shown in Figure 5c, provided good accuracy, with a mean of 1.08 and a COV of 0.23. In particular, this model provided more reasonable analysis results for the cases with large shear span ratios (a/d), which is considered to be because it can take the flexural moment effect into account.
In addition, this model can reflect the difference between the crack angle and the principal stress angle (β), which indeed improved the overall analysis accuracy. The analysis results of the modified SMM with steel fibers are shown in Figure 5d. It provided high accuracy, with a COV of 0.19, and showed no bias across the volume fractions of steel fibers. The improved accuracy of this model seems to come from the consideration of the Poisson effect, and it could be even more accurate if the Poisson ratio after cracking could be obtained from SFRC panel experiments [46]. Overall, all the modified STM models, except the modified RA-STM with steel fibers, provided a good level of accuracy for the shear strength of SFRC members, which implies that the characteristics of steel fibers are well reflected in the models proposed in this study. The existing empirical equations showed relatively larger scatter for those test results that were not within the variable ranges covered at the time of their formulation. It is also worth noting that the proposed models are based on the Smeared Crack Model [22,[26][27][28][29][30][31][32], which uses the average stress and average strain relationship, and that they successfully simulate the shear failure modes of SFRC beams, i.e., the pullout failure of steel fibers, by considering their bond strengths. As aforementioned, the contribution of steel fibers to the total shear resistance can be estimated by the proposed models because the steel fibers are modeled as an independent tensile element. Figure 6 presents the contribution of steel fibers to the shear resistance (ν_sf/ν_n) at ultimate according to the fiber volume fraction (V_f), where ν_n is the calculated shear strength and the shear resistance of the steel fibers (ν_sf) is calculated from Mohr's stress circle. In all the analysis models, the shear contribution of the steel fibers increased as the steel fiber volume fraction increased. In the modified RA-STM with steel fibers, the shear contribution ratio of the steel fibers (ν_sf/ν_lt) was calculated as approximately 10% at the lowest fiber volume fraction of 0.22%, and as high as 30% at the maximum fiber volume fraction of 2%. In addition, the rate of increase of the shear contribution ratio of the steel fibers changes significantly at 1%-1.5% steel fiber volume fractions, and it becomes almost flat at 1.5%-2.0% steel fiber volume fractions. The modified FA-STM with steel fibers showed a shear contribution ratio similar to that of the modified RA-STM for SFRC members with low fiber volume fractions, but demonstrated higher shear contribution ratios for those with volume fractions of 1% or higher. Also, the shear contribution ratio of the steel fibers showed considerable variation at the volume fraction of 1%. The modified TATM with steel fibers provided results very close to those of the modified FA-STM, with a shear contribution ratio of approximately 30% at 1%-1.5% steel fiber volume fractions. The modified SMM with steel fibers showed higher shear contribution ratios of the steel fibers than the other models, with contribution ratios ranging from 10 to 50%. This model showed significant variation at the 1% volume fraction, similar to FA-STM, and the increase in the shear contribution ratio of the steel fibers also dropped off at 1%-1.5% fiber volume fractions.
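Because the fibers are modeled as an independent tensile element, the chain from fiber strain to shear contribution can be illustrated compactly: the average fiber stress at the crack is limited either by the elastic-plastic law (E_f = 200 GPa) or by pullout (τ_u = 6.8 MPa for hooked fibers, average embedded length l_f/4), and its rotation into the l − t frame gives the shear component ν_sf compared in Figure 6. The sketch below is our own illustrative coding of these relations, not the authors' implementation; the fiber yield strength, the smeared-stress conversion, the pullout expression, and the example numbers are assumptions consistent with, but not copied from, the equations summarized above.

```python
import math

# Illustrative chain from fiber strain to shear contribution, following the
# relations summarized in the STM-SF formulation.  Values quoted from the
# text are noted; everything else is an assumption for illustration only.

SHAPE_FACTOR = {"hooked": 1.0, "crimped": 0.75, "straight": 0.5}

def fiber_stress(eps_1, l_f, d_f, fiber_type="hooked",
                 E_f=200_000.0, f_yf=1100.0, tau_u=6.8):
    """Average stress (MPa) a fiber can develop at the crack: the smaller of
    the elastic-plastic stress and the bond (pullout) limit."""
    sigma_ep = min(E_f * eps_1, f_yf)            # elastic-plastic law, E_f = 200 GPa
    tau_max = tau_u * SHAPE_FACTOR[fiber_type]   # tau_u = 6.8 MPa for hooked fibers
    l_b = l_f / 4.0                              # assumed average embedded length
    sigma_pullout = 4.0 * tau_max * l_b / d_f    # assumed bond-governed stress limit
    return min(sigma_ep, sigma_pullout)

def smeared_fiber_stress(sigma_f, V_f, lam=0.41):
    """Fiber stress per unit crack area, n_w * A_f * sigma_f, using the assumed
    form n_w = lambda * V_f / A_f of Romualdi's expression (lambda = 0.41)."""
    return lam * V_f * sigma_f

def shear_contribution(sigma_smeared, alpha_2_deg):
    """Shear component in the l-t frame from the crack-normal fiber stress."""
    a = math.radians(alpha_2_deg)
    return sigma_smeared * math.sin(a) * math.cos(a)

# Example: hooked fibers, l_f = 60 mm, d_f = 0.8 mm, V_f = 1%, 45-degree crack
sigma_f = fiber_stress(eps_1=0.002, l_f=60.0, d_f=0.8)
v_sf = shear_contribution(smeared_fiber_stress(sigma_f, V_f=0.01), 45.0)
print(f"fiber stress = {sigma_f:.0f} MPa, smeared shear contribution = {v_sf:.2f} MPa")
```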
The observations above confirm the substantial contribution of steel fibers to the improvement of the shear strength of SFRC members, and it is also clear that the steel fiber volume fraction is the key parameter influencing that strength. The shear contribution ratios of the steel fibers ranged from 8% to 45% at steel fiber volume fractions below 1%, and from 13% to 50% at steel fiber volume fractions over 1%. It was also found that the rate of increase of the steel fiber contribution was significantly reduced at 1%-1.5% steel fiber volume fractions and was almost flat at 1.5%-2.0% steel fiber volume fractions. This is because the inclined compression strut of the concrete reaches failure first, even if the steel fiber volume fraction increases. Therefore, the optimal volume fraction in terms of shear performance appears to lie between 1% and 1.5%, which is also consistent with the observations in previous studies [47,48]. Conclusions Most of the shear strength equations for SFRC members are relatively simple but provide low accuracy, as they have been derived empirically from experimental test results. Some analytical models can estimate the shear behavior and strength of SFRC members, but they cannot provide the contribution of steel fibers to the shear strength and cannot demonstrate the pullout failure of steel fibers. In this study, the softened truss models were modified appropriately for SFRC members; the steel fibers were modeled as independent tensile elements so that the proposed models can reflect the details of the steel fibers, such as the effects of their shape, length, and volume fraction. The proposed models were also compared to the test results of 85 specimens collected from the literature. From this study, the following conclusions were drawn.
1. The softened truss models were modified to be suitable for the analysis of SFRC members by modeling steel fibers as independent tensile elements, which, in particular, makes it possible to estimate the stresses of the steel fibers according to their detailed characteristics.
2. All the STM-SF models proposed in this study, except for the modified RA-STM with steel fibers, showed a good level of accuracy for the shear strength of SFRC members compared to the empirical equations presented in previous studies.
3. The proposed models adequately simulated the pullout failure of steel fibers, which is the characteristic failure mode of SFRC members, based on the average ultimate bond strength of the steel fibers.
4. The modeling approach of applying the fiber stress perpendicular to the crack direction was considered more appropriate in FA-STM than in RA-STM because, as expected, the fixed-angle model can reflect the fiber stress at the crack more accurately.
5. The contribution ratios of the steel fibers to the shear strength of SFRC members were calculated by the proposed models and were found to be approximately 30% at 1%-1.5% steel fiber volume fractions.
6. Based on the observations of the shear contribution ratio of the steel fibers, the optimal range of the steel fiber volume fraction, in terms of shear performance, is 1%-1.5%.
Evaluation of serum 25-hydroxy vitamin D levels in children with autism spectrum disorder Background Vitamin D plays an important role in the etiology of autism spectrum disorders (ASDs). We aimed to evaluate the serum 25-hydroxy vitamin D level among children with ASDs in Ahvaz city, Iran. Methods This was a cross-sectional study conducted on 62 subjects in two groups: a case group (n = 31) of children with ASD who study in special schools, and a control group (n = 31) of healthy children who were selected by simple random sampling from regular schools in Ahvaz city, Iran, during 2016. Matching between the two groups was done with regard to socioeconomic status, type and amount of food intake, place of living, and age. The levels of serum 25-hydroxy vitamin D were assessed in the early morning in a fasted state and measured using the ELISA method. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 20. The significance level was set at 0.05. Results In the ASD children, the average serum 25-hydroxyvitamin D level was 9.03 ± 4.14 ng/ml, and 96.8% (30 subjects) had vitamin D deficiency. In the healthy children, the average serum 25-hydroxyvitamin D level was 15.25 ± 7.89 ng/ml. The average serum 25-hydroxyvitamin D level in the case group was significantly lower than in the control group (P < 0.001). Although the parents in the control group reported longer exposure to the sun (27.42 vs. 33.06 min per day), no significant difference was observed between the groups in terms of sun exposure (P > 0.05). Conclusions A significant difference in serum 25-hydroxyvitamin D levels was observed between the healthy and ASD children. It is recommended to use vitamin D supplements in children with ASDs under medical care. Introduction Vitamin D may play an important role in the etiology of autism spectrum disorders (ASDs). Vitamin D is a neuroactive steroid affecting brain development and function. It plays an essential role in myelination, which is important for connectivity in the brain. Studies have shown that decreased vitamin D levels, decreased maternal vitamin D levels during pregnancy, and decreased exposure to solar UVB might increase the risk of ASD [1]. Despite extensive studies on ASD, the etiology of this disorder is largely unknown and studies are ongoing [2,3]. ASD has a dominant genetic origin; however, environmental and genetic factors interact in the incidence of this disorder [3][4][5]. The results of previous studies have shown that risk factors such as prenatal and postnatal infections [6,7], exposure to valproic acid or alcohol during pregnancy [8,9], maternal age [10], and abnormal nutritional and metabolic factors [3] play a role in the etiology of the disease. In recent years, the incidence of ASDs has increased significantly. In previous studies, the incidence of this disorder was 10 in 10,000 [11], whereas it is now estimated at 90-250 in 10,000 [12][13][14][15]. In addition, in 2010, the CDC reported the incidence of autism disorder in the United States as 1 in 68, which indicates a 78% increase compared with 2002 [16]. A part of this sudden increase is probably the result of increased awareness, better reporting of autism disorder, and improved diagnostic criteria, but the exact causes of this sudden increase should be determined in future studies [17]. The increased incidence of the disease can impose a heavy financial burden on society.
It is estimated that medication costs for each patient will be 40,000 to 60,000 dollars per year [16]. During the past decades, numerous studies have been conducted on the role of vitamin D in neuropsychological disorders [18][19][20][21][22][23]. The findings of these studies showed that vitamin D deficiency is one of the risk factors for developmental neuropsychological disorders such as schizophrenia [24] and autism [19,[25][26][27][28]. Studies on the relationship between vitamin D and autism in different parts of the world, such as Sweden [29], Egypt [20], Saudi Arabia [30], and China [31,32], indicate lower 25(OH)D levels in patients with ASD of different ages compared with control groups. Moreover, some studies [33,34] have reported different findings, with no significant difference observed between the serum levels of vitamin D in ASD and control groups. To our knowledge, few studies have been conducted in this regard in Iran, and no study has been conducted in Ahvaz city, southwestern Iran. Therefore, the present study aimed to evaluate the serum 25-hydroxy vitamin D level among children with ASDs in Ahvaz city, Iran. Methods This was a cross-sectional study conducted on 62 children in two groups: a case group (n = 31) of children with ASD who study in special schools, and a control group (n = 31) of healthy children selected from regular schools using a simple random sampling approach in Ahvaz city, Iran, in 2016. The two groups were matched in terms of gender, age, weight, height, head circumference, adequate breastfeeding (for at least six months), type and amount of food, socioeconomic status (the ratio of the number of family members to bedrooms was used as a measure of socioeconomic status) [35,36], average income, family size, and exposure to smokers. Inclusion criteria Students with ASDs entered the study after the diagnosis was confirmed by a neurologist based on DSM-IV criteria and written informed consent was obtained from the parents. In the control group, the informed consent of parents was among the inclusion criteria, too. Exclusion criteria The presence of epilepsy and the use of vitamin D supplements were considered exclusion criteria. Clinical evaluation of patients with autism The diagnosis of patients with ASD, based on medical experience, clinical examination, and the DSM-IV and ADI-R criteria, was confirmed by a neurology expert. Evaluation of serum 25-hydroxyvitamin D level The levels of serum 25-hydroxy vitamin D were assessed in the early morning in a fasted state. An expert nurse collected 5 mL of blood from the children to measure 25(OH)D serum levels at a Blood Transfusion Center in Ahvaz city, Khuzestan, Iran. The serum samples were isolated after centrifugation and kept at −20°C until the laboratory assessments. The serum 25(OH)D levels were measured using the ELISA method (Euroimmun kit, Medizinische Labordiagnostika AG, Germany, EQ. 6411-9601). According to the guidelines of the American Endocrine Association, vitamin D status is defined by the concentration of 25-hydroxyvitamin D3 in blood; deficient, insufficient, and sufficient vitamin D levels were defined as a 25-hydroxyvitamin D level lower than 20 ng/ml, 21-29 ng/ml, and 30 ng/ml or higher, respectively [36]. All the tests in this study were performed in Nargess Laboratory in Ahvaz city under the supervision of a doctor of medical laboratory sciences. Tests were performed twice, and the averaged values were used in the analyses to increase the accuracy of the results.
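The classification just described (deficient below 20 ng/ml, insufficient 21-29 ng/ml, sufficient at 30 ng/ml or above) can be expressed in a few lines; the sketch below is ours, the boundaries are handled simply, and the example values are illustrative rather than study data.

```python
# Classification of serum 25(OH)D levels following the cut-offs quoted above
# (deficient < 20 ng/ml, insufficient below 30 ng/ml, sufficient >= 30 ng/ml).
# Function name and example values are illustrative only.

def vitamin_d_status(level_ng_ml: float) -> str:
    if level_ng_ml < 20:
        return "deficient"
    if level_ng_ml < 30:
        return "insufficient"
    return "sufficient"

for level in (9.0, 15.3, 25.0, 32.0):       # example values only
    print(f"{level:5.1f} ng/ml -> {vitamin_d_status(level)}")
```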
Ethical considerations Prior permission was obtained from the educational authorities, school principals, and class teachers, and then written informed consent was obtained from the parents of the participating children. The procedures of this study were approved by the Independent Ethics Committee of Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran (IR.AJUMS.REC.1394.199). We thank all subjects and their parents for participating in this study; the parents of the children gave informed consent for their participation. Statistical analysis The Kolmogorov-Smirnov test was performed prior to the statistical analysis to examine the normality of the variables. The results were presented in the form of statistical tables and numeric indicators. The chi-square test, t-test, and a nonparametric test (Mann-Whitney U test) were used to analyze the data. Variable values were expressed as frequency and mean ± standard deviation (SD). Statistical calculations were performed using the Statistical Package for the Social Sciences version 20 (SPSS Inc., Chicago, IL, USA). For all statistical analyses, a P value less than 0.05 was considered significant. Results The study included 31 children with ASD and 31 healthy children with an age range of 5-12 years. No significant difference was observed between the two groups in terms of age (P = 0.80). In addition, no significant difference was observed in the mean maternal age at the birth of the child between the two groups (P = 0.28). Most of the subjects in the case and control groups were male (83.9 and 90.3%, respectively), and no significant difference was observed between the groups in terms of gender (P = 0.45). Most of the children in the case group were Bakhtiari (35.5%), and most in the control group were Arab (Table 1). In the ASD children, the serum 25-hydroxy vitamin D level was significantly lower than in the control group (P < 0.001). In the ASD group, all children showed a deficient or insufficient level of serum 25-hydroxy vitamin D (96.8% and 3.2%, respectively). In both groups, the proportion with daily direct sun exposure was equal (67.74%). No significant difference was observed between the groups in exposure to direct sun (min/day) (P = 0.56) (Table 2). Discussion Recently, vitamin D deficiency has been identified as an environmental risk factor for some autoimmune disorders [37,38]. A study by Patrick and colleagues showed that vitamin D may influence some social behaviors of children with autism. They emphasized that vitamin D activates the gene encoding the tryptophan hydroxylase enzyme, which converts tryptophan into serotonin in the brain. Therefore, a sufficient level of vitamin D supports the production of serotonin in the brain, which functions as a neurotransmitter and improves social behavior [39]. In a clinical trial by Feng and colleagues, 37 children with autism received 150,000 IU of vitamin D as a monthly intramuscular injection and 400 IU orally per day for three months. These researchers reported that the disease symptoms and behavioral checklist scores of the children with autism (3 years old and older) improved [40]. Most of the studies conducted on ASDs and vitamin D have drawn attention to lower 25-hydroxyvitamin D levels in children with autism. Our findings showed a lower 25-hydroxyvitamin D level in the ASD children compared with their healthy counterparts. Testes et al. reported that the mean serum 25-hydroxyvitamin D level in children with autism of different ethnicities was 35 nmol/L lower than in the control group [41].
Moreover, Duan et al. showed that serum 25-hydroxyvitamin D in patients with autism was significantly lower than in the control group [31]. Bener et al. related some biological and lifestyle factors, such as birth, kinship, body mass profile, physical activity, and 25-hydroxyvitamin D level, to the incidence of autism. Their findings showed that the serum 25-hydroxyvitamin D level in ASD children was lower than in a control group of similar ethnicity, age, and gender (P = 0.004) [42]. This finding was supported by the study of Meguid et al. [20]. Saad et al. [18] reported an inverse relationship between the average serum 25-hydroxyvitamin D level and the severity of ASD (P < 0.001), which was not evaluated in our study. In another study in Saudi Arabia comparing the serum 25-hydroxyvitamin D level and MAG among 50 children with autism (5-12 years old) and 30 healthy children, a significant negative relationship was observed between the serum 25-hydroxyvitamin D level and the incidence of autism (P < 0.001) [30]. Neumeyer and colleagues reported that the proportion of male children with ASD with a serum 25-hydroxyvitamin D level lower than 80 nmol/L was higher than that of healthy subjects (77% vs. 37%, p = 0.02). However, the results of studies by Molloy [33] and Esparham [3] in the United States showed no significant difference in serum 25-hydroxyvitamin D levels between children with and without ASD. Ugur and colleagues investigated the vitamin D3 levels of 54 children with autism and 54 healthy children between 3 and 8 years old in Turkey; they did not observe any significant difference in serum vitamin D3 levels between these two groups [35]. The results of a study by Hashemzeh and colleagues in Iran showed no significant difference in vitamin D levels between children with autism and healthy children, and no significant relationship was observed between the serum vitamin D level and the severity of the disease symptoms [34]. Conclusion A significant difference was observed between the serum 25-hydroxyvitamin D levels of the two groups in this study, which is consistent with several other studies, while no significant difference was found between the two groups in the time of exposure to the sun. Therefore, it is recommended to use vitamin D supplements in children with ASDs under medical care.
Physical Properties of Soils Affected by the Use of Agricultural Waste This chapter provides an overview of the physical properties of soils and their importance for the mobility of water and nutrients and the development of a vegetation cover. It also gives some examples of why the use of agricultural residues can positively affect soil physical properties. The incorporation of agricultural wastes can be a sustainable practice to improve soil characteristics, favoring a model of zero waste in agricultural production and allowing better management of soils. We review and analyze the effect of using different agricultural residues as amendments on the physical properties of the soil (e.g., bulk density, porosity, and saturated hydraulic conductivity), especially those related to the movement of water in the soil. Introduction Among the major environmental problems all over the world are the production and accumulation of wastes. Many considerations should be taken into account, especially the targets set by the European Union (EU). These problems related to wastes, together with the exhaustion of many resources, direct the European Union (EU) toward a strategy of zero waste through the circular economy. The transition to a more circular economy, where the value of products, materials, and resources is maintained in the economy for as long as possible and the generation of waste is minimized, is an essential contribution to the EU's efforts to develop a sustainable, low-carbon, resource-efficient, and competitive economy [1]. In the EU action plan for the circular economy, we can find targeted actions for various types of waste. Agricultural wastes are reflected in two aspects of this plan: the recycling of nutrients and biomaterials. Recycled nutrients are a distinct and important category of secondary raw materials, for which the development of quality standards is necessary. They are present in organic waste and can be returned to soils as fertilizers. Their sustainable use in agriculture reduces the need for mineral-based fertilizers, the production of which has negative environmental impacts and depends on imports, e.g., phosphate rock, a limited resource [1]. Bio-based materials, i.e., those based on biological resources (such as wood, crops, or fibers), can be used for a wide range of products (construction, furniture, paper, food, textiles, chemicals, etc.) and energy uses (e.g., biofuels). The bioeconomy hence provides alternatives to fossil-based products and energy and can contribute to the circular economy. Bio-based materials can also present advantages linked to their renewability, biodegradability, or compostability. On the other hand, using biological resources requires attention to their life cycle environmental impacts and sustainable sourcing. The multiple possibilities for their use can also generate competition for them and create pressure on land use [1]. Agriculture is one of the major activities that produces wastes and consumes space, the agricultural soils. It is important to find a synergy between this activity and the soil. In this sense, and following the considerations of the EU, crop residues are an important source of plant nutrients and organic matter [2]. Reuse of organic materials is desirable in order to reduce waste streams and to take advantage of the soil benefits associated with added organic matter and associated plant nutrients [3].
Nowadays, it is well known that the application to the soil of organic amendments derived from urban, agricultural, industrial, or municipal activity has several agronomic and environmental effects [4]. This addition can be a good strategy to maintain or even increase the levels of organic carbon in the soil [5]; to improve physical properties such as stability of aggregates and soil porosity [6][7][8]; to incorporate nutrients such as N, P, and K, thus avoiding the high fossil energy costs and therefore the impact on global warming due to the production and the use of synthetic fertilizers [9]; and to help cushion climate change through the sequestration of atmospheric CO 2 by the organic compounds of the soil [10]. Considering the physical properties and the soil organic carbon (SOC), organic matter amendments can increase water holding capacity, soil porosity, water infiltration, and percolation while decreasing soil crusting and bulk density [11][12][13]. One of the main measurable effects of the repeated application in the soil of organic wastes is the increase of soil porosity and, therefore, the decrease in the bulk density of the soil [8,14]. It is also expected to be beneficial for the work of tilling the soil, thus reducing the draft force and, consequently, a possible decrease in tractor fuel [15]. The energy saved due to the lower resistance that the soil offers when being worked if we apply waste is being ignored from the waste treatments that imply the application to the soil of this in the environmental evaluations. However, reducing greenhouse gas emissions can be important [15]. This chapter pays attention to the physical properties of the soil due to their importance in plant growth and soil stability and the possibilities associated to the use of agricultural wastes. Moreover, it is centered in applying the circular economy concept and zero waste in agricultural systems that can be able to reuse their own wastes. Agricultural wastes can be used as a source of organic matter and nutrients for soils and influence the physical properties of soils. They can also be easily applied as mulching, providing numerous advantages [16]. So, this chapter gives an overview of the positive effects of recycling vegetable wastes and soil physical properties. Importance of the physical properties of the soil The physical properties of the soil are very important for agricultural production and the sustainable use of soil. The amount and rate of water, oxygen, and nutrient absorption by plants depend on the ability of the roots to absorb the soil solution as well as the ability of the soil to supply it to the roots. Some soil properties, such as low hydraulic conductivity, can limit the free supply of water and oxygen to the roots and affect negatively to the agricultural yield. Soil structure Soil structure is one of the most important soil's physical factors controlling or modulating the flow and retention of water, solutes, gases, and biota in agricultural and natural ecosystems [17,18]. Soil structure is very important in soil productivity and is a limiting factor of crop yield [19,20]. Soil structure controls many processes in soils. It regulates water retention and infiltration, gaseous exchanges, soil organic matter (SOM) and nutrient dynamics, root penetration, and susceptibility to erosion [21]. For these reasons, soil structure stands out among the physical properties of the soil, since it exerts an important influence on the edaphic conditions and the environment. 
The term "structure" of a granular medium refers to the spatial arrangement of solid particles (texture) and void spaces. Most soils tend to exhibit a hierarchical structure. That is, primary mineral particles, usually in association with organic materials, form small clusters or "first-order aggregates." These form larger clusters or "second-order aggregates" [22]. Aggregate hierarchy in soils is reflected in increasing aggregate size with each successive level. However, the term "structure" in soil science generally carries a connotation of bonding mechanisms in addition to the geometrical configuration of particles [22]. Organic matter acts as a cement that can help the formation of aggregates and, therefore, the soil structure. Without hierarchical structure, medium- and fine-textured soils such as loams and clays would be nearly impermeable to fluids and gases [22]. Moreover, soil organic carbon has a greater effect on aggregation especially in coarse-textured soils [23]. Thus, structure plays a crucial role in the transport of water, gases, and solutes in the environment and in transforming soil into a suitable growth medium for plants and other biological organisms [22]. Aggregation is an indicator of soil structure and results from the rearrangement of particles, flocculation, and cementation [24][25][26]. Organic matter has been clearly identified as one of the key components of soil structural stability. However, in agricultural soils, it is progressively being depleted by intensive cultivation, without an adequate yield of plant biomass. The loss of soil structure is increasingly seen as a form of soil degradation [27] and is related to the activities that are carried out on the soil and by the crop. Maintenance of optimum soil physical conditions is important for sustaining plant growth and other living organisms in soils. Poor soil structure results in poor water and aeration conditions that restrict root growth, thus limiting the efficient utilization of nutrients and water by plants [28]. Soil structure also determines the depth to which roots can penetrate into the soil [29]. Aggregate stability Soils with high organic matter content tend to have larger, stronger, and more stable aggregates that resist compaction, whereas the opposite is true for soils with less organic matter. An improvement in soil aggregate stability has several consequences for an agroecosystem, including a reduced risk of soil compaction and erosion [30]. The quality of soil structure greatly depends on the soil organic carbon (SOC) content [31], especially on the fraction of labile SOC (also called "particulate organic matter" because this fraction cycles relatively quickly in the soil). Labile organic matter also plays an important role in maintaining soil structure and providing soil nutrients [32]. Aggregate stability is a keystone factor in questions of soil physical fertility and can be enhanced by means of an appropriate management of organic amendments, which can maintain an appropriate soil structure. This agronomic procedure could improve the pore space suitable for gas exchange, water retention, root growth, and microbial activity [9]. Aggregate stability at the soil surface is affected mainly by exposure to rainfall (drop impact and runoff). A bare soil (e.g., a soil from which crop residues have been exported or incorporated into the soil by plowing) is in direct contact with raindrops, which facilitates a breakdown of soil aggregates, increasing soil erodibility.
Aggregate degradation can lead to surface sealing and crust formation, which reduces the water infiltration rate and increases the risk of soil erosion and the loss of valuable topsoil [33]. High silt content, together with low organic matter content, results in soils that are more prone to aggregate breakdown and surface crusting [29,34]. Organic matter applied on the topsoil protects against erosion and favors the aggregation of mineral particles. Soil compaction Soil compaction is a form of physical degradation in which soil biological activity and soil productivity for agricultural and forest cropping are reduced, resulting in environmental consequences. Compaction is a process of densification and distortion in which total and air-filled porosity and permeability are reduced, strength is increased, soil structure is partly destroyed, and many changes are induced in the soil fabric and in various characteristics [35]. Generally, four indicators quantify soil compaction: total porosity, pore size distribution, bulk density, and penetration resistance. Given that root growth is impeded by soil compaction, these indicators are probably negatively correlated with root growth and rooting depth [29]. Moreover, these properties are closely related to water movement, water availability for plants, and soil gas exchange. Porosity Porosity is a main indicator of soil structural quality. Therefore, its characterization is essential for assessing the impact of adding organic matter to a soil system. Reduced porosity results from the loss of larger pores and the increase of finer pores [36]. A soil's porosity and pore size distribution characterize the pore space, the portion of the soil's volume that is not occupied by solid material. The basic character of the pore space governs critical aspects of almost everything that occurs in the soil: the movement of water, air, and other fluids; the transport and reaction of chemicals; and the residence of roots and other biota. By convention, the definition of pore space excludes fluid pockets that are totally enclosed within solid material. Thus, the pore space is considered a single, continuous space within the body of the soil. In general, it has fluid pathways that are tortuous, variably constricted, and usually highly connected among themselves [37]. The relationship between the storage capacity and the movement of water in soils and their porosity is evident and fundamental. However, it is not only the total number of pores that defines the water behavior of the soil but also, and in many cases predominantly, the shape, size, and distribution of the pores. From the agronomic point of view, the pore size distribution not only affects the amount of water that the soil can hold but also regulates the energy with which it is retained and its movement toward the plant, toward the atmosphere, and toward other zones of the soil. The use of agricultural wastes as soil amendments facilitates the maintenance of porosity in two ways: directly, if the agricultural wastes are ligneous materials with high resistance to biodegradation, and indirectly, after the transformation of the initial organic matter into humic substances that form aggregates and enhance the soil structure. Bulk density One of the most prominent indicators of soil structure is the soil bulk density (dry bulk density, BD); its determination does not require any specific expertise or expensive equipment, and it is based on sampling undisturbed soil.
Bulk density (BD) is calculated as the ratio of the dry mass of solids to the soil volume. The values of both bulk and particle density are necessary to calculate soil porosity [38]. Porosity can then be derived from BD, knowing or approximating the particle density value [21]. This physical property is dynamic and varies depending on the edaphic structural conditions. It can also be modified by soil biota, vegetation, mechanical practices, trampling by livestock, agricultural machinery, weather and season of the year, etc. [39,40]. Bulk density is an important indicator of soil quality, productivity, compaction, and porosity. BD is mainly considered to be useful for estimating soil compaction. Root length density, root diameter, and root mass were observed to decrease after an increase in BD [41]. However, the interpretation of BD with respect to soil functions depends on soil type, especially soil texture and soil organic matter (SOM) content [21]. Hydraulic conductivity One of the properties most directly related to the structure and the movement of water in the soil is hydraulic conductivity. It is known that water movement in soils occurs both vertically and horizontally, depending on the moisture conditions. In saturated conditions, which occur below the groundwater level, the movement is predominantly horizontal and, to a lesser extent, vertical. In unsaturated conditions, when the large pores are filled with air, the flow is preferentially vertical. The ability of soil to transmit water depends on the presence of interlinked pores and on their size and geometry [42]. The saturated hydraulic conductivity (Ksat) of soil is a function of soil texture, soil particle packing, clay content, organic matter content, soil aggregation, bioturbation, shrink-swelling, and overall soil structure [43][44][45][46]. Ksat is one of the main physical properties that aids in predicting complex water movement and retention pathways through the soil profile [47,48], and it is also widely used as a metric of soil physical quality [49]. Water holding capacity Water holding capacity is the ability of a soil to store water. The importance of this storage is that the water can be available for plants. Environmental conditions such as rain, temperature, and insolation combine with soil properties such as organic matter, texture, and structure to determine the capacity of a soil to retain water. In the rainfed agriculture of arid and semiarid environments, the capacity of the soil to store water plays an important role in the success of crops. Infiltration and evaporation are the most important processes that determine the storage of water in the soil. Surface conditions play an important role in determining the infiltration and evaporation rates of water in the soil. Tillage is the most effective way to modify the characteristics of the soil surface due to its effect on the pore space (shape, volume, and continuity of the pores). The roughness of the soil surface is another property that influences the water balance, since it increases the storage capacity in soil depressions [50,51]. In agricultural soils, the roughness of the surface is influenced by tillage, vegetation, soil type, and rainfall intensity [51]. The use of waste as a surface cover has been shown to be effective in reducing the evaporation of water from bare soil, which translates into a greater potential availability of water for plants [16].
This reduction is due to the isolation of the soil from the sun's rays and the temperature of the air and the increase in the resistance to the flow of water vapor by reducing the wind speed [52,53]. However, it is also necessary to determine the influence on the movement of water in the soil profile. In the arable layer, it is determinant for the proper functioning of agricultural soils. Therefore, the determination of hydraulic conductivity becomes very relevant information to predict the proper behavior of water against infiltration and storage capacity or loss by the soil. The use of agricultural wastes in soils Agricultural residues used as soil amendments or fertilizers may represent an excellent recycling strategy [54]. They are important to improve soil physical (e.g., structure, infiltration rate, plant available water capacity), chemical (e.g., nutrient cycling, cation exchange capacity, soil reaction), and biological (e.g., SOC sequestration, microbial biomass C, activity, and species diversity of soil biota) properties as organic soil conditioners [55][56][57][58]. Cultivating crops that produce substantial amounts of residues can increase SOC in the soil profile, depending on the tillage practices used [29]. Incorporated residue can beneficially influence soil chemical and physical properties, especially in non-flooded soils [57]. Organic residues can contribute to the development of soil structure with a binding agent in the formation of aggregates. The application of organic wastes to soils reduces bulk density; increases total pore space, mineralization, available nutrient elements, and electrical conductivity of soils; and increase microbial activity [26,59,60]. Crop residue application offers several environmental and ecological benefits for the soilwater-plant system, including improved soil structural quality, which ensures optimum soil functions. Generally, the incorporation of crop residues increases soil porosity (especially the large pores) and reduces soil bulk density, regardless of tillage operations. Large pores are particularly favored because organic matter is much less dense than mineral particles. The application rate can affect the extent of compaction. The effect of crop residues in a given tillage practice also depends on soil type and depth. When they are mechanically incorporated, crop residues can reduce the bulk density at depth. Conservation tillage with the incorporation of crop residues increases SOC content near the soil surface, whereas in conventional tillage, soil C is distributed throughout the plowed area. Soils with higher organic matter content tend to have higher aggregate stability and therefore less risk of compaction and soil erosion [29]. With regard to soil hydraulic properties, the presence of crop residues on the soil surface tends to increase hydraulic conductivity at the surface, whereas tillage affects soil hydraulic properties both at the soil surface and below it because of the destabilization of soil aggregates [61]. The influence of residue management on crop production is complex and variable and results from direct and indirect effects and interactions. A direct effect is, for example, the presence of residues on the soil surface, which constitutes a direct obstacle to crop emergence. Indirect effects include residue mineralization, which leads to more nutrients available for the plants or the presence of organic matter from residues modifying the soil structure and therefore modifying the root system development [29]. 
Incorporation of vegetable crop residues affects soil quality not only in terms of nutrient supply but also by influencing soil food web organisms and improving soil physicochemical properties, resulting in a better environment for crop growth and improved productivity [62][63][64][65][66][67][68][69]. In a study of the effect of organic residue application on carbon and nitrogen mineralization and on biochemical properties in an agricultural soil, the residues led to a significant increase in soil microbial biomass size and activity [54]. Poppy waste, a suitable seed-free, inexpensive source of non-animal-based organic carbon, was used to evaluate its effect on soil organic carbon content and the production of Bocane spinach (Spinacia oleracea) [70]. Application of poppy waste at 200 m3/ha increased soil organic carbon content, soil pH, and soil salinity. Wheat stalk, cotton stalk, millet stalk, and soybean stalk were used as the main material, and oven-dried lentil straw was used as an additive material in 100:10, 100:15, and 100:20 w:w ratios per 100 g of main material (70% moisture content) to cultivate Pleurotus ostreatus and to try to improve the total harvest amount [71]. Composted agricultural wastes Agricultural wastes can be composted before their application to soil. The composting process, with other residues or alone, facilitates their transformation into a more stable, more complex organic matter that is more resistant to biodegradation. However, the process must be controlled in order to obtain a mature compost [72]. Green tea waste and rice bran were composted, while various parameters such as compost pile temperature, pH, electrical conductivity, nitrate content, and carbon to nitrogen ratio were measured regularly. There was no further change in the state of the compost pile after 90 days, indicating that it could be used for agricultural applications [73]. The possible bioconversion of wet olive cake by low-cost biostabilization (vermicomposting) has been evaluated [74]. Fresh wet olive cake (WOC), pre-composted wet olive cake (WOCP), and wet olive cake mixed with biosolids (WOCB) were vermicomposted for 6 months to obtain organic amendments for agricultural and remediation purposes. The application of composted organic amendments derived from different crop residues generally has a positive impact on the physical, chemical, and biological properties of soils [75]. Crop residues are composed of lignin, cellulose, hemicellulose, and micro- and macronutrients. The degradation of these residues varies depending not only on their lignin and cellulose content and their C/N ratio, which is crop dependent, but also on the environment and soil conditions. Residues with a high C/N ratio (e.g., wheat straw) decompose slowly, sometimes resulting in the immobilization of soil N. This can be positive in no-tillage systems, creating a mulch that protects the soil from erosion and evaporation, but it also means that there are fewer nutrients available for the next crop. Residues with a low C/N ratio mineralize quickly, releasing more N and nutrients for the next crop. Only specialized fungi and some microorganisms can degrade lignin. Residues with high lignin content will take longer to decompose than those with low lignin content [29,76].
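The dependence of residue behaviour on the C/N ratio lends itself to a simple rule of thumb: residues whose C/N ratio lies well above a critical value (often quoted as roughly 25-30) tend to immobilize soil N during decomposition, while those below it tend to mineralize N quickly. The sketch below encodes that heuristic; the threshold of 25 and the residue values shown are illustrative assumptions of the kind discussed in this chapter, not measured data.

```python
# Rough heuristic only: the ~25-30 critical C/N range is a commonly cited
# approximation; real behaviour also depends on lignin content, soil
# conditions, and residue management.
CRITICAL_CN = 25.0

def nitrogen_dynamics(c_to_n_ratio):
    """Classify the likely short-term N dynamics of an incorporated residue."""
    if c_to_n_ratio > CRITICAL_CN:
        return "likely net N immobilization (slow decomposition)"
    return "likely net N mineralization (fast decomposition)"

# Illustrative C/N values of the kind cited in this chapter
residues = {"tomato residue": 12, "onion residue": 15, "wheat straw": 105}
for name, cn in residues.items():
    print(f"{name:15s} C/N = {cn:3d} -> {nitrogen_dynamics(cn)}")
```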
Examples of the use of agricultural wastes and the effects on some physical properties The physical properties of soils condition their quality and, in particular, their porosity, which affects different processes related to the transformation of organic matter, gas exchange, the growth of plant roots, and the movement of water in the soil, as indicated above. Soil porosity is the property that, due to the effect of compaction, is being largely altered in the European Union (and in developing countries), together with the loss of organic matter from soils [77], and, for this reason, soil management should maintain this property at adequate levels. The use of plant residues as soil amendments is a sustainable alternative to improve the physical properties [28], although we must take into account the characteristics of the waste to ensure its efficiency. Once incorporated into the soil, the waste can be mineralized more or less rapidly, depending on characteristics such as its degree of lignification, its C/N ratio, and environmental conditions [78]. Fresh vegetable residues, such as tomato (C/N = 12) and onion (C/N = 15) residues [79], with high water content, decompose quickly [80], modifying the composition of soil organic matter [9]. However, there are residues with high C/N ratios, such as wheat or rice wastes (C/N = 105), which are more lignified and degrade more slowly [81], so that the modifications they produce on certain physical properties of the soil last longer. Cereal straw and palm tree leaves (Figure 1) can be considered examples of this second type of waste. Both, with a high lignin content and after a conditioning process (drying and crushing), can be used to modify physical properties of the soil such as bulk density, porosity, and hydraulic conductivity. These agricultural wastes have a similar total organic matter content (determined by loss on ignition) but different bulk and particle densities. Laboratory experiments were performed on cylinders similar to those used for the determination of densities of organic materials, according to UNE-EN13040:2008 and the methods of soil analysis of SSSA-ASA [82][83][84]. These experiments showed that the agricultural residues applied (hay straw and palm tree leaves, air-dried and cut to a length of approximately 4 cm) modified the density of the soils and improved their porosity. Figures 2 and 3 show the changes in the particle (PD) and bulk (BD) densities of two soils (soil 1: sandy clay loam; soil 2: clay loam) when these wastes were added in proportions (waste/dry soil) of 0, 3, and 6% (w/w). The agricultural residues reduced the densities of the two soils, depending on the dose applied. The bulk (apparent) densities were clearly affected, which indicates that the addition of the amendments makes the soils less compacted. Depending on its physical characteristics, a given agricultural waste will be more or less efficient; in this sense, straw residues reduce bulk density more than palm tree leaves do. Bulk density decreases in the soils, which means that porosity, the space that can be filled with air and water, increases. This is observed in Figure 4, where the changes in the porosity of the two soils are shown. Porosity increased as the amount of agricultural waste applied increased, and hay straw residue increased the porosity more than palm tree residue. Obviously, the types of waste that improve the porosity of soils also favor the movement of water.
This fact is very important because it allows better root growth. One of the parameters that gives information on the movement of water in soils is the saturated hydraulic conductivity (Khs), based on Darcy's law and calculated by using a constant-head permeameter. The texture of a soil determines the quantity and size of its pores, and, therefore, we should expect more clayey soils to have lower Khs values than those with a sandy texture. Figure 5 shows how the addition of agricultural wastes affected the saturated hydraulic conductivity of the soils. It is observed that, without the addition of residues, the clay loam soil (soil 2) has a lower Khs value than the sandy clay loam soil (soil 1). The positive effect of the incorporation of the amendments on the hydraulic conductivity of the two soils used was clear, and hay straw produced a greater increment than palm tree residues in both soils. This example of the addition of vegetable wastes to the soil demonstrates their positive influence on some physical properties and shows that recycling agricultural wastes at their place of origin can support the zero-waste strategy of the European Union and, moreover, improve the quality of our soils. Conclusions It is important to consider which type of soil characteristics should be improved when applying agricultural wastes. For the physical properties, vegetable wastes with a high content of lignified organic matter can be used successfully, influencing soil density, porosity, and hydraulic conductivity. However, if the objective is to increase nutrient availability, less lignified and more labile residues may be added to the soil, although in this case a nutrient imbalance in the soil may occur. The main objective in the EU and, in fact, on the planet is to reduce the production and increase the recycling of agricultural wastes, contributing to the valorization of the residues and introducing them into circular-economy and zero-waste strategies. Combining soils with organic matter amendments allows us to obtain better soils and better agricultural management, favoring carbon sequestration under the present climate change scenario.
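As a closing illustration of the constant-head permeameter measurement mentioned above, Darcy's law gives the saturated conductivity as Khs = (V * L) / (A * t * dH), where V is the volume of water collected during time t through a sample of length L and cross-sectional area A under a head difference dH. The sketch below uses purely hypothetical readings; it is not the measurement protocol or the data of the experiments described in this chapter.

```python
import math

def khs_constant_head(volume_cm3, time_s, sample_length_cm, sample_diameter_cm, head_cm):
    """Saturated hydraulic conductivity from a constant-head test (Darcy's law):
        Khs = (V * L) / (A * t * dH)
    Returned in cm/s."""
    area_cm2 = math.pi * (sample_diameter_cm / 2.0) ** 2
    return (volume_cm3 * sample_length_cm) / (area_cm2 * time_s * head_cm)

# Hypothetical readings, for illustration only
khs = khs_constant_head(volume_cm3=120.0, time_s=600.0,
                        sample_length_cm=5.0, sample_diameter_cm=5.0, head_cm=3.0)
print(f"Khs = {khs:.4f} cm/s ({khs * 3600:.1f} cm/h)")
```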
v3-fos-license
2021-05-13T00:03:00.089Z
2020-12-31T00:00:00.000
234382713
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://journals.ardascience.com/index.php/bes/article/download/107/26", "pdf_hash": "0cb9861737b9332e3d40fa5178e166cf16e1d614", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43668", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "5be71eef21836da4e3b347569d80202a336d1ed9", "year": 2020 }
pes2o/s2orc
Antioxidants enzyme activity in Brassica oleracea var. acephala under Cadmium stress When a plant is under heavy metal stress, it has different mechanisms of coping with it. Brassica oleracea var. acephala (kale) is a plant that has the ability to accumulate heavy metals and remove them from the ground. The plants were exposed to 50, 100, 200, and 500 μM of CdCl2 for 5 days, in controlled in vitro conditions. Root length was measured to confirm the Cd effect on plant growth. There are five key antioxidant enzymes responsible for the regulation of heavy metal stress: superoxide dismutase (SOD), catalase (CAT), ascorbate peroxidase (APX), peroxidase (POD), and polyphenol oxidase (PPO). All enzymes showed significant activity, especially triggered by 500 μM CdCl2, in both varieties. The domestic sorts seem more resistant compared to the hybrid variety, showing significantly lower expression of antioxidant enzymes at higher concentrations. In general, a significant percentage of the enzymes are more expressed in the hybrid Italian sort, Nero di Toscana, indicating that the domestic sorts are more resistant to heavy metal stress. Introduction The periodic system of elements contains 90 natural elements. Of those 90, 53 are classified as heavy metals [1]. Some metals are micronutrients, meaning they are essential to humans in small doses, but some, such as Pb, As, Hg, and Cd, are nonessential and toxic to humans. However, there is no widely accepted definition of heavy metals; they are generally considered to be elements with a density greater than 5 g/cm3 [2]. Of all heavy metals, cadmium is one of the most toxic. It is a non-essential element that can appear in nature both naturally and through anthropogenic factors [3,4]. Humans can be exposed to cadmium through food and water, as well as tobacco, drugs, cosmetics, or dietary supplements [5][6][7]. The main foods through which cadmium is ingested into the body are vegetables and cereals, but it can also be ingested through animal kidney and liver, since these organs accumulate cadmium. By creating oxidative stress, cadmium induces tissue injury and epigenetic changes, inhibits or upregulates different transport pathways, and inhibits heme synthesis [5][6][7][8]. Cadmium and its compounds are, according to the International Agency for Research on Cancer (IARC), classified as Group 1, meaning ''carcinogenic to humans''. According to the IARC, there is sufficient evidence that cadmium contributes to the development of lung cancer, and limited evidence for the development of prostate and kidney cancer [9]. The major site of cadmium accumulation in the body is the kidneys [10], and it is associated with kidney cancer [11]. It also affects the gut microbiota, its abundance, and its relative population. In the intestinal walls it induces cell damage and an inflammatory response [12]. Cadmium can also accumulate in the endometrium. Ascorbate peroxidase (APX) is a key enzyme responsible for the removal of excess hydrogen peroxide under both stress and normal conditions [34]. It is responsible for the reduction of hydrogen peroxide to water and monodehydroascorbate, using ascorbate as a substrate [35]. The genus Brassica includes more than 30 species, with different varieties and hybrids. Brassica vegetables contain glucosinolates, polyphenols, carotenoids, provitamin A, vitamins C, K, B9, and B2, and calcium [36], as well as some antioxidant enzymes, CAT, POX, and SOD [37].
Brassica oleracea varieties include a variety of vegetables used for human consumption, such as broccoli, cauliflower, cabbages, Brussels sprouts, and others. They are used as food, for oil production, as animal fodder, and for other purposes [38]. Among the Brassica vegetables, Brassica oleracea var. acephala (kale) has a higher content of calcium and vitamins C, K, A, B2, and B9 than the others [39]. Brassica vegetables have beneficial properties such as anticarcinogenic activity, protection against cardiovascular diseases, protection against ageing processes, etc. Some Brassica species have been found to be good heavy metal accumulators and are thus used in phytoremediation processes [40]. They have the ability to accumulate heavy metals in their above-ground parts, tolerance to high concentrations of heavy metals in soils, rapid growth and high biomass accumulation, and they are easy to harvest and to grow as an agricultural crop [2,38]. The aim of this study is to examine the enzymatic activity of different sorts of Brassica oleracea var. acephala, of domestic and hybrid origin. Seed germination and root analysis The seeds used were a hybrid kale variety, NT (Nero di Toscana, an Italian sort), and a domestic variety from Stolac (village Ravine) in the region of Herzegovina. The seeds were germinated using the tap and paper method [41]. Two layers of paper tissue were placed in each petri dish (9 cm diameter). Each petri dish was moistened with a different concentration of cadmium chloride (Table 1). The stock solutions were prepared from cadmium chloride (CdCl2) from Sigma-Aldrich and distilled water. Thirty seeds were placed in each petri dish, and each petri dish contained seeds exposed to a different concentration of CdCl2. The petri dishes were then placed in a growth chamber for 5 days at 27°C with 16 hours of light per day. After 5 days, the seedlings were collected and the root of each plant was measured using a ruler. The plants from each petri dish (i.e., each concentration) were placed in separate tubes and stored at -80°C until needed for further experiments. Plant tissue extraction for enzyme activity determination An extraction buffer of 100 mM phosphate (pH = 7) is prepared. The root tissues from the plants are placed in a mortar and ground under liquid nitrogen using a pestle. A 0.3-1.0 g portion of the obtained plant powder is transferred to a 1.5 mL Eppendorf tube, and three times that amount of phosphate buffer is added and mixed. The tubes are centrifuged at 13,000 × g for 20 minutes at 4°C. After centrifugation, the supernatant is carefully collected with a pipette and transferred to new Eppendorf tubes [42]. Peroxidase (POX) activity determination assay A 3 mL volume of 0.1 M phosphate buffer (pH = 6.5) was added to the tube. Then, 100 μL of 20 mM guaiacol (prepared under the fume hood) and 100 μL of enzyme extract were added. To initiate the reaction, 30 μL of 12 mM hydrogen peroxide was added. The process is repeated for each obtained enzyme extract (i.e., each petri dish). A control is prepared with the same reaction mixture but without the enzyme. A 200 μL aliquot from each tube is placed in a well of the 96-well plate, and the absorbance is measured at 436 nm at 0 min and 2 min [43]. Peroxidase catalyzes the removal of hydrogen from a large number of molecules, as well as the decomposition of hydrogen peroxide. POX binds hydrogen peroxide, and the complex obtained, [POD-H2O2], can oxidize different hydrogen donors.
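The POX assay just described, and the other assays that follow, all share the same readout: an absorbance reading at 0 min and at 2 min. A minimal way to reduce such readings to a rate, and optionally to product formation per minute via the Beer-Lambert law, is sketched below. The extinction coefficient, path length, volume, and readings are placeholders (assumptions for illustration), not values taken from this study.

```python
def absorbance_rate(a_start, a_end, minutes=2.0):
    """Rate of absorbance change per minute between two readings."""
    return (a_end - a_start) / minutes

def product_per_minute(delta_a_per_min, extinction_mM_cm, path_cm=1.0, volume_ml=1.0):
    """Convert a rate of absorbance change into micromoles of product per minute
    (Beer-Lambert law). The extinction coefficient must match the product and
    wavelength of the particular assay and is supplied by the user."""
    return delta_a_per_min / (extinction_mM_cm * path_cm) * volume_ml  # umol/min

# Placeholder readings (not measured data): a POX-type assay, A436 at 0 and 2 min
rate = absorbance_rate(0.120, 0.310)  # increase -> product formation
print(f"dA/min = {rate:.3f}")
# Note: for SOD the readout is inverted (more enzyme -> less colour), and for
# CAT and APX the absorbance decreases as the substrate is consumed.
```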
By formation of an oxidized compound, among other methods, the activity of POX can be determined. Guaiacol is a non-carcinogenic compound that acts as a substrate for POX, where one mole of guaiacol is oxidized by one mole of hydrogen peroxide. The end product of this reaction is tetraguaiacol, which is measured spectrophotometrically at 436 nm [43][44][45]. In the experiment performed, guaiacol and the enzyme source are added first, with the guaiacol acting as a substrate. For the reaction to start, hydrogen peroxide is added, and as a result tetraguaiacol is formed. Thus, the absorbance is expected to increase with increasing cadmium concentration (more cadmium, more POX, more tetraguaiacol). Superoxide dismutase (SOD) activity determination assay A 1.3 mL volume of 50 mM sodium carbonate buffer is added to the test tube. Then, 500 μL of 96 mM nitroblue tetrazolium (NBT) (pH = 10.0) and 100 μL of 0.6% Triton X-100 were added. To initiate the reaction, 100 μL of 20 mM hydroxylamine hydrochloride (NH2OH.HCl) (pH = 6.0) was added. A 70 μL aliquot of enzyme extract is added. The process is repeated for each obtained enzyme extract (i.e., each petri dish). A control is prepared with the same reaction mixture but without the enzyme. A 200 μL aliquot from each tube is placed in a well of the 96-well plate, and the absorbance is measured at 540 nm at 0 min and 2 min [46]. Hydroxylamine autoxidizes and generates superoxide radicals. NBT, which is colorless, is reduced by the superoxide radicals to blue formazan, which can be measured spectrophotometrically. SOD, by scavenging the superoxide radicals, inhibits the formation of the blue color. Thus, the more SOD is present, the less color will appear [46,47]. Polyphenol oxidase (PPO) activity determination assay A 1.5 mL volume of 50 mM phosphate buffer (pH = 6.5) was added to the tube. Then, 200 μL of the enzyme extract is added. To initiate the reaction, 200 μL of 100 mM catechol was added. The process is repeated for each obtained enzyme extract (i.e., each petri dish). A control is prepared with the same reaction mixture but without the enzyme. A 200 μL aliquot from each tube is placed in a well of the 96-well plate, and the absorbance is measured at 495 nm at 0 min and 2 min [48]. For polyphenol oxidase, catechol is used as a substrate, which, in the presence of oxygen, forms quinone. The amount of quinone is measured spectrophotometrically. Thus, the more PPO is present, the more quinone is generated [49]. Catalase (CAT) activity determination assay A 1.9 mL volume of 50 mM phosphate buffer (pH = 7.0) was added to the tube. Then, 0.1 mL of the enzyme extract was added. To initiate the reaction, 1.0 mL of 0.075% hydrogen peroxide was added. The process is repeated for each obtained enzyme extract (i.e., each petri dish). A control is prepared with the same reaction mixture but without the enzyme. A 200 μL aliquot from each tube is placed in a well of the 96-well plate, and the absorbance is measured at 240 nm at 0 min and 2 min [50]. Catalase decomposes hydrogen peroxide to water and oxygen. The decomposition of hydrogen peroxide is measured at 240 nm [50]. Thus, the more CAT is present, the less hydrogen peroxide remains, as the catalase decomposes it. Ascorbate peroxidase (APX) activity determination assay A 1.5 mL volume of 100 mM phosphate buffer (pH = 7.0) is added to the tube. Then, 300 μL of 5 mM ascorbate and 600 μL of enzyme extract are added to each tube. To initiate the reaction, 600 μL of 0.5 mM hydrogen peroxide is added.
The process is repeated for each obtained enzyme extract (i.e., each petri dish). A control is prepared with the same reaction mixture but without the enzyme. A 200 μL aliquot from each tube is placed in a well of the 96-well plate, and the absorbance is measured at 290 nm at 0 min and 2 min [51]. Ascorbate peroxidase is an enzyme that scavenges hydrogen peroxide, using ascorbate as an electron donor, which is oxidized in the process. The amount of ascorbate is measured spectrophotometrically [52]. The more APX is present, the less ascorbate will be available. Figure 1. Average root length in domestic and hybrid sorts, with standard deviation From Fig. 1 it can be observed that the hybrid sorts have longer roots for all CdCl2 concentrations except 500 µM. The root length decreases with increasing CdCl2 concentration, except for the control group, in both domestic and hybrid sorts. The root length was largest at the concentration of 50 µM and smallest at the concentration of 500 µM for both domestic and hybrid sorts. Also, root length decreases for both domestic and hybrid sorts from the concentration of 50 µM onwards. Root measurements Table 2 presents the descriptive analysis of domestic and hybrid root lengths. The p-value (T<=t) for both the one-tailed and two-tailed tests is greater than 0.05, which represents a non-significant difference. Fig. 3 shows that, for the first control group (0 Cd), the values at 0 min and 2 min were 0.036 for the domestic sort and 0.051 for the hybrid sort. Since no enzyme (POX) is present, the reaction is not catalyzed. The activity of the POX enzyme is larger in the hybrid sort, whereas in the domestic sort it is 2 to 3 times smaller. For both sorts, POX activity decreases over time. In the domestic sort, POX activity at both 0 min and 2 min is smallest for the second control group and is greatest at the concentration of 200 µM at 0 min and 100 µM at 2 min, after which POX activity is increasing. For the hybrid sort, POX activity is smallest at 500 µM for 0 min and 200 µM for 2 min, and is largest at 100 µM for both 0 min and 2 min. Table 3 presents the descriptive analysis of POX activity for the domestic and hybrid sorts. The p-value (T<=t) for both the one-tailed and two-tailed tests is less than 0.05, which represents a significant difference. Figure 4. SOD activity in domestic and hybrid sorts In Fig. 4 we see that, for the control group, the absorption for the domestic sort was 0.44 and 0.47 at 0 min and 2 min, respectively, while for the hybrid sort the absorption was 0.51 at both 0 min and 2 min, which is a mistake due to wrong pipetting. This absorption was the smallest for both the domestic and hybrid sorts. In the domestic sort the SOD levels fluctuate, rising from the control to 50 µM, decreasing from 50 µM to 100 µM, rising again to 200 µM, and finally decreasing from 200 µM. In the hybrids, SOD levels increase up to the concentration of 50 µM, after which the levels decrease. Table 4 presents the descriptive analysis of SOD activity for the domestic and hybrid sorts. The p-value (T<=t) for both the one-tailed and two-tailed tests is less than 0.05, which represents a significant difference. Figure 5. PPO activity in domestic and hybrid sorts In the first control group the absorption for both domestic and hybrid sorts at 0 min and 2 min was 0.05 (±0.005). The highest values for the domestic sorts were at the concentration of 100 µM, and at 50 µM for the hybrid sorts, after which the absorption decreases, for both 0 min and 2 min. The lowest values, omitting control groups, were at the concentration of 500 µM. Table 5
presents the descriptive analysis of PPO activity for the domestic and hybrid sorts. The p-value (T<=t) for both the one-tailed and two-tailed tests is less than 0.05, which represents a significant difference. Figure 6. CAT activity in domestic and hybrid sorts In the first control group the absorption for both domestic and hybrid sorts at 0 min and 2 min was 3.45 (±0.005). The CAT activity did not change through time and remained almost the same with the change of Cd concentration for the domestic sort. For the hybrid sort, the absorption decreased through time, indicating more CAT utilization, but remained relatively stable across the concentrations. Furthermore, in the hybrid, CAT levels were slightly increasing at 0 min and slightly decreasing at 2 min. Table 6 presents the descriptive analysis of CAT activity for the domestic and hybrid sorts. The p-value (T<=t) for both the one-tailed and two-tailed tests is less than 0.05, which represents a significant difference. Figure 7. APX activity in domestic and hybrid sorts The values for the first control groups were 2.53 and 2.65 at both 0 min and 2 min for the domestic and hybrid sorts, respectively. The amount of APX increases through time for both domestic and hybrid sorts. The amount of APX across the different concentrations fluctuates. In the domestic sort, the highest values of APX were at 100 µM at both 0 min and 2 min, after which the amount of APX decreases. In the hybrid sort, the highest values were at the control group and at 100 µM, at 2 min and 0 min, respectively. After the concentration of 100 µM, the amount of APX also decreases, but rises again after 200 µM. Table 7 presents the descriptive analysis of APX activity for the domestic and hybrid sorts. The p-value (T<=t) for both the one-tailed and two-tailed tests is greater than 0.05, which represents a non-significant difference. Discussion The aim of this study was to examine the enzyme activity of domestic and hybrid sorts of Brassica oleracea var. acephala under cadmium stress. The damage can be repaired by antioxidants, which can be non-enzymatic and enzymatic. The presence of heavy metals causes H2O2 production, which will increase the level of oxidative enzymes [25]. Since the roots are in direct contact with the heavy metal, they are affected more than other parts [38]. This study showed that the root length was highest for the concentration of 50 µM, followed by 100 µM, 0 µM (control), 200 µM, and 500 µM [38]. In the study conducted by Meng et al. (2009), the root length was higher at small concentrations (10 µM of Cd), which positively affected it [53]. According to them, the Cd was transported to the shoots, which can explain this phenomenon. Also, a similar study explained this phenomenon as a disruption of homeostasis, which leads to an overcompensation response [38]. The CAT activity for domestic kale was rising until the concentration of 100 µM, at both 0 min and 2 min, after which it started to decrease. These results are consistent with the study of Taylor et al. (2013) [54], which showed that the CAT levels were rising until the concentration of 100 µM CdCl2 in Brassica juncea. However, for the hybrid sort, the CAT levels decreased after the concentrations of 200 µM and 50 µM at 0 min and 2 min, respectively. The levels of SOD in our study were rising, decreasing, rising, and decreasing in the domestic sort, while in the hybrid they were rising and then decreasing. For POX, the levels were rising and then dropping after the concentration of 100 µM for the domestic sort, and for the hybrid they were rising and then dropping after the concentration of 50 µM.
In the study by Zhang et al. (2019), conducted on Brassica juncea L. (Indian mustard) and Medicago sativa L. (alfalfa), the SOD levels were increasing until the concentration of 150 mg/kg of Cd, after which the levels of that enzyme were dropping until the final concentration of 600 mg/kg [55]. For POX, the levels dropped until the concentration of 75 mg/kg of Cd. In that study, the CAT activity was also examined, and its levels were rising until 150 mg/kg and then dropping. According to them, the rising and dropping levels of the SOD and POD enzymes are a usual defense mechanism that plants have against stress caused by Cd. The CAT levels also remained relatively stable for different concentrations in Indian mustard, with levels decreasing from the concentration of 100 µM. The decreasing trend of CAT may be because CAT is the most sensitive to heavy metal stress: its activity is inhibited first, which results in the obstruction of H2O2 clearance, so that the superoxide can only be removed by the POD enzyme, and thus POD levels increase. But as the Cd levels increase, the accumulation of ROS results in membrane lipid peroxidation, which also increases, and thus POD finally decreases. The APX levels were fluctuating, but the final increase of APX was recorded after the concentration of 100 µM for the hybrid, and the final decrease after 100 µM for the domestic sort. In a study conducted by Mobin & Khan (2007) on the Varuna cultivar of Brassica juncea L., the APX levels were also increasing with increasing Cd [56]. However, there is a threshold, a sub-lethal concentration, which induces the maximum level of APX, after which the APX levels decrease. APX, together with SOD, is one of the key enzymes in H2O2 scavenging. The APX levels were higher in plants that are known as metal accumulators, such as Brassica juncea, Sedum alfredii, and Triticum aestivum, as the upregulation of APX increases the ability to overcome Cd stress [57]. Furthermore, Armas et al. (2015) stated that in B. juncea, APX and CAT are not competing, but rather are different enzyme classes that scavenge H2O2: APX for fine ROS modulation, and CAT for the excess ROS that accumulates during stress [58]. For the PPO enzyme, this study showed that its levels decreased from the concentrations of 50 µM and 100 µM for the hybrid and domestic sorts, respectively. However, in a study conducted by Kapoor et al. (2014) on Brassica juncea, the PPO levels were fluctuating, as they were rising, decreasing, and rising again, with the highest levels at the concentration of 600 µM of Cd [59]. Finally, our study showed that out of the 5 enzymes, taking into account the samples at 0 min and 2 min, PPO and SOD showed more activity in the domestic sorts, as did POX at 0 min. Furthermore, considering both 0 min and 2 min, 50% of the enzymes showed a decrease at 100 µM and 30% showed a decrease from 200 µM for the domestic sorts, not considering CAT, whose levels were rising. But for the hybrid sorts, 66% showed decreasing enzyme levels from the concentration of 50 µM, and 16% showed decreasing levels from both 100 µM and 200 µM, not considering CAT and APX, whose levels were rising. The decrease in enzyme activity at higher concentrations can be explained by the fact that plants start to die at higher concentrations, as explained by Hayat et al. (2007), where the plants start to die at 200 µM and 250 µM.
Lastly, for 4 out of 5 (80%) of the enzymes there was a significant difference in enzyme activity between the domestic and hybrid sorts [60]. Conclusion Heavy metals that are present in nature can be accumulated by plants and cause various damaging effects on them. The presence of heavy metals causes oxidative stress. However, plants have certain mechanisms by which they cope with the different stresses that affect them, among them heavy metal stress. The most effective means of protection is the initiation of antioxidant proteins, both non-enzymatic and enzymatic. Our study compared domestic and hybrid sorts of Brassica oleracea var. acephala to examine which are more resistant to heavy metal stress by examining their enzyme activity. The conclusion drawn from this study is that 80% of the enzymes showed a significant difference between the domestic and hybrid sorts, meaning that the domestic sorts are more resistant to heavy metal stress. The domestic sorts are slightly more resistant compared to the hybrids, showing decreased enzyme activities at higher concentrations.
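The domestic-versus-hybrid comparisons reported above were judged with one- and two-tailed t-test p-values against a 0.05 threshold. A minimal sketch of such a comparison is given below; the arrays hold made-up numbers standing in for the measured activities, and the use of scipy's independent-samples t-test is an assumption about tooling rather than a description of the authors' actual software.

```python
import numpy as np
from scipy import stats

# Made-up example data standing in for, e.g., enzyme activities of the
# domestic and hybrid sorts at one CdCl2 concentration.
domestic = np.array([0.036, 0.041, 0.039, 0.044])
hybrid = np.array([0.051, 0.058, 0.049, 0.055])

t_stat, p_two_tailed = stats.ttest_ind(domestic, hybrid)
p_one_tailed = p_two_tailed / 2  # valid when the direction was specified in advance

alpha = 0.05
print(f"t = {t_stat:.3f}, two-tailed p = {p_two_tailed:.4f}, one-tailed p = {p_one_tailed:.4f}")
print("significant difference" if p_two_tailed < alpha else "no significant difference")
```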
v3-fos-license
2018-12-12T09:40:17.138Z
2014-02-01T00:00:00.000
125027388
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.per-central.org/items/perc/3709.pdf", "pdf_hash": "8793c7c4080eeef94b3faa512a602b3c278021d6", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43670", "s2fieldsofstudy": [ "Physics", "Education" ], "sha1": "8793c7c4080eeef94b3faa512a602b3c278021d6", "year": 2014 }
pes2o/s2orc
Cogenerative Physics Reform Through CMPLE We describe a physics teacher's successful pedagogical changes, which were based on the teacher’s attempts to match the physics learning environment with her students’ learning preferences. The pedagogical changes were observed during the teacher’s implementation of the Cogenerative Mediation Process for Learning Environments (CMPLE). CMPLE is a formative intervention designed to help students and instructors collaborate to improve their classroom environment through a combination of cogenerative dialogues and time allotted to work towards their collective goals. The teacher’s change in pedagogy resulted from her students' involvement in reforming their classroom. For this instrumental case study, we examined a veteran high school teacher's semester-long use of CMPLE in her Modeling Instruction classroom. Analysis of classroom videos and teacher interviews indicates that the teacher used CMPLE to adapt her pedagogy in complex ways, in order to balance her past experience and teaching values with her students’ desires to be taught in ways seemingly counter to her then-current methods. We will trace her teaching practices and her self-described awareness of her students’ prior experiences, to highlight notable changes concerning a particular cogenerative goal. INTRODUCTION Within the PER community, extensive advocacy efforts have been undertaken to develop and disseminate high quality research-based instructional materials to physics teachers [1].The produced materials have been curricular as well as pedagogical in their focus.Yet during implementation, instructors have a tendency to alter those materials for a variety of reasons.Studies suggest that in practice, science educators draw on, adapt, and change such materials "in ways that they view as useful for their students and that are consistent with their personal pedagogical beliefs, classroom climate, and students' past experiences [2]."Such findings seem reasonable to us, especially in light of our teaching experiences while acting as physics instructors, and at some point or another having to adjust our curriculum, pedagogy, or both, for reasons similar to those described above.Related research has in fact shown several affective and academic benefits of instructors using analyzed survey results of their students' experiences and learning preferences to inform their local reform efforts [3].However, instead of using surveys, our previous work has focused on designing an intervention that facilitates instructors in making these adaptations to their teaching for their specific and varied classroom contexts.A key feature of our intervention is a method for instructors to include students' input, active participation, and increased agency in developing, guiding, and implementing their reform efforts.This paper will describe some of the pedagogical changes made by a high school physics teacher while using our intervention, as well as her students' influence on those changes. 
CMPLE: A FRAMEWORK FOR COGENERATIVE REFORM The method we identified as most useful for instructors to include their students in classroom reform is a type of discussion known as "cogenerative" discussion. A cogenerative discussion (cogen) is an egalitarian discussion enacted between instructors and students in order to negotiate and produce goals for changing their teaching and learning practices [4]. Cogens have been successful for that purpose in a number of physics learning contexts [5]. Using cogens, instructors can gain understandings from their students about their learning preferences, opinions about the classroom environment, and past learning experiences. Likewise, cogens can function as an opportunity for students to understand instructors' pedagogical beliefs and intentions. Then, through negotiation, goals can be set to adjust the teaching and learning to best match these new understandings. Our designed intervention is called the Cogenerative Mediation Process for Learning Environments (CMPLE), because cogens are the centerpiece of its overarching and cyclical framework [6]. CMPLE entails participants reflecting on their learning preferences, describing their learning preferences and experiences to each other, using them to develop classroom goals (via periodic cogens), and a time period of working towards their goals. CMPLE is a "formative" intervention, and its design principles have been borrowed from the sociocultural field of Activity Theory. "Formative" interventions differ from their "linear" counterparts in that participants can 1) choose their own issues for the intervention to address, 2) negotiate the process of the intervention, and 3) generate "locally appropriate" solutions [7]. By comparison, designers of linear interventions dictate their contents and goals, and seek to control for the presumed variables prior to implementation. These labels are analogous to the observed categories of "emergent" and "prescribed" for reform change strategies in undergraduate science instruction [8]. Through the use of formative interventions, participants can develop agency. In the case of CMPLE, the agency of instructors can derive from their knowledge that their pedagogical changes are connected to their students' preferences and experiences; whereas students' agency can develop from their ability to influence how they are being taught. RESEARCH CONTEXT AND DESIGN This research is part of a larger instrumental case study [9], in which we are interested in how instructors' use of CMPLE facilitates their local cogenerative reform efforts. These data were collected in an honors physics course taught by a participating teacher, Dr. Lana Mendez (alias). At the time of the study (spring 2012 semester), Dr. Mendez had been teaching for 11 years at a private school, and had been using the research-based reform "Modeling Instruction in High School Physics" curriculum for the previous 2 ½ years. She told us that she chose to participate in this study and enact CMPLE with her 5th-period honors physics class because she had been looking for meaningful ways to give her students more "voice" in her classroom. CMPLE, she felt, provided her with a step-by-step framework for her to hear what her students had to say about her class, but also adjust her teaching methods to respond to their ideas. Our research question for this portion of the research is, "How did CMPLE influence Dr.
Mendez's pedagogical shift from providing teacher-led examples to providing subtle hints?"The first author gathered over 50 hours of classroom video, via two simultaneously recording cameras, and was not directly involved in any teaching and learning activities.Periodically Dr. Mendez was interviewed concerning her views on the progress of her CMPLE implementation (more than 6 hours audio were recorded).All audio interviews and videos of the two CMPLE cogens were transcribed.The interviews and video data were then used to track implementation of, and teacher reflection on the cogenerative goals.The following narrative will briefly describe how Dr. Mendez and her students negotiated, implemented, and subsequently modified one cogenerative goal through using CMPLE over the four month time period.We will answer our research question by providing evidence that pedagogical changes occurred as a result of the teacher following the CMPLE process.In addition, we will focus on the transcribed interviews with Dr. Mendez's to inform us (and report here) why she made those changes, and some effects they had on her understanding of her students' learning and her own teaching. COGEN #1: BUILDING NEW MODEL OF PRACTICE Dr. Mendez introduced CMPLE to her class when they returned from their winter break.She did so by assigning an open-ended homework reflection (as described in the CMPLE Users' Guide [10]) designed to elicit students' learning preferences.The next day of class (January 12 th ), she led a whole-class cogen in which she and her students compared their learning preferences with her then-current pedagogical practices.She began by asking students to describe their 1 or 2 most important learning preferences.A student "scribe" volunteered to help keep track of the preferences being discussed.During the cogen, several students voiced their preference to have more "one-onone" help from the teacher.Dr. Mendez responded that she would prefer to teach in a way that generally helps students as groups, rather than as individuals all the time.Through a brief round of negotiation, a goal was agreed upon for the teacher to lead (or have students lead) worksheet examples at the front board, before the students work problems on their own (or as groups).While setting this and the several other CMPLE goals, Dr. Mendez wrote them clearly on the large whiteboard at the front of the classroom. During the research interview immediately following this cogen, Lana revealed that she was "torn" concerning this new model of practice.She was concerned that as a result of her making this change, students would start "matching what I'm doing, as opposed to figuring out how to do it on their own."However, Dr. Mendez remained optimistic enough to enact this and all of the cogenerative changes upon which she and her students had agreed. COGEN #2: ADJUSTING NEW MODEL OF PRACTICE Within a week of the first cogen, Dr. Lana Mendez, the teacher, posted the agreed-upon CMPLE goals on printed sheets of paper on a wall in her classroom.Throughout the next three months, Lana was observed implementing instruction designed to address that goal (as well as several other goals developed during the first CMPLE cogen). 
During that time, video recordings and researcher observations showed her leading solutions to the first problems on the worksheets she handed out in class.Sometimes she stood in front of the class and wrote on the board, while other times she sat in the back of the class and asked for student volunteers.In both cases, she implemented that change as an interactive activity; utilizing such pedagogical techniques as Socratic questioning, drawing multiple representations, and requiring active student participation.Prior to the second CMPLE cogen, she informed the researcher that she felt confident her students would be satisfied with her implementation of that goal. On April 4 th , Lana introduced the second cogen by asking students to discuss their progress (as well as her progress) in achieving their goals.Regarding the goal of "teacher-led examples", some students surprisingly expressed the opinion that Lana "could do more teacher-led examples".Students also discussed how Lana only led the "easy" problems on the board, and requested that she increase the overall number of problems she led.Lana responded by expressing her concern that the students wanted to rely on her to "make connections" to their prior knowledge, for the harder problems.Since she was not willing to increase the amount of worked-out problems, she negotiated by asking the students, "What can we do in class to help you make that connection?"One student suggested, "What helps me is if you make little hint-offs".After further discussion, Dr. Mendez agreed to give students "hints on new concepts", in order to get them "to think along the right lines." In the research interview immediately following the second cogen, Lana referred to a similar tension in this issue, because "it defeats the purpose if I do too much."Nonetheless she was cautiously willing to change her pedagogy again, because she felt the cogen was fruitful for both her and her students' understanding of each other's positions, and the new "hints" goal was perhaps an improvement over the original goal. END OF YEAR: REFLECTING ON CMPLE Similar to her efforts to actualize the "teacher-led examples" goal from the first cogen, Dr. Mendez attempted to change her instruction in line with the new "hints" goal from the second cogen.In an interview shortly before the end of the semester, approximately 7 weeks after cogen #2, Dr. Mendez was asked about her progress of working toward the "hints" goal.In response, she explained that she occasionally gave verbal hints on problems she considered to be "well designed to target the misconception", but that tend to leave students "stuck".As an example, she described a recent class discussion about a problem from the Electric Fields unit, in which a field diagram is useful for constructing a solution.She explained that due to the wording of the question however, students instead drew force diagrams.As a result, the students were having difficulty solving the problem, and none had proceeded beyond that point.Dr. Mendez saw this as an opportunity to give students a hint, rather than working through the problem on the front board.She recalled that after she suggested students think of the physical situation in terms of a field, an excited student immediately responded, "Oh, the field lines are going to cancel!"Dr. Mendez informed us that student was "not even my strongest student", and was impressed because she didn't need to draw the field representation before responding.Dr. 
Mendez reflected that in general, her hints helped students feel "like they came up with it on their own," instead of waiting for her to come tell them the answer. Extending the theme of student agency in the classroom, Lana noted that, "…if [students] can feel they figured it out on their own, it's so good for their confidence." A student corroborated Dr. Mendez's perceptions during a class discussion less than an hour after the interview quoted above. Dr. Mendez initiated the discussion to get feedback from her students about the effects of the cogenerative pedagogical changes that took place during the semester. When Dr. Mendez asked a student named "Jenny" how she was affected by the implementation of the "giving hints" goal, the student first recalled times when she felt "not confident" in her problem solving when she "couldn't get past" a certain step. Yet, she further explained that Dr. Mendez's new hints "triggered something." Jenny cited a recent example when Dr. Mendez suggested she draw a force diagram for a particular problem. As a result, Jenny explained that she thought to herself at the time, "Oh, ok. I can do that!" During this end-of-year discussion, her newfound self-efficacy (confidence in her ability to solve physics problems [11]) was more important to Dr. Mendez than whether that particular problem was solved correctly. DISCUSSION The students' original preference to have more one-on-one attention from the teacher resulted in pedagogical changes, first to teacher-led examples (of varying difficulty), and then to the teacher giving hints. Those pedagogical changes were the teacher's attempts to address her students' preferences, prior experiences, and most importantly, their learning of physics. Throughout her time implementing CMPLE, Dr. Mendez continuously made efforts to listen and respond (verbally and through pedagogical changes) to her students' positive and negative criticisms of her course. We are aware that some of Dr. Mendez's changes described in this paper may not be considered aligned with commonly accepted physics reform teaching practices. However, we take the position that because some amount of local recontextualization of reform practices by teachers is to a degree inevitable [2], the reasons for those changes should be explicitly relevant to the classroom participants. We remain impressed with this teacher's ability to balance the goals and content of the Modeling Instruction curriculum with her students' (and her own) sometimes conflicting, but always evolving conceptions and experiences of teaching and learning physics. With regard to CMPLE's influence on Lana's pedagogical changes, we have verified that changes were made. Those changes (described throughout this paper) were introduced and/or adapted after each CMPLE cogenerative discussion. In the time between the cogens Dr. Mendez worked to actualize the agreed-upon changes. In an Activity Theory sense, her holistic change process is reminiscent of Engeström's "expansive cycle of learning", in which activities shift from abstract questioning to making actual changes in the real world [7]. Ultimately, instructors are responsible for implementing and facilitating changes to their classroom, no matter how cogenerative in nature those changes are. Dr.
Mendez explained that although she was previously interested in including students' opinions in her course structure, she knew of "no formal way of addressing the class and saying let's look at [your] preferences." She would have had to rely on students to take the initiative to tell her they would like to try something new, but "that almost never happens." CMPLE provided her with a framework for eliciting and pedagogically responding to those suggestions. The combination of using the CMPLE framework to work towards and experience cogenerative reform gave Dr. Mendez a deeper understanding of her students' teaching and learning perceptions, while giving students the opportunity to affect how they were taught. For future research, a more complete analysis of CMPLE in Dr. Mendez's classroom is being prepared for publication. For our implementation work we are considering CMPLE for use with in-service physics teacher professional development. Specifically, we are investigating and developing ways that CMPLE could be built into in-service professional development programs as a formative teaching assessment.
v3-fos-license
2022-11-23T16:18:38.778Z
2022-11-21T00:00:00.000
253791032
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/F05CFC25EB594A750B15C61BF6DE3281/S0034412522000713a.pdf/div-class-title-there-can-be-only-one-a-response-to-joseph-c-schmid-div.pdf", "pdf_hash": "135302dd1786d9ed7bfe4eaad7975a810719dc25", "pdf_src": "Cambridge", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43671", "s2fieldsofstudy": [ "Philosophy" ], "sha1": "123781c367ef15ad8538aa5407b26f36b67a97bc", "year": 2022 }
pes2o/s2orc
‘There can be only one’: A response to Joseph C. Schmid Abstract Recently, in response to an article of mine, Joseph C. Schmid has argued that some traditional theistic arguments for God's unicity are problematic in that they presuppose a controversial principle and conflict with Trinitarian theology. In this article, I answer Schmid's concerns. I defend one of the original arguments while advancing new ones, and I vindicate my abductive argument for theism over naturalism. differentiating feature between two Gods and, to make things worse, (iii) actually conflict with Trinitarianism. Schmid's response is both thoughtful and valuable: it provides opportunity for clarification and further discussion of arguments related to the gap problem, which is slowly gathering attention in the philosophy of religion. However, I don't think Schmid's criticisms succeed. In what follows I attempt to advance the debate by showing, first, that one of my original arguments, with some modifications, does work; second, that Schmid's parody argument against Trinitarianism is invalid; and third, that other arguments can be given without the above controversial principle, strengthening my case. I will also discuss the role this argument can play in the project of worldview comparison. Defending an argument for God's unicity God's unicity compromised? IoI As I explained in my previous article, the classical theist's picture of God is that of a purely actual reality, something which is pure being (esse) itself. From the nature of something which was thus, I wrote, it follows it would have to be unique: [S]uch a thing could not be multipliable, because it could not be subjected to any differentiating feature, as a genus (animal) is multiplied in its species (human) by the addition of a specific difference (rationality) or a species (human) in its individuals . . . by the addition of matter. There is nothing outside pure being that could act, with respect to it, as a differentiating feature, as the specific difference rationality is outside the genus animal or as matter is outside form, because 'outside' pure being there is only non-being, and non-being is nothing. So pure being could not be differentiated, as pure being, into multiple instances of itself . . . Hence, a purely actual reality that was pure being itself . . . would have to be unique. (Gel (2021), 3) 3 Schmid (2022, 6) helpfully formalizes said argument thus: (1) For there to be more than one thing that is pure esse, there would have to be some feature(s) that differentiate(s) each from the other(s). (2) But nothing that is pure esse could have such a differentiating feature. (3) So, there cannot be more than one thing that is pure esse. (1, 2) (4) But whatever is purely actual is pure esse.
(5) So, there cannot be more than one purely actual thing.(3, 4) Schmid's first complaint is that (1) essentially amounts to the controversial principle of the Identity of Indiscernibles (IoI), which is here just assumed without argument.IoI states that 'if x is distinct from y, then there is some feature that one has that the other lacks' (ibid.)inshort, that there cannot be two distinct indiscernible things.Given the controversial nature of IoI, anyone mounting an argument on it should be ready to give some argument for it -Schmid is right in pointing this out.Now, one way to advance the discussion here would be to forget IoI altogether and put forward other arguments for God's unicity that didn't depend on itand this I will do below.However, I don't think we need to abandon IoI that quickly.Though a full-blown defence of IoI far exceeds my purposes, 4 I would like to briefly sketch a reason in its favour, in order to show that the above argument does not stand on intolerably unreasonable ground.And the reason is this: I think that, without IoI, our ontology runs the risk of getting chaotically overcrowded very quicklyor at least the possibility of this should force us to remain agnostic as to the number of ordinary objects we encounter in everyday experience. For instance, I have one pencil on my desk.But if I allow it is possible that, were I to see one pencil, there are in fact two distinct indiscernible pencils, I'm not sure I can continue to be confident that there is only one pencil on my desk.Consider also that, presumably, if it is possible for there to be two distinct indiscernible objects, it is also possible for there to be three, four, ten, or a million of them.Hence, without IoI or some principle like IoI, we would constantly be in the dark as to how many objects we encounter in everyday experience. 5 Maybe someone would argue that, even without granting IoI, the rational thing to do is to assume there is only one pencil on my deskafter all, it is rational to assume that things are as they seem to me, and it seems to me that there is only one pencil on my desk.But I don't think this objection works.For, yes, it is rational to assume thus . . .unless I have a reason to think things might not appear to me the way they are.And I think denial of IoI gives us precisely such a reason. Consider a thought experiment.Mary is kidnapped by a mad philosopher and wakes up in a large room, chained to a wall.In front of her, she sees a nice little pine tree, and so, naturally forms the belief 'There is a pine tree in front of me.'But then, the kidnapper informs her that, before constructing the room, he flipped a coin to decide whether to plant one pine tree (heads) or more than one (tails)with the condition that, were the coin to turn up tails, he would plant the additional trees so perfectly aligned behind the first one that, from Mary's perspective, nobody could tell whether there was more than one tree or not.Assuming Mary trusts her kidnapper (she knows he is a Kantian and would not lie, for instance), it seems to me that the rational thing for her to do in this situation is to remain agnostic as to how many trees there are in the room.For all she knows, there might be only one, sure, but there could also be two, three, four, etc. Mary has now a reason for not taking at face value how things appear to her. 
6 I propose that the one who denies IoI finds himself in a parallel situation.He, like Mary, has a reason for not taking at face value how things appear to him.After all, one pencil will appear to him as only one pencilbut so would two distinct indiscernible pencils (and three, four, five, etc.).As the saying goes, if it looks like a duck, swims like a duck, and quacks like a duck . . .well, without IoI, maybe it is two ducks. One could say that we don't need the full-blown principle to avoid these (and other) 7 undesirable consequences.Maybe it suffices to take IoI as a sort of rule-of-thumb that admits of exceptions, and to restrict these to very rare occasions.Personally, I would want to know why IoI should admit of exceptions, and why these ones and not others.It seems to me that, in the absence of a plausible story as to how contained and limited these exceptions are (and why), the previous sceptical conclusions followfor we would be in the dark with respect to the situations in which application of IoI is warranted or not.Having said this, I am not entirely opposed to this rule-of-thumb approach.But then, I don't see either why the unicity argument would need more than a rule-of-thumb IoI.Sure, the argument would be stronger with a totally universal principle, but that a weaker one is conceded need not mean the argument is therefore without any merit.In the absence of any reason to think that beings of pure esse are not subject to IoI, the fact that no differentiating feature can be found between them should suffice to reasonably conclude that there can't be more than one. Finally, it seems to me there is a way to tweak the above unicity argument to make it depend on a principle of identity not of indiscernibles simpliciter, but of necessary indiscernibles (that is, entities which are necessarily indiscernible, indiscernible in every possible world). 8This would have the advantage of being truer to premise 2, which states that there could not be any differentiating feature between beings of pure esse.If there is a possible world w where two beings of pure esse are distinguished by a differentiating feature, then one of the two is not a being of pure esse in w.Hence, beings of pure esse are indiscernible across every possible worldthey are necessary indiscernibles. 9And while there may be some motivation to question the identity of indiscernibles, I can think of no reason to question the identity of necessary indiscernibles. What about Schmid's objection to IoI?After suggesting that 'the principal motivation behind IoI seems to be explicability', for if there are no differentiating features between two distinct objects, 'their individuation would seem to be primitive or brute', he writes: Why can't individuation or distinctness simply be primitive?In that case, there need not be some feature that grounds things' distinction. . . 
. Indeed, there seems to be a prima facie plausible argument that individuation or distinctness must ultimately be primitive. For we can equally ask: in virtue of what are those individuating features of x and y individuated? If they're not individuated by anything, then we have primitive individuation, which is precisely what IoI sought to avoid. If they have some further differentiating features, then we're off on a vicious regress. For we can further ask, of those features, in virtue of what are they individuated? And so on ad infinitum. It seems, then, that we must ultimately bottom out in primitive individuation. (Schmid (2022), 6) It's not clear to me, though, how this objection is supposed to work. Consider two distinct physical objects, a rectangular object and a circular object. They are differentiated (among other things) by the one having the feature of being rectangular and the other that of being circular (or, if preferred, by the one being rectangular and the other not). Is there any need to appeal to something else in virtue of which the feature 'being rectangular' is different from the feature 'being circular' (or 'not being rectangular')? It doesn't seem so: their difference appears to be self-evident or self-explicative. 10 Is this something the proponent of IoI seeks to avoid? Not really: what he seeks to avoid is diversity without discernibility. There is no indiscernibility between 'being rectangular' and 'being circular' (or 'not being rectangular'), but there would be between two objects that shared all and only all features in common. Additionally, even if it is true that we must accept primitive (understood in the sense of brute) individuation at some level, it doesn't follow that we need to accept it at all and any levels. In fact, we have just seen that there are compelling reasons against accepting primitive individuation for things or objects ('substances'), to which proponents of IoI usually restrict the principle. 11 Hence, it seems that a proponent of IoI could concede that we must ultimately bottom out in primitive individuation - only that we had better not have to do it with things or objects. And that's all the above argument for unicity needs. 12 So, IoI, though certainly controversial and in need of a more in-depth defence, is not without warrant. Having said this, I think Schmid's points can help make the unicity argument more modest, which need not be a bad thing. Insofar as one finds IoI plausible, to that measure one has reason to think that there could only be one being of pure esse - granting that there couldn't be any differentiating feature between two hypothetic beings of pure esse, something to which I now turn. Distinguishing beings of pure esse We have now dealt with Schmid's criticisms of (1). But what about premise 2, that there can be no differentiating feature between two hypothetic beings of pure esse? Schmid complains that the justification given for (2) is sketchy at best, since it is unclear what 'outside' means in the context of the argument: 'It certainly can't mean 'distinct from', since there most definitely are things distinct from pure being. But if it doesn't mean distinction, I struggle to see what it could mean' (ibid., 7). This is fair enough, 13 and I think a better and more straightforward justification for (2) can be given, following Edward Feser (2017), 121-122.
Under classical theism, God just is pure being itself -Aquinas's Ipsum Esse Subsistens.But if there were two Gods, two beings of pure esse, they would have to be distinguished by some differentiating feature (premise 1).However, if pure being A was distinguished from pure being B by having a feature F which B lacked, it would cease to be true that A just is pure being itselfinstead, A would be being plus feature F. Add anything to A in order to distinguish it from B -A stops being something which just is pure being itself.Alternatively, being pure esse, both A and B are supposed to possess the fullness of being.But if A possesses a feature F which B lacks, then either A has the fullness of being and something else, which doesn't make sense, or B does not possess the fullness of being, in lacking F. Either way, one of the two stops being pure esse. Consider further that feature F would have to be either an essential property of A (something which flowed from A's nature) or an accidental property A could have or not.But F could not be an essential property of A, since in such a case B would exhibit F as well.A and B, after all, are supposed to be two distinct beings with a shared nature, that of something which just is existence itselfotherwise, it is not the God of classical theism which we are multiplying.Hence, if F flowed from A's essence, it would also flow from B's essence.But neither could F be an accidental property of A, for then A would stop being something which just is existence itself, as was said above.So, nothing that was pure esse could have a feature that differentiated it from another being of pure esse.Thus, now (2) seems to be justified and we are in a better position to deal with Schmid's other objections. Schmid's second complaint against premise 2 is that there seem to be plausible candidates for features that differentiate among beings of pure esse.He writes, Consider, first, that most Thomistic classical theists think that being pure esse is compatible with being Trinitarian (i.e.existing as three persons).But if that's so, surely being pure esse is also compatible with being (say) Unitarian (i.e.existing as one person).It is not as though Jews and Muslims are prevented from affirming the traditional [Doctrine of Divine Simplicity] (and, with it, God's being identical to his existence) by dint of their Unitarianism.It would also seem intolerably ad hoc and inexplicable if Trinitarianism but not Unitarianism (or Binitarianism, or etc.) 
was compatible with God's being pure esse. If all this is correct, then we have on our hands a clear candidate for a differentiating feature among purely actual beings of pure esse: the number of persons in which they exist. In principle, one being of pure esse could be Unitarian; another could be Binitarian; still another could be Trinitarian; and so on. (Schmid (2022), 7) Admittedly, Schmid does not claim that these are 'genuine metaphysical possibilities', only that 'the argument that there cannot in principle be something that differentiates beings of pure esse fails' (ibid.). The idea seems to be that it is the theist who has the onus to prove that the number of persons can't be a differentiating feature between beings of pure esse - say, because it is not metaphysically possible that said number be different. Until then, the number of persons could be, 'in principle', such a differentiating feature. Now, this is a fair criticism given the original unclear presentation of the argument. But given how I have just defended premise 2, it should be clear what is wrong with it. For the justification offered for (2) is completely general - the point is that any feature F which pure being A had and pure being B lacked would imply that A (or B) was not, after all, a being of pure esse, contrary to hypothesis. Hence, whatever the number of persons in the Godhead is, such a feature (if we can speak this way) will have to follow necessarily from God's nature as pure esse and not be something which could vary from one being of pure esse to another. And this, after all, is what almost every classical theist participant in this debate will claim. Also, it need not be ad hoc nor inexplicable - Unitarians will typically claim that it is impossible for there to be more than one person in the Godhead (Trinitarianism being incompatible, for instance, with absolute divine simplicity); Trinitarians, that it is metaphysically necessary for God to be three persons. 14 (I know of no Binitarian, or etc.). This prevents no-one (Jew, Christian, or Muslim) from affirming the key tenets of classical theism - it just means that one party in the debate is mistaken about what is or is not compatible with God's being pure esse. Let's now address Schmid's last objection to premise 2. Schmid asks us to consider the distinction between being identical to one's own act of existence and being identical to existence simpliciter or existence as such. Thomistic metaphysics already admits that there are (roughly speaking) different acts of existence. My act of existence, for instance, is not the same as God's act of existence . . . God, then, is identical not to the existence of you or me or trees; he is identical to his own act of existence. But in that case, it's not clear why there cannot be two things which are identical to their acts of existence. They could presumably each be identical to their own respective acts of existence, which are different from one another. (Schmid (2022), 7) 15 I don't think, though, that this will work. In Thomistic metaphysics, my act of existence is different from yours (or from Fido's) because I am different from you (or from Fido). It is not, so to speak, that there is something in my act of existence that makes it different from yours or Fido's act of existence, but that our acts of existence are rendered different because they actualize something other - namely, different substances or essences (Wippel (2000), 151-152, 187-190), taking 'essence' technically as 'the matter-form composite itself' (Kerr (2015), 41).
But now take a being A whose essence is identical to its act of existence. What is the 'content' of A's essence? What does A's essence consist in? Simply, A's essence is to be, A's essence just is existence. What this means is that, pace Schmid, there is no real distinction between being identical to one's own act of existence and being identical to existence simpliciter or existence as such. And hence, to ask whether there could be two beings, A and B, each of which was identical to its own act of existence is not really anything different from asking whether there could be two beings, A and B, who just were existence or being itself. And we have already argued that this cannot be the case. Hence this last objection fails as well. Trinitarian trouble? I have now given a clearer defence of premise 2 and shown why Schmid's defeaters fail. Assuming (1) is true, does the Trinitarian need to worry? Schmid thinks yes. For anyone who accepts the above argument for God's unicity, he argues, should also accept the following parody argument against Trinitarianism (ibid.): (6) For there to be more than one divine person that is pure esse, there would have to be some feature that differentiates each from the other(s). (7) But nothing that is pure esse could have such a differentiating feature. (8) So, there cannot be more than one divine person that is pure esse. (6, 7) (9) Anything divine is pure esse. (Classical theism) (10) Any divine person is divine. (11) So, any divine person is pure esse. (9, 10) (12) So, there cannot be more than one divine person. (8, 11) Of course, if a sound argument for God's unicity is incompatible with Trinitarianism, so much the worse for the Trinitarian! That need not affect my overall case that theism has an advantage over Oppy's naturalism - and to be fair, Schmid is not claiming that it should. But does the Trinitarian really need to worry? I don't think so. For Schmid's parody argument, I contend, is invalid under a traditional account of the Trinity - one which Christian classical theists will often espouse. And hence, acceptance of the unicity argument does not force acceptance of Schmid's parody argument. To see why, let's get clear on some background claims. The doctrine of the Trinity states that there is only one God who is three persons: Father, Son, and Holy Spirit. Under the traditional account of the Trinity I want to present, the three divine persons are subsistent relations within the Godhead, so that each of the persons is identical to one and the same God but really distinct from the other persons. 16 The Father is God, the Son is God, the Spirit is God - but the Father is not the Son nor the Spirit, the Son is not the Father nor the Spirit and the Spirit is not the Father nor the Son. This usually invites the retort that, if each person is truly identical to one and the same God, then it follows that they should all be identical between themselves, which conflicts with Trinitarianism (see Cartwright (1987)). One common solution to this problem that will help us advance our purposes here consists in pointing out that the objection equivocates on two distinct notions of identity - identity in being and identity in person.
17 For the premises to be true to Trinitarianism, they must be understood in the first sense of identity (both the Father and the Son are identical in being to the one and only God), but for the conclusion to conflict with Trinitarianism, it must be understood in the second (the Father being the same identical person as the Son). But such a conclusion simply does not follow from the premises as understood above - all that follows from them is that the Father is identical to the Son in being, which is precisely what traditional Trinitarianism claims! The divine persons are the same one being, but they are distinct persons/subsistent relations within the same one being. In the words of Gilles Emery, The Son is 'an other' (alius) from the Father, but he is not 'something else', and the Holy Spirit is 'an other' from the Father and the Son without being 'something else' than the Father and the Son are. . . . The alterity of the Father, the Son, and the Holy Spirit is . . . an alterity of persons based on a relation-distinction, but not an alterity of essence, nature, or substance. (Emery (2007), 133) 18 Now, the Father, Son, and Spirit being identical in being, each of them simply is the one same God. How is it, then, that the three persons are distinguished from one another? By way of what's called their relations of origin - the Father is the unoriginated origin, the Son is generated from the Father, and the Spirit proceeds ('spirates', in technical terminology) from the Father and the Son. 19 And hence, '[e]ach [divine] person has a unique proper characteristic' (Pawl (2020), 106) that grounds their distinction - paternity for the Father, filiation for the Son, and spiration for the Spirit. The Father is not the Son nor the Spirit, for he proceeds from no-one and is the origin of the Son and the Spirit; the Son is not the Father nor the Spirit for he is generated from the Father and contributes to the procession of the Spirit, and so on (Leftow (2004), 315; Pawl (2020), 105). Thus, the divine persons are subsistent relations in God that are distinguished because of their mutual or relative opposition - that is, because they do not relate to each other in the same way. Each one is the one God (each one has the one and only divine nature), but in a distinct relational way: the Son has the same divine nature of the Father, but in a filial way, as one who receives it from the Father; etc. (White (2022a), 445-447). Now, what this all amounts to is the claim that the one and only being or substance which is God admits of ad intra differentiation or distinction by way of internal immanent processions - that the one and only divine nature subsists in three personal modes which are relationally distinct according to an order of derivation (White (2022a), 409-424). And this is what will allow us to see the equivocation in Schmid's parody argument. For now we can distinguish, for lack of a better terminology, between ad intra differentiation and ad extra differentiation.
20While the argument for God's unicity denies the possibility of any ad extra differentiating feature between two distinct beings of pure esse, it remains silent about the possibility of ad intra differentiation between subsistent relations or persons within the same one being of pure esse.For all the argument is committed to, this may or may not be possible.So, with this in mind, let's recover the first half of Schmid's parody argument: (6) For there to be more than one divine person that is pure esse, there would have to be some feature that differentiates each from the other(s).( 7) But nothing that is pure esse could have such a differentiating feature.(8) So, there cannot be more than one divine person that is pure esse.Now, the conclusion is somewhat ambiguous and admits of two possible readings.For (8) to really conflict with Trinitarianism, it must be interpreted as (8a) There cannot be more than one divine person that is the same one being of pure esse. If, instead, we were to interpret it as (8b) There cannot be more than one divine person that is, each, a different being of pure esse, this will certainly make Tritheists object, but no traditional Trinitarian will complain.So, for this really to constitute an argument against Trinitarianism, (6) and ( 7) must establish (8a).But the same ambiguity is present in the way Schmid phrases the premises.For, again, (6) can be understood either as (6a) For there to be more than one divine person that is the same one being of pure esse, there would have to be some feature that differentiates each from the other(s), in which case it will be true for the Trinitarian (understanding the idea of a differentiating feature in a broad enough sense), for it refers to the ad intra differentiation that takes place within the Godhead, due to the distinct relations of origin between the divine persons. 21Or we can understand (6) as (6b) For there to be more than one divine person that is, each, a different being of pure esse, there would have to be some feature that differentiates each from the other(s), in which case it is also true, but not what the traditional Trinitarian has in mind when saying that there is a Trinity of divine persons.Likewise, (7) can be understood either as (7a) Nothing that is a being of pure esse can have a feature that distinguished it from another that was the same one being of pure esse (for short: Nothing that is pure esse can admit of ad intra differentiation), in which case such a premise is nowhere to be found in the unicity argument, explicit or implicit.Or we can understand (7) as (7b) Nothing that is a being of pure esse can have a feature that distinguished it from another being of pure esse (for short: Nothing that is pure esse can admit of ad extra differentiation), in which case it is true and part of the unicity argument.But then, we find that there is in Schmid's argument an equivocation that makes the inference to (8a) invalidan equivocation, precisely, between the ad intra differentiation of the persons within the same one being of pure esse and the ad extra differentiation between two hypothetical beings of pure esse.For (6) to be true to Trinitarianism, it must be understood in the sense of ad intra differentiation, as (6a)but for (7) to be true to the unicity argument, it must be understood in the sense of ad extra differentiation, as (7b).Hence, if we are speaking of ad intra differentiation, then ( 6) is true but ( 7) is false or unjustified, and (8a) does not follow. 
22 And if we are speaking of ad extra differentiation, both (6) and (7) are true, but (8a) still does not follow - what follows is (8b), something which no traditional Trinitarian denies. At this point, could someone claim the problem to be that any justification for (7b) will inevitably carry over to (7a), creating a bridge between the unicity argument and the parody argument? Might one say, for instance, that if the Son has his proper characteristic (filiation) in distinction to the Father, then the Son can't be the same being of pure esse as the Father, but being plus filiation? Not really, not without misconstruing traditional Trinitarianism altogether. For the idea is that each person's proper characteristic is not something extra that gets 'added on' to the person or to the divine nature, like an accident to a substance. Given divine simplicity, there are no accidents in God and everything that is in God is God's own substance. And so, the persons are relative in all that they are, that is, the Father just is his paternity, the Son just is his filiation, and paternity and filiation just are, in turn, the one divine nature, despite being relationally distinct from one another (White (2022a), 431-434 and 448-449). 23 Thus, the argument for unicity defended above is not incompatible with a traditional account of the Trinity. Traditional Trinitarians need not worry about Schmid's parody argument. More arguments for God's unicity but no more 'IoI-ing' I have now defended one of the unicity arguments from Schmid's objections. However, the controversial nature of IoI haunts it, and so it would be nice for my overall case if there were other arguments for God's unicity that did not depend upon IoI and that could appeal to someone who denied it. Are there any such arguments? I will explore two. 24 From simplicity to unicity In Summa Theologiae, I, q. 11, a. 3, Aquinas gives three arguments to the effect that God is one. Our interest here is in the first one, an argument from simplicity. According to classical theism, God is absolutely simple, composed of no parts whatsoever. There is in God no composition of essence and existence, form and matter, substance and accidents and, for our purposes now, nature and subject, essence and individual. This means that God is identical to his Deity - or as Schmid himself puts it, 'God is God's essence' (Schmid (2022), 1). But then, reasons Aquinas, there can be only one God. Why? Because, in God, that which makes him God is identical to that which makes him this God. Deity, then, can't be shared between multiple individuals, as humanity can - whatever is God (whatever has Deity) will, by that same token, be this God, the same one God. 25 Consider for comparison that if Socrates was identical to humanity, there could only be one human being - Socrates. If Socrates is identical to humanity and Plato is not the same being as Socrates, then it follows that Plato can't be human. Likewise, if this God is identical to Deity and X is not the same being as this God, it also follows that X can't be divine. 26 Again, given divine simplicity, only that which is identical to this God can be divine. In other words, Deity is haecceity, and hence, when it comes to God, 'There can be only one.'
Note how this argument does not depend on the truth of IoI.Even if there could be, in general, two distinct indiscernible objects, the point is that, in God's case, we could be certain that such a thing could not take place.There could not be two distinct indiscernible Gods, nor two distinct discernible ones, because given divine simplicity Deity is not an essence that can be shared by multiple individual substances.Hence, whatever is distinct from this God will be anything except another God. From perfection to unicity The second argument follows Brian Leftow (2012) and goes from perfection to unicity.In doing so it will have the advantage of being neutral between classical and non-classical theism. 27The crux of the argument is that, plausibly, unicity is a perfection, or else follows from something which, also plausibly, is a perfection.And so, a perfect being (God) would have to be unique.Apart from direct intuition that unicity is a perfection, there are several indirect paths we could take to arrive at the same conclusion. First, consider that F is a perfection if it is 'objectively and intrinsically such that something F is more worthy of respect, admiration, honor, or awe than something not F, ceteris paribus' (Leftow (2012), 178).But it seems that something unique is more worthy of respect, admiration, honour, or awe than something not unique.Hence, being unique seems to be a perfection.But there does not appear to be any incompatibility with being unique and other properties a perfect being ought to have.Hence, we can say that, plausibly, a perfect being would be unique. Consider now that a perfect being would plausibly possess supreme or absolute value.But something is more valuable in the same measure as it is more uniqueor at least that seems reasonable enough and congruent with how we measure value.Hence, a perfect being would plausibly be unique. Consider also that it seems to follow from the notion of a perfect being that it could not have a superior, that nothing could be greater in perfection than it.But there is also a case to be made that 'there cannot be something wholly distinct from [God] and as great as He is' (ibid., 207)that is, that a perfect being could not have an equal.Indeed, it seems greater to be unmatched in perfection than not to be.As Leftow puts it, '[i]t would be greater to be intrinsically such as to be the greatest possible being among commensurable rivals than not to be.No constellation of attributes could confer more perfection than one that made one thus greatest' (ibid.).Hence, it seems to follow once more that a perfect being would plausibly be uniqueit would have no superior and no equal. 
Finally, consider what Leftow calls the GSA-property (short for 'God, Source of All'): x has the GSA-property if, for any concrete substance wholly distinct from x, x and only x makes 'the creating-ex-nihilo sort of causal contribution' to its continued existence (ibid., 21). As Leftow argues, the GSA-property is either a perfection or a constituent of other perfections. Why think this? First, consider that '[b]eing a potential ultimate source of some proportion of what benefits things is a good property to have' (ibid., 22). But being the ultimate source of all that benefits things would be the maximal degree of this good property, and hence, given that 'a property is a perfection iff it is the maximal degree of a degreed good attribute to have' (ibid.), being the ultimate source of all that benefits things is a perfection. Now, such a perfection supervenes on the GSA-property - and so, either the GSA-property, by a plausible supervenience principle, is itself a perfection or it is a necessary condition of a perfection. In either case, a perfect being will have the GSA-property. Consider also that the GSA-property, together with the ability to freely exercise one's own power, constitutes the property of having complete control over all other concrete objects. But '[i]t is good to have power over other things' existence . . . Power over existence is degreed. Complete power over all other concrete things' existence is its maximum, and so plausibly a perfection' (ibid.). In this case, the GSA-property is a constituent of another perfection, and so a perfect being would have the GSA-property. But it seems clear that there could only be one being which had the GSA-property. For suppose there are two distinct gods, Alpha and Omega, which both have the GSA-property. Because of that, Alpha and Omega would simultaneously be causally dependent on each other, which is viciously circular - Alpha will be creating Omega only insofar as Omega will be creating Alpha, but Omega will be creating Alpha only insofar as Alpha will be creating Omega. So, at most only one thing can have the GSA-property (ibid., 192-193). But if a perfect being would plausibly have the GSA-property, it follows that there could only be one perfect being. Again, none of these arguments from perfection to unicity relies on IoI. Even if IoI is false and we can have two distinct indiscernible beings, we still could not have two distinct perfect beings, indiscernible or not, for the reasons given. Sure, the arguments are far from being apodictic proofs. As Leftow himself acknowledges (ibid., 12), perfect-being arguments rely on intuitions about perfections, and our intuitions are fallible. Because of this I have explored several routes to support the same conclusion (and maybe more could be added), so that the argument has more force. Even so, modesty in argumentation need not be a bad thing. Insofar as someone finds these intuitions plausible, to that measure he has reason to think that there could not be more than one perfect being.
Does this reasoning conflict with Trinitarianism?If unicity is a perfection that any perfect being ought to have, some will say, then for a divine person to really be divine (and hence, perfect) it would also have to be unique.And so, the same intuitions would support the conclusion that there can only be one divine person.But at least the traditional account of the Trinity presented above can easily deal with this objection.The ad intra differentiation that takes place within God does not make it so that now we have more than one perfect being, and each divine person is still perfect in being identical to one and the same perfect substance, God.Also, further considerations about perfection could support the case that the one and only perfect being should be, internally speaking, more than one person (see, again, Sijuwade ( 2021)). Can these arguments be of use to the naturalist? Let's recapitulate.In my original article I argued that theism has an advantage over Oppy's naturalism as a theory of the First Cause because theism can answer how many first causes or fundamental entities there are and why.This throws additional light onto the First Cause, shaving off one brute fact to which Oppy's naturalism, as it stands, seems committed or unable to eliminate.Adopting the theist's hypothesis for a First Cause, we get to understand something that, adopting Oppy's, seems condemned to remain unintelligible.And this, ceteris paribus, is a point in favour of theism vis-à-vis Oppy's naturalism. I have now defended one of my original arguments from Schmid's objections and put forward two more that do not depend on the controversial IoI.It seems to me, then, that the whole case is strengthened and poses a challenge to the naturalist.Can the naturalist appropriate the theist's unicity arguments and adapt them to a naturalistic First Cause?I briefly considered this question in my previous article (Gel (2021), 6 and 8), but it is worth pondering it once more. I think the answer is clearly 'No' with respect to the arguments that go from perfection to unicity.Surely, to accept that the First Cause is a perfect being would be to abandon naturalism, at least in any relevant sense of the word.Could the naturalist borrow from the other arguments, and say, for instance, that the First Cause is absolutely simple, purely actual, or pure esse but still a natural reality?Here, I want to say that it dependsit depends on whether the rest of the divine attributes follow from the nature of something which was so.Classical theists, old and new, typically claim that they do. 28However, further discussion is needed, given that 2nd-stage arguments (as they are sometimes called) tend to be ignored by those who do not concede the 1st-stage ones. 
Anyhow, I want to address some remarks of Schmid that are relevant here.In his article, Schmid takes issue with my suggestion that a purely actual reality would have to be immaterial.Schmid claims that it is not at all clear that every material thing is both mutable and potential in many ways.He writes: Consider atemporal wavefunction monism.According to this view, there exists a fundamental, physical, non-spatiotemporal entity: the universal wavefunction.This is a perfectly respectable view that has seen a blossoming of interest in philosophy of physics.If we understand 'material' and 'physical' to be synonymous, then it simply follows that there are perfectly respectable views on which there is a fundamental or foundational, unchangeable, timeless, material thing.We can also suppose that (a) the fundamental layer of reality is necessary (as Gel himself supposes in his second argumentative path) and (b) the fundamental layer of reality is cross-world invariant.From all of this it simply follows that the fundamental atemporal wavefunction has no potencies for change, cross-world variance, or non-existence.We therefore seem to have a perfectly respectable naturalist view on which the foundation of reality is a material, unchangeable, purely actual thing.(Schmid (2022), 9-10) Surely, atemporal wavefunction monism is an interesting view in its own right.Still, as a hypothetical example of a purely actual material thing, in the Aristotelian-Thomistic sense of 'material' with which I was operating, it is bound to be incoherent.For a material thing, in Aristotelian-Thomistic philosophy, is that which has matter, and matter is that which persists through substantial change and is thus characterized as pure potentiality to receive any form (Feser (2019), 28-29).A purely actual material thing, then, in this sense of 'material', makes no senseit would have to be something which lacked all potentiality and still was potential in some way. Schmid's point here turns on the key phrase 'If we understand "material" and "physical" to be synonymous', but if this move allows for there to be a purely actual material thing, then Schmid needs to tell us what 'physical' means in this context and how it is opposed to 'immaterial' in the Aristotelian sense.For if it is not so opposed, we would simply be changing the subject, not speaking of material in the Aristotelian sense, but in another sense, material*.But then, a purely actual thing could both be necessarily immaterial in the Aristotelian sense and maybe also material in the material* sense.That does nothing to invalidate the classical theist's inference to the immateriality of the First Causeit is no more proof that there could be a purely actual material thing than saying that if we understand 'round' as synonymous with 'red', then there could be a round square. Is this advantage worth the price? Schmid argues repeatedly in his article that, even if classical theism has a simpler account of the First Cause than naturalism, naturalism is simpler tout court, when both are compared as overall theories, and that it is this that should primarily concern us when assessing theories according to their simplicity (Schmid (2022), 4). I have my doubts that this is entirely correct, but let's concede it for the sake of argument. 
29Let's assume also that I am right and there are sound unicity arguments such as those I have defended.Now, is the theoretical advantage of theism identified here worth the price of theism's added complexity?It is not easy to saythere is no straightforward equation when comparing gains in explanation and costs in simplicity.But it is important to remember that the advantage we have been discussing can be taken as 'an additional or supplementary reason to be weighted jointly with any other available evidence ' (Gel (2021), 8).Maybe this advantage, on its own, does little to tip the scales in favourof theism, but it can still play an interesting role in a more overarching cumulative case that ends up doing just that. Consider, for instance, that perfect-being theism can explain all or mostly all properties ascribed to God by appealing to just one basic propertyperfection.If the traditional arguments for deducing the divine attributes are correct, classical theism can do so too.But there is nothing comparable in naturalism, and no expectation that there will be (Leftow (2017), 330-332).That a being is perfect, or purely actual, or pure esse, also seems to make sense of why it is necessary (see, for instance, Byerly ( 2019)).But in naturalism, and especially in Oppy's naturalism, the fundamental natural entities are necessary and that's it, full-stop (see Oppy and Pearce (2022), 113).Putting all of this together, it seems that theism could have the tools to explain the number of what is most fundamental, its nature and its necessityand so, less and less is brute at the fundamental level in theism.Someone could add considerations from fine-tuning, beauty, and other arguments and the scales may begin to tip for him as more and more advantages in explanation are gained for the same price of some extra-ontology.And that seems to me a pretty good deal. In conclusion In my previous article, I argued that theism has an advantage over Oppy's naturalism in that theism can answer the double question of how many first causes there are and why, while Oppy's naturalism seems lost on both fronts.In this article, I have defended one of my original arguments for God's unicity from Schmid's objections and offered two more that don't rely on the controversial IoI principle, thereby strengthening my overall case.In addition, I have discussed whether the naturalist could appropriate the theist's first cause while remaining a naturalist and concluded that the prospects of such a move appear slim, though more work needs to be done on this front.Finally, I have considered the role this argument can play in a more overarching cumulative case for theism. 
While I have been critical of Schmid's arguments, I think he provided an engaging response and much needed push-back.His objections have allowed us to go a step further than beforeclarifying one of my original arguments, showing how it is no threat to the Trinitarian, and exploring additional arguments for God's unicity.If this article advances the discussion in any degree, as I hope it does, it is indeed to Schmid's credit.Notes 1.In my original article, I tested how the argument could go on two different paths, one in which causal finitism is granted and another in which a foundational layer of reality is granted.For simplicity's sake, throughout the article I will speak only of 'the First Cause', but this should be understood as referring either to a First Cause in the distant past-history of things or to a necessary Foundation that grounds the existence of everything else.Although at the time of writing said article I wasn't aware of this, a similar argument to mine was mentioned in passing in Leftow (2017), 329-330.2. Morbillion: the number of tickets sold by Morbius, which is (I'm told) one of the movies ever made.3. I gave, throughout my article, three more arguments for God's unicityfrom simplicity, omnipotence, and absolute perfection.Schmid's treatment of these arguments is also interesting and valuable but to keep things focused I will not engage with it on this occasion.Readers are advised to evaluate whether the responses I will lay out here can be used to vindicate these other arguments.4. For some particularly strong ones, see Vaught (1968), Bahlul (1988), andDella Rocca (2005).5.One may even be able to argue for a stronger conclusionthat, without IoI, I should be almost certain that there is more than one pencil where I only see one, to Ockham's despair.And this because there is only one way for there to be only one pencil, but infinite ways for there to be more than one pencilthere could be two distinct indiscernible pencils, three, four, five . . .But the more modest conclusion suffices for my purposes.6.The mad philosopher rejoices in mad philosophiness, for he is also a Cartesian and enjoys instilling doubt in people.7. Bahlul comments that denial of IoI leaves us a deeply divided world where 'the possibilities of interaction are severely limited by the fact that no asymmetric action can take place between indiscernible doubles' (Bahlul (1988), 413).8. 
See, for instance, Cross (2011).Such a principle will be immune to many purported counterexamples to IoI, such as that of Adams (1979), which turn on two distinct indiscernible objects being possibly discernible (discernible in some possible world).9.This would be reinforced if we brought to the table other commitments of classical theism, such as God's immutability and trans-world invariance, which classical theists argue follow from God being pure esse.10.I owe this example to Pat Flynn.11.Leibniz himself famously did so (Leibniz (2020), 14).One could not be faulted if tempted to abbreviate this Leibnizianly restricted principle as IoIlz.12.To bring the point home.Even if, ultimately, we must bottom out at primitive individuation, surely the less we have of it, the better.And that's what IoI affords us: to shave off primitive individuation when it comes to objects, which can be distinguished according to their respective properties or features.He who rejects IoI will have to deal with the same primitive individuation as the proponent of IoI and more at the level of objects.13.I was mainly relying on Gaven Kerr's presentation of the De Ente argument (see Kerr (2015)), but Kerr's formulation is more attentive than the one I gave.Where I said that 'outside' pure being there is only non-being, Kerr is careful to qualify that 'whatever is distinct from esse tantum is either (i) subject to esse tantum or (ii) nothing ' (ibid., 152-153).14.For a very interesting and innovative argument to this conclusion, see Sijuwade (2021).Aquinas's use of the psychological analogy also is aimed at supporting the intelligibility of the Trinitysee Summa Theologiae, I, q. 30, a. 2; Compendium Theologiae, I, qq.40-46; and also White (2022a), 409-424 andEmery (2007), 130-131.If one still wants to maintain that this would be ad hoc, the Trinitarian could concede so but claim it is not 'intolerably' ad hoc but justified in light of the authority of his religious tradition (see Tweedt (2022), 8).15.I omit Schmid's additional suggestion that these acts of existence 'could presumably be primitively distinct' (Schmid (2022), 7) because that trades on his objections to IoI, with which I have already dealt.16.See, for instance, Aquinas's treatment of the Trinity in Summa Theologiae, I, qq.27-43, excellently explored in White (2022a).For a contemporary relational account of the Trinity, see Koons (2018).Also, for the compatibility of this understanding of the Trinity and divine simplicity, see White (2016a), (2016b), (2022b) and Dolezal (2014).17.I am not necessarily endorsing this solution to the Logical Problem of the Trinity, but merely using it as an entry point into the doctrine.See Pawl (2020) for an illuminating discussion of the problem and some proposed solutions.Be that as it may, all that matters for our purposes now is just the following: that traditional Trinitarianism affirms that the Father, Son, and Holy Spirit are one in being (the same one being or substance) and three in person (three distinct persons).18.Also, '[r]elative opposition as to origin makes the relations [i.e., the persons] really distinct from one another, but each of them is really identical to the single divine essence or substance' (Emery (2007), 145).As Gregory of Nazianzus put it: '[N]either is the Son Father, for the Father is One, but He is what the Father is; nor is the Spirit Son, . . .but He is what the Son is' (quoted in White (2022a), 146; my italics). 
19.Or whatever the distinct mode of procession for the Spirit iswe need not settle the filioque controversy here.20.I will speak of ad intra and ad extra 'differentiation' to maintain uniformity with the expression 'differentiating feature', which I have been using throughout, following my original article and Schmid's response.But I shall make mine Aquinas's (nitpicky?)caveat, that when speaking specifically of differentiation between the divine persons (that is, of ad intra differentiation), 'differentiation' should be understood simply as 'distinction', to avoid the connotation of a diversity of substance (which the Trinitarian denies between the divine persons).See Aquinas, Summa Theologiae, I, q.31, a. 2 and Emery (2007), 134-135. 21.Alternatively, maybe the Trinitarian would want to deny the use of the expression 'differentiating feature' in the context of the distinction between the divine persons.In that case, the Trinitarian would consider (6a) to be false and deny application of IoI to the divine persons, on the basis that IoI should be restricted to substances and that the distinction of divine persons is not a distinction between different substances.Still, given that the divine persons are distinguished because of some difference or distinction in their relations of origin, the Trinitarian could endorse a Stronger IoI, such that for any distinct x and y (substances or not), there is in principle some intelligible difference between x and y.I owe this point to John DeRosa.22.Or ( 6) is false and ( 7) is true, if we follow the alternative path on note 21. 23.I thank Pat Flynn for discussing this point with me.Sure, someone might think this account of the Trinity is problematic for independent reasons, but that is not what is at issue here.Instead, what is at issue is whether this traditional account of the Trinity is compatible with the reasoning present in the unicity argument, and that I claim is the case, for the reasons given.Also, could someone try a reverse bridge, from not-(7a) to not-(7b)?If relations of origin allow for the Father and the Son to be distinct and, still, the same one being of pure esse, maybe relations of origin between two different beings of pure esse, A and B, would also allow for them to be distinct and, still, each a being of pure esse.But this won't work either, for this kind of ad extra origination would just be creation (A creating B, for instance), and no being of pure esse can be created.24.I think more could be added.According to Gaven Kerr (personal correspondence), neither Aquinas's De Ente argument for pure esse's unicity nor his presentation of it in Aquinas's Way to God rely on IoI nor do they appeal to any principle of difference.Instead, the argument is that, for something to be multiplied, it needs to be subject to something other which multiplies it (as form is multiplied in matter), but that pure esse cannot be so subject to anything (Kerr (2015), 18-30).This, though, I leave for another occasion.25.Doesn't Aquinas say that 'angels' (separated intellects) are also identical to their own essences?Despite answering in the affirmative in earlier texts, Aquinas's final position on this question appears to be 'No'.Assuming angels exist, they are (as all creatures) composites of essence and existence (esse).Hence, not everything in the angel is identical to its essence, and so the individual angel can't be identical to its essence eitherin fact, the angel is not identical to any of its components.Hence, only something which was absolutely 
simple, lacking all composition, could be identical to its own essence.See Aquinas, Compendium Theologiae, I, q. 15; Quodlibeta, II, q. 2, a. 2, and Wippel (2000), 238-253 for discussion of the relevant texts about this issue.26.I have been careful with my wording to make it clear that no incompatibility with Trinitarianism can be found here.The Father is identical to this God and the Son is not the Father, but the Son is still divine because, despite him not being the same person as the Father, it is false that the Son is not the same being as the Father (at least according to the traditional view of the Trinity I sketched above).Again, the point of this argument is that nothing ad extra of this God can be God, because God is his own essence.But divine simplicity implies that whatever is in God is the same one God.27.Both classical and non-classical theists can utilize the methods of perfect-being theologythey will just disagree as to whether simplicity, impassibility, etc. count as perfections or not.28.See, for instance, Aquinas, Summa Theologiae, I, qq.3-26 or Feser (2017), ch. 6. 29.There is a case to be made that what matters to simplicity is what a worldview takes to be basic or fundamental.See Schaffer (2015), Dougherty andGage (2015), 60-61, andOppy andPearce (2022), 64.This would be relevant, since Schmid (2022, 4-5) grants it is unclear whether theism or naturalism is simpler in this sense when it comes to qualitative, ideological, and theoretical simplicity, but concedes (ibid., 12-13 n. 9) that theism may be ahead when it comes to fundamental quantitative simplicity.Another problem I see is that Schmid relies on the idea that 'Oppy's entities are a proper subset of the classical theist's' (Schmid (2022), 4).But this does not seem true, since Oppy's ontology contains something which does not figure in the theist'san uncaused necessary initial physical state with a beginning.Also, while theism posits additional kinds Oppy does without (non-physical, unlimited, perfect), because of this the theist is able to give a more unified account of the kinds Oppy recognizes.For the theist, all that is physical falls under the kinds contingent and caused.For Oppy, some of what is physical falls under the kinds contingent and caused, but other physical things fall under the kinds necessary and uncaused.It seems that the denial of the additional theistic kinds comes at the price of additional naturalistic kinds (or subkinds).This appears to be a multiplication of overall complexity difficult to compare with that of the theist.
v3-fos-license
2020-06-04T09:12:28.461Z
2020-06-01T00:00:00.000
219314929
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-6643/12/6/1628/pdf", "pdf_hash": "a6dd12d265bebc7efc33d6fee9787d5a3644fff5", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43672", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "a9c522af6d0c34f82ad4b7cab7fd3976bff0a3e4", "year": 2020 }
pes2o/s2orc
Nutritional Composition and Microbial Communities of Two Non-alcoholic Traditional Fermented Beverages from Zambia: A Study of Mabisi and Munkoyo Traditional fermented foods and beverages are common in many countries, including Zambia. While the general (nutritional) benefits of fermented foods are widely recognised, the nutritional composition of most traditional fermented foods is unknown. Furthermore, fermentation is known to add nutritional value to raw materials, mainly by adding B-vitamins and removing anti-nutritional factors. In the case of traditional fermentation, the composition of microbial communities responsible for fermentation varies from producer to producer and this may also be true for the nutritional composition. Here, we characterized the nutrient profile and microbial community composition of two traditional fermented foods: milk-based Mabisi and cereal-based Munkoyo. We found that the two products are different with respect to their nutritional parameters and their microbial compositions. Mabisi was found to have higher nutritional values for crude protein, fat, and carbohydrates than Munkoyo. The microbial community composition was also different for the two products, while both communities were dominated by lactic acid bacteria. Our analyses showed that variations in nutritional composition, defined as the amount of consumption that would contribute to the estimated average requirement (EAR), might be explained by variations in microbial community composition. Consumption of Mabisi appeared to contribute more than Munkoyo to the EAR and its inclusion in food-based recommendations is warranted. Our results show the potential of traditional fermented foods such as Mabisi and Munkoyo to add value to current diets and suggests that variations in microbial composition between specific product samples can result in variations in nutritional composition. Introduction In many countries, locally processed traditional foods exist and these contribute to the diets of their consumers. Yet, for many of these products, the methods of preparation are not uniform and documented, their functional properties such as product composition, organoleptic characteristics and shelf life are unknown and the way these affect their nutritional composition has not been assessed. As a result, these local traditional foods are often not included in food-based dietary guidelines nor in estimations of how they can contribute to local food and nutrition security. Fermented foods and beverages that are produced using traditional fermentation processes are of special interest, since these foods are locally available and are a part of tradition. Like in other fermented foods, fermentation adds value to the raw materials used, resulting in a product with a prolonged shelf-life and stability and an increased sensory, and monetary value [1,2]. Furthermore, the activity of micro-organisms is known to add nutritional value to raw materials, for instance by the production of B-vitamins and the removal of anti-nutritional factors such as phytate. Removal of phytate increases the bioavailability of various micronutrients [3,4]. In milk-based fermented foods, the anti-nutritional factor lactose is converted into lactic acid during fermentation. Removing lactose has been linked to health benefits by reducing abdominal pain and diarrhoea in people with lactose intolerance [5][6][7]. As a result, the final nutritional and sensory properties of fermented products depend on their diverse microbial community. 
In turn, the composition of the microbial community to a large extent depends on the raw ingredients of each geographical region and traditional processing procedures [8,9]. Apart from increased nutritional contents compared to raw materials, fermented foods possess beneficial effects on human health, for example, through the modification of gut microbiota leading to a better immune response and the lowering of a person's risk of hypertension, diabetes, and high cholesterol [10]; the prevention and treatment of inflammatory bowel disease (IBD) [11]; and anti-carcinogenic and hypo-cholesterolemic effects [12]. In Zambia, various traditional non-alcoholic fermented foods exist that are consumed by all age groups. Of these, Mabisi and Munkoyo are commonly found in rural areas. Munkoyo is also found in some urban areas. Although consumption of both products is frequent and the product is an important part of the local diet [13], surprisingly, the nutritional composition has not been characterized. Mabisi is produced by fermenting raw cow's milk and Munkoyo is produced by fermenting maize porridge [13,14]. Mabisi is made by placing raw milk in a fermentation vessel and fermenting at ambient temperatures for 48 h, resulting in a mildly sour tasting product. Previous research has shown that variations in processing exist, which could lead to variations in product functionality in terms of microbial composition and sensory properties [13,14]. Processing most notably differs in the repeated additions (or not) of fresh milk, the level of shaking during the fermentation, and the levels of back-slopping (transfer of material from an old batch to a new batch, [14]). The traditional fermented food Munkoyo is made from maize flour that is mixed with water and boiled for several hours. After cooling, Rhynchosia roots are added to provide enzymes to degrade complex sugars and to provide a microbial inoculum for fermentation [15]. Fermentation can be done in a variety of vessels at ambient temperatures and takes around 48 h. Processing variations include the time allowed for cooking the maize porridge, the types of roots added, the fermentation vessel used, and the level of back-slopping [13]. For both traditional fermented food products, the microbial communities responsible for fermentation are dominated by four to ten species of lactic acid and acetic acid bacteria [13,14,16]. The exact composition varies between samples of the same product, and variations in processing, such as the containers used for processing, gives rise to further differentiation in microbial composition [13,14,17]. Since micro-organisms affect the nutrient composition and increase the nutritional value, variations in microbial composition may lead to variation in nutritional value of the final products. A previous study has compared microbial community structure of fermented microbes in the two types of traditional fermented foods Mabisi and Munkoyo [13]. While expecting a clear signature of raw materials used as a driver of the composition of the microbial community of fermenting microbes, these results were inconclusive due to uncontrolled factors such as geographical region, climatic conditions and processing variation. In the present study we analysed product samples of the traditional fermented foods Mabisi and Munkoyo that we collected from local producers in Zambia. 
We documented the nutritional composition of Mabisi and Munkoyo and their variations among products from different producers and profiled the microbial communities that are present at the end of fermentation. We expected to find variations in both nutritional composition and microbial community profiles between the different products and among samples taken of the same product. Finally, we assessed whether we could correlate variations in microbial communities to variations in nutritional content of the products. Our study thus provides unique data on the nutritional composition of two traditional fermented foods that could be part of the new food-based dietary guidelines currently in development for Zambia. Study Design This was a cross-sectional study focusing on the nutritional composition and microbial community composition of the traditional fermented foods Mabisi and Munkoyo. Samples were collected from the Mkushi area in Zambia (location coordinates 13.1339° S, 27.8493° E). This site was selected because of the tradition of making Mabisi and Munkoyo that has been maintained by the collection of people who have migrated from other parts of Zambia to live among the locals (Swaka people), and because Mabisi and Munkoyo are locally produced in this area. Samples were purchased from producers either at their homes or at the market where they were selling their products; 12 Mabisi and 13 Munkoyo samples were collected. Producers were selected on the basis of their location, their presence at the market in Mkushi and by their processing method for the production of Mabisi and Munkoyo. All Mabisi processors were selected to use the Tonga-type method of fermentation [14]. Tonga is a tribe traditionally located in the southern part of Zambia. This method is characterized by placing raw cow's milk into a container to allow fermentation for 48 h at an ambient temperature in unshaken containers without the addition of fresh milk during fermentation nor the removal of whey that occurs due to whey separation. All Munkoyo processors were selected to use the Kitwe method of fermentation [16,17]. This method is characterized by mixing maize meal with water and cooking this for 30 min to gelatinize the starch. After cooling, Rhynchosia roots are added and the mixture is placed in a fermentation vessel to allow fermentation for 48 h at an ambient temperature. For both traditional fermented foods, the vessels used for fermentation may vary. We recorded the type of fermentation vessel (such as plastic bottles or buckets, metal cans or calabashes) that the processors had used as this may have had an impact on the microbial composition [18]. Samples were collected in two duplicate sterile 50 mL centrifuge tubes with screw caps and were immediately placed in a cool box with ice packs after which they were stored in the freezer, one tube at −20 °C (for nutritional analysis) and the other tube at −80 °C (for microbial analysis). Samples were analysed at the Tropical Diseases Research Centre (TDRC) in Zambia for pH, B-vitamin and mineral (calcium, iron and zinc) concentrations and at the University of Zambia, School of Agricultural Sciences, Department of Food Sciences and Nutrition for proximate content (protein, fibre, water, fat, energy and carbohydrates). Samples were transported to Wageningen University in the Netherlands for whole bacterial genomic DNA extraction and sample preparation for 16S rRNA amplicon sequencing.
Measurement of Nutrient Value Our study assessed the levels of the main components of the products (including dry matter, carbohydrates, fats, protein, energy, fibre and ash), B-vitamins (B1, B2, B3, B6 and B12), and minerals (calcium, iron and zinc) using customary methods of analysis. Furthermore, the energy content for each sample was calculated. We chose to measure levels of selected minerals and B-vitamins since these are the most relevant considering the raw materials used in the production of Mabisi (milk) and Munkoyo (maize). We calculated the contribution of one adult portion size of 183 g [19] to reach the Estimated Average Requirement (EAR) for each of the nutritional components. Iron bioavailability was estimated at 5% and low bioavailability for zinc was applied. All EARs were based on WHO/FAO recommendations [20]; however, as no WHO/FAO EAR exists for protein, 80% of the Population Reference Intake of EFSA [21] was used for an average woman of 60 kg. Proximate Analysis Different proximate parameters in Munkoyo and Mabisi were determined using the methods of the Association of Official Analytical Chemists (AOAC) [22]. Briefly, crude fat was measured in samples using Soxhlet apparatus with hexane as the solvent in the AOAC procedure. Protein was determined using nitrogen content by the micro-Kjeldahl method, where the nitrogen value obtained for each sample was converted to crude protein by multiplying it with the 6.25 factor for Munkoyo and the 6.38 factor for Mabisi. Moisture and dry matter content were determined by weighing 2 g of sample onto a crucible, heating it to dryness in an oven at 110 °C for 2 h and calculating the weight difference. Crude fibre was determined in each sample after the removal of fat using successive digestion with 1.25% sulphuric acid and 1.25% sodium hydroxide solutions. Carbohydrate content was determined by a difference calculation method as follows: % Total Carbohydrate = [100 − % (Protein + Fibre + Ash + Fat + Moisture)]. All the proximate parameters were reported in AOAC, 2000 standard format as percentage. Atomic Absorption Spectrophotometry (AAS) The contents of calcium, iron and zinc were determined by dry ashing samples at 450 °C in a muffle furnace following the procedures described earlier [23]. After dissolution of the resulting ash in hydrochloric acid (HCl), the metal element contents in the solutions were determined by flame atomic absorption spectrophotometry (AAS) (AAnalyst-400, Perkin-Elmer Corp., Norwalk, CT, USA). Standards were prepared from stock standard solutions (1000 ppm) of zinc, iron and calcium to make a calibration curve for each element. Analyses were performed in duplicate. Quality Control (QC) To monitor performance and reproducibility of the analytical procedures used, we included quality control samples for each batch of samples on a daily basis. For calcium and iron we used Cobas™ control samples, while for zinc we used in-house QC samples with values previously determined using the Seronorm™ Trace Elements Serum Level 1 and 2. The means, standard deviations and coefficient of variation (CV%) for duplicate samples were calculated to ensure that the values were within the acceptable limits (10%). Analysis of B-Vitamins (B1, B2, B3, B6 and B12) by High Pressure Liquid Chromatography (HPLC) A previous method based on HPLC-UV analysis was used with a few modifications [24].
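As a minimal illustration of the two calculations described in this section (total carbohydrate by difference, and the contribution of one 183 g adult serving to the EAR), a short Python sketch follows; only the formulas come from the text, while the nutrient values and the EAR figure used below are invented placeholders rather than data from this study.

# Sketch of the "by difference" carbohydrate calculation and the %EAR
# contribution of one adult serving (183 g). All numeric inputs below are
# illustrative placeholders, not values measured in this study.

PORTION_G = 183.0  # adult portion size used in the paper

def carbohydrate_by_difference(protein, fibre, ash, fat, moisture):
    """% total carbohydrate = 100 - %(protein + fibre + ash + fat + moisture)."""
    return 100.0 - (protein + fibre + ash + fat + moisture)

def ear_contribution(nutrient_per_100g, ear_per_day, portion_g=PORTION_G):
    """Percentage of the Estimated Average Requirement covered by one portion."""
    intake = nutrient_per_100g * portion_g / 100.0
    return 100.0 * intake / ear_per_day

if __name__ == "__main__":
    # Hypothetical proximate values (% of fresh weight) for one sample.
    carbs = carbohydrate_by_difference(protein=3.4, fibre=0.1, ash=0.7,
                                       fat=3.2, moisture=88.0)
    print(f"Total carbohydrate (by difference): {carbs:.1f}%")
    # Hypothetical calcium content (mg/100 g) against a placeholder EAR (mg/day).
    print(f"Calcium, % of EAR per serving: {ear_contribution(120.0, 1000.0):.1f}%")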
After the homogenization of the sample by mixing, 2 g was weighed in a 100 mL volumetric flask, followed by adding 40 mL of water and 4 mL of 2 M NaOH. The suspension was then vigorously shaken and 50 mL of 1 M phosphate buffer (pH 5.5) was added in order to lower the pH of the final solution to about pH 7. The suspension was made up to the mark with water and sonicated for 10 min in an ultrasonic bath. Dilutions of 20-fold with water were used for the quantification of the vitamins. The solution was filtered through a 0.22 µm Millipore syringe filter before analysis. Analyses were performed in duplicate. Standard Preparation The multi-vitamin stock solution was prepared by weighing in a 100 mL volumetric flask 5 mg of vitamin B12; 12.5 mg of vitamin B2; 25 mg each of vitamins B1, B6 and B3. Forty millilitres (40 mL) of water was then added and the solution was shaken vigorously before adding 4 mL of 2 M NaOH. After complete dissolution of the vitamins, 50 mL of 1 M phosphate buffer (pH 5.5) was added and the solution made up to the mark with water. Stock standard solutions were prepared daily. Different concentrations of the standards were injected into the HPLC to obtain the peak areas. Peak areas were plotted against concentration for each vitamin to make specific calibration curves. The Shimadzu LC-2010CHT HPLC system was used with conditions according to Moreno et al. [25]. A volume of 20 µL for each sample was injected into the HPLC equipped with a C18 reversed-phase column (250 × 4.6 mm, 4 µm), with 0.05 M ammonium acetate (solvent A)-methanol (solvent B) 92.5:7.2 as mobile phase at a 1 mL/min flow rate. A diode array detector was used to scan from 200 to 500 nm and LC-solutions software (Shimadzu, Japan) was used to integrate the peak areas for each vitamin. After the run, the peak area of each unknown sample was obtained, and concentrations were calculated using the calibration curves. Quality Control To monitor the performance and reproducibility of the analytical procedures for the analysis of the B-vitamins, we included quality control samples spiked with known amounts of standards for each batch of samples on a daily basis. The means, standard deviations (SD) and coefficient of variation (CV%) for duplicate samples were calculated to ensure that the values were within the acceptable limit of ≤10%. Total Genomic DNA Extraction Sample DNA was extracted from Mabisi and Munkoyo samples using the method by Schoustra et al. [13]. Briefly, for Munkoyo, after eliminating large particles from 1 mL of product samples, they were spun down at high speed and the pellet was retained after discarding the supernatant. Then 500 µL TESL (25 mM Tris, 10 mM EDTA, 20% sucrose, 20 mg/mL lysozyme) and 10 µL mutanolysin solution (in water at 1 U/µL) were added, followed by incubation at 37 °C for 60 min with slight shaking. GES reagent (5 M guanidium thiocyanate, 100 mM EDTA, 0.5% sarkosyl) amounting to 500 µL was added, cooled on ice for 5 min, and 250 µL of cold ammonium acetate solution (7.5 M) was added followed by gentle mixing. The mixture was held on ice for 10 min, spun down and the supernatant was removed. The samples were purified by mixing with chloroform-2-pentanol mix (chloroform and 2-pentanol 24:1 ratio) by adding 1:1 to the supernatant and the mixture was centrifuged to obtain the supernatant.
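Returning briefly to the HPLC quantification described above, the calibration-curve step (peak area plotted against standard concentration, with unknowns read back from the fitted line and corrected for the 20-fold dilution) can be sketched in Python as below; the peak areas and concentrations are invented for illustration and one such curve would be built per B-vitamin.

import numpy as np

# Illustrative calibration for one vitamin: peak area vs. standard concentration.
# The numbers are placeholders, not data from this study.
std_conc = np.array([1.0, 2.5, 5.0, 10.0, 20.0])              # µg/mL standards
peak_area = np.array([1520.0, 3810.0, 7650.0, 15300.0, 30500.0])  # detector counts

slope, intercept = np.polyfit(std_conc, peak_area, 1)  # linear calibration fit

def area_to_concentration(area, dilution_factor=20.0):
    """Convert a sample peak area to the concentration in the original extract,
    correcting for the 20-fold dilution applied before injection."""
    conc_injected = (area - intercept) / slope
    return conc_injected * dilution_factor

print(f"Unknown sample: {area_to_concentration(9100.0):.2f} µg/mL in extract")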
Phenol-chloroform purification was performed by adding an equal volume of phenol (i.e., tris-saturated phenol-chloroform-isoamyl alcohol in a ratio of 24:25:1) to the supernatant, vortexed for a few seconds, spun for 2 min at 12,000 rpm and 4 °C, and the supernatant was transferred to a fresh tube. An equal volume of chloroform was added to the supernatant, vortexed for a few seconds, spun for 2 min at 12,000 rpm and 4 °C, and the supernatant was transferred to a fresh tube. An amount of 2.5 volumes of 100% ethanol was added, vortexed, and the DNA was precipitated at −80 °C for 3 h. Subsequently, samples were spun for 20 min at 12,000 rpm and 4 °C; the supernatant was removed by aspiration. DNA was washed by adding 1 mL cold 70% ethanol, spun for 10 min at 12,000 rpm and 4 °C; the supernatant was removed by aspiration and the DNA pellet was air-dried for 10 min at room temperature. The DNA was dissolved in 10 mM Tris treated with RNAse (10 mM Tris, brought to pH 8.0 with HCl; 1 mM EDTA; RNAse 20 µL/mL) and stored at −20 °C. For the milk-based product (Mabisi), the DNA extraction protocol was performed as follows: into a 1.5 mL microcentrifuge tube was added 1 mL of Mabisi, which was centrifuged at 13,000× g for 2 min to pellet the cells and remove the supernatant. The cells were re-suspended in a solution containing 64 µL of a 0.5 M EDTA solution, 160 µL of Nuclei Lysis Solution (Promega), 5 µL RNAse (10 mg/mL), 120 µL lysozyme (10 mg/mL) and 40 µL proteinase E (20 mg/mL) and incubated for 60 min at 37 °C. Ammonium acetate (5 M), 400 µL, was added and cooled on ice for 15 min before being spun down at 13,000× g for 10 min. The supernatant containing the DNA was transferred to a fresh 1.5 mL microcentrifuge tube and a phenol-chloroform DNA purification was performed as described for Munkoyo. 16S rRNA Amplicon Sequencing of DNA Samples and Analysis of Sequence Data The company LGC Genomics GmbH (Berlin, Germany) conducted 16S rRNA gene analysis of bacterial communities in metagenomic DNA samples using the Illumina MiSeq V3. Using an analysis pipeline [26] based on qiime software [27], the 25 samples collected from the producers in Mkushi were analyzed. Firstly, the forward and reverse reads were joined into one fastq sequence (join_paired_ends.py, minimum overlap 10 nucleotides). Then primers were removed from both ends and reads were quality trimmed using cutadapt (minimum length 400, minimum quality 20, [28]). With uchime, chimeric reads were removed (using blast against the "gold" database, [29]). Then the sequences were given identifier names by a custom awk script (similar to split_libraries.py). The command pick_open_reference_otu.py, at 0.95 similarity, was used to cluster Operational Taxonomic Units (OTUs), to produce an OTU table and to assign taxonomy. From the OTU table produced, the minimum number of sequences per sample was determined and OTU tables were made by using multiple_rarefactions.py, to 15,000 sequences. Then alpha diversity and beta diversity were determined, which produced the distance matrices that were used for jackknife clustering (upgma_cluster.py), from which a consensus tree was produced (consensus_tree.py). Bacterial diversity by total effective sequence reads, OTU numbers, Chao1, and Faith's phylogenetic diversity (PD_Whole_Tree) were used to evaluate and compare diversity and richness of the communities among different samples, i.e., between and within samples taken per product type.
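As a hedged illustration of the Chao1 richness estimator mentioned above (also available in packages such as scikit-bio), a minimal Python implementation of the bias-corrected formula is shown below; the OTU count vector is a made-up example, not one of the 25 samples analyzed in this study.

from collections import Counter

def chao1(otu_counts):
    """Bias-corrected Chao1 richness estimate from per-OTU read counts:
    S_chao1 = S_obs + F1*(F1 - 1) / (2*(F2 + 1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    counts = [c for c in otu_counts if c > 0]
    s_obs = len(counts)
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Hypothetical rarefied OTU table row (reads per OTU in one sample).
sample = [120, 45, 30, 8, 3, 2, 2, 1, 1, 1, 0, 0]
print(f"Observed OTUs: {sum(1 for c in sample if c > 0)}, Chao1: {chao1(sample):.1f}")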
Statistical Analysis R statistical package version 3.5.0 (version 3.3.1, R Foundation for Statistical Computing, Vienna, Austria) and IBM SPSS statistics version 25 (SPSS Inc., Chicago, IL, USA) were used to analyze the data. The nutrition composition data was presented as means with standard deviation (SD) and percentage coefficient of variation (%CV). A Student t-test and ANOVA as group tests were performed and principal components analysis (PCA) was carried out to determine variation in the samples. The nutritional variables were each categorized into three classes (low, medium, high) based on the percentage they could contribute to the estimated average requirement (EAR). The values that were able to contribute less than 20% of EAR were taken as low, for a contribution between 20% and 50% they were taken as medium, and for contribution of 50% and above then they were regarded as high. For this study, we focused on the bacterial composition alone, since earlier work has shown that yeasts and other eukaryotes are usually present at an abundance below 1% [13]. The alpha diversity indices Faith's phylogenetic diversity (PD) and Chao1 diversity richness were calculated for Mabisi and Munkoyo samples. Comparisons of diversity between Mabisi and Munkoyo samples were done using t-test of Chao1 diversity index. Then a non-parametric test, analysis of similarities (ANOSIM), was used to determine the impact of various independent variables including product type, fermentation vessel, and categories of nutritional parameters on the dependent variable microbial community composition. This analysis shows whether or not classifying the samples in distinct groups explains significant parts of the variation in microbial community composition between the samples-in our cases, the distinct groups are defined based on product and processing variables as well as on nutritional variables. Results For our survey, 12 Mabisi and 13 Munkoyo samples were collected in Mkushi from different processors, each processor producing only one of the two types of traditional fermented food products. The Mabisi producers were two males and ten females all using the Tonga processing method [14], whereas all the Munkoyo producers were females and all used the Kitwe processing method [17]. The fermentation vessels used by the local processors for Mabisi were small plastic bottles (83%) and plastic buckets (17%), whereas for Munkoyo it was mostly calabashes (62%) and metal drums (38%). Samples were analyzed for their nutritional and microbial composition. Nutritional Analyses The results of the proximate analysis are in Table 1, and the levels of vitamins and minerals are in Table 2. Statistical tests comparing the results found for Mabisi and Munkoyo for each parameter are in Table S1. The quality control revealed that all duplicate samples were below the acceptable range of 10% CV for AAS and HPLC. Mabisi samples on average had a moisture content between 85% and 90%, with two exceptions with a moisture content of 70%, while Munkoyo samples had a moisture content mostly between 90% and 95%. In comparison to the other Mabisi samples, the two Mabisi samples with lower moisture content showed high values of other proximate composition parameters and lower vitamin B2, vitamin B3 and calcium content, resulting in overall higher standard deviations around the mean over all samples for Mabisi than for Munkoyo. The pH for Mabisi on average is one pH unit higher than that for Munkoyo. 
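A small sketch of the categorization rule described in the Statistical Analysis subsection above (contribution to the EAR: below 20% low, 20-50% medium, 50% and above high) and of how such a grouping can be passed to an ANOSIM test is given below; scikit-bio is used here as one possible ANOSIM implementation, and the distance matrix and grouping values are invented purely for illustration.

import numpy as np
from skbio import DistanceMatrix
from skbio.stats.distance import anosim  # one possible ANOSIM implementation

def ear_category(percent_of_ear):
    """Classify a nutrient's contribution to the EAR as low / medium / high."""
    if percent_of_ear < 20:
        return "low"
    if percent_of_ear < 50:
        return "medium"
    return "high"

# Invented example: a tiny beta-diversity distance matrix for four samples and
# the EAR category of one nutrient per sample, used as the grouping factor.
ids = ["MA1", "MA2", "MU1", "MU2"]
dm = DistanceMatrix(np.array([[0.0, 0.2, 0.7, 0.8],
                              [0.2, 0.0, 0.6, 0.7],
                              [0.7, 0.6, 0.0, 0.3],
                              [0.8, 0.7, 0.3, 0.0]]), ids)
grouping = [ear_category(p) for p in (27.0, 22.0, 8.0, 5.0)]  # e.g. vitamin B2
result = anosim(dm, grouping, permutations=99)
print(result["test statistic"], result["p-value"])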
Notes: SD is the standard deviation, CV% is percentage coefficient of variation. The % of the contribution to the EAR for women aged 19-50 years old for selected nutrients is shown in Table 3. One serving (183 g) of Mabisi would contribute mostly to the EAR of vitamin B2 (27%), calcium (22%), protein (18%) and zinc (15%), but less than 10% of the EAR for the other B-vitamins and iron. One serving of Munkoyo would contribute less than 10% of the EAR for each of the nutrients. For each individual nutritional parameter that we measured, values are higher for Mabisi than for Munkoyo except for vitamin B1, vitamin B3 and vitamin B6. The loadings of the nutritional parameters on the principal components are given in Table S2. Principal component 2 (PC2) explains 11.8% of the variation, with variables high in loadings including vitamin B1 and vitamin B6. The PC-analysis clearly separates Mabisi and Munkoyo samples, showing differences between the two products (Figure 1). The Munkoyo samples are spread along PC2 and this variation is explained by vitamins B1 and B6, which have higher loadings as in Table S2. Imposed on the graph are the nutritional parameters, with vitamins B1, B3 and B6 (indicated by the red lines) separated away from the rest as they were not different between the two traditional fermented foods Mabisi and Munkoyo; moreover, moisture clustered with Munkoyo samples as it was higher in Munkoyo than Mabisi, and the other nutritional parameters clustered with Mabisi samples as they were higher in Mabisi samples than in Munkoyo samples. The nutritional parameters are all aligned around zero for PC2 except for vitamins B1, B3 and B6, which were the reason why Munkoyo samples were differentiated along PC2. The vitamins B1, B3 and B6 were similar between Mabisi and Munkoyo samples, but not the other nutritional parameters. This difference was confirmed using a t-test, with results indicating that there was no statistically significant difference between Mabisi and Munkoyo for these vitamins (Table S1). Footnotes to Table 3: the portion size of 183 g is based on [19]; for protein, no WHO EAR exists and 80% of the Population Reference Intake from the European Food Safety Authority (EFSA) was used (80% × 0.83 g protein/kg body weight per day) [31]; for iron, 5% bioavailability was assumed; for zinc, low bioavailability was assumed. Microbial Analyses Microbial community composition for each sample of Mabisi and Munkoyo, as determined by non-culture-based methods, is shown in Figure 2. In total, 1826 distinct bacterial types (Operational Taxonomic Units or OTUs) were found in all samples, of which most were identified as either Lactobacillus, Lactococcus, Streptococcus, Enterobacter, Klebsiella or Acetobacter. Since even within one species taxonomic variation exists, different OTUs can be identified as the same species, yet each OTU does represent a unique bacterial type [32]. Two diversity indices were calculated to describe the microbial communities in the samples. The alpha diversity indices, Faith's phylogenetic diversity (PD) and Chao1, were calculated for Mabisi and Munkoyo samples (Figure 3 and Table S2). The Chao1 was different for samples of the different traditional fermented foods Mabisi and Munkoyo (t-test, t(23) = −7.18, P < 0.001). Based on Faith's PD, there was no difference in microbial diversity between Mabisi and Munkoyo samples. In the analysis of similarity (Table 4) assessing which variables contribute to the observed variation in microbial composition between the samples, we included two processing variables: the type of products (two categories, Mabisi and Munkoyo) and the type of fermentation vessel used (four categories).
A P-value <0.05 indicates that the variable explains significant amounts of variation in the microbial community composition. Both processing variables explained significant parts of the variation in microbial profiles. We further included the categorical data of seven nutritional parameters for which sufficient variation exists to allow the statistical test; these were protein, fat, water soluble vitamins and minerals. Except for vitamin B1 and vitamin B3, the categorization of nutritional variables explained significant parts of the variation in the microbial community composition. Table 4. Results of the analysis of similarity (ANOSIM) for impact on microbial composition of product type and fermentation vessel and various nutritional parameters for which sufficient variation exists among samples. Notes: Variable, test statistic (R), number of treatment groups (# of Groups) and exact p value (P) are given, unless the p value was smaller than 0.001, which is indicated by <0.001. Discussion The aim of this study was to characterize the nutritional composition and microbial community composition of two traditional fermented foods, Mabisi that is based on raw milk and Munkoyo that is based on maize. The results in this study clearly showed that the two products were different with respect to the nutritional parameters and the microbial community composition. Nutritional Composition Mabisi was found to have higher nutritional values for crude protein, fat and carbohydrates than Munkoyo. The difference in nutritional composition between Mabisi and Munkoyo can be mainly attributed to the use of milk as the raw material for Mabisi and maize as the raw material for Munkoyo. Milk is a rich animal source of protein and fat, while the main component in maize is starch [33,34]. Among samples of the same product type, variation in nutritional composition is higher for Mabisi samples than for Munkoyo samples. This is mainly caused by two Mabisi samples that have a lower moisture content than the other samples. This lower moisture content may be caused by the removal of whey during the fermentation process.
While specific processors in our study did not mention whey removal, other studies have found that several processors remove whey, reducing the volume of the processing batch by 30% [14]. In Zambia, fresh milk (unfermented) is rarely consumed due to high prevalence of lactose intolerance, which is estimated at 70-90% in the Zambian population [35]. During fermentation, most of the lactose, which at the beginning is an antinutritional factor, is converted into lactic acid and other compounds [4]. In the Zambian context, this makes Mabisi more nutritious than fresh milk. It was expected that Mabisi would have had a higher concentration of B-vitamins considering that it is made from milk and Munkoyo is made from maize which has low levels of most B-vitamins. We found however that both products are a source of B-vitamins, which could be attributed to the fermenting bacteria which have previously been shown to produce B-vitamins [3]. Mabisi was higher in calcium, iron and zinc and regular consumption in combination with other local foods could help to increase intake of these micronutrients. This was also reflected when one serving of Mabisi for an adult woman was considered to contribute higher amounts of calcium and zinc and also vitamin B2 and protein to the estimated average requirements. It can be said therefore that Mabisi would be a good source of nutrients for inclusion in the food-based dietary guidelines [36]. A recent study using 24 h recalls to measure micronutrient intakes of lactating women in rural Zambia found inadequate intakes, especially of vitamin B3, vitamin B12 and iron [37]. These B-vitamins are of interest to our study since microbial activity could increase their levels in the final products. Furthermore, since raw milk is not consumed that much due to lactose intolerance, the promotion of Mabisi could have a positive impact on iron intakes. This positive impact may apply more broadly to other dairy based traditional fermented foods in the region [18,38]. Microbial Community Composition Our results showed that the microbial communities in the product samples consist of three to eight distinct bacterial types (Figure 2). Several different bacterial types belong to the same species [32]. Previous studies have shown that bacterial densities in the final products are typically around 10 8 cfu/g [13]. The microbial community composition in Mabisi samples was most abundant in Streptococcus, Enterobacter and Lactococcus species, while the microbial community composition in Munkoyo samples was most abundant in Lactococcus and Lactobacillus species, which is consistent with other studies [13,17]. The microbial communities in the products are dominated by lactic acid bacteria, whose growth resulted in a low pH, enhancing food safety properties and shelf-life. For Mabisi, the final pH was around 4.1 and for Munkoyo the final pH was around 3.2, which is in line with previous studies [13]. Products at a pH below 4.5 are generally considered to be protected against microbial pathogen proliferation [39]. The pH values we found for Munkoyo were consistently well below this safety threshold. However, for one Mabisi sample, we found a pH of 4.6, highlighting that during Mabisi processing, the pH level could be a safety concern. The lactic acid bacteria are also regarded as healthy bacteria that may enable shifts in gut microbiota composition towards a more healthy composition. Mabisi had a slightly higher diversity as shown by the diversity indices that we calculated. 
This could be attributed to the fact that raw milk contains a wider diversity in substrates supporting a wider range of bacterial types, especially in that raw milk contains more protein. More complex substrates are known to support more diverse species communities [40]. The Chao1 diversity index showed higher diversity in Mabisi than Munkoyo, whereas Faith's phylogenetic diversity index was the same between the two products. This may be caused by the fact that the Faith's phylogenetic diversity index uses branch lengths for assigning diversity metrics, which cannot separate lactic acid bacteria with the same level of discriminatory detail [41]. The Chao1 index on the other hand is an estimator based on the abundance of species taking into account the rare species [42]. Alpha diversity index Chao1 found in Mabisi samples (ranged from 206 to 471) was higher than what Shangpling et al. (2018) found (Chao1 ranged from 90 to 138) for the Indian naturally fermented milk product [43]. However, our results were comparable with what Liu Xiao-Feng et al. (2015) found for a Chinese traditional fermented goat milk (Chao1 ranged from 166 to 640) [44]. Factors that Affect Microbial Community Composition It is thought that the composition of species' communities depends on external selection pressures that lead to a process of species sorting [45,46]. In our study, the main contrast in external selection were the raw materials and fermentation vessels used for fermentation. As expected, our results show a marked difference in microbial composition between Mabisi (based on milk) and Munkoyo (based on maize), which however is in slight contrast with earlier work [13]. This earlier work had compared various microbial communities from Mabisi and Munkoyo samples collected at various distant geographic locations and did not control for a processing method. They found that microbial communities collected at the same location had similar microbial communities, regardless of the product type (Mabisi or Munkoyo), suggesting that it is geographical location rather than raw materials that most significantly affects microbial community structure [13]. The present study was performed in a more systematic way, focussing on one processing method per product type and one geographical location. In our study, the differences between the microbial communities of the two products could be due to variations in other determinants known to affect microbial community composition, in particular the fermentation vessel used, which indeed came out as a significant factor explaining variation in microbial communities, and the level of back-slopping. It has been established that, for example, back-slopping, which is the transfer of a small fraction of the previous product into fresh raw material [47], ensuring the transfer of microbial communities underlying the fermentation helps shape microbial communities from batch to batch. In Zambia, back-slopping is usually done using a calabash as a fermenting vessel which is not washed to preserve some starter cultures that is used for the next fermentation and in our study 68% of the Munkoyo producers had used the calabash and none for Mabisi. This could imply that most of the Mabisi producers in this study area rely on the spontaneous fermentation method. 
This could mean that the environment indeed played a role in shaping the microbial composition, but also that the fermenting substrate could play its part, given the similarities among samples of the same type, which is in agreement with what has been found before [8,9]. We found a correlation between levels of various nutrients (levels of protein, fat, B-vitamins and calcium) and variation in microbial community structure (Table 4). Our experimental design does not allow us to distinguish whether different levels of nutrients in the raw materials affected the microbial community composition, or the other way around, that is, that the composition of microbial communities affects metabolic activity, resulting in some final products having higher levels of nutrients. We hypothesize that levels and types of protein, carbohydrates and maybe calcium in raw materials are a selective force in driving species composition, since these are nutrients that are directly used by a wide range of micro-organisms. Moreover, different micro-organisms have different requirements and capabilities to metabolise these substrates. On the contrary, the microbial community composition may affect the final levels of B-vitamins, since several B-vitamins are known to be produced by bacteria and are added to the raw materials by fermentation [48,49]. For instance, Lactococcus lactis has been found to produce significant amounts of riboflavin during fermentation [50]. Our finding suggests that research to determine the levels of enrichment with B-vitamins by the micro-organisms present in Mabisi and in Munkoyo fermentation is worthwhile. This could be done in controlled laboratory experiments using defined mixtures of bacteria isolated from Mabisi and Munkoyo and measuring levels of B-vitamins before and after proliferation of the bacteria. This future work could also include experiments with defined bacterial communities to identify the specific micro-organisms that are responsible for B-vitamin production. This could further be extended to other studies on the functionality of microbial communities, for instance in the removal of mycotoxins from maize during fermentation [51]. In the present study, we collected product samples from producers and did not perform the fermentation ourselves. The present work did not permit us to carry out a baseline nutrition analysis on the raw materials before fermentation, so that any changes could be attributed to the difference between baseline and after fermentation. Conclusions and Significance This study documented the nutritional composition of the traditional fermented foods Mabisi and Munkoyo, with Mabisi having higher nutrient values than Munkoyo except for vitamins B1, B3 and B6. We determined the composition of micro-organisms that are present in Mabisi and Munkoyo. Mabisi has an advantage over Munkoyo for its consumers in that it has a greater impact on nutrient intake. The increase in B-vitamins and the possible probiotic effect of Munkoyo also make it a product that is useful for regular consumption for an improvement in dietary diversity. We assessed and found that differences in microbial communities correlated with differences in nutritional content. Our study thus provides unique data on the nutritional composition of two traditional fermented foods that is essential for the planning of nutritional programmes in Zambia. It provides a general outlook on the importance of understanding how microbial activity adds to the nutritional value of fermented products.
Our study is a formal demonstration that a locally produced fermented food, especially Mabisi, can contribute to achieving improved nutrient intake of various important macro- and micro-nutrients. For many of the locally available foods such as Mabisi and Munkoyo, nutritional data are lacking, impeding their consideration for inclusion in food-based dietary guidelines. Therefore, the data generated in this study will be useful for inclusion in the food-based guidelines. We recommend more research to include a determination of the nutritional composition of raw materials and end-products of fermentation to quantify the addition of nutrients by fermenting microbes, and to conduct further genomic analysis of B-vitamin production. Furthermore, other recent work has shown that variations in Mabisi and Munkoyo processing methods have an impact on microbial community composition. Based on this current study, this variation in microbial community structure may also impact nutritional composition. Thus, the inclusion of other processing types of Mabisi and Munkoyo than the ones used in this study is also recommended. Finally, our work could be expanded by adding measures of bioavailability of the nutrients within the diets of consumers by determining molar ratios of phytate to zinc, iron and calcium. Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6643/12/6/1628/s1, Table S1. Statistical analysis of differences in means between Mabisi and Munkoyo for the nutritional parameters measured and pH (see Table 1; Table 2 in main text). Table S2. Variation accounted for by the different nutritional parameters (in rotated space by Varimax with Kaiser Normalization). Principal Components 1 and 2 are shown with the loadings of the nutritional parameters' contributions to the respective components. Component 2 variation in the samples is explained by vitamins B1 and B6. Loadings have a value between −1 and +1, zero being the average of all observations. Table S3. Results of microbial community diversity analysis of traditional fermented foods Mabisi (MA) and Munkoyo (MU): alpha diversity measures (at the highest sampling rarefaction of 15,000 sequences per sample) for each sample based on Chao1 and Faith's phylogenetic diversity (PD).
v3-fos-license
2019-01-24T14:05:54.725Z
2018-12-19T00:00:00.000
59190390
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://mulpress.mcmaster.ca/ijrr/article/download/3510/3174", "pdf_hash": "3dd089aeff3d9ff9cb9bbe209ab4ec013b5a4740", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43673", "s2fieldsofstudy": [ "Law", "Medicine", "Psychology" ], "sha1": "ad126b115cc2eb108e20f7b4fc29c6f8346664e7", "year": 2018 }
pes2o/s2orc
The reform of Italian forensic psychiatric hospitals and its impact on risk assessment and management Italy has a strong history of deinstitutionalization. It was the first country to completely dismantle psychiatric hospitals in order to create small psychiatric wards closer to the community (i.e. in general hospitals). Nevertheless, it took the nation nearly 40 years to complete the process of closing all forensic psychiatry hospitals. Deinstitutionalization however, was not fully addressed by the first wave of Italian psychiatric reform. This paper describes the establishment of new facilities replacing old forensic hospitals, formally known as Residences for the Execution of Security Measures (REMS). REMS are a paradigm shift in terms of community-based residential homes, and are mainly focused on treatment and risk assessment, rather than custodial practices. The use of modern assessment tools, such as the Aggressive Incident Scale (AIS) and the Hamilton Anatomy of Risk Management (HARM), is crucial in order to objectively assess the clinical cases and are consistent instruments that form part of the treatment plan. A preliminary analysis of data from the first 2 years of activity, focusing on severely ill patients who have been treated for more than 12 months, is described for two REMSs in the Lazio region, close to Rome. Encouraging results suggest that further research is needed in order to assess clinical elements responsible for better outcomes, and to detect follow-up measures of violence or criminal relapse post discharge. Introduction It has been nearly 40 years since the Basaglia Law, also known as 180/1978 Law, was approved in Italy. This law led to the dismantling of all psychiatric hospitals; a definite landmark in Italian and psychiatric history (1,2). Similarly, another wave of reform, beginning in 2008, led to the closure of all 6 Forensic Psychiatric Hospitals (Ospedali Psichiatrici Giudiziari, OPGs) located across Italy. Along with this, was the establishment of new small-scale residential facilities called REMS (Residences for the Execution of Security Measures), designed to perform intensive and highly specialized mental health care to better meet the needs of mentally ill offenders. In May 2014, the 81/2014 Law established the deadlines and all of the procedures considered necessary for the final dismantlement of forensic psychiatric hospitals by March 2015. This second wave of deinstitutionalization completed the work of the first wave, and fully established community treatment as the primary method of psychiatric care in Italy. The Basaglia reform in 1978 did not extend its reforming principles to individuals suffering from a mental disorder, who committed a criminal offense and required psychiatric treatment in forensic hospitals. Those individuals were consequently left out of the medical and intellectual debate that arose. This group was kept under the same derelict conditions that the 180 Law aspired to eradicate. It seemed that the demand of preserving community protection trumped reform drives. The law laid the foundation for a novel therapeutic approach to mental illness, that favored extensive community treatment over hospitalization. But it did not address the framework of the Detention Security Measures, which outlines the process of internment in forensic hospitals. Starting from 1978, 6 forensic hospitals survived, preserving the characteristics of both asylum and prison, and complying with social obligations for cure and custody. 
Over the years, the discrepancies between the different treatments provided to patients who did not commit crimes, versus patients who did, gradually increased. The Forensic Psychiatry population was poorly studied, with little epidemiological data available on the quality of health care provided (3-5). The heavy use of custodial staff led to uneven observations of offending behaviors, and impeded the development of strategies to monitor and prevent them. A lack of constant cooperation with mental health community-based teams further weakened the therapeutic project. Additionally, the lack of application of the geographic catchment principle resulted in patients being treated far from their homes, relatives, and doctors. This led to deficient and unsatisfactory discharge programs due to the lack of social support and therapeutic planning. The stagnation that the OPGs (forensic psychiatric hospitals) have experienced over the past decades, along with rare occasions of cooperation and collaboration with Mental Health Departments and Universities, partially set back the access to more recent acquisitions and practices. However, on March 31st, 2015, the reform process concluded; two more years were necessary to complete the transition period, but by February 2017, 569 inpatients had been admitted to REMS throughout Italy. The entire therapeutic path of mentally ill offenders still remained under judiciary control, with Judges ruling both on its length and its development, as well as defining the level of intensive care required, and sentencing patients either to REMS or other residential settings according to a highly subjective interpretation of the legal indications. To overcome any prolonged length of stay of patients within the forensic setting, the reform stated that the maximum length of the Detention Security Measure (i.e. the maximum internment in REMS) could not exceed the maximum detention provided by the Penal Code (i.e., the Italian Criminal Law) for that specific crime. Temporal limits, along with community proximity and small-scale numbers, are all key features intended by the legislator to guarantee a therapeutic journey aimed at rehabilitation and social reintegration. The REMS are small residences with a 20-person capacity. Here, mentally ill offenders undergo the same pharmacological and therapeutic approach as any other psychiatric patient, and health care more than custodial necessities determines the nature of treatment. As of July 2015, with the new allocation planned nationwide, the Lazio Region became the second largest forensic psychiatry center in Italy, with 81 beds and a specific focus on violence risk assessment and management. So far, Mental Health Departments in Lazio have had 110 forensic patients admitted since their implementation; the 1st REMS opened in Subiaco («Castor») in July 2015, a second one in Palombara Sabina («Merope») in Fall 2015, and a 3rd REMS was established in Spring 2016, again in Palombara Sabina («Minerva»). The aim of our paper is to describe how the adoption of the Aggressive Incident Scale (AIS), along with the Forensic Version of the Hamilton Anatomy of Risk Management (HARM-FV), as primary tools in violence risk assessment [6,7], has improved our daily practice, guiding the evaluations within a team environment and granting a constant assessment of our rehabilitation program's efficacy, monitoring and redirecting our therapeutic intervention.
Rehabilitation and risk assessment in REMS As the new Law has clearly demanded, REMS facilities have been established with the specific aim of psychiatric treatment and rehabilitation. Consequently, REMS have been the first units in our Department to structurally employ Psychiatric Rehabilitation (PR) therapists and include PR interventions as an integral part of the treatment team and program. Therefore the clinical assessment of forensic patients routinely consists of the following: 1) a mental status examination performed by a psychiatrist, 2) a psychological assessment undertaken by clinical psychologists using clinical examination and psychometric tools, 3) a psychosocial evaluation of social needs in terms of financial resources, family support and social inclusion by a social worker, 4) a functional assessment obtained through clinical examination; and functional scales and measurement by PR therapists. Measurements of psychopathology, personality traits, and level of functioning are regularly obtained through the Italian versions of internationally validated rating scales, tests, and interviews including: the Brief Psychiatric Rating Scale (BPRS) [8,9], the Minnesota Multiphasic Personality Inventory Ver. 2(MMPI-2) [10], the Millon Clinical Multi-axial Inventory 3rd Ed. (MCMI-III) [11], the Personality Inventory for DSM-5 (PID-5) [12], the Scale for Personal and Social Functioning (FPS) [13] , and the Scale for Specific Level of Functioning (SLOF) [14,15] . Cognitive assessment is performed through the Wechsler Adult Intelligence Scale 4th Ed. (WAIS-IV) [16] and the Repeatable Battery for the Assessment of Neuropsychological Status Update (R-BANS) [17] ; while specific psychopathological dimensions are addressed and measured by specific scales, tests or interviews, such as the Barratt Impulsiveness Scale (BIS-11) [18,19] for impulsiveness, the Columbia Scale for Suicidal (C-SSRS) [20] for suicidal behaviors, the Psychopathy Check List -Revised (PCL-R) [21] for psychopathy, and the HCR-20 V3 [22]. Concerning the assessment and management of the risk of violence, REMS have established the regular use of AIS and HARM-FV as new instruments for the whole Department of Mental Health since the outset, with possible future extension to other Community Services or Psychiatric Intensive Care Units. The routine use of HARM-FV during the early phase of admission has demonstrated impressive usefulness in defining most of the treatment plans for violent and nonviolent offenders. In fact, reporting and analyzing Current Risk Factors from the HARM-FV Present Section, makes it easy to underline which psychopathological conditions and behavioral problems are to be addressed first, and in which way. For instance, when Mood or Psychotic Symptoms are assessed as "severe" ("needing improvement" in the newer version), the physician has a clear indication for introducing or adjusting antipsychotic, or mood stabilizing pharmacological treatments. At the same time, when Impulse Control, Attitude/Cooperation or Anger Management (the last two being features of the newer version) are considered an issue in the current status of the offender, the treatment plan is oriented to include the patient in individual or group psychotherapy, or in Social Skills Training (SST) programs focused on anger management or cooperativeness. REMS utilized multiple psychopharmacological interventions, most commonly being second generation antipsychotics and mood stabilizers. 
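The way the HARM-FV Present Section ratings are translated into treatment-plan items, as described above, can be sketched as a simple rule mapping in Python; the factor names, rating labels and suggested interventions below are paraphrased from the text for illustration only and do not represent an official scoring algorithm of the instrument.

# Illustrative mapping from HARM-FV current risk factor ratings (4-point scale:
# none / mild / moderate / severe) to candidate treatment-plan items, loosely
# following the examples given in the text. Not an official HARM-FV algorithm.

SUGGESTIONS = {
    "psychotic_symptoms": "introduce or adjust antipsychotic treatment",
    "mood_symptoms": "introduce or adjust mood-stabilizing treatment",
    "impulse_control": "individual/group psychotherapy or SST on impulsivity",
    "anger_management": "SST module focused on anger management",
    "attitude_cooperation": "SST module focused on cooperativeness",
}

def plan_items(ratings, threshold=("moderate", "severe")):
    """Return suggested interventions for factors rated at or above threshold."""
    return [SUGGESTIONS[f] for f, level in ratings.items()
            if level in threshold and f in SUGGESTIONS]

example = {"psychotic_symptoms": "severe", "mood_symptoms": "mild",
           "impulse_control": "moderate", "anger_management": "none"}
for item in plan_items(example):
    print("-", item)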
Adjunctive therapy to the pharmacological interventions included: individual and group psychotherapy, psychological interventions, and psychoeducation. Specific focuses within these therapies included DBT for personality disorders, cognitive therapy for psychosis, and SST for better control of anger, impulsivity and violence. There was also a behavioral program in place to grant gradual access to privileges by virtue of constant rule adherence. Effects of psychiatric rehabilitation on risk indexes Since the implementation of the REMS, 46 patients have been admitted to REMS Castor and 41 to REMS Merope (REMS Minerva has had 23 patients, but they were not included in this study). In this study, we only considered patients with a diagnosis of Schizophrenia Spectrum Disorder (DSM-5 criteria), including Schizoaffective Disorder, and treatment-resistant Schizophrenia, assessed according to Kane criteria [23,24]. We did include the patients suffering from Antisocial Personality Disorder as a comorbid condition. Exclusion criteria were: the presence of DSM-5 diagnosed Moderate to Profound Intellectual Disability, or the presence of Antisocial Personality Disorder alone with no association with Disorders of the Schizophrenia Spectrum or other major psychiatric disorders. A further 3 patients were excluded because they did not complete the initial assessment period following transfer to other correctional or rehabilitation facilities on the order of judicial authority. At the end of the recruitment period, 80 patients were included in this study. The evaluation of each current risk factor at baseline is reported in Table 1, where each degree of class according to the HARM scale (a 4-point Likert scale from none to severe) is expressed in terms of frequencies. Evidently, some factors are considered more problematic in the forensic population at baseline, with more than 50% of patients presenting a "moderate" or "severe" risk (in red in Table 1). These results support clinical experience, where it was observed that forensic patients are commonly unaware of their psychiatric conditions, frequently present comorbid substance abuse, often demonstrate scant participation in the rehab program in the beginning, and most have inadequate social support, hampering the treatment plan. The summary of HARM re-evaluations at 12 months is represented in Table 3, where improvements from the baseline are also represented in terms of overall and paired differences from the total (n=80) or paired counterpart (n=37) at baseline, considering the frequency of moderate/severe attributions alone. Statistical significance tests have also been performed in order to assess whether the frequencies of moderate/severe attributions differed from the baseline, but no statistical significance has been demonstrated through the Chi-square and Fisher's exact tests for categorical variables. Despite the lack of statistical significance, there was an evident overall trend in improvement for all of the Risk Factors except Social Support. For the individuals that were scored moderate/severe risk, it is noted that 9 (out of 10) risk factors are reduced. The greatest improvement in terms of paired difference was found in Psychotic Symptoms and Substance Abuse (−21.62%), Impulse Control, Program Participation and Mood Symptoms (−18.92%). The following factors also reported a reduced frequency of moderate/severe risk evaluation: Antisocial attitude (−16.22%), Illness Insight (−16.22%) and Rule Adherence (−13.51%).
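A minimal sketch of the significance tests mentioned above (Chi-square and Fisher's exact tests on the frequency of moderate/severe ratings at baseline versus 12 months), using SciPy, follows; the counts are invented purely to illustrate the 2x2 layout and are not the study's data.

from scipy.stats import chi2_contingency, fisher_exact

# Invented 2x2 table for one risk factor:
# rows = baseline vs. 12 months, columns = moderate/severe vs. none/mild.
table = [[28, 52],   # baseline (n = 80)
         [20, 60]]   # 12 months (n = 80)

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"Chi-square: chi2={chi2:.2f}, dof={dof}, p={p_chi2:.3f}")
print(f"Fisher's exact: OR={odds_ratio:.2f}, p={p_fisher:.3f}")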
Little to no improvement was found in Medication non-adherence and Social Support. Discussion The introduction of modern and scientific assessment tools for violence assessment and management in REMS has allowed psychiatric attitudes towards forensic patients to change significantly, from a mainly custodial practice to a more clinical and predictive one, with focus on risk factors for violence relapse. The predictive validity of HARM has already been ascertained and demonstrated [7] across different cultures and countries [30]. However, in order to confirm the predictive validity of HARM in an Italian context, further research in Italy is needed to compare clinical assessment to follow-up data after discharge from REMS. Our study shows that evaluating risk factors for violence is effective and crucial in the treatment planning for a forensic unit. This can be done through a comprehensive toolbox of instruments that focus on those factors playing a role of violence recidivism in psychiatric offenders, such as the HARM. As a reduction of psychiatric symptoms is crucial in forensic patients, a specific focus of intervention is devoted to positive and disorganizing symptoms, especially when connected with violent recidivism. As pointed out by Table 2, although there is significant reduction in the global severity of symptomatology (BPRS total score), the reduction in positive symptoms remains subtle. This may be explained by the comorbid drug use and possible period of treatment non-compliance and treatment-resistant Schizophrenia. Our study indicates that even when some symptoms persist, such as auditory hallucinations, delusions, disorganized speech, and no major clinical improvement is noted, their level of risk can nevertheless be assessed as reduced by the clinicians who considered some risk factors as being managed on the HARM tool (Table 3). At the 12-month follow up mark, clinicians generally tend to assess reduced severity for most of the HARM risk factors, especially those considered more problematic at the outset. Substance abuse and program participation reported an impressive reduction in those who scored severely or moderately at risk. Illness insight reduced the proportion of more critical patients to 72%, which still represents a critical issue for the majority of forensic patients. The aspect that is almost completely unaffected by treatment is Social Support, one of the limitations of the REMS model. Indeed, the majority of interventions are more oriented to social inclusion in terms of increased sociality rather than greater social equality or accessibility to social roles. In practice, this means that many forensic patients who are clinically stable but economically fragile cannot directly access external vocational therapy programs or job training. This is a direct consequence of reform that has ensured stronger clinical attitudes, but less funding for increasing opportunities in a socially vulnerable context. Our study model did not take into consideration the relative role played by specific interventions or by other clinical elements such as personality traits (antisocial or psychopathic for example) and impulsivity. Further research is needed to develop a more complex model in which personality profiles, impulsivity and likelihood of violence are examined within the setting of REMS interventions. This is the first study in Italy to evaluate the role of the HARM assessment tool in a forensic context. 
Here we present preliminary results on the experience of forensic de-institutionalization and the introduction of the REMS model in Italy. Conflict of Interest: none
v3-fos-license
2021-11-24T14:08:18.273Z
2021-11-01T00:00:00.000
244495390
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1660-4601/18/22/12176/pdf", "pdf_hash": "a1b2244445bdee947aeb5f10a1a1990dd174c8ff", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43677", "s2fieldsofstudy": [ "Sociology" ], "sha1": "f285a92ad70644521fadd557224f6bc6e776eeb4", "year": 2021 }
pes2o/s2orc
Routines, Time Dedication and Habit Changes in Spanish Homes during the COVID-19 Lockdown. A Large Cross-Sectional Survey Many countries chose to enforce social distancing through lockdowns after the COVID-19 outbreak. Households had to adapt their day-to-day lifestyles to new circumstances, affecting routines and time dedication to tasks. This national study was carried out to find out how the COVID-19 confinement affected perceived habit changes in Spanish households during this period, in relation to their socio-demographic characteristics and household composition. An online questionnaire was launched during the COVID-19 lockdown, from 30 April to 22 June 2020. Descriptive statistics were analyzed, stratified by gender, on time dedication, routine, home leaving, and habit change variables. Chi-square tests were used to explore the relations of significance with socio-demographic characteristics and home composition. All contrast analyses were performed for a 95% confidence level (significance considered for p < 0.05). In total, 1673 respondents participated from different age groups, educational levels, employment statuses and household compositions. Sixty percent of respondents maintained their routines. A third tried to establish a new one; this was related to being a woman, being young, not having university studies, and living with others, including minors. Regarding dedication to tasks, adults aged 35–54 years, with more cohabitants, especially women, devoted themselves intensively to the home or to care, while those under 35 were dedicated more to rest, leisure, television or reading. People with university studies were more related to teleworking. The frequency of going outside was related to gender, age, educational level and living with elders, specifically for grocery shopping and taking out garbage. Changes in habits, routines and time dedication in confinement were strongly linked to the sociodemographic and coexistence conditions in Spanish homes. The greatest impacts were suffered by women, people with children, and adults between 35–54 years of age, especially on care and domestic chores. Introduction The SARS-CoV-2 coronavirus pandemic has swept the world since it emerged in late 2019 in China. In Europe, the first cases were confirmed on 30 January 2020, while more than 7800 cases were confirmed worldwide [1]. On 11 March, the World Health Organization decided to raise COVID-19 to the level of a pandemic [2]. In Spain, the first case was detected on 31 January [3]. Given the imminent incidence in Europe, and three days after the declaration of a pandemic, the Spanish government declared a State of Alarm [4], by which all nonessential movements and activity at the national level were restricted, using confinement as a measure of prevention and containment of the transmission of the disease [5]. This included the closure of schools and other educational centers [6], as well as teleworking as far as possible [7]. With the indefinite suspension of these activities, homes became fully and permanently occupied for one and a half months. After this period, the Spanish government established a transition plan toward the new normal.
They established four phases (phase 0: de-escalation preparation, phase I: initial, phase II: intermediate, phase III: advanced) of de-escalation conditioned by the epidemic situation in a progressive way [8], with phase I being the most restrictive and the last phase without restrictions. Progressively, phases became more flexible and allowed broader timetables and activities outside the home, but not in a homogeneous way. The uneven de-escalation plan began on the islands and continued by province, for partial mobility and attendance to certain priority needs. Thus, overlapping degrees of de-escalation occurred throughout the territory, according to the public health situation of each area. On 11 May 2020, only 54% of provinces (27) were in phase I [9]. After one month, on 18 June, 82% of them (41) belonged to an advanced phase, and only 8% (four provinces) were in the new normal [10]. Finally, the national State of Alarm ended on 21 June 2020 [11]. According to the National Institute of Statistics, between January and May 2020, there were 32,652 deaths due to COVID-19 and 13,032 suspected of dying due to symptoms compatible with the disease. The month with the most deaths from COVID-19 and suspected COVID-19 was April, with a total of 26,305 deaths [12]. Regarding the notified cases with a positive diagnostic test for active infection, during the data collection period (30 April to 22 June), 73.9% of positive cases (21,194) occurred between 30 April and 31 May, while the remaining 26.1% (7480) occurred between 1 June and 22 June [13]. The daily diagnosed cases of COVID-19 in the first wave of the pandemic (which basically coincided with the national State of Alarm in Spain) are shown in Figure 1. In it, the differentiated sub-periods are superimposed: total confinement (the entire population locked up 24/7, except for specific cases of force majeure or essential activity; red shading in Figure 1) compared to the de-escalation period, which, as mentioned, had various sub-stages, some of them occurring simultaneously by territorial areas, according to the level of transmission of COVID-19 (yellow shading in Figure 1). As shown in Figure 1, the national State of Alarm helped to drastically stop the transmission of the disease. This social norm, of an imperative nature, obtained the desired result, especially during total confinement. Once certain epidemiological requirements were met at the territorial level by provinces, these advanced de-escalation stages, which began on May 11, were conditioned on such requirements. However, as it occurred unevenly in the national territory, it generated different degrees of social mobility and performance of activities, depending on the spread of the disease by geographical area. Habits and Routine Behavior To understand what a habit is, and how it is produced, there are many references from various disciplines, such as psychology or sociology, included within the behavioral sciences. By way of summary, from the sociological field, the best known or accepted ones can be reviewed; among them are those related to American pragmatism during the first third of the 20th century, or Bourdieu's position at the end of the same century. At the dawn of the 20th century, Dewey defined habit as the engine of human action, which is influenced by group customs.
Human actions must be learned; thus, habits bring together a series of ordered actions that provide comfort, ability and interest, once a certain habit is generated, to those who carry them out. Exercising the habit does not mean excluding thought, although it does channel it, exercising it in the spaces between habits [14]. Bourdeau developed the theory of "habitus", starting from Aristotelian concepts, Weber and (post)-husserlian phenomenology [15]. The habitus starts from a sociological, historical, and structuralist point of view, beyond naturalistic or mechanistic approaches. According to him, individuals (their bodies) have a natural predisposition to acquire nonnatural, arboreal abilities, although these must obey an external stimulus under certain circumstances. Through habitus, a system of "durable and removable" predispositions is formed. Bourdieu refers to "agents", endowing them with the skills of invention and improvisation. Habitus schemes allow constant adaptation to partially modified contexts, being able to meaningfully deconstruct a certain event by anticipating certain tendencies and behaviors that come in turn from all isomorphic habitus, immediately related [16]. Habitus are therefore "acquired schemes operating in the practical state as categories of perception and appreciation or as principles of classification, as well as organizing principles of action". Unlike habit, repetitive, mechanical, automatic, and more reproductive than productive, the habitus is considered as something potently generative [17]. Habit Changes and Adaptive Routines in Times of Emergencies and Disruptive Events Human agency has been defined, but there are other questions that analyze when, where and under what conditions habits prevail or, on the contrary, when internal reflection is prioritized [18]. For Camargo, habits and reflection depend (also) on people's beliefs and can punctually differ if satisfaction is perceived or not. If people perceive truthfulness in their beliefs, they will act out of habit [19]. Social satisfaction is not always born from empirical or verifiable verification, but from the joint belief of different groups. When this does not occur, there is no consensus, and critical episodes can arise. This reflection would involve questioning one's own belief, either discarding it, or trying to argue it to support it. Therefore, habit arises by satisfying belief, whether it is verifiable or not. These habits, once they lose their intentional character, are inscribed in the background [20], in what others call routines. In a simplified way, the power of human action or active response, also called "agency", depends on the state of the individual's beliefs, on the changes in the social structure (political, economic), and on those that are symbolic-cultural. In modern societies, where pluralism occurs, its members are more reflective [19]. Governments have the ability to effectively and quickly intervene and reform society and ways of life; society has supported this intervention in the service of social threats, such as health emergencies. However, not all threats to public health are supported in the same way, nor are they addressed in the same way by society. If the threat is immediate and direct, the social response is driven by necessity, and therefore it will be more radical and urgent (they are closer to people). Furthermore, if risks can somehow be mitigated in a relatively social "easy" and understandable way, it will be easier to assume. 
In addition, the temporary provision to execute these actions, and abide by the rules, is decisive to obtain a social response or another. As opposed threats and their social understanding in this sense, COVID-19 and climate change could be cited, for instance [21]. Habits, Routines and Time Dedication in Times of COVID-19 During the COVID-19 pandemic, many studies have been interested in knowing the social habits and the changes that they have experienced when physical distance was imposed. Ultimately, these studies addressed the consequences on people's physical, psychological, and emotional wellbeing. This situation altered the mental health of the population, causing situations of anxiety, stress and depression [35]. In many of these processes of wellbeing or lack thereof, the perception of the dwelling or its spaces also played a role, and therefore the adaptability it offered in the face of changes in activities or habits was required by circumstances [36,37]. However, studies where people reported general occupational engagement during lockdown, compared to the pre-pandemic situation, and their perception of altered habits, going out of the house, and changes in their daily life in which the home is the center of activity, were not common [38]. Other studies focused on socio-demographic characteristics such as gender [39], or household composition [40], without a generalized picture of Spanish households. The aim of this study has a double mission: (1) to describe the changes in habits, routines, going out of the house, and time dedication that occurred in Spanish households during the social confinement produced by the COVID-19 pandemic, together with the associated factors (sociodemographic and of cohabitants), and (2) to identify which of these factors have affected those changes in the daily lives of the residents. Through this study, the authors hope to help unveil the reality of the residents in order to contribute to decision making and the creation of strategies and contingency plans, related to the social management of time. In addition, they hope to help to create support networks and the design or cession of common spaces where people can alleviate certain burdens that have generated great tension, while guaranteeing safety conditions against COVID-19 transmission. Materials and Methods This cross-sectional study was carried out for the Spanish population between 30 April and 22 June, the period covered by the State of Alarm decreed by the national government on 14 March 2020 [4]. Its purpose was to find out the reality of Spanish households during the period of confinement. This study was funded by the Spanish National Research Council (CSIC), obtaining the approval of its Ethics Committee, with report number 057/2020. Household representatives participated anonymously and independently in an online forum. The topics addressed in the questionnaire covered aspects of changes in habits, temporary dedication to certain tasks, routines, and going out of the home for various reasons. Confinement, the object of study as a phenomenon, made contact with potential participants difficult. For this reason, an online, anonymous, self-completed questionnaire was established. The target population was selected with a non-probabilistic sample by convenience. 
Using the web scraping technique, the e-mail addresses of numerous groups, such as neighborhood and cultural associations and town councils, were obtained, to which the information sheet for the study and the web link to the questionnaire itself were sent. The purpose of this was that these groups would facilitate the contacting of a greater number of people, and a wider distribution throughout the national territory. Social networks, institutional websites and instant messaging applications were also used to expand the number of participants. Informed consent was implicitly understood by accepting access to the questionnaire after reading the information provided at the beginning, which also referred to the objectives of the study and its researchers and the organizing entity. All the information given, as well as consent, was approved by the aforementioned ethics committee. The digital platform for collecting the results of the online participation was SurveyMonkey ® . A database was then generated to organize and work with the information obtained. The original self-administered questionnaire contained 58 questions, combining both numerical and categorical responses as well as Likert-type. This questionnaire is based on other previously validated questionnaires, collected by specifically related regulations [41] or applied in studies and accepted by the scientific community as a common way of collecting data on cohabitants in studies on dwellings [42][43][44][45]. This questionnaire was previously carried out among ten people to ensure its readability, comprehension, compliance, such that improvements could be made in order to launch it to the target audience. To ensure that there were no duplicate questionnaires and that there were no inconsistencies, measures were taken such as stating in the initial informed consent that they were the only representative of the household completing the questionnaire, as well as detecting and eliminating recurrent duplications in responses and other inconsistent data. This study focuses on those questions related to changes in household habits, the time dedication to tasks in the dwelling, going outside, and the establishment of routines during confinement. Six categories were distinguished for this analysis, with their corresponding variables used (Table 1). Table 1. Variables grouped by categories used in this study, main questions and possible answers. Category Variables Socio-demographic factors Age, gender, employment status, education level, place of birth. Household composition Number of cohabitants, also distinguishing presence of minors or elderly in charge. Establishing routines Q: Have you established a daily routine in this period of confinement? R: No routine; barely a routine; trying a routine; similar but more flexible routine; same routine (Likert scale) Task dedication Q: Please indicate, from 1 to 5 (1: minimum value, 5 maximum) the dedication to any of the following tasks (if you may have combined tasks in the same proportion, you can repeat scores): R: Rest, watching TV/reading, housework, caring for minors/dependents, and leisure/sports. Going outside Q: During the lockdown: how often are you going out? R: Never, almost never-occasionally, and frequently (which is the sum of almost every day, and every day). 
Habit changes Q: Indicate, of the following habits, which have been altered during confinement (you can select one or more response options) R: Work, caring for minors or other family members, cleaning the home, other domestic tasks (cooking, tidying...), dressing/changing clothes, eating, sleeping, leisure, smoking, drinking, practicing sports, and social relationships in the home. The study is based on the responses obtained from the participants at the national level, where a descriptive analysis was carried out, stratifying the responses obtained by gender in the case of the socio-demographic variables, in addition to a bivariate analysis applying the Chi-square test. In relation to the variables of establishment of routines, dedication to tasks, frequency of leaving home, and change of habits, a bivariate analysis was applied to check for possible statistically significant relationships with socio-demographic variables and cohabitants, also using the Chi-square test. All contrast analyses were performed at a 95% confidence level (significance if p < 0.05). Results For this analysis, a total of 1673 valid responses were counted, of which 62.5% were women, 80.5% had a university education, 47.9% worked for the administration and 93.2% were of Spanish origin. Table 2 shows the socio-demographic characteristics of the study participants, representatives of each household. At the beginning of the questionnaire, it was requested that only one member of the household complete the questionnaire, in order to ensure a one response-one household correspondence. These responses were stratified by gender. Regarding cohabitation at home during confinement, 22.7% lived alone, 27.8% with another person and 49.5% with at least two other persons in the same household. In addition, 36% lived with children under 18 and 14.6% with people over 65. The cohabitation variables were not significantly related to gender but were significantly related to age. Being over 55 was related to living alone (29.3%) and being between 25 and 54 to living in households with more than two cohabitants (58%). People over 55 had a significantly higher proportion of cohabitation with those over 65 (35.2%) than younger participants. In contrast, the population with the significantly higher percentages of cohabiting with children was in the 35-54 age group. Characteristics of the Representative Members of the Participating Households Level of education and country of origin were statistically significantly, relating to the cohabitation variables. People with university studies lived proportionally more alone (23.9%) than those without them (17.3%), and conversely, non-university graduates tended to live more in households with more than two cohabitants (54.8%) versus university graduates (48.1%). People of non-Spanish origin lived more in households with two members (37.1%) than those of Spanish origin (27.3%), and the latter lived more in households with two or more people (50.1%) than the former (38.1%). Civil servants (11.2%) and those who were self-employed or entrepreneurs (14.8%) were significantly more likely to live with people over 65 than those who were employed (5.3%). Establishing Routines Half of the surveyed population (50.1%) stated that, during lockdown, they maintained their previous usual routines, albeit in a more flexible manner. 
It is remarkable that "establishing new routines" is related to changing habits and/or the schedule in which they are organized and executed by individuals, as a sequence of habitual actions. In this sense, 31% tried to establish a new routine, 9.1% made no changes to their previous routine, and 9% maintained no routine at all. This variable was related to several socio-demographic and cohabitation characteristics of the respondents in a statistically significant way (p < 0.05). For example, women were more likely to establish new routines than men (32.6% vs. 28.5%) and less likely to cope with confinement without routines (7.2% vs. 12.1% in men). Young people (18-34 years) either tried the most to establish new routines (36.9%) or established them the least (11.6%). There were significantly more people who did not establish routines among non-university graduates (12.2%) than among university graduates (8.3%), and they also tried more to create new routines (38.9% vs. 29.2% in the university-educated population). People living alone were more likely to maintain their usual routines (12.4%) than those living with others (8.1%) and were less likely to make those routines more flexible (45.8% vs. 52.6% of those living with others). Living with people over 65 was not related to the establishment of routines, while living with children was: people living with children were more likely to establish new routines (33.8%) than those who did not (29.2%) and less likely to maintain the same routines (6.9% vs. 10.4%). Table 3 shows the amount of time dedicated to tasks in confinement, according to socio-demographic and cohabitation characteristics. Time Dedication to Different Tasks Time dedication to certain tasks was scored on a Likert scale from 1 to 5, where 1 was no time and 5 was a lot of time. Figure 1 shows the frequency distribution of dedication to tasks during confinement, classified as low dedication (scores 1 or 2), medium (scores 3) and high (scores 4 or 5). Table 2 shows the percentages of high dedication to each of the tasks studied, according to socio-demographic and cohabitation variables. People under 35 years of age related to having a higher dedication to rest, leisure and watching TV or reading, while people between 35 and 54 years of age had a higher dedication to home or care. People over 55 were least likely to be engaged in teleworking or tele-study. Being a woman was associated with a higher commitment to care and home. People without a university education were more likely to spend more time resting, doing housework and watching TV or reading than those with a university education, but less time teleworking or tele-studying. People living with more than one other person were the most likely to be engaged in housework and care work, and the least likely to be engaged in resting and watching TV or reading. Living with older people was associated with more time spent watching TV or reading and less time spent on teleworking or housework. In contrast, living with children was associated with less time spent resting, watching TV or reading, and more time spent on household chores or care. Origin and employment status did not show a statistically significant relationship with engagement in these tasks. 16.8% respectively in those who did not live with people over 65), close to statistical significance (p = 0.056). Among those who went out occasionally or regularly, the most common reasons for going out were to go shopping (85.4%) and to take out the rubbish (58.1%). 
Furthermore, 28.2% went out for work, 17.9% for walking the dog and 10.7% for health care visits. No statistically significant relationship was found between gender or age and going out to do the shopping, or other sociodemographic or cohabitant variables. Taking out the rubbish was only statistically significantly related to age or employment status. People over 55 years of age were less likely to take out the rubbish (50.7%) versus people between 35 and 54 (59.7%) or those under 35 (62.9%). Entrepreneurs were less likely to leave the house to take out the rubbish (30.8%) than the self-employed (49.5%), and the self-employed were less likely to go out to throw away the trash than employees (56.3%) or civil servants (61.2%). Going out to work was related to age and living with people over 65: the age group that went out to work the most was 35-54 years old (32.3% vs. 23.1% for those under 35 or 23% for those over 55); and people who did not live with older people went out to work more than those who did (29.8% vs. 19.8%). Changes in Habits during Confinement General confinement led to a number of changes in the habits of the respondents. Some of the most frequent changes are those related to everyday aspects such as work (67.6%), social relations (65.9%), leisure (56.2%) or dressing (38.1%). These changes in areas were closely related to the socio-demographic and cohabitation characteristics of the participants, as shown in Table 3. It seems that these changes affected less old people, and more women, people with university studies, people of Spanish origin, civil servants, people who live with children, and people who do not live with people over 65 years of age. In terms of care, their related tasks and habits also underwent significant changes, such as cooking or tidying up (43.7%), cleaning (42.4%) or caring for children (32%). These changes are closely related to the characteristics of the person and their cohabitants, as shown in Table 4. Changes in care habits during confinement affected more women, people aged between 35 and 54, university graduates, people of Spanish origin, civil servants or employees, people who lived with more than one person, people who lived with children and people who did not live with people over 65. To a lesser extent, health-related habits such as sports (53.1%), sleeping (35.1%), eating (22.5%) and slightly less, drinking alcohol (10.5%) or smoking (6.2%) were altered during the blockade. These changes were related to socio-demographic and cohabitation variables ( Table 4). Having the least change in health-related habits during the lockdown was related to being over 55 or living with people over 65, while the greatest changes were related to being female. In terms of educational level, this affected the various habits differently: the population with a university education altered sports more, while it altered smoking less. There were no statistically significant differences for the variables' origin, employment status, number of cohabitants or cohabitation with children, except for the change in smoking habits (p = 0.045) and living alone (9.1%), which is higher than among people living with others (6.1%). Discussion In Spain, total confinement lasted one month and a half, extending this period to three months with more relaxed, although controlled and gradual, measures. During all this time, the home became the place where most citizens remained (totally at the beginning). 
This meant that the characteristics of the dwelling conditioned the way of life in them and therefore their occupational and behavioral habits [38]. According to the results, the majority of respondents shared a household with at least two other people. In contrast, people living alone were related to being over 55 years old. This is in line with the average size per Spanish household, which is 2.5 persons, according to official data from the National Institute of Statistics. This source also indicates that people over the age of 65 comprise the most represented group living alone [46]. Considering that, in the event of infection, the main recommendation is isolation from other members of the household [45], the number of people living together is a determining factor [46]. This is also related to the size and spatial distribution of the dwelling, as the likelihood of transmission in densely populated or overcrowded households is logically much higher, as it is impossible to maintain this distance [47]. Conversely, the older they are, the more likely they are to live alone, such that support structures outside the home itself are essential to be able to know the state of these people, and to monitor and support possible needs in situations as disruptive as these [48]. While 60% of the sample stated that their pre-confinement routines were not altered, or not significantly, the rest of the sample did experience changes. The segment of the population that was most affected by this alteration was women, who had to establish new routines in confinement to a greater extent. This was associated with living with (and caring for) children or other dependents. This is related to the interaction between mothers and children, and the level of household chaos; mothers who reported the importance of routines in the lives of children in confinement for a US study rated their children's sleep, children's behavior, and reported less screen time [49]. The confinement situation has meant that women have been more involved in the home and in caregiving. This is in line with other countries, as in the case of Turkey, which reported in a qualitative study that women interviewed revealed greater responsibility in the home, thus consolidating traditional domestic roles [50]. This is also in line with the results reported by Eurofound, which established a generalized worsening of the gender gap throughout Europe, both because of the greater job insecurity suffered by women, and because of the role of caretakers that they have played during this confinement, as well as the need to reconcile work and family life [51]. As for women teleworkers, who have increased during confinement in order to be able to attend to all these roles [7], unequal results are reported in terms of productivity during this period. However, according to the Ellen et al. study, having meaningful goals or activities, whether imposed or selfdetermined, may have helped to develop greater resilience and engagement in the face of such obligations, which was positive for balancing mental health [52]. Nevertheless, this balance could easily be blurred by the uncertainty of the circumstances, which may have caused psychological disturbances and loss of well-being [53]. For the reasons mentioned above, women were the ones who stayed at home the longest, while men were the ones who went out more, either for work or to cover essential household needs. In terms of time spent, this was unequal according to age. 
Those under 35 spent more time on leisure, compared to the 35-54 age group, who spent more time on housework and care. The situation of confinement has been pioneering for many, and the need to combine housework with work has been a major challenge for many families. However, in the younger age group, there are many university students, with no family responsibilities, and living with other members of the household; thus, this need would be reduced in comparison with other age groups. Young people would also generally have no need to go outside. People over 55 years of age went out more than younger people. This may be explained by the rate of people in this age group living alone or with people over 65 years of age, thus being responsible for essential household provisioning and logistical tasks, as well as other essential tasks that involved going outdoors. With regard to people who were highly engaged in teleworking, these were more associated with people who were qualified, university educated, and under 55 years of age. This is confirmed by previous analyses in the same research project, which associated teleworking with having a certain socio-economic status (SES) [7,54]. According to official data, teleworkers, although with a more diversified profile during the pandemic, are mostly skilled, freelancer or self-employed, with a medium-high income and a high level of education [55]. The perceived impact of the alteration of habits and the simultaneity of tasks to reconcile work and family life would condition the perception of telework, as indicated by previous analyses referred to above [7,54], which is supported by other similar studies [56]. For people over 55, and especially for those in households where there were people over 65, the most important tasks were related to leisure and distraction, such as reading or watching TV, since there were probably not proportionally greater work obligations or childcare. As for other health-related habits, these were more altered in the case of younger people or those with university studies. Physical activity was affected for more than half of the respondents, sleep for more than one-third of the sample, and eating habits for almost one-quarter. This is confirmed by studies such as that of Kontsevaya et al., which justified the disturbance of rest in turn by changes in mealtimes, teleworking or increased use of screens [56]. Yet, this disturbance does not necessarily lead to less rest, as a national study found that people who slept 6 h or less per day decreased during confinement [57]. Taking into account that the literature supports the ability to adapt their lives to a disruptive event [58] such as a lockdown by public health institutions, there are many factors that can bring about these changes, or on the contrary, not encourage them. Returning to what was stated in Section 2, reflection does not necessarily have to lead to a change in habits. In addition, as Camargo explained, a multitude of situations both at a supraindividual and individual level can affect the way of dealing with these disruptive events and therefore potentially abandon habits and/or create new ones [19]. In this case, possible causes related to personal and household situations have been evaluated, to understand, at a general level, what has happened in the lives of the confined people [38,59]. 
In turn, both the imposition or social norm given by the State of Alarm, as well as the adaptive behavior to a greater or lesser extent of people inside the home, has given rise to a series of new activities that could lay the foundations or consolidate new habits. These can have direct economic, social, community implications, such as ways of enjoying the city and common spaces in buildings, teleworking, e-commerce, or the way of relating, where social networks and internet connection have also certainly had a relevant place [60]. The situation of uncertainty in the face of what is to come, the risks associated with the illness itself, isolation, or fear of change for oneself or for our loved ones in any of the areas of life have also generated unstable psychological states, anxiety, stress or depression [61][62][63]. Other impacts, related to the environment, have also been given by the way we behave, such as those derived from the use of energy in the home [40,64], or the environmental ones, which have been positive during this time for the planet, cleaning our cities of greenhouse gas emissions, for example [65]. To the best of our knowledge, this is the first study to be carried out at a national level for the Spanish population on the permanence in the home and the change in habits and routines, temporary dedication to tasks and going out of the home, as the ultimate objective of the research, in relation to the composition of the household and socio-demographic data. This analysis reveals behavioral differences in terms of gender, age, and household composition, where cohabitants with a certain degree of dependency, such as children or the elderly, and their special vulnerability to the coronavirus, have largely conditioned the dedication of the adults in charge. The role of women as caregivers and home maintainers has been more pronounced, also affecting, to a large extent, their predisposition to alter health-related habits, contact with the outside world and their routines, which is in line with similar studies [66,67]. These factors can be taken into account for future contingency plans, time management strategies, or formulas that favor both equal time distribution within the household as well as exchanges and design of safe areas and other measures that provide families with support in similar situations, thereby mitigating the impact on their daily lives without detriment to the safety of cohabitants in terms of virus transmission. Limitations This research is not without certain limitations. In the first place, derived from the selection of the sample, by convenience, both the means of contact (social networks, websites, and other contacts), and the type of questionnaire required an internet connection and digital resources and skills to be able to answer it. Additionally, the sample showed a significant tendency of high time dedication to teleworking, which could also be due to the type of sample and the platforms selected for the dissemination of the questionnaire. It is assumed that vulnerable segments of the population were not specifically covered in this study, such as elderly people living alone, for instance, or their specific situations. Only those capable of access to the Internet and willing to answer the questionnaire were considered in this study. 
Conversely, the limitation of the alteration of habits and routines could have been complemented by an assessment of them, in order to have a complete picture in relation to the socio-demographic variables of the study for the sample. In addition, the way in which some of these questions were formulated in this questionnaire were qualitative, preventing a comprehensive idea of what was happening with routines and habits of confined people. Finally, routines and habits were used in a secular sense, having not exactly the same approach that was offered in Section 2. Nonetheless, it seemed relevant and opportune to carry out an exploratory study to assess permanence in the home, the obligation to combine work and work-life balance, and the use of spaces by cohabitants. Observing which segments of the population have seen their habits and customs altered, including going outside (mainly due to force majeure), and their occupations, could be a good way of approaching the reality experienced during lockdown in order to be able to establish preventive and contingency measures for disruptive and extreme situations such as this or similar ones in the future. Conclusions In conclusion, the impact of changes in habits during the general confinement of the COVID-19 pandemic seemed to fall more heavily on women, as well as on people living with children and those aged 35-54 years, especially with regard to tasks related to home care or their cohabitants. These changes in habits generally affected both the establishment of new routines, going out of the home, time dedication to tasks, and the perceived alteration of pre-pandemic habits. People over 55 years of age and those living with people over 65 were the least likely to have altered these aspects of their daily life in confinement. Institutional Review Board Statement: The study was conducted in accordance with the guidelines of the Declaration of Helsinki. All participants were previously informed by means of a written informed consent, and they all accepted it. All measures were adopted to ensure anonymous participation. The Ethical Committee from the Spanish National Research Council (CSIC) approved this study (dossier number 057/2020), on 30 April 2020. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study before accessing the online questionnaire. Although this participation was anonymous, each participant accepted an informed consent before accessing the questionnaire. Additional information was provided in Information Sheets, available for the potential participants. Data Availability Statement: Data are not available due to ethical reasons.
Aperture Synthesis With Digital Array Radars and Covariant Change of Wavenumber Variables Attributes of digital array radars are leveraged in enhancements of wideband frequency-wavenumber (omega-k) methods to achieve 1) single-pulse, short-range imaging from a stationary array; 2) single-pulse, all-range, high-density, digital beamforming-on-receive from a stationary array; 3) multiple-pulse aperture synthesis for short-range imaging with sensor movement; and 4) multiple-pulse inverse aperture synthesis for long-range imaging with tracked object movement. Modifications to conventional omega-k algorithms used in synthetic aperture radar are introduced to accommodate antenna element level data, real array element spacing, large scene size and small array size (compared to scene size). Large scene size with k-space processing is handled by a novel Huygens-Fresnel transfer function that does not fully rely on zero-padding to resolve array and scene size mismatch. Aperture synthesis with generalized pulse-to-pulse sensor-step operations is supported. Connections between omega-k wavenumber migration and a covariant change of variables transform associated with Dirac's spectral models of free and scattered electromagnetic fields are established. I. INTRODUCTION Advances in wideband signal processing for aperture synthesis that utilize covariant change of wavenumber variables are presented and shown to be related to the quantum field theory work of P. A. M. Dirac [1]. The introduced omega-k enhancements and capabilities rely on the availability of element (preferred) or subarray (with concomitant grating lobes) data channels of a digital array radar (DAR). The advanced aperture synthesis methods presented here are based on a novel baseline single-pulse omega-k (frequency-wavenumber domain) method that is also introduced in this paper. The baseline algorithm (usable without aperture synthesis) empowers single-pulse imaging at short range with a baseline (nonsynthesis) resolution. At long range, the single-pulse baseline method also provides means for high-density digital beamforming-on-receive (HD-DBF). The point spread function (PSF) of short-range imaging becomes increasingly overlapped as the PSF morphs with increasing range into the beam spread function (BSF) of long range. The angular resolution of the short-range PSF is the same as that of the long-range BSF. In both cases the single-pulse cross-range spatial resolution degrades with range. The single-pulse baseline method can be viewed as an all-range digital beamformer that is not constrained by a plane-wave assumption. The single-pulse omega-k method produces the same results achieved by time-space domain spherical backpropagation. Spherical wavefield inversion is accomplished at all ranges with efficient omega-k domain processing. With use of Dirac's covariant frequency-wavenumber domain descriptions of free-space and scattered electromagnetic (EM) fields, the avoidance of plane-wave signal model approximations empowers a capability to coherently integrate multiple single-pulse data products (images and HD-DBF) for aperture synthesis. If relative movement increases angular dwell or reduces the range between sensor and scene on a pulse-to-pulse basis, then cross-range resolutions progressively improve as single-pulse images are coherently fused in the pixel domain for aperture synthesis. The single-pulse resolution approaches a range-independent value that is a fraction of the passband wavelength. 
A dense lattice that specifies a pixel grid for imaging and a receive-beam aim-point grid for beamforming is required. This lattice grid spacing matches the DAR's antenna element (AE) spacing. The beam spacing of long-range beamforming is identical to the pixel spacing of short-range imaging; hence, the "high density" adjective that labels the HD-DBF use case. At short range, the "HD" can also imply "high definition". The single-pulse method can image a large field-of-view (FoV) if illuminated by a single transmit beam. The FoV can be parsed into smaller regions-of-reconstruction (RoR) for processing. The tessellation of RoRs required to cover the transmit beam's FoV can be processed simultaneously in parallel. Hence, the computational efficiencies of omega-k domain processing are further advanced by a system-level solution architecture that does not require full-scene data aggregations in parallel computing systems. 1) Early Observations of "Solopulse": After an extensive literature survey and study conducted in preparation for authoring [2], and after an initial research activity that addressed a formulation of the omega-k algorithm for wideband and three-dimensional synthetic aperture radars (SARs) operating at very short range [3], our research turned to the use of DARs for inverse SAR (ISAR) imaging of highly maneuvering drones. The DARs in this initial study were assumed to operate in a colocated multiple-input/multiple-output (MIMO) mode with time-division orthogonal waveforms. The DAR assumption provided, for each transmitted pulse, multiple receive data channels. Conceptually, the synthetic aperture's spatial sampling interval is reduced to that of the element spacing of the DAR; i.e., a fraction of the wavelength of the highest frequency within the transmitted signal's passband. During this drone surveillance research we observed evidence that suggested a form of imagery was obtainable from a single pulse. This serendipitous observation led us to further investigate and advance this new type of imaging modality, which we eventually labeled "Solopulse". 2) Aperture Synthesis With "Solopulse": Solopulse is intimately related to both digital beamforming-on-receive and to SAR. Since this Special Topics issue is on synthetic apertures, a description of Solopulse from the SAR perspective is emphasized in this presentation. A SAR processor ingests single-channel-radar data from a collection of pulses gathered with a relatively large spatial sampling interval (determined by the temporal pulse repetition interval and platform speed) to image a scene that is comparatively small relative to the sensor flight-line (see Fig. 1). Our drone surveillance research took us to complementary situations where the (real) aperture is much smaller (defined by the DAR length without sensor movement) and with fields-of-view (scenes) that are much larger than the DAR's size. A Solopulse processor ingests multiple-channel-DAR data of a single pulse with a spatial sampling interval that is equal to the AE spacing. The single-pulse data sample count is determined by the number of AEs in the DAR. In this paper, we advance four-dimensional wavenumber migration methods to support this "SAR-with-DAR" or "Solopulse" concept. The resulting adjustments require the development of what we call a Huygens-Fresnel transfer (HF-transfer) function. The HF-transfer handles (without total reliance on zero-padding) the spatial expansion of the measured DAR data set to that of the cross-range extent of a larger scene. 
The choice of some HF-transfer reference point within the scene identifies an effective point of reference for spherical backpropagation (via k-space processing) from sensor to scene. Also, if the HF-transfer reference point is updated pulse-to-pulse with relative sensor-scene (or object) movement estimates, then aperture synthesis with generalized sensor stepping modes is supported. 3) Early Validations of "Solopulse": Solopulse was invented at Georgia Tech in 2017 [3], [4], [5]. There has been much follow-on effort to validate and mature the concept and explore potential use cases via modeling and simulation. Initial software simulations included models of both environmental noise and receiver hardware imperfections. Hardware prototyping activities were conducted at the Georgia Tech Research Institute (GTRI) under internal research and development funding during 2019-2021 and also under external funding during 2021-2022 [6]. Research goals included activities to validate and verify the Solopulse concept by collecting and analyzing data measured in an anechoic chamber and in an open laboratory. More advanced front-end array models were then utilized to produce additional simulated, but increasingly realistic, data-cubes at the hardware performance levels of potential Solopulse antenna arrays. Various error tolerances were evaluated, including frequency-independent amplitude/phase offset errors, element dropouts, misaligned array elements and channel response mismatch. These initial studies established that Solopulse has a measure of robustness in the presence of hardware imperfections. The laboratory work of [6] demonstrated that the omega-k domain processing of Solopulse produces the same beam function of time-space domain spherical backpropagation, but with the higher computational efficiency afforded by a k-space approach. 4) Paper Outline: The remainder of this introductory paper on Solopulse with a focus on the aperture synthesis perspective is outlined as follows. Section II provides an introductory overview of Solopulse foundations, including discussions on concepts, covariance, spherical wave theory and inversion, and the algorithmic system model. Section III presents examples and analyses of various Solopulse data products, including single-pulse images and HD-DBF products, and multiple-pulse aperture synthesis images. Section IV gives a detailed explanation of Solopulse signal processing. Section V overviews the various methods used to describe EM spectra related to wave motion equations, with an overview of Dirac's results. Section VI summarizes status and plans. A. Solopulse Aperture Synthesis Concepts To better illustrate the dual relationship between Solopulse and SAR, consider the concept shown in Fig. 2, where each spatial pulse repetition interval of SAR is replaced with an imagined DAR of equal length and with a number of (single-pulse) data channels equal to the number of AEs in the DAR. This idea introduces the availability of contiguous element-level array data across the synthetic aperture with DAR-AE spacings. Furthermore, this idea invites consideration of the following questions:
- Can imagery be formed with multiple channel (AE) data from a single pulse with the DAR in a fixed position?
- Can the DAR be more generally maneuvered during aperture synthesis?
- Can coherent fusion, possibly with complex pixel additions, occur post single-look image formation?
This paper establishes that Solopulse provides affirmative answers to these questions. 
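The coherent fusion raised in the third question above amounts to complex, pixel-wise addition of single-look images that share a common scene lattice. The sketch below is a minimal illustration of that bookkeeping only, not the authors' implementation; it assumes each pulse has already been reconstructed into a complex single-look image on the same pixel grid (with the HF-transfer reference point updated for that pulse), and the function name, array sizes, and random test data are hypothetical.

```python
import numpy as np

def fuse_single_look_images(single_look_images):
    """Coherently fuse complex single-pulse images defined on a common pixel grid.

    single_look_images : iterable of complex 2-D arrays, one per pulse,
        each already referenced to the same scene pixel lattice.
    Returns the complex multi-pulse (synthesized) image.
    """
    fused = None
    for img in single_look_images:
        # complex pixel addition; phase coherence across pulses is assumed
        fused = img.copy() if fused is None else fused + img
    return fused

# Hypothetical usage: three pulses over a 256 x 256 pixel RoR
rng = np.random.default_rng(0)
pulses = [rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
          for _ in range(3)]
synthesized = fuse_single_look_images(pulses)
intensity = np.abs(synthesized) ** 2  # detected (magnitude-squared) image
```

In practice the per-pulse images would come out of the omega-k chain described later in Section II-D rather than from a random-number generator.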
As illustrated by the SAR-like side-stepping example in Fig. 3, Solopulse produces imagery with each pulse. Multiple images can be coherently fused (in the pixel domain) with no particular sensor-scene motion geometry required to perform aperture synthesis. Aperture synthesis requires that the HF-transfer's reference point be updated pulse-to-pulse. The resulting multiple-pulse data products have image quality (sensitivity and resolution) levels that depend on the mode's step-interval size(s) and aperture length. B. Covariance Certain aspects of radar signal propagation modeling become more tractable and realistic when spherical EM wavefields are approached as relativistically covariant time-space fields that are also describable by corresponding frequency-wavenumber domain spectral models. The spectral descriptions of quantum field theory (QFT) and quantum electrodynamics (QED) prove particularly useful in wavenumber domain algorithm development for radar signal processing. Like light, radar signals are electromagnetic and covariant principles can be applied to advantage in algorithm development. This is one of the objectives of Solopulse signal processing. 1) Time-Space Covariance: Covariant analysis harmonizes time and space observations [7], [8]. Within the context of the Special Theory of Relativity, time and space can be viewed as a single entity, time-space [9], [10]. Covariance requires that the square of any change in time-space "distance" $(\Delta s)^2$ between two time-space points (events) should satisfy $(\Delta s)^2 = c^2(\text{time interval})^2 - (\text{space interval})^2$, with $c$ representing the speed of propagation. If the separation interval $\Delta s$ is infinitesimal, then the difference $\Delta$ goes to the differential $d$ and the temporal $dt$ and spatial $dx, dy, dz$ are introduced, $(ds)^2 = (c\,dt)^2 - (dx^2 + dy^2 + dz^2)$. The time-space interval between any two events is a geometric quantity and all observers (possibly in relative motion) measure "4-vector" time-space coordinates in a way that preserves, as an invariant, the differential difference-of-squares. Classical descriptions of space utilize three-dimensional vectors, or "3-vectors". Extension of radar wave theory from a classical (nonrelativistic) to a covariant (relativistic) form is facilitated by "4-vectors". This extension holds for all relative velocities, whether fast, slow or nil. Covariance does not place an analysis into the Minkowski space [11]. The noncovariant Minkowski form $(ds)^2 = (jc\,dt)^2 + (dx^2 + dy^2 + dz^2)$ that uses $jct$ for the time coordinate seeks to retain Euclidean behaviors as noted in [12]. The relativistic and covariant time-space universe in which we all live is non-Euclidean. 2) Frequency-Wavenumber Covariance: Spectral (frequency and wavenumber) properties of free-space EM propagation are more readily obtained when dealt with in relativistic 4-vector forms. Dirac's covariant formulation of electromagnetic fields requires 4-vector analysis of scalar-vector potential fields of both free and scattered EM fields. In quantum mechanical disciplines, unbounded photon (free-space EM field) behaviors can be described in either a 4-vector time-space difference-of-squares $\chi$ domain or a 4-vector frequency-wavenumber difference-of-squares $\kappa$ domain. The complementary k-space model of the $\kappa$-domain is such that energy (signal frequency) and momentum (directed radiation) are used in a 4-vector. 
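As a compact reference, the two invariants discussed above can be restated side by side; this is only a summary of relations already in the text, written with a generic wavevector k rather than the paper's specific array variables.

```latex
% Restatement of the two invariants discussed in the text (generic wavevector k):
\chi = (ct)^{2} - |\mathbf{x}|^{2}, \qquad
\kappa = k_{\omega}^{2} - |\mathbf{k}|^{2}, \qquad k_{\omega} = \omega/c .
% Free-space propagation confines the field to \chi = 0 (the light cone) and
% \kappa = 0 (the Ewald sphere); the 4-D Fourier transform linking (t, x) to
% (k_omega, k) preserves this Lorentz-invariant structure, which is the
% constraint that wavenumber migration enforces on measured array spectra.
```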
3) Covariance and Solopulse: The Solopulse signal spectrum is related to the radar's time-space data by a four-dimensional temporal-spatial Fourier transform. Subsequent signal processing seeks to maintain covariant relationships among wavenumber data samples by moving or "migrating" the wavenumber sample positions. Aperture wavenumber samples $k_u$ are positioned such that the squared magnitude $|k_u|^2$ is equal to the square of the signal's temporal wavenumber $k_\omega^2$. Said another way, the difference-of-squares $\kappa = k_\omega^2 - |k_x|^2 = 0$ is maintained or held as "covariant" by wavenumber migration. The 4-vector Fourier transform between the $\chi$ and $\kappa$ domains ensures that the covariance of the corresponding time-space $(t, x)$ domain manifold $\chi = (ct)^2 - |x|^2 = 0$ is also preserved. Covariant systems preserve the Lorentz invariance of both time-space and frequency-wavenumber entities. C. Spherical Wave Theory The foundational elements of Solopulse's covariant spherical wave theories are the Huygens wavelet, the Fresnel wave field and an entity that we call the Huygens-Fresnel spectrum. 1) Huygens Wavelet: A Huygens wavelet is an impulsively thin EM sphere centered at $r = 0$ that expands with increasing time. Since the impulsive Huygens wavelet $\mathring{\delta}$ is viewed as a distributed singularity, generalized function theory applies [1], [13], [14]. The impulsive wavelet "density" spatially decays at the rate of $1/r$. The spherical attribute of $hh$ is indicated iconically by placing a small circle over $\delta$ to obtain $\mathring{\delta}$. Bold fonts are used in 3-vector descriptions. Double scripted notation $hh(t, r)$ is sometimes used to emphasize the existence of both the temporal $t$ and spatial $r$ domains. Ordered upper case letters are used to indicate the result of a temporal Fourier transform $Hh(\omega, r)$ or the result of both temporal and spatial Fourier transforms $HH(\omega, k_r)$. A radial "$r$" variable is sometimes used instead of a rectilinear "$x$" variable in anticipation that with point source models the spatial analysis will have spherical symmetries that depend only on radial distance $r = |r|$. Scalar analyses of EM vector fields are common when spherical symmetries exist, in which cases differential equations of $f(x)$ often go to $rf(r)$ [2]. The Huygens wavelet was a key element used by Einstein in his development of the Special Theory of Relativity [9]. 2) Fresnel Wave Field: The temporal Fourier transform of a Huygens wavelet yields the static (time-independent) Fresnel wave field, where $k_\omega = \omega/c$. Note that $k_\omega$ may be positive or negative. If specified by a single value $k_\omega$ the situation can be called monochromatic; if by a set of values $\{k_\omega\}$, the situation becomes polychromatic. In radar signal analysis, consideration of a continuous $k_\omega$ band as specified by a bounded set $a < k_\omega < b$ for passband sensors is useful. A Fresnel wave field for just one temporal frequency $\omega_c$ is shown in Fig. 4(a). Huygens' wavelet is a solution to a covariant (difference-of-squares) time-space domain wave motion equation derived from the Maxwell-Heaviside equations [15]. Fresnel's wave field is a solution to the (difference-of-squares) frequency-space domain Helmholtz wave motion equation. The Huygens wavelet and the Fresnel wave field both originate at point singularities. Both spherical wave functions are solutions when the forcing functions ($ff$ and $Ff$) of the corresponding wave motion equations are point singularities. Hence, these can be called the Huygens Green-function and the Fresnel Green-function [16]. 
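The displayed equations for the Huygens wavelet and the Fresnel wave field appear to have been lost in extraction here. For orientation only, the standard free-space Green-function forms consistent with the 1/r decay and point-singularity forcing described above are sketched below; the authors' exact normalization and sign conventions may differ.

```latex
% Assumed standard free-space Green-function forms (normalization and sign
% conventions may differ from the paper's own equations):
\begin{align*}
  hh(t,\mathbf{r}) &= \frac{\mathring{\delta}\!\left(t - r/c\right)}{4\pi r},
  & \Big(\nabla^{2} - \tfrac{1}{c^{2}}\,\partial_{t}^{2}\Big)\, hh &= -\,\delta^{3}(\mathbf{r})\,\delta(t),\\[2pt]
  Hh(k_{\omega},\mathbf{r}) &= \frac{e^{\,j k_{\omega} r}}{4\pi r},
  & \big(\nabla^{2} + k_{\omega}^{2}\big)\, Hh &= -\,\delta^{3}(\mathbf{r}),
  \qquad k_{\omega} = \omega/c .
\end{align*}
```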
3) Huygens-Fresnel Spectrum: One might expect that the spatial (3D) Fourier transform of the Fresnel wave field $Hh(k_\omega, r)$ or the combined temporal-spatial (4D) Fourier transform of the Huygens wavelet $hh(t, r)$ would provide what we call the Huygens-Fresnel (HF) spectrum $HH(k_\omega, k_r)$, which is characterized by a difference-of-squares, frequency-wavenumber domain, wave motion equation. The HF-spectrum is anticipated to embody an alternate 4-vector $(k_\omega, k_x)$ frequency-wavenumber domain expression of the same Huygens wave motion (1) that occurs in the 4-vector $(t, x)$ time-space domain. The spatial Fourier transform $HH$ of the Fresnel wave field $Hh$ can be obtained computationally with a Discrete Fourier Transform (DFT). Fig. 4(b) shows the computed $HH$ function that results when a three-dimensional DFT is applied to the Fresnel wave field of Fig. 4(a). A monochromatic sample of the HF-spectrum is related to the Ewald sphere of x-ray crystallography [17], [18], [19], [20], [21]. The Ewald sphere of the computed $HH$ of Fig. 4(b) is clearly evident. The banding on the Ewald sphere is what we call the $k_x$-space locator sinusoid $\exp(j x_n k_x)$ that expresses the location $x_n$ of the x-space source point $\delta(x - x_n)$ of the Fresnel wave field of Fig. 4(a). 4) Spherical Wavefield Inversion: One approach to modeling a spherically scattered wave field is to describe the interaction of an incident wave on a bounded region that contains scatterers [22], [23], [24], [25]. The incident field is viewed as energizing each scatterer, which in turn, if certain conditions are met [26], can be viewed as each creating its own isotropically scattered field, a portion of which is received back at the antenna. These concepts are based on Huygens principles [27]. An approach to forming an image from scattered wave field data is through wave field inversion [28]. As one option, the wave field inversion task can be formulated in the time-space domain. Such algorithms, sometimes applied in SAR, are predominantly within the class of spherical back-propagation algorithms [29], [30]. Corresponding SAR inversion methods can be developed in frequency-space and frequency-wavenumber domains. D. Solopulse System Model Solopulse is able to image a scene by spherical wavefield inversion performed with the k-space processing illustrated by the block diagram of Fig. 5. If illuminated during transmit actions, the scene can even be the entire surroundings of the radar, thereby delivering on the futuristic vision of Skolnik's "surround" or "ubiquitous radar" [31]. Solopulse signal processing can be configured to create imagery within one or more RoRs, which may be a subset of a larger FoV illuminated by the transmitted pulse. Solopulse reconstructions of multiple RoRs can be performed simultaneously to adapt the Solopulse algorithm to parallel computing architectures. Parallelization reduces the computational latency of large-scene or full-surround Solopulse reconstructions. The number of RoRs obtainable with a given radar system is determined by the sensitivity (power-aperture product) of the sensor array and the compute capacity of the processing hardware. If sufficient power-aperture product is provided, only one pulse is required to cover the FoV with the tessellation of RoRs that form a mosaic. No transmit beam scanning is required. The size of the individual RoRs determines computing latency. The number of RoRs within the field-of-view mosaic determines the computing throughput requirement. 
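A small numerical sketch of the DFT-based HF-spectrum computation described above is given below. It is a 2-D toy example rather than the authors' 3-D processing: a monochromatic Fresnel field from a point source is sampled on a grid and Fourier transformed, and the spectral support concentrates near the Ewald circle (the 2-D analog of the Ewald sphere), modulated by the locator sinusoid of the source position. The frequency, grid size, and source offset are arbitrary choices.

```python
import numpy as np

# Toy 2-D Fresnel field from a point source at x_n, sampled on an N x N grid.
c = 3.0e8
f0 = 10.0e9                      # 10 GHz tone (arbitrary)
k_w = 2 * np.pi * f0 / c         # temporal wavenumber k_omega
N, dx = 256, 0.01                # grid size and spacing in meters (arbitrary)
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
x_n = np.array([0.25, -0.10])    # hypothetical source offset (meters)

r = np.hypot(X - x_n[0], Y - x_n[1]) + 1e-9   # avoid divide-by-zero at the source
fresnel = np.exp(1j * k_w * r) / (4 * np.pi * r)

# Spatial DFT -> discrete HF-spectrum sample; support lies near |k| = k_omega
HF = np.fft.fftshift(np.fft.fft2(fresnel))
k_axis = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
KX, KY = np.meshgrid(k_axis, k_axis, indexing="ij")
on_ewald = np.abs(np.hypot(KX, KY) - k_w) < (2 * np.pi / (N * dx))
print("fraction of spectral energy near the Ewald circle:",
      float(np.sum(np.abs(HF[on_ewald])**2) / np.sum(np.abs(HF)**2)))
```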
RoR-boundary or seaming degradations over the FoV that might occur can be minimized or eliminated with careful bounded-region design decisions as described in Section IV-B. Imaging methods based on temporal-spatial isotropic wave field inversions, but implemented with frequency-wavenumber domain operations, can be viewed as holographic [32]. Holographic reconstructions of k-space descriptions of remotely sensed (ex situ) wave fields can be converted to within-scene (in situ) descriptions through k-space operators that we call inverse HF-transfers [3]. Inverse HF-transfers are based on the Fourier transform pair $\delta(x - x_n) \leftrightarrow \exp(j x_n k_x)$ and are k-space operators that correspond to spatial domain spherical wavefield back-propagation operators. Solopulse signal processing utilizes HF-transfers and covariant wavenumber migration to achieve spherical back-propagation by k-space methods. Inverse HF-transfers of k-space are preferable to the computationally intensive, interpolated, pixel-by-pixel, temporal-spatial, spherical back-propagation methods of SAR or near-field array scanners. 1) Huygens-Fresnel Transfers: HF-transfer functions are key to managing the frequency of the Ewald sphere banding of the HF-spectrum. Lower frequency banding is advantageous in the design of a required resampling task in Solopulse signal processing. The frequency of Ewald sphere phase banding is reduced by the HF-transfer that changes scattered field descriptions from remotely sensed ex situ descriptions to within-scene in situ descriptions [3]. A single reference point transfer is effective for the entire scene. The resulting scatterer-specific lower frequency in situ tonal fields in k-space describe scatterer-specific offsets relative to the selected reference point. Lower frequency tonal bands on the Ewald sphere are desirable in preparations for the uniform resampling process that occurs either after or as part of a wavenumber migration process [33]. As shown in Fig. 5, part of the Solopulse algorithm requires, before the runtime of the algorithm, computation of an HF-transfer function for each RoR to be imaged. Multiple transfer functions can be simultaneously applied to a single copy of the measured (single-pulse) data to simultaneously form images of multiple RoRs with parallel processing. 2) HF-Transfer Setup: An innovative feature of Solopulse is a means for isotropic wave field inversion with an inverse HF-transfer function expanded from a relatively small stationary array back to a larger scene. In preparation for setting up an expanded HF-transfer function, a reference signal that would be received by a virtual array with length or size matched to the cross-range extent of a desired RoR is produced beforehand by computer simulation. A reference Huygens wavelet $hh(t, a - r_c)$, with a point $r_c$ positioned at the nominal center of the objective RoR, is simulated, with $a$ describing the locations of real and virtual AEs that are imagined to exist within and outside the bounds of the DAR (across an extent equal to the RoR size). Solopulse algorithms modify the reference Huygens wavelet from an impulsively thin shell to a radial thickness defined by the transmitted waveform as a function of time. A reference Fresnel wavefield $Hh(k_{\omega_p}, a - r_c)$ is obtained by the temporal Fourier transform of the reference Huygens wavelet $hh$. Computation of a spatial Fourier transform of the reference Fresnel field yields a reference HF-spectrum, from which the forward HF-transfer function can be obtained. 
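The setup steps just described can be sketched in one cross-range dimension. This is a simplified, monochromatic-sample illustration under stated assumptions (a linear array along u, scalar fields, hypothetical geometry and parameter values), not the authors' implementation; the conjugation that produces the inverse transfer is the step noted in the following subsection.

```python
import numpy as np

c = 3.0e8
freqs = np.linspace(9.5e9, 10.5e9, 64)        # passband frequency samples (arbitrary)
k_w = 2 * np.pi * freqs / c                   # temporal wavenumbers k_omega

d = 0.015                                     # AE spacing in meters (arbitrary)
n_ae = 64                                     # real DAR elements
n_ror = 512                                   # virtual aperture matched to RoR cross-range extent
u = (np.arange(n_ror) - n_ror // 2) * d       # real + virtual AE positions along the array
r_c = np.array([0.0, 50.0])                   # hypothetical RoR reference (center) point (m)

# Reference Fresnel field over the virtual aperture: one spherical wavefront per k_omega
dist = np.hypot(u - r_c[0], r_c[1])           # |a - r_c| for each (virtual) element
ref_fresnel = np.exp(1j * np.outer(k_w, dist)) / (4 * np.pi * dist)   # (n_freq, n_ror)

# Spatial FFT along the aperture gives the reference HF-spectrum / forward HF-transfer
forward_hf = np.fft.fft(ref_fresnel, axis=1)
inverse_hf = np.conj(forward_hf)              # inverse transfer by conjugation (next subsection)

# Measured data (n_freq x n_ae) is zero-padded to the RoR extent before use
measured = np.zeros((len(freqs), n_ae), dtype=complex)   # placeholder measured phase history
pad = (n_ror - n_ae) // 2
padded = np.pad(measured, ((0, 0), (pad, n_ror - n_ae - pad)))
spectrum_in_situ = np.fft.fft(padded, axis=1) * inverse_hf   # ex situ -> in situ description
```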
The inverse HF-transfer function is obtained by conjugation.

3) Array Data Zero Padding: Data sample-count mismatches between array and scene sizes create potentially problematic situations in (both real and synthetic) array signal processing. In some SAR algorithm families, for example, if there is a mismatch between the synthetic array length and the typically smaller (in spotlight mode) cross-range scene size, then zero padding has been recommended as a means to adjust data set sizes [34], [35], [36]. Similar recommendations have been made for applications in optics [37]. For Solopulse, the spatial expansion of the HF-transfer function requires that the measured sensor array data be spatially zero-padded to the size of the objective RoR (the expanded HF-transfer function is not zero padded).

4) Covariant Change-of-Variables for Wavenumber Migration: Sensor array data in a k-space format obtained by temporal-spatial Fourier transforms of received signals do not immediately satisfy the covariant constraint. Wavenumber migration reformats a sensor array's noncovariant rectilinear spectrum into a covariant HF-spectrum. This migrated spectrum provides an estimate of the scene's angular spectrum. During wavenumber migration, the signal frequency wavenumber k ω and cross-array wavenumber k u of the sensor array undergo a change-of-variables (CoV) transformation into migrated angular spectrum wavenumbers; the breve accent (e.g., k̆ x ) indicates migrated angular spectrum variables.

5) Uniform Resampling of Migrated Spectrum: After the CoV transform, data samples of the migrated angular spectrum wavenumber domain k̆ x are nonuniformly positioned. A resampling operation from nonuniform k̆ x to a grid k̈ x of uniformly spaced scene wavenumbers must occur before Fourier inversion with Fast Fourier Transform (FFT) algorithms [38]. Double-dot accents indicate resampled data. Hence, there is a sequence of mappings k u → k̆ x → k̈ x of measured phase history data positioned at array wavenumber points in the array's rectilinear spectrum k u to the migrated wavenumber points in the angular spectrum k̆ x and on to uniformly resampled image spectrum data samples k̈ x .

6) Least-Squares CoV: The resampling process can also be viewed as a regridding process sometimes used in various computed imaging tasks [39], [40], [41], [42], [43], [44], [45]. The terminology of "regridding" does not hold consistent meaning throughout the literature. As used here, the meaning and methods of [46] used for magnetic resonance imaging (MRI) are relevant. The MRI approach allows formulations based on linear algebra pseudoinverse image reconstructions. As explained further in Section IV-B, the MRI approach inverts the inherent continuous-to-discrete mapping of the (continuous) scattered field data to (discrete) measurements of migrated non-Cartesian k̆ x -space samples. The notion of a regridding "transformation" opens the door to approaching the resampling task as an estimation problem. Exact discrete-to-continuous inverse mappings may not exist, and this invites the use of least-squares solutions. However, as a first step, this paper uses a Jacobian-weighted CoV (JW-CoV) method to implement the covariant transfer. Future research will explore the use of a least-squares CoV (LS-CoV) approach.

7) Jacobian-Weighted CoV: In standard signal processing problems, a nonuniform-to-uniform resampling process can be achieved with sinc interpolation. (A small numerical sketch of the migration-plus-resampling pipeline follows.)
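Because the paper's exact CoV expression is not reproduced above, the following minimal sketch assumes, purely for illustration, the familiar Stolt-type relation k̆ x = (k ω ² − k u ²)^(1/2) used in omega-k processing, together with its Jacobian weight |dk ω /dk̆ x | = k̆ x /k ω ; the nonuniform-to-uniform resampling and the final Fourier inversion are the steps items 4-7 describe. All waveform and grid values are assumed.

# Minimal sketch: covariant CoV (assumed Stolt-type mapping), Jacobian weighting,
# nonuniform-to-uniform resampling, and Fourier inversion of a toy phase history.
import numpy as np

c = 3e8
f = np.linspace(4.75e9, 5.25e9, 128)            # 500 MHz band around 5 GHz (assumed)
k_w = 2 * np.pi * f / c                          # signal wavenumbers
k_u = np.linspace(-40.0, 40.0, 129)              # cross-array wavenumbers [rad/m] (assumed)
KW, KU = np.meshgrid(k_w, k_u, indexing="ij")

x0, y0 = 30.0, 2.0                               # assumed point-scatterer offsets [m]
KX = np.sqrt(np.maximum(KW**2 - KU**2, 0.0))     # assumed Stolt-type migrated wavenumber
S = np.exp(-1j * (KX * x0 + KU * y0))            # idealized rectilinear phase history

# Migration column by column: CoV + Jacobian weight, then uniform resampling in k_x.
kx_grid = np.linspace(91.0, k_w.max(), 128)      # uniform image-spectrum grid (covers support)
M = np.zeros((kx_grid.size, k_u.size), dtype=complex)
for j in range(k_u.size):
    kx_col = np.sqrt(np.maximum(k_w**2 - k_u[j]**2, 0.0))   # nonuniform migrated samples
    jac = np.where(k_w > 0, kx_col / k_w, 0.0)               # |d k_w / d k_x|
    col = S[:, j] * jac
    good = kx_col > 0
    if good.sum() > 2:
        M[:, j] = np.interp(kx_grid, kx_col[good], col.real[good], left=0, right=0) \
                + 1j * np.interp(kx_grid, kx_col[good], col.imag[good], left=0, right=0)

# After migration, each column phase is a clean locator sinusoid exp(-j k_x x0),
# so a 2D inverse FFT focuses the scatterer; the peak-to-mean ratio indicates focusing.
img = np.abs(np.fft.ifft2(M))
print("peak-to-mean ratio of focused image:", img.max() / img.mean())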
Coincidentally, when a box indicator function is used to both define and bound an objective RoR, Jacobian-determinant-weighted sinc interpolation can be used to resample to the uniformly spaced k̈ x [33], [47], [48]. Although not an exact interpolation-based reconstruction, nor even a least-squares solution [49], [50], Jacobian-weighted sinc-based reconstruction methods for rectilinearly bounded RoRs have been considered appropriate due to implementation efficiency [34]. If the RoR bounding function were circular, then Bessel functions, instead of sinc functions, would be required for regridding [51].

8) Cauchy Structures in Resampling Matrices: The resampling task of JW-CoV can be set up as a linear algebra problem. The resampling matrix possesses a Cauchy structure due to its x⁻¹ decay functions [52], [53], [54]. The linear algebra formulation allows wavenumber migration to be implemented with simple matrix-vector multiplies, a key to the computational efficiency of Solopulse. The Cauchy matrix-vector product is shown as an icon for each wavenumber path in Fig. 5. These migration operations, implemented as matrix-vector multiplies, can all be performed in parallel for each of the RoRs, which also can be processed in parallel.

9) Inverse Fourier Transform: After wavenumber migration, inverse Fourier transforms lead to Solopulse imagery or HD-DBF data products. The inverse FFT is shown as a subblock in Fig. 5.

III. SOLOPULSE DATA PRODUCTS

This section provides an overview of the variety and attributes of Solopulse data products for both single-pulse imaging and HD-DBF, and for multiple-pulse applications with aperture synthesis. Characteristics of point and beam spread functions are demonstrated and described. A single-pulse image of a small drone at short range is demonstrated. A study of performance levels as SNR and range are varied is provided. Aperture synthesis for short-range surround imaging and long-range HD-DBF are performed. Scenarios that utilize parallel processing are provided with multiple RoRs that span surround FoVs and track-mode HD-DBF FoVs. The surround FoVs demonstrate aperture synthesis with sensor array movement and the HD-DBF FoVs demonstrate inverse aperture synthesis with tracked object movement.

1) Solopulse in a Colocated MIMO Radar: Solopulse can utilize SISO (single-input/single-output), SIMO (single-input/multiple-output) and MIMO transmit/receive configurations. All of these colocated MIMO configurations can be viewed as "single-pulse" operations. Use of correlated or uncorrelated/orthogonal waveforms among the AEs is an option [55]. If the waveforms are correlated, then transmit beamforming occurs during MIMO transmit dwells. However, if the waveforms are uncorrelated or orthogonal so as to broaden the transmit beam, then the Solopulse system operates in a SISO mode. Transmit beamforming, possibly with monostatic spoiling or bistatic pulses from a secondary, smaller, transmit antenna or array, can be utilized. SIMO mode, where one AE transmits and all AEs receive, provides system behaviors like SISO but with an undesirable (but removable) artifact that causes geometric warping of Solopulse imagery at short range.

2) MBOR Comparisons: A useful baseline for Solopulse comparisons is multiple-beams-on-receive (MBOR) data products produced by a conventional plane-wave algorithm. Such Fraunhofer digital beamformers are implemented with time-delays for beam-steer-on-receive in wideband scenarios.
An MBOR receive beam can reasonably be called a Fraunhofer beam (F-beam) and a Solopulse beam a Huygens-Fresnel beam (HF-beam). With conventional DBF, the MBOR cross-range field of view is divided into a number of whole or fractional single-beam intervals. Track-mode MBOR utilizes overlapped beam rosettes with typically tens of overlapped receive beams. The degree of overlap in track mode is typically on the order of half a beamwidth. Search-mode MBOR tends to utilize less overlap. The receive beam density of MBOR used for Solopulse comparisons is increased in this paper. MBOR beams spaced as close as a tenth of a beamwidth apart are utilized to better see attributes of MBOR solutions. This makes Fraunhofer-DBF data products more image-like and easier to compare to Solopulse images and HD-DBF data products.

1) Pixel and Beam Lattices: Operation of the Solopulse baseline has a requirement not typical of SAR or DBF: the pixel density of the image or beam-packing lattice is set to match the real array's AE spacing. This beam/pixel packing requirement is indicated by the red lattices of Figs. 2, 3, and 6.

2) Point Spread and Beam Spread Functions: At long range the beam lattice characterizes the aim-points of a high-density, highly overlapped set of HF-beams, as illustrated in Fig. 6. At short range, the BSF overlap is reduced, and the beam spread function behaves more like a PSF of a computed imaging algorithm.

3) Single-Pulse Cross-Range Resolution: Solopulse with a stationary DAR provides a range-dependent cross-range spatial resolution of Rθ, where R is range. The range-independent angular resolution is θ = λ/D DAR , with D DAR being the length of the DAR sensor. Note that with a DAR, the angular resolution is λ/D DAR = 4/N AE in SISO mode, where N AE is the number of AEs and the AE spacing is λ/4. This becomes 2/N AE in SIMO and MIMO modes with an AE spacing of λ/2.

4) Short-Range Solopulse PSF: Shown in Fig. 7 are two examples of the PSFs of a C-band (5 GHz) digital array at short range. The array has 32 elements spaced λ/4 apart and the DAR is about 44 cm long. The uncoded waveform has a bandwidth of 500 MHz. The simulation is noise free. Fig. 7(a) contains a single scatterer at about 25 m. Fig. 7(b) has a scatterer at about 115 m. The Fraunhofer (2D²DAR/λ min ) near-field/far-field boundary is 7 m. The λ/D DAR angular beamwidth is 7.5 degrees. The associated R × λ/D DAR spatial beamwidths are 3 m and 15.5 m, respectively. The measured 4 dB-down cross-range resolutions are 1.6 m and 8.2 m, respectively. This near factor-of-two difference between the beamwidth and the measured resolution is expected in SISO mode. The colormap spans a full-scale dynamic range of more than 140 dB. This allows all sidelobe structures to be observed. The curved sidelobes shown in Fig. 7 are typical of Solopulse imagery at short range, or at long range with large cross-range FoVs. Note that there is wrap-around aliasing in Fig. 7(b). This can occur with certain combinations of parameter settings related to RoR and array sizes. Section IV-B provides a detailed analysis of the potential aliasings and ambiguities of spatial and band-limited reconstruction scenarios. Fig. 7(b) demonstrates that with judicious parameter selections the wrap-around aliasing can be managed and that the side-lobe curvature is less pronounced with longer range.

5) Solopulse and MBOR Comparison: Sensitivity: Shown in Fig. 8 is a comparison of Solopulse and MBOR as the transmitted power is varied to produce decreasing signal-to-noise ratios (SNRs) of 20, 10, 0 and −10 dB (the columns of images, left to right).
The top row contains Solopulse images. The middle row contains MBOR "images". The red plots in the bottom row are the cross-range profiles of the Solopulse images. The blue lines are the cross-range profiles of the MBOR images. The same C-band sensor-scatterer configuration of Fig. 7(b) is used in these SNR comparisons, except the number of AEs has been increased to give a larger array size of about 3.5 m (128 AEs). The sensor mode is also changed to operate in a SIMO mode with just one transmit element. The required power-aperture-gains (PAGs) are, respectively, 16.9, 6.9, −3.1, and −13.1 dB. The larger array size increases the Fraunhofer near-field/far-field boundary to 446.5 m. This scenario is within the Fraunhofer near-field of the sensor. The scatterer's PSF can be made out at all tested SNR levels in both Solopulse and MBOR images. Measured cross-range (4 dB-down) spatial resolution is about 1.9 m for Solopulse and 2.9 m for MBOR. The Solopulse image (both noise and signal) sits on an elevated energy floor. Analysis suggests this energy pedestal comes from the gain of the HF-transfer function of the Solopulse processing flow.

6) Solopulse and MBOR Comparison: Range: Shown in Fig. 9 is a comparison of Solopulse and MBOR as the range is increased.

7) Single-Pulse "Freeze Frame" Image of a Drone: Shown in Fig. 10(b) is the Solopulse image of a small drone obtained with a W-band (77 GHz) digital array at short range. The scatterers of a Swerling 1 model used to simulate the small drone are shown in Fig. 10(a). The array has 1024 elements spaced λ/4 = 0.1 cm apart and the DAR is about 95 cm long. The coded waveform has a bandwidth of 4 GHz and a time-bandwidth product of 10. A round-trip PAG of 8.7 dB is required to deliver 20 dB sensitivity at 50 m with a receiver noise figure of 3 dB. The λ/D DAR angular beamwidth is 0.23 degrees. The Rλ/D DAR spatial beamwidth is 20 cm at 50 m range. The Fraunhofer near-field/far-field boundary is 485 m. This image is well within the near-field of conventional Fraunhofer beamformers. With a pulse duration of 2.5 nanoseconds, the image is essentially a freeze-frame, even with respect to propeller rotation.

8) Solopulse Aperture Synthesis: Multiple Solopulse images, either in time series or concurrent from multiple platforms (i.e., multiple DARs with a centralized processing center for shared data), can be coherently fused with k-space or pixel domain processing. This capability is possible due to the coherent correctness of the Solopulse spherical wavefield model, where the covariant formulation of spherical wave fields removes approximations associated with plane-wave models. Also, the HF-transfer operation converts each Solopulse image to a common scene-centered coordinate system that is shared across the extended dwell [3]. If relative motion is present, then a change in view-angle between sensor and scene can be exploited to enhance resolution beyond that achievable with a static situation. The result is progressive resolution if angular dwells are extended by the relative movement. If the scene is moving and tracked by a stationary sensor, Solopulse progressive resolution is an inverse-SAR solution.

Fig. 11(a) is a single-pulse image and Fig. 11(b) a 10-pulse image with aperture synthesis of a large, forward-looking FoV typical of a radar that might be used in an autonomous vehicle. The FoV extends from 10 m to 100 m in range, and from 4 m on the right side to 12 m on the left side.
Just one side of the forward FoV is reconstructed here to allow more detail to be seen in this printed format. The radar is a Ka-band (36 GHz) radar with a 1 GHz bandwidth and a time-bandwidth product of 10. The number of AEs is 128, spaced 0.2 cm apart, operated in a SISO mode, with a forward-facing DAR length of about 26 cm. Sensitivity Time Control (STC) is utilized to counter range-dependent attenuation. A PAG of 9.4 dBW is required to deliver 15 dB sensitivity at 100 meters with a receiver noise figure of 3 dB. The λ/D DAR angular beamwidth is about 1.8 degrees. The Fraunhofer near-field/far-field boundary is 16.6 m. A 22 × 4 RoR mosaic is used to parse the FoV for parallel processing. Each RoR is 2048 × 2048 pixels in size. Each RoR contains a single scatterer in this simulation. The display has a partial-scale dynamic range of 30 dB. Note that the PSF broadens with range and angle off boresight.

Fig. 11(b) is the same setup as Fig. 11(a), but where 10 frames are coherently integrated, pixel-by-pixel, post image formation, to achieve aperture synthesis. The forward-facing sensor moves forward by 1 m on each pulse. Note that the improved resolution varies with each scatterer's angle off-boresight. Cross-range resolution improves more significantly for scatterers with increased angle off-boresight since these scatterers experience a larger increase of angular dwell pulse-to-pulse. The PSF experiences an undesirable amplitude modulation for scatterers at the shortest range and at the most extreme angles off-boresight. This characteristic can be mitigated by reducing the step-size between pulses. These results suggest that a radar point-cloud data product could be easily extracted from Solopulse with aperture synthesis. The point cloud density would increase pulse-to-pulse as resolution cells improve (become smaller).

9) Surround Imaging With Parallel Processing:

10) Inverse Aperture Synthesis: Fig. 12 shows a Ka-band (36 GHz) progressive resolution image of a drone at a range of 1 km. Pulse bandwidth is 4 GHz with a time-bandwidth product of 25. The drone is modeled as a Swerling 1 object as illustrated by the leftmost image in Fig. 12. The number of AEs is 512, spaced 0.2 cm apart, operated in a SISO mode, with a DAR length of about 96 cm. The sensitivity is 20 dB above the noise floor with a receiver noise figure of 3 dB. A PAG of 60.2 dBW is required. The angular beamwidth is 0.47 degrees. The Fraunhofer near-field/far-field boundary is 245.6 m. A 2 × 3 RoR mosaic is used for parallel processing.

IV. SOLOPULSE SIGNAL PROCESSING

Solopulse signal processing is similar to the wavenumber domain Stolt transform method [56], [57] used in SAR omega-k (range migration) algorithms [58], [59], [60], [61], [62]. The primary modification is the adaptation of the HF-transfer function (which is sometimes implicit or missing in prior formulations of omega-k processing [33]) to the size difference of the DAR and RoR. A detailed description of Solopulse signal processing through the system model follows; Table I provides a quick reference guide for primary variables and symbols.

1) Scene Function: Let a continuous scene function g(x) be modeled as a set of continuous Dirac impulse functions δ(x − x n ), each representing a point scatterer at some location x n ; hence g(x) = Σ n g n · δ(x − x n ), where the scatterer strength is g n . (A small numerical illustration follows.)
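To make the scene-function notation concrete (all values below are assumed), the following sketch builds a sparse g(x), forms its spectrum of locator sinusoids per the transform pair quoted in the next item, and shows that a band-limited inversion recovers peaks at the x n .

# Minimal sketch: a sparse scene g(x) = sum_n g_n * delta(x - x_n) and its spectrum of
# locator sinusoids exp(j x_n k_x), following the pair delta(x - x_n) <-> exp(j x_n k_x).
import numpy as np

x_n = np.array([-3.0, 0.5, 4.2])       # scatterer positions [m] (assumed)
g_n = np.array([1.0, 0.7, 0.4])        # scatterer strengths (assumed)

k_x = np.linspace(-20.0, 20.0, 2001)   # available wavenumber support (assumed band limit)
dk = k_x[1] - k_x[0]

# Scene spectrum: superposition of locator sinusoids, one per scatterer.
G = (g_n[None, :] * np.exp(1j * np.outer(k_x, x_n))).sum(axis=1)

# Band-limited reconstruction m(x) = (1/2pi) * sum_k G(k) exp(-j k x) dk
x = np.linspace(-6.0, 6.0, 1201)
m = (G[None, :] * np.exp(-1j * np.outer(x, k_x))).sum(axis=1) * dk / (2 * np.pi)

for xn in x_n:                          # peaks of |m(x)| should sit at the x_n
    window = np.abs(x - xn) < 0.5
    x_peak = x[window][np.argmax(np.abs(m[window]))]
    print(f"x_n = {xn:+.1f} m  ->  reconstructed peak near {x_peak:+.2f} m")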
2) Scene Spectrum: An unbounded scene function g(x) and unbounded scene spectrum G(k x ) comprise a Continuous Fourier Transform (CFT) pair, g(x) ↔ G(k x ). The scene spectrum G(k x ) is comprised of k x -domain locator sinusoids exp(jx n k x ) determined by the CFT pair, δ(x − x n ) ↔ exp(jx n k x ). The scatterer's position x n determines the frequency of the complex locator sinusoid exp(jx n k x ) that exists in the k x -domain.

3) Transmit Waveform: The transmitted waveform is w(t) for the one transmit-AE in SIMO mode. If in SISO or MIMO mode, all AEs are assumed here to transmit w(t), either simultaneously or in a quick time-series, if orthogonality within a burst is desired. The received signal is a function of time t and positions u of transmit/receive elements within the array. In practice, sensor arrays measure discrete output data samples s(ẗ,ü). Fig. 13 shows an example received signal collected by a uniform linear array (ULA). Fig. 14 shows an example received signal collected by a uniform planar array (UPA). (Figure caption: the received UPA data are zero padded to match the two-dimensional cross-range RoR size; (b) the extended HF-transfer reference signal for a UPA is not zero padded.)

4) Received Signal: The array receives and measures a continuous time-space signal s(t, u) scattered by the scene function g(x). The array should be viewed as collecting an incident (received) phase pattern, rather than the directed amplitude and phase information of a single plane wave along the line-of-sight from each scatterer [63]. This received phase pattern is a linear or planar "slice" of the incident spherical wavefield [2]. Use of expanded HF-transfers makes this viewpoint relevant at both short and long range.

5) Reference Huygens Signal: In preparation for setting up the inverse HF-transfer function, a reference signal that would be received by a real array with a virtual extension with size matched to the cross-range extent of the RoR is produced by computer simulation. The reference signal h(t, u) is set up by selecting the position of a reference scatterer that may be anywhere within the RoR. Fig. 13(b) shows a reference signal for a ULA and Fig. 14(b) for a UPA.

6) Received Signal Rectilinear Spectrum: A CFT of s(t, u) in time and space yields the continuous signal spectrum S(k ω , k u ), where k ω is the signal wavenumber and k u is the aperture wavenumber. A DFT of the sampled s(ẗ,ü) yields the discrete signal spectrum S(k̈ ω , k̈ u ). The spectral sampling can be modeled by a grid of continuous Dirac impulse functions, in which case both S(k ω , k u ) and S(k̈ ω , k̈ u ) can be dealt with as continuous functions. In other cases, the data samples can be handled as discrete Kronecker data, e.g., with data buffer indexes i ω and i u . Due to its lack of a covariant angular structure, the signal spectrum is said to be rectilinear. The phase patterns of scattered fields seen in rectilinear spectra SS(k ω p , k u ) are not tonal, i.e., embedded locator sinusoids such as exp(−jx n k x ) are "warped" in the rectilinear spectrum format.

7) Covariant CoV for Wavenumber Migration: The phase pattern of a rectilinear sensor spectrum can be "unwarped" or made tonal through covariant wavenumber migration to the k̆ x -domain. Wavenumber migration implements a covariant change-of-variables transformation. Tonal formatting prepares the spectral data to form Solopulse imagery by Fourier inversion, k̈ x → x.
Once migrated, the nonuniformly spaced samples represent locator sinusoids without warping in k̆ x -space, which comprise the image spectrum. But the task to uniformly resample k̆ x → k̈ x remains. Conceptually, this resample operation is performed after the migration from k u to k̆ x , but in implementations it can be combined with the migration, as done with a Cauchy matrix formulation. An example of the magnitude of the frequency-wavenumber rectilinear spectrum for a ULA after pulse compression (frequency domain matched filtering), but before wavenumber migration, is shown in Fig. 15(a). The resulting angular spectrum after wavenumber migration is shown in Fig. 15(b). Corresponding results for the UPA are shown in Fig. 16.

B. Aliasings and Ambiguities

Continuous-to-discrete and discrete-to-continuous signal analysis is provided here to better enable an understanding of the impact of sampling of sensor data s(ẗ,ü) and its sampled spectrum S(k̈ ω , k̈ u ) as eventually migrated to a sampled estimate M(k x ) of G(k x ). Analyses of continuous-to-discrete sensing and discrete-to-continuous inversion are also preparatory for application of minimum-norm least-squares image reconstruction methods [46]. The forward Discrete-Time Fourier Transform (DTFT) of a discrete scene function g(ẍ) from uniformly sampled ẍ-space data creates replicas G̃(k x ) of the unbounded continuous scene spectrum G(k x ) in the continuous k x -domain (a tilde accent is used to indicate replication). The spectral periodicity creates k x -domain ambiguities and possibly overlap aliasing in the wavenumber domain. The inverse Discrete-Frequency Fourier Transform (DFFT) of a discrete scene spectrum G(k̈ x ) from uniformly sampled k x -space data creates replicas g̃(x) of the unbounded continuous scene function g(x) in the continuous x-domain. This spatial periodicity creates x-domain ambiguities and possibly overlap aliasing in the spatial domain. Forward and inverse DFTs induce periodicity in both spatial and wavenumber domains. If spatial and spectral sequences are bounded (finite length) and if sampling rates are sufficiently high, then there is no overlap aliasing in either domain. The following analysis seeks to define boundary functions and to prepare for control of overlap aliasing effects.

1) Band-Limiting and Spatial-Limiting: To manage the possibility of overlap aliasing of the replicated estimate m̃(ẍ) of the unbounded scene function g(ẍ), the objective scene function can be changed from the entire scene g(ẍ) to just a box-bounded subset, which is limited to finite extent by multiplication with a box-shaped spatial bounding function Π(ẍ). The bounding box corresponds to an objective RoR. The box-bounded scene forms a Fourier transform pair with a version of the unbounded scene spectrum G(k x ) convolved with a sinc function, so that the box-bounded estimate satisfies m(x) ≈ g(x) within the bounding box. Since the sensor is passband-limited and view-angle-limited, the migrated spectrum M[k x (k ω , k u )]^sinc is also limited, as expressed by a migrated box-window function. The preimage (in a functional sense) of the angular spectrum window Π(k x ) is the rectilinear signal spectrum window Π(k ω , k u ). In the notation, a box subscript marks quantities bounded by the RoR box, and a corresponding migrated subscript marks the same bounding box after the covariant CoV mapping. The attributes of the data-supporting angular spectrum window are determined by sensor system parameters, e.g., waveform bandwidth, angular support of the FoV or RoR, etc., as mapped by the covariant CoV transformation. (A small numerical illustration of the spatial-domain replication follows.)
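The following minimal sketch (assumed values) illustrates the spatial-domain replication just described: uniformly sampling the scene spectrum at spacing Δ k makes the reconstruction periodic with period 2π/Δ k , so a scatterer outside the chosen box folds back in, as in the wrap-around aliasing noted for Fig. 7(b).

# Minimal sketch: spectral sampling at spacing dk -> spatially periodic reconstruction,
# so a scatterer outside the unambiguous window appears aliased inside it.
import numpy as np

dk = 0.5                                     # wavenumber sample spacing (assumed)
X = 2 * np.pi / dk                           # unambiguous spatial period ~ 12.57 m
k_x = np.arange(-400, 400) * dk              # sampled (bounded) spectrum support (assumed)

x_true = np.array([2.0, 11.0])               # second scatterer lies outside (-X/2, X/2)
G = np.exp(1j * np.outer(k_x, x_true)).sum(axis=1)

x = np.linspace(-X / 2, X / 2, 2001)         # reconstruct over one period only
m = np.abs((G[None, :] * np.exp(-1j * np.outer(x, k_x))).sum(axis=1))

for xt in (2.0, 11.0 - X):                   # 11.0 m folds to 11.0 - 2*pi/dk ~ -1.57 m
    window = np.abs(x - xt) < 0.5
    print(f"expected peak near {xt:+.2f} m, found at {x[window][np.argmax(m[window])]:+.2f} m")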
A continuous (view-angle-, band-, and box-limited) scene estimate from sampled data is obtained by a DFFT pair in which the k̆ x are nonuniformly sampled Dirac impulses. A Kronecker impulse version of the sampled data of the migrated wavenumber can be obtained by uniformly resampling, k̆ x → k̈ x , to obtain a corresponding discrete spectrum and scene estimate related by a DFT pair.

C. Continuous JW-CoV Development

Ignore for a moment the eventual discrete aspect of the end-to-end mapping S(k̈ ω , k̈ u ) → M(k̈ x ), which involves both a covariant CoV and uniform resampling of migrated data, and consider an unsampled (continuous) version of the migrated spectrum M(k x ). An unreplicated reconstructed scene estimate m(x) can then be expressed as an inverse CFT of M(k x ). A JW-CoV transformation of the sampled signal spectrum S(k̈ ω , k̈ u ) is desired to estimate the scene spectrum. First, consider a continuous, unbounded scene estimate m(x) that is the inverse CFT of an angularly windowed, migrated, continuous, passband spectrum; a method of obtaining m(x) not from the inverse CFT of M(k x ) but from the inverse CFT of S(k ω , k u ) is desired. The inverse CFT functional can be modified to involve a covariant CoV transformation with a Jacobian matrix determinant weighting |J(k ω , k u )| applied to the (continuous) signal spectrum S(k ω , k u ). This is the continuous version of the JW-CoV. For notational efficiency, let the result be called the scene's Jacobian-weighted spectral estimate (JSE), where the signal bandwidth and aperture wavenumber window function constraint is indicated by the subscript JSE.

1) Discrete JW-CoV Development: A bounding box can be applied in the following analysis to set up a bounded version of the scene reconstruction that is subject to replications. The modified objective of wavenumber migration is to have the box-bounded, replicated m̃(x) provide a satisfactory representation of a bounded g(x). Consider next a sampled version of the scene's Jacobian-weighted spectral estimate. To get a handle on the description of the migrated-wavenumber spectrum of a box-bounded and replicated reconstruction m̃(x), synthesis via a windowed version of the inverse DFFT of S(k̈ ω , k̈ u ) JSE used in (5) can be considered. A forward CFT of m̃(x) will convolve the spectrum of S(k̈ ω , k̈ u ) JSE with a sinc function. To see the connection of the box-bounded m̃(x) with a sinc-convolved signal spectrum S(k̈ ω , k̈ u )^sinc JSE , consider the forward CFT of (6) with respect to a new wavenumber variable k x , where, with some foresight, M(k x )^sinc has been annotated with the superscript "sinc." The covert sinc convolution of (7) can be made overt by dealing with the integral over x first, where i is an index over the dimensions of (8). To simplify notation, the box Π(x), of full width 2X o and centered at position X c = (X c , Y c ) of a square-bounded section of g(x), is assumed here to be sized the same in each dimension. Equation (8) also simplifies notation by using bold-font vectors, indicating that the center location of the sinc function, k x (k ω , k u ), in the k x -domain holds in each dimension k x i . Hence, using (8) in (7), with an interchange of the order of CFT integration and DFFT summations, an estimate of the scene's migrated angular spectrum is obtained from the Jacobian-weighted version of the discrete signal spectrum.
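Because the displayed equations of this development are not reproduced above, the following is only a compact restatement of the continuous JW-CoV under the standard change-of-variables theorem, with a (2π)⁻² inverse-CFT normalization assumed and the bandwidth/aperture window and bounding-box factors suppressed for brevity:

\[
  m(\mathbf{x}) \;=\; \frac{1}{(2\pi)^{2}} \iint M\!\left(\mathbf{k}_x\right)\,
  e^{\,j\,\mathbf{k}_x\cdot\mathbf{x}}\,\mathrm{d}\mathbf{k}_x
  \;=\; \frac{1}{(2\pi)^{2}} \iint S(k_\omega,k_u)\,\bigl|J(k_\omega,k_u)\bigr|\,
  e^{\,j\,\mathbf{k}_x(k_\omega,k_u)\cdot\mathbf{x}}\,\mathrm{d}k_\omega\,\mathrm{d}k_u ,
\]
\[
  \bigl|J(k_\omega,k_u)\bigr| \;=\;
  \left|\det\frac{\partial\,\mathbf{k}_x(k_\omega,k_u)}{\partial\,(k_\omega,k_u)}\right| ,
\]

where \(\mathbf{k}_x(k_\omega,k_u)\) collects the migrated image wavenumbers produced by the covariant CoV.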
The task of obtaining a discrete version of m(ẍ), to produce an approximation of the discrete version of g(ẍ), is accomplished by design of a resampling grid k̈ x to be applied in the inverse DFT. Uncertainty rules related to the Π x parameter of the RoR (the bounding box) determine the required sample spacing Δ k x of the k̈ x domain. An inverse DFT is then used to obtain m̃(ẍ) as the box-bounded, replicated scene estimate of g(ẍ).

D. Uncertainty Principles

Solopulse signal analysis involves signals that are both time and frequency limited, and both space and wavenumber limited; hence they are characterized by (Heisenberg) uncertainty principle bounds [64], [65], [66]. Uncertainty principles govern the support of conjugate parameters in Fourier transform pairs (e.g., x and k x ). These bounds determine the inescapable relationships between the sizes of various observational windows or dwells (e.g., receive time window, bandwidth, aperture size, wavenumber manifold, etc.) and the corresponding Nyquist sampling densities required in the conjugate domains. Use of uncertainty principles proves important in the parameter selection process of instantiated Solopulse algorithms, where comparatively small sensor arrays with dense sensor element spacings may handle large, remote (near or far), scattering scenes reconstructed through wave field inversion processes. For efficiency of language, the size of an observational dwell, such as the support of the scene, the array size, or the size of some k-space manifold, shall be generically called here a "box". A sampling interval shall be called a "bin". Boxes are typically large. Bins are typically small. The relationship between observational boxes (generically indicated here by the upper case Greek letter Π) and sampling bins (generically indicated here by the upper case Greek letter Δ) is governed in signal analyses and system designs by the uncertainty relationship Π · Δ ≥ 2π [67]. For example, the bins and boxes of the spatial domain (Δ x and Π x ) and those of the corresponding wavenumber domain (Δ k x and Π k x ) govern signal processing system designs. Select any two of these four as free parameters and the other two are governed by Π · Δ ≥ 2π. If the signal processing is designed such that the bins and boxes satisfy Π · Δ ≥ 2π with equality, then the parameters are critically sampled at Nyquist rates. Let the cross-range dimension of a ULA, for example, be expressed with the variable u y and the (parallel with respect to u y ) cross-range dimension of the Solopulse reconstructed scene with the variable y. Consider first the aperture box and bin parameters. The AE spacing or aperture bin size Δ u y determines the array wavenumber manifold size Π k uy = 2π/Δ u y , and the array length Π u y determines the required array wavenumber sample density Δ k uy = 2π/Π u y . Likewise, given a specified cross-range image (RoR) box size Π y and image bin size Δ y (pixel spacing, not resolution), the corresponding cross-range image (RoR) wavenumber manifold must satisfy Π k y = 2π/Δ y with sample spacing Δ k y = 2π/Π y . One-half wavelength element spacing gives an off-boresight angular FoV of ±90 degrees and an Ewald sphere diameter of 2k ω , where k ω is the wavenumber at the largest within-band signal frequency. The uncertainty relationship establishes that a wavenumber domain manifold of size Π = 2k ω requires that spatial domain sampling intervals Δ be of size (2π)/(2k ω ) = λ/2, where λ is the temporal signal wavelength. (A short numerical check of these box/bin relations, using the C-band example of Section III, is given below.)
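The following quick arithmetic check (Python) applies these box/bin relations to the C-band example quoted in Section III; only values stated in the text are used, and small differences reflect the "about" figures given there.

# Quick check of uncertainty (box/bin) relations and of the C-band PSF example geometry.
import numpy as np

c = 3e8
f0, B = 5e9, 500e6                    # stated C-band carrier and bandwidth
lam, lam_min = c / f0, c / (f0 + B / 2)
D = 0.44                              # stated DAR length [m] ("about 44 cm")
d_ae = lam / 4                        # stated AE spacing (lambda/4 in SISO mode)

# Uncertainty-principle pairs: box * bin = 2*pi
k_manifold = 2 * np.pi / d_ae         # aperture wavenumber manifold from AE spacing
dk_u = 2 * np.pi / D                  # wavenumber sample spacing from array length
pixel_bin = 2 * np.pi / (2 * 2 * np.pi * (f0 + B / 2) / c)    # = lam_min / 2
print(f"aperture k-manifold {k_manifold:7.1f} rad/m, dk_u {dk_u:5.2f} rad/m, "
      f"image pixel bin {100 * pixel_bin:.2f} cm (= lam_min/2)")

# Beam geometry quoted for the short-range PSF example (quoted: 7.5 deg, ~7 m, 3 m, 15.5 m)
theta = lam / D
print(f"angular beamwidth   : {np.degrees(theta):.1f} deg")
print(f"Fraunhofer boundary : {2 * D**2 / lam_min:.1f} m")
for R in (25.0, 115.0):
    print(f"R*theta at {R:5.1f} m : {R * theta:5.1f} m")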
This bin size (for SIMO and MIMO modes) specifies the pixel density requirement of Solopulse images and HD-DBF beam fields as illustrated by the red lattices of Figs. 2 and 3. This bin size is halved and the box size is doubled in SISO mode.

A. The Search for the Huygens-Fresnel Spectrum

Instead of taking a computational approach to obtain the Huygens-Fresnel spectrum HH, an analytic solution has long been desired. These temporal-spatial Fourier transforms are surprisingly difficult to fully evaluate and their analysis depends on a variety of preconditions. Certain questions must be resolved before proceeding with a search for the solution. If used as a starting point in the analysis, should the Fresnel field Hh be static or dynamic? Should the Fresnel field Hh be in a sink or source form? Is HH representative of free-space electromagnetic radiation or a scattered field? Here are some approaches found in the literature for seeking analytic solutions of the spectrum of EM wave motion described by HH:

• Plane-wave synthesis and decompositions [68], [69].
• Approximate methods based on stationary phase [70].
• Asymptotic Fourier analyses that seek insights into the structure of HH by analysis of the singularities of Hh [28].

A "complete" spatial Fourier transform of the Fresnel field solution of the Helmholtz equation was recently "developed" by Schmalz et al. in 2010 [78]. They note that their approach was originally developed and utilized in the 1920s by Dirac in his k-space analysis that established QFT and QED [1], [80]. In QFT and QED the frequency-wavenumber domain analysis of the complementary (probability amplitude) wave function of a boson (photon) is essentially the same as the energy-momentum analysis of the field, whether free or scattered.

B. Dirac's Approach

Dirac provided two primary forms for the HF-spectrum. His first relates to photon radiation scattered by electrons. His second relates to photon descriptions of free-space EM fields.

1) Dirac's Free-Space Field: Generally, EM field analyses of the HF-spectrum can be qualified by the following attributes of the problem: free versus scattered, out-going versus in-going, noncausal versus causal, analytical versus arbitrary complexity, double-sided versus single-sided, even versus odd, and real versus imaginary. It was clear to Dirac that the free-space solution should be real, odd, and noncausal in the time-space domain and hence purely imaginary in the frequency-wavenumber domain. Atypical of much of the ensuing research performed by others over the intervening decades, Dirac focused on the use of Huygens' hh in the time-space domain as a starting point rather than use of Fresnel's Hh in the frequency-space domain. In free-space analysis, there is no scattering agent that causes causality in time, hence analysis requires time to be a free parameter with both positive and negative values. To achieve covariance, Dirac generalized the Huygens wavelet to be an odd double-sided (noncausal) light cone (bidirectional sequence of Huygens wavelet spheres). We call this structure a hypercone. Dirac's covariant version of the Huygens wavelet is the hypercone singularity δ(χ) (written with a bow-tie accent), where covariant time-space is indicated in 4-vector notation by χ = χ_μ χ^μ = (ct)² − |x|². Dirac's solution for the HF-spectrum of free space is indicated here by the corresponding hypercone singularity δ(κ), where covariant frequency-wavenumber 4-vector k-space is indicated by κ = κ_μ κ^μ = (k ω )² − |k x |².
Dirac established through Fourier analysis of (9) that (10) is also a double-sided hypercone (bidirectional nested series of Ewald spheres). Dirac established that the 4-vector energy-momentum (frequency-wavenumber) spectrum δ(κ) is the Fourier transform of the 4-vector time-space double-sided light cone δ(χ). This is an example of Dirac's "beautiful mathematics". This Fourier transform is self-similar (see [1], p. 281): δ(χ_μ χ^μ) ↔ δ(κ_μ κ^μ). We choose to call these the Dirac time-space hypercone and the Dirac frequency-wavenumber hypercone. If context is clear, all can be simply referred to as Dirac hypercones. It is important to remember that these δ(χ_μ χ^μ) ↔ δ(κ_μ κ^μ) hypercones are functional compositions of an underlying difference-of-squares, and hence are covariant.

2) Dirac's Scattered-Field Solution: Dirac's scattered field solution was based on his understanding of the interaction of the (massless) photon and (massive) electron. The photon interacts with the electron in a way so as to be scattered and hence gives rise to the scattered EM field. Once assumed scattered, Dirac's assumptions about the EM field's (unbounded) probability amplitude wave functions were narrowed down to just out-going and time-space causal. Hence, Dirac's free-space double-sided hypercone in the time-space domain became single-sided for scattered fields. The Fourier transform of a causal time-space structure is such that the corresponding frequency-wavenumber structure is analytic in the Hilbert transform sense [65], [76], [79], [81], [82], [83], [84], [85]. As an aside, the ability of an analytic signal to convey arbitrarily complex (scatterer) information should not be forgotten per the Hilbert transform product theorem [86], [87], [88], [89], [90], [91]. This theorem also establishes the ultra-wideband limits of systems based on these theories: the single-sided signal bandwidth can be as large, but no larger, than the (peak passband) carrier frequency. Solopulse supports such ultra-wideband systems [3]. Similar to the combination of homogeneous and inhomogeneous solutions, the free-space spectrum's purely imaginary, double-sided, odd hypercone remains as part of the scattered field spectrum. Scattering of the time-space EM field induces an additional real component to the free-space version of the HF-spectrum, a covariant pole 1/κ; this is the "complete" solution of Schmalz et al. [78]. The covariant pole 1/κ follows directly from (2). This covariant pole is also a functional composition of a baseline difference-of-squares, 1/κ = 1/(κ_μ κ^μ), and, as such, has a partial fraction expansion. Dirac seems to be the first to have combined these k-space elements in his study of quantum electrodynamics [1] (see also [11], p. 71 and [84], p. 224) and this was done again in the work of Lighthill in 1958 [13]. This structure in the 4-vector difference-of-squares κ-space retains the double-sided odd hypercone (bidirectional sequence of Ewald spheres) of the free-space HF-spectrum, but where a real part has been added to the spectrum to make it analytically complex, i.e., the real and imaginary parts are related by the Hilbert transform. The double-sided light cone is in its entirety a generalized function; but once truncated to be single-sided, the causal light cone has a spread of spectral energy off the Dirac hypercone manifold as expressed by 1/|κ| for |κ| > 0. We shall refer to this as the "Hilbert spread".
The Hilbert spread exists both inside and outside the Ewald spherical singularities of the Dirac hypercone.

C. Dirac's Analysis Details

1) Dirac Hypercone in Time-Space: Covariant analysis of time-space is based on the difference-of-squares expression c²t² − x² − y² − z² = 0, which defines the time-space requirement of Lorentz invariance. Using the notation of Dirac, let time scaled by the speed of light, ct, be expressed by x o , and let the dimensions of conventional 3-space be indicated by x = (x 1 , x 2 , x 3 ). Covariant 4-vector time-space is denoted by χ_μ = (x o , −x) = (x o , −x 1 , −x 2 , −x 3 ) and the contravariant 4-vector by χ^μ = (x o , +x) = (x o , +x 1 , +x 2 , +x 3 ). The time-space 4-vector scalar product expresses a difference-of-squares, χ_μ χ^μ = x o ² − |x|². This 4-scalar is sometimes more compactly written χ = χ_μ χ^μ. Consider a 4-vector time-space singularity described as a functional composition based on the difference-of-squares polynomial χ_μ χ^μ = x o ² − |x|² = 0 in a spherical singularity δ(χ_μ χ^μ). The second-order difference-of-squares invariant x o ² − |x|² = 0 can be decomposed into linear factors that specify the two roots of (x o − |x|)(x o + |x|) = 0. By the property of generalized functions considered as functional compositions, an important decomposition is obtained: δ(x o − |x|) describes a right Huygens wavelet sequence as a function of x o (i.e., one-half of the Dirac hypercone), and δ(x o + |x|) a left Huygens wavelet sequence (i.e., the other half of the hypercone). A double-sided (right and left) set of expanding Huygens wavelet sequences is realized in (11) by the difference-of-squares spherical singularity δ(χ_μ χ^μ), whose two terms are the right and left halves just described. Dirac found utility in establishing both even and odd (as a function of time x o ) versions of the time-space difference-of-squares singularity. Dirac noted that the definition (12) gives meaning to the function δ(χ) if applied to any covariant 4-vector [1]. Example 4-vectors motivated by physics include time-space, frequency-wavenumber, energy-momentum, and electromagnetic scalar-vector potential functions. Note that offsets of the singularity are permissible: for the left branch, t < t n is permissible, and for the right branch, t n < t is permissible. Such offsets lead to the locator sinusoid banding of the Ewald sphere exploited by Solopulse.

2) Dirac Hypercone in Frequency-Wavenumber: Covariant analysis within the frequency-wavenumber domain is based on the difference-of-squares expression encountered in the spatial Fourier transform of the frequency-space domain Helmholtz wave motion equation, (ω/c)² − |k r |² = 0, which defines the requirement of frequency-wavenumber domain covariance k ω ² − k x ² − k y ² − k z ² = 0, where k r = (k x , k y , k z ). Using the notation of Dirac, let the temporal-frequency wavenumber k ω be expressed by k o , and let the dimensions of the conventional wavenumber domain 3-space be indicated by (k 1 , k 2 , k 3 ). Covariant 4-vector frequency-wavenumber is denoted by κ_μ = (k o , −k r ) = (k o , −k 1 , −k 2 , −k 3 ) and the contravariant 4-vector by κ^μ = (k o , +k r ) = (k o , +k 1 , +k 2 , +k 3 ). The frequency-wavenumber 4-vector scalar product is κ_μ κ^μ = k o ² − k 1 ² − k 2 ² − k 3 ² = k o ² − |k r |². This 4-scalar is sometimes more compactly written κ = κ_μ κ^μ. The frequency-wavenumber difference-of-squares invariant k o ² − |k r |² = 0 can be decomposed into linear factors that specify the two roots of κ_μ κ^μ = (k o − |k r |)(k o + |k r |) = 0. By the property of generalized functions considered as functional compositions, δ(κ_μ κ^μ) likewise decomposes into the sum of its two branches, exactly as in the time-space case.
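For reference, the standard composition rule for a Dirac delta of a function, which is presumably the identity behind the decomposition used above (the original displayed equations (11)-(12) are not reproduced here), reads:

\[
  \delta\!\bigl(\chi_\mu\chi^\mu\bigr) \;=\; \delta\!\bigl(x_o^{2}-|\mathbf{x}|^{2}\bigr)
  \;=\; \frac{1}{2|\mathbf{x}|}\Bigl[\delta\bigl(x_o-|\mathbf{x}|\bigr)
        + \delta\bigl(x_o+|\mathbf{x}|\bigr)\Bigr],
  \qquad |\mathbf{x}|\neq 0 ,
\]

and identically for \(\kappa_\mu\kappa^\mu = k_o^{2}-|\mathbf{k}_r|^{2}\), whose two branches are the nested Ewald-sphere halves of the frequency-wavenumber hypercone. (The \(1/(2|\mathbf{x}|)\) factor may be absorbed into the accented \(\delta\) notation of the original.)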
Similar to the time-space difference-of-squares singularity, even and odd versions of the frequency-wavenumber hypercone δ(κ_μ κ^μ) can be defined. The frequency-wavenumber structure of δ(κ) can be characterized as a 4-vector double-sided (noncausal) HF-spectrum of EM propagation in free space. Although developed many years ago, the Dirac approach has not been widely adopted as the standard approach for spatial Fourier transform analysis of the Helmholtz equation. Our recent recognition of the relevance of Dirac's model of the HF-spectrum to the structure of the Solopulse spectrum filled a lingering gap in our prior theoretical analyses, which until then relied on an ansatz (i.e., the fundamental angle isomorphism of SAR) as explained in [2].

VI. STATUS AND PLANS

Single-pulse signal processing methods for short-range imaging and long-range, high-density receive beamforming with digital sensor arrays have been developed and demonstrated. Evolving view-angle diversity was shown to achieve progressive resolution with aperture synthesis, where coherent fusion is implemented with pixel-domain additions. Future research plans include more extensive modeling and simulation, prototyping, validation, and further demonstration of these and other use cases. Additional research is required to better understand position estimation error sensitivities in multiple-pulse use cases. Additional research is required to demonstrate that microwave video signal processing based on Solopulse freeze-frames is feasible in real time, with and without aperture synthesis. Research is planned to explore use cases related to sensing for autonomous vehicles. Future research will also explore other sensor modalities such as ultrasound and sonar.
An Online Survey Testing Factorial Invariance of the Optimization in Primary and Secondary Control Scales among Older Couples in Japan and the US This study examines the factorial invariance of the Optimization in Primary and Secondary Control (OPS) scale and its associations with subjective well-being among older couples in Japan and the US. To this end, 200 older couples in Japan and 220 in the US were recruited through paid vendors and completed the questionnaire online. Couples were eligible if husbands were 70 years or older and wives were 60 years or older. A six-factor model, in which Compensatory Primary Control was subdivided into two factors, fit the data best; its factorial invariance was confirmed among the four subsamples. Compensatory Secondary Control was more strongly associated with subjective well-being in American couples than in Japanese couples, although the associations between well-being and the other five OPS factors were similar in the two countries. Future research on this six-factor model will be able to examine how these control strategies function in different cultures. Introduction Since Rothbaum et al. proposed a two-process model of perceived control over 40 years ago, researchers have studied how primary and secondary control correlate with wellbeing across both age and culture [1]. In primary control, people attempt to influence the immediate environment, outside themselves. Secondary control is directed inward, as people attempt to accommodate themselves to external realities. One important theory of primary and secondary control proposes that people optimize primary and secondary control processes depending on their age, situation, and cultural context. Specifically, as people age, secondary control is theorized to become dominant over primary control [2,3]. In addition, researchers have theorized that culture shapes people's control preferences, with independent cultures emphasizing primary control and interdependent cultures emphasizing secondary control [4,5]. Measuring Primary and Secondary Control For testing the aging hypothesis, one of the most widely used, theoretically derived measures of primary and secondary control is the Optimization in Primary and Secondary Control (OPS) scale [6]. The OPS scale consists of five factors: Optimization, Selective Primary Control, Compensatory Primary Control, Selective Secondary Control, and Compensatory Secondary Control [6]. In one study [7], Hasse et al. tested three self-report measures-the control scales of the OPS, Tenaciousness, and Flexibility (TenFlex) [8], and Selective Optimization with Compensation (SOC) [9]-together. They confirmed that three meta-factors exist: meta-regulation, goal engagement, and goal disengagement. The researchers also established that all three factors increase with age and are all associated with well-being. However, few studies of the factor structure of the OPS scale itself have been conducted, and fewer have been performed using cross-cultural samples. The original study on scale development [6] did not provide information on the factor loadings of each factor's corresponding items because parceling scores were used. In that study, each of the five factors in the OPS scale was constructed by three parceling scores, in which several item scores were aggregated [6]. Specifically, Heckhausen et al. 
divided 12 items of the Optimization factor into three parcels, each consisting of four items, and created three parceling scores by computing simple means of each set of four items [6]. Parceling scores were used in that case because the maximum likelihood (ML) method cannot estimate the appropriate parameter values with ordinal data (the OPS response scales are ordinal). ML can, however, estimate appropriate parameter values when parceling scores are used, because parceling scores are considered continuous [10]. Given the increasing availability of methods for ordinal response scales, we can now analyze ordinal indicators directly with weighted least squares estimation with robust standard errors and a mean- and variance-adjusted test statistic (WLSMV). In doing so, we can estimate the factor loadings of each item on its corresponding factors, something which the parceling method previously obscured.

Research Questions

The present study had four main research questions. First, we asked if we could establish factorial invariance of the five-factor OPS model across two cultures (Japan and the US) in a sample of older adults who were heterosexual married couples. If the original five-factor model did not fit the data, we planned to propose a more appropriate model of the OPS scale. Second, we tested whether we could confirm factorial invariance among the four subsamples (Japanese men, Japanese women, US men, and US women). Third, we examined gender and cultural differences among the factor scores. Fourth, we examined associations of the factor scores with subjective well-being, including examining whether gender and culture moderated these associations. In this study, subjective well-being was operationalized according to Diener's three-part definition, which measures satisfaction with life, positive affect, and negative affect [11].

Samples

We contracted with vendors in Japan (N = 200 couples) and the US (N = 220) to recruit older adult married couples. In order to participate, all husbands needed to be at least 70 and wives at least 60 years old. Japanese couples had been married for an average of 50 years (because of an oversight, no data are available on the length of marriage in American couples). Online surveys were conducted in March 2018 in both countries. Mean ages were 78.15 (SD = 4.84) in Japanese husbands, 74.52 (SD = 5.55) in Japanese wives, 75.74 (SD = 4.54) in American husbands, and 71.43 (SD = 5.15) in American wives. In the US, ethnicity proportions of husbands and wives were as follows: 94.1% and 92.7% White, 1.4% and 1.8% Asian American, 1.8% and 2.3% Black, and 2.7% and 3.2% other or Latino. Power analyses using G*Power version 3.1.9.6 showed that given the current sample sizes (N = 220 in the US, N = 200 in Japan), p = 0.05, and an effect size of r = 0.20, the study's statistical power was adequate, at 0.85 in the US and 0.81 in Japan.

Measurement

OPS scale. We administered the Optimization in Primary and Secondary Control scale (OPS) [6], which consists of five factors. We used the short version, whose 28 items were drawn from the original 44-item questionnaire based on their factor loadings in a previous survey by the first author [12]: Optimization (6 items), Selective Primary Control (6 items), Compensatory Primary Control (6 items), Selective Secondary Control (6 items), and Compensatory Secondary Control (4 items). The response scale ranged from 1 (never true) to 5 (almost always true).

Subjective well-being.
According to Diener et al., subjective well-being (SWB) was measured with three components: satisfaction with life, frequency of positive affect, and frequency of negative affect [11]. Satisfaction with life was measured with the five-item Satisfaction with Life Scale (SWLS) [13,14]. The SWLS has been used in multiple world cultures with meaningful results, suggesting that it is appropriate for use in cross-cultural research [15,16]. The scale ranged from 1 (strongly disagree) to 7 (strongly agree). Higher scores indicate higher satisfaction with life; the Cronbach's alpha coefficients were 0.88, 0.93, 0.89, and 0.88 in Japanese husbands, Japanese wives, American husbands, and American wives, respectively. Positive and negative affect were measured with eight items from the Positive and Negative Affect scales [17,18]. Participants were asked how often, during the last 30 days, they had felt each of four positive emotions (cheerful, happy, peaceful, full of life) and four negative emotions (effortful, hopeless, restless or fidgety, and sad). The scale ranged from 1 (none of the time) to 5 (all of the time). The Cronbach's alpha coefficients were 0.81, 0.83, 0.90, and 0.89 for positive affect and 0.80, 0.82, 0.80, and 0.81 for negative affect in Japanese husbands, Japanese wives, American husbands, and American wives, respectively.

Analytic Procedure

We analyzed the data using packages and functions in the statistical software R, including the "psych" package for descriptive statistics and Cronbach's alpha coefficients, the "anovakun" function for analyses of variance, and the "lavaan" package for confirmatory factor analysis (CFA). When conducting CFA, we used the WLSMV estimator for analyzing ordinal indicators and producing several goodness-of-fit indices such as χ², the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), and the Root Mean Square Error of Approximation (RMSEA). The conventional levels for acceptable fit were as follows: CFI and TLI > 0.95 and RMSEA < 0.07 [19].

Factor Structure of the OPS Scale

First, we performed ordinal CFA testing of the original five-factor model for all the participants in Japan and the US. There were 16 items for which, in at least one of the four subsamples, no participant selected the lowest category (never true). In these cases, we merged the never true category with the seldom true category because ordinal CFA cannot be executed when one of the response categories is empty. One item from the Compensatory Secondary Control scale was deleted because of its low factor loading. As a result, 27 items were analyzed. The robust goodness-of-fit indices did not meet conventional levels of acceptable fit (χ²(314) = 6558.77, p < 0.001; CFI = 0.885, TLI = 0.871, RMSEA = 0.154; Table 1). The modification indices suggested that, in the Compensatory Primary Control scale, there should be error correlations among two subsets of items: the three items CP3, CP5, and CP6, and the three items CP1, CP2, and CP4. The first three items seem to capture support seeking ("CP3. When I cannot solve a problem by myself, I ask others for help.", "CP5. When difficulties become too great, I ask others for advice.", and "CP6. When obstacles get in my way, I try to get help from others."), while the remaining items seem to capture alternative strategies to compensate for lost primary control ("CP1. When I cannot get to a goal directly, I sometimes choose a roundabout way to achieve it.", "CP2. When I can no longer make progress on something, I look for new ways to reach my goal.", and "CP4.
When obstacles get in my way, I find another way to get what I want."). Therefore, we decided to divide the factor of Compensatory Primary Control into two subfactors: Support Seeking and Alternative Strategy. The robust goodness-of-fit indices of this modified six-factor model were estimated (χ²(309) = 2224.68, p < 0.001; CFI = 0.964, TLI = 0.960, RMSEA = 0.086) and a robust chi-square difference test showed that the fit of this six-factor model was significantly improved over that of the original five-factor model (∆χ²(5) = 861.70, p < 0.001) (Table 1). The model improved significantly after we added one error correlation between two items in the factor of Optimization ("O1. It is important for me to be active not just in one area of life, but in several different ones." and "O4. I stay active and involved in several different areas of life."), which the modification index suggested were correlated, and whose meanings seem to be similar (∆χ²(1) = 160.09, p < 0.001), and the robust goodness-of-fit indices then met appropriate levels (χ²(308) = 2033.16, p < 0.001; CFI = 0.968, TLI = 0.964, RMSEA = 0.082). Each of the items of the OPS loaded highly (more than 0.66) on the factor specified by the theory (Table 2). However, there were two high correlations over 0.95: the correlation between Selective Primary Control and Selective Secondary Control was 0.958 and the correlation between Optimization and Selective Primary Control was 0.952 (Table 3). Therefore, it was necessary to examine the discriminant validity of these factor pairs. We used Bagozzi et al.'s method for testing factor discrimination, testing whether the correlation coefficient differs significantly from 1.00, that is, whether the upper bound (the correlation + 1.96 × standard error) reaches 1.00 [20]. The standard errors of the two correlation coefficients were 0.018 and 0.019 and the upper values of the 95% confidence intervals were 0.993 and 0.989, respectively, which were not greater than 1.00. The hypothesis that these two latent constructs were identical was rejected. Further, we examined whether merging these highly correlated factors made the fit better or not. In descending order of factor-correlation size, the two most highly correlated factors were successively merged into one and the model fit was compared (Table 4). When Selective Primary Control and Selective Secondary Control were merged, the robust goodness-of-fit became significantly worse (∆χ²(5) = 21.84, p < 0.001) and the other goodness-of-fit indices also worsened in the resulting five-factor model (CFI = 0.965, TLI = 0.961, RMSEA = 0.085). Therefore, all the successive merging processes made the fit significantly worse. We concluded that the six-factor model was the best one because these six latent constructs were statistically separate from each other and this model could explain the data most appropriately and parsimoniously (even though a few of the factor correlations were very high).

Factorial Invariance among Older Couples in Japan and the US

We compared the goodness-of-fit indices of the six-factor model with configural invariance (in which the factor structure is the same but no parameter was constrained) with those of the factorial invariance model (in which all the factor loadings were constrained to be equal) among the four subsamples (Japanese husbands, Japanese wives, US husbands, US wives). A robust chi-square difference test showed that the factorial invariance model fit significantly worse than the configural invariance model (∆χ²; Table 5).
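The following quick checks (Python) reproduce the arithmetic behind two of the results reported in this section; note that the plain χ² reference distribution is used only for orientation, since the paper's robust (WLSMV-scaled) difference tests involve corrections not reproduced in this sketch.

# Quick checks of reported statistics (values taken from the text).
from scipy.stats import chi2

# Chi-square difference tests (unscaled reference distribution, for orientation only).
for label, dchi, df in [("6- vs 5-factor model", 861.70, 5),
                        ("adding O1-O4 error correlation", 160.09, 1),
                        ("merging SPC and SSC", 21.84, 5)]:
    print(f"{label}: delta-chi2({df}) = {dchi}, p ~ {chi2.sf(dchi, df):.2e}")

# Bagozzi-style discrimination check: correlation + 1.96*SE must stay below 1.00.
for label, r, se in [("SPC-SSC", 0.958, 0.018), ("Opt-SPC", 0.952, 0.019)]:
    upper = r + 1.96 * se
    print(f"{label}: upper 95% bound = {upper:.3f} (distinct from 1.00: {upper < 1.0})")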
Although these fit indices were inconsistent, we decided to adopt the factorial invariance model. In addition to constraining the factor loadings, when all the factor covariances were also constrained to be equal, the correlation between Selective Primary Control and Selective Secondary Control exceeded 1.0 in Japanese husbands and this model was not identified. In the end, we adopted the factorial invariance model as the final one.

Gender and National Differences in These Factors' Scores
Descriptive statistics and Cronbach's alphas are shown in Table 6. We calculated the six factor scores by taking the mean of the relevant items. Gender and national differences in these factor scores were examined with mixed ANOVA. All six factor scores were higher in American than in Japanese couples (all ps < 0.001) (Table 6). However, these mean differences between cultures should not be overinterpreted, because they may simply be due to systematic differences in the way people in the two cultures use response scales [21]. In both countries, while Selective Primary Control, Alternative Strategy, and Selective Secondary Control were higher in husbands than in wives (p = 0.014, p = 0.009, and p = 0.027), Support Seeking was lower in husbands than in wives (p < 0.001).

Associations of the Six Control Factors with SWB
Pearson correlations of the six control factors with the three aspects of SWB were calculated (Table 7). Three control factors, Optimization, Selective Primary Control, and Selective Secondary Control, were positively associated with all three aspects of well-being, while Support Seeking and Alternative Strategy were positively associated with two aspects of well-being (SWLS and Positive Affect). When we examined whether these correlations were moderated by culture, we found significant differences between the two countries in the associations of Compensatory Secondary Control with SWB (Table 8), such that Compensatory Secondary Control was more strongly associated with SWB in American couples than in Japanese couples.

Discussion
This study examined the factorial invariance of the OPS scale in older couples in Japan and the US. We documented several interesting findings. First, using ordinal CFA on each item of the OPS scale instead of the parceling method, we proposed a six-factor model with one residual error correlation, in which the factor of Compensatory Primary Control was subdivided into Support Seeking and Alternative Strategy factors. This model fit the data better than the original five-factor model. In addition, each item had high factor loadings on its corresponding factor. Even though two pairs of factors were highly correlated with each other, the discrimination among these factors was statistically confirmed. This model also showed the best statistical fit compared with models with fewer factors. Second, the factorial invariance of the six-factor model was confirmed among older couples in Japan and the US. Again, using ordinal CFA, we found that the overall framework of the OPS scale proposed by Heckhausen et al. [6] is generally maintained in the two different cultures, with the exception that the Compensatory Primary Control factor can be subdivided into two factors: Alternative Strategy and Support Seeking. This can lay the groundwork for further cross-cultural research. Third, there were several gender differences in levels of control strategies in both countries.
Selective Primary Control, Alternative Strategy (one of the two new factors), and Selective Secondary Control were higher in husbands than in wives, but Support Seeking (the other new factor) was higher in wives than in husbands. This finding seems consistent with traditional gender roles, in which men are more likely to use agentic skills and abilities (Selective Primary Control and Alternative Strategy) and to maintain motivation for a selected goal (Selective Secondary Control). In turn, women are encouraged to maintain social interactions, so they may be more likely to seek support from others. This pattern of results also complements other work finding that, in both the US and Japan, women are more likely to seek social support from others [22]. Fourth, we found that although the associations of control strategies with SWB were positive across the two cultures, there was one cultural difference. The association of Compensatory Secondary Control with SWB was significantly stronger in the US than in Japan. One aspect of this control strategy involves self-justification, because, after failure, people remind themselves of their own effort or their own past accomplishments. Using this self-enhancing strategy may be more elaborated and approved of in an individualistic cultural context (the US) than in a collectivist cultural context (Japan). This pattern aligns with past research in which college-aged Americans were more likely than Japanese to use self-esteem-enhancing strategies [23]. Our results suggest, then, that such cultural differences extend to older adults. Future research can replicate this finding in a new sample of older adults and with a broader range of self-enhancement measures.

One strength of this study was that it tested older adults in two cultures. While much research on the OPS has compared older, middle-aged, and younger adults, very little has tested cultural differences in the OPS. There were several limitations in this study. First, the samples provided by vendors in both countries were not random samples of their respective populations. They were biased to include older adults who are willing to seek out paid surveys. Second, the US sample is almost entirely White, so any conclusions are limited to this subgroup. In American culture, White American contexts are probably the most likely to foreground individualism and independence. Therefore, if anything, our primarily White American sample was biased to find more, rather than fewer, cultural differences when compared with Japan. In this context, it is notable that we actually found few differences between this White American sample and a Japanese sample. A third limitation is that although we confirmed the factorial invariance of a six-factor model in older couples in Japan and the US, the factor structure should be reconfirmed before assuming it would apply to additional cultures. In particular, careful attention should be paid to the highly correlated factors. We did find several positive associations, all of which are consistent with the argument that both primary and secondary control strategies are associated with well-being in both the US and Japan. However, a limitation is that our cross-sectional design did not allow us to determine the causal direction between control strategies and SWB. In conclusion, we provide a modified six-factor model of the OPS scale, which fits the data better than the original five-factor model.
This six-factor model will enable future researchers to examine cross-cultural differences in, and well-being correlates of, these well-known primary and secondary control scales.
PaMSA: A Parallel Algorithm for the Global Alignment of Multiple Protein Sequences

Multiple sequence alignment (MSA) is a well-known problem in bioinformatics whose main goal is the identification of evolutionary, structural or functional similarities in a set of three or more related genes or proteins. We present a parallel approach for the global alignment of multiple protein sequences that combines dynamic programming, heuristics, and parallel programming techniques in an iterative process. In the proposed algorithm, the longest common subsequence technique is used to generate a first MSA by aligning identical residues. An iterative process improves the MSA by applying a number of operators that were defined in the present work, in order to produce more accurate alignments. The accuracy of the alignment was evaluated through the application of optimization functions. In the proposed algorithm, a number of processes work independently at the same time searching for the best MSA of a set of sequences. There exists a process that acts as a coordinator, whereas the rest of the processes are considered slave processes. The resulting algorithm was called PaMSA, which stands for Parallel MSA. The MSA accuracy and response time of PaMSA were compared against those of Clustal W, T-Coffee, MUSCLE, and Parallel T-Coffee on 40 datasets of protein sequences. When run as a sequential application, PaMSA turned out to be the second fastest when compared against the nonparallel MSA methods tested (Clustal W, T-Coffee, and MUSCLE). However, PaMSA was designed to be executed in parallel. When run as a parallel application, PaMSA presented better response times than Parallel T-Coffee under the conditions tested. Furthermore, the sum-of-pairs scores achieved by PaMSA when aligning groups of sequences with an identity percentage score from approximately 70% to 100% were the highest in all cases. PaMSA was implemented on a cluster platform using the C++ language through the application of the standard Message Passing Interface (MPI) library.

Keywords—Multiple Sequence Alignment; parallel programming; Message Passing Interface

I. INTRODUCTION
A fundamental research subarea of bioinformatics is biological sequence alignment and analysis, which focuses on developing algorithms and tools for comparing and finding similarities in nucleic acid (DNA and RNA) and amino acid (protein) sequences [1]. The sequence similarities found are used for identifying evolutionary, structural or functional similarities among sequences in a set of related genes or proteins [2]. The set of sequences to be aligned is assumed to have an evolutionary relationship. Sequence alignment plays a central role in several areas of biology, such as phylogenetics, structural biology, and molecular biology.

Multiple sequence alignment (MSA) can be defined as the problem of comparing and finding which parts of the sequences are similar and which parts are different in a set of three or more biological sequences. The resulting alignment can be used to infer sequence homology. Homologous sequences are sequences that share a common ancestor and usually also share common functions.
Multiple sequence alignment is a well-known problem in computer science. A number of strategies have been applied to obtain MSAs, such as progressive alignment methods [3][4], iterative methods [5][6], dynamic programming [7], genetic algorithms [8], greedy algorithms [9], Markov chain processes [10], and even simulated annealing methods [11]. Currently, MSAs are obtained via two main approaches. The most popular alternative is the progressive multiple sequence alignment method. The main drawback with progressive alignments is that errors in the initial alignments of the most closely related sequences are propagated to the final multiple sequence alignment. The second most common approach to accomplish MSAs is the use of heuristic methods, which are more efficient than dynamic programming, but which do not guarantee finding an optimal alignment.

The main contribution of the present work is the development of PaMSA (which stands for Parallel MSA), a parallel algorithm for the global alignment of multiple protein sequences. The strategies applied in PaMSA to obtain an MSA of a set of sequences differ from those of other currently used MSA algorithms in several ways. The PaMSA algorithm is not a progressive-alignment approach, as all sequences are aligned simultaneously. In contrast to existing heuristic alignment methods, which start from completely unaligned sequences, the PaMSA algorithm generates an initial MSA of the sequences based on a Longest Common Subsequence (LCS) of the set of sequences to be aligned. In addition, in the PaMSA algorithm several processes work independently at the same time searching for the best MSA of a set of sequences. Thus, the PaMSA algorithm combines a number of strategies to produce the sequence alignment.

The PaMSA algorithm was implemented as a parallel program that runs on a cluster platform; however, it is not necessary to have a cluster environment to execute the application, as it can run even on a single processor. Currently, only protein sequences are aligned by PaMSA, but it is possible to adapt the implementation to align nucleic acid sequences as well.

Our implementation of PaMSA was compared against the currently used MSA algorithms Clustal W [3], T-Coffee [4], MUSCLE [5], and Parallel T-Coffee [12]. The comparison against the first three methods was done using a sequential version of PaMSA, as these methods are non-parallel implementations of the respective algorithms. The comparison in all cases was through the application of the sum-of-pairs function [13]. PaMSA was faster than Parallel T-Coffee, whereas the sequential version of PaMSA was the second fastest when compared against the nonparallel methods.

The remainder of this article is organized as follows. Section II describes the PaMSA algorithm and the metrics used to evaluate the alignments, whereas Section III specifies the protein sets and the conditions for the runs. Results are presented in Section IV with a discussion of their relevance. Finally, Section V presents the conclusions and future work.

II. THE PAMSA ALGORITHM
Our parallel approach for the global alignment of multiple protein sequences, PaMSA [14], combines dynamic programming, heuristics, and parallel programming techniques in an iterative process. Dynamic programming techniques are applied for setting up an initial alignment. The algorithm improves the initial MSA in an iterative manner by applying a number of operators that move, delete or realign gaps. The algorithm ends when the termination criteria are reached.
The PaMSA algorithm was implemented on a cluster platform. Hence, in this approach a number of processes work in parallel in the search for the best MSA of a set of sequences. If np is the number of processes used by the algorithm, the number of possible different MSA solutions is equal to np. For example, if np = 2, there will be 2 independent processes searching for the best alignment. As the number of processes increases, the number of solutions increases as well. A consecutive integer 0, 1, 2, . . ., np − 1 is assigned to each process, which acts as an identification number (id) for the process. There exists a process that acts as a coordinator, whereas the rest of the processes are considered slave processes. The id for the coordinator process is always equal to zero. Slave processes have a consecutive integer id, which goes from 1 to the total number of processes minus one. The algorithm was implemented to be run on a cluster; however, it also works in a nonparallel environment. In order to evaluate the quality of the MSA, a number of objective functions were implemented.

A. General structure
As mentioned above, the PaMSA algorithm is not a progressive-alignment approach, as all sequences are aligned simultaneously. In contrast to existing heuristic alignment methods, which start from completely unaligned sequences, PaMSA generates an initial MSA of the sequences based on a Longest Common Subsequence (LCS) of the set of sequences. The proposed algorithm follows a strategy analogous to a parallel genetic algorithm. The main steps in the general structure of a simple genetic algorithm (GA) are followed in the basic PaMSA algorithm procedure (Fig. 1). In PaMSA there is a population of initial MSAs, whereas in a GA there is a population of random initial solutions. In PaMSA, alignments are given a score, whereas in a GA, individual solutions are evaluated by an optimization function. An alignment is improved by applying operators in PaMSA, whereas individuals in a population evolve by applying operators in a GA. Finally, in both algorithms, operators are applied in an iterative process until a predefined condition is satisfied.
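Read as pseudocode, the per-process loop just outlined has the same shape as a simple GA main loop: build an initial candidate, score it, apply operators, and stop on stagnation or after a fixed number of iterations. The Python skeleton below is only an illustration of that control flow under the description given in this section; the helper functions are hypothetical stand-ins for the steps detailed later, not PaMSA's actual routines, and the stagnation test is simplified to a tuple comparison.

```python
# Hypothetical stubs standing in for the steps described in Sections B-F.
def initial_msa_from_lcs(seqs):              # Sections B-D: LCS-based first MSA
    return list(seqs)

def apply_operators(msa):                     # Section F: gap-moving operators
    return msa

def evaluate_objective_functions(msa):        # Section E: (ID, SY, SP, PWS)
    return (0.0, 0.0, 0, 0)

def pamsa_process_skeleton(sequences, max_iterations=5):
    """Control-flow sketch of one PaMSA process (illustrative only)."""
    msa = initial_msa_from_lcs(sequences)
    best_scores = evaluate_objective_functions(msa)
    stagnant = 0
    for _ in range(max_iterations):
        msa = apply_operators(msa)
        scores = evaluate_objective_functions(msa)
        # Simplified stopping rule: stop after two consecutive iterations
        # without improvement (the real criterion tracks all four OFs).
        if scores <= best_scores:
            stagnant += 1
            if stagnant == 2:
                break
        else:
            stagnant, best_scores = 0, scores
    return msa
```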
PaMSA combines a number of strategies to produce the sequence alignment, which are briefly described next and explained in more detail in the following sections. First, a well-known LCS technique for two sequences that uses dynamic programming was adapted and implemented to obtain an LCS of more than two sequences. In this approach, a number of processes work in parallel, so that each process calculates an LCS of the sequences. Even though all processes apply the same algorithm to the same set of sequences, the resulting LCSs are possibly different, because the calculations are based on a different order of sequences and there exists the possibility of having more than one LCS for the same sequences. Second, an algorithm is applied to the set of sequences in order to generate a first MSA by aligning identical residues, as well as similar residues, as much as possible. This algorithm uses the LCS generated at each process, which can be different from the LCSs in the other processes. This approach allows various potential solutions to be running in separate processes. Third, after the first MSA is generated in each process, the quality of the MSA is evaluated using a set of objective functions (OFs). Each process evaluates its MSA, and the slave processes send the scores of four of the OFs to the coordinator (the ID, the SY, the SP, and the PWS scores, described below), which receives the scores and determines which process has the best MSA for all four OFs. The coordinator then propagates the id of the process with the best scores to all the slave processes. If the alignment has not shown improvement in all processes in two consecutive iterations, or if a predefined number of iterations is reached, the algorithm ends and the process with the best alignment of the sequences provides the resulting MSA. Otherwise, the alignment is improved at each process by iteratively applying a number of operators that move, delete or realign gaps in the sequences following specific rules. These proposed operators perform a search along the length of the sequences with the aim of finding an opportunity to improve the alignment. The search is focused on the detection of gaps in order to minimize their number. The operators accept a certain number of parameters. Therefore, the operators can act differently on the sequences of the separate processes in order to have a variety of potential solutions. After each iteration, the resulting MSA is evaluated in all processes. This procedure is repeated until the termination criteria mentioned above are met.

B. The LCS technique
Given a sequence S_i = s_i1 s_i2 . . . s_im, a subsequence of S_i is a sequence S = s_1 s_2 . . . s_p defined by s_k = s_ir_k, where m is the length of sequence S_i, r_1 < r_2 < . . . < r_p, p is the number of selected items from sequence S_i, 1 ≤ k ≤ p, and p ≤ m; i.e., S can be obtained by deleting m − p (not necessarily contiguous) symbols from S_i without changing their order.

Let S_1 = s_11 s_12 . . . s_1m and S_2 = s_21 s_22 . . . s_2n be two sequences of length m and n, respectively. The sequence S = s_1 s_2 . . . s_p is a common subsequence of S_1 and S_2 if S is a subsequence of both sequences. The LCS of S_1 and S_2 is the longest sequence S that is a subsequence of both S_1 and S_2. In general, the LCS problem consists of finding the maximal-length subsequence (i.e., there exists no other common subsequence of greater length) that is a common subsequence of the sequences.
Let S_1 and S_2 be the above defined sequences of length m and n, respectively. The algorithm implemented to obtain the LCS of two sequences [15] uses dynamic programming and requires calculating the LCS table (LCST) as

LCST(i, j) = LCST(i − 1, j − 1) + 1, if i > 0, j > 0 and s_1i = s_2j,
LCST(i, j) = max(LCST(i, j − 1), LCST(i − 1, j)), if i > 0, j > 0 and s_1i ≠ s_2j,

where i = 1, 2, . . ., m and j = 1, 2, . . ., n, and LCST(i, j) = 0 whenever i = 0 or j = 0. The number of rows and columns in LCST are m + 1 and n + 1, respectively, whereas the cell LCST(i, j) is the element in the LCS table at row i and column j. The LCS table stores numbers which correspond to the actual length of the LCS. After filling the LCS table, the lower right cell in the table contains the length of the LCS. The longest common subsequence can be found by tracing back from the cell at LCST(m, n). Each time a match is found, it is appended to the longest common subsequence and a movement is made to cell LCST(i − 1, j − 1). When the symbols do not match, a movement is made to the cell with max(LCST(i − 1, j), LCST(i, j − 1)) in order to find the next match. In general, there may be several such paths, because the LCS is not necessarily unique, i.e. it is possible to have more than one LCS. For example, let S_1 = "MFVFS" and S_2 = "MVFVS". After application of the previous rules, the subsequence "MFVS" is the LCS of the sequences. However, if we placed the sequences in the inverted order, the subsequence "MVFS" would be the LCS of the sequences.

C. Parallel LCS strategy
Let S = {S_1, S_2, . . ., S_n} be a set of n protein sequences, where S_i = s_i1, s_i2, . . ., s_im_i, m_i is the length of S_i for i = 1, 2, . . ., n, and s_ik is the k-th residue in the sequence S_i. Let np be the number of processes used by the algorithm. In this step, each process calculates an LCS of the set S. First, sequences are read and saved into an array. The procedure in this step is as follows: the i-th process applies the LCS algorithm to all possible pairs of sequences LCS(S_i, S_j) in the set S that result from the combination of sequence S_i, for i = 1, 2, . . ., n, with the rest of the sequences S_j, for j = 1, 2, . . ., n and i ≠ j. Even though all processes apply the same algorithm to the same set of sequences, the resulting LCSs are possibly different, because the calculations are based on a different order of sequences and there exists the possibility of having more than one LCS for the same sequences, as previously noted. For example, in the first iteration of this step, Process 1 applies the algorithm to one ordering of the pairs of sequences, Process 2 applies the algorithm to another ordering of the pairs, and a similar strategy is applied for the rest of the np processes.

The results obtained from this first iteration are saved in order to create a new set of sequences. Thus, this new set of sequences contains the LCSs of the pairs of sequences in the original set S, and its size will be n − 1. Next, the process repeats this iterative procedure with the obtained LCSs until there remains only one LCS. When this happens, it means that the LCS of the sequences in the set S has been found.

D. Setting up an initial MSA
After the LCS is obtained, an algorithm is applied to the set of sequences in order to generate a first MSA. This algorithm aligns identical residues of the sequences by using the resulting LCS. In general, the algorithm aligns identical residues, as well as similar residues, as much as possible.
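Before moving on to the construction of the initial MSA, the LCS machinery of Sections B and C can be made concrete. The sketch below is a minimal Python illustration of the LCST recurrence, the traceback, and one plausible reading of the pairwise reduction of a set of sequences to a single LCS (the paper only loosely specifies how each process pairs the sequences, so the adjacent-pair reduction here is an assumption). PaMSA itself is written in C++; the function names lcs_pair and lcs_of_set are illustrative only.

```python
def lcs_pair(s1: str, s2: str) -> str:
    """Longest common subsequence of two strings via the LCST recurrence."""
    m, n = len(s1), len(s2)
    # (m+1) x (n+1) table; row 0 and column 0 stay at 0
    lcst = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                lcst[i][j] = lcst[i - 1][j - 1] + 1
            else:
                lcst[i][j] = max(lcst[i][j - 1], lcst[i - 1][j])
    # Trace back from the lower-right cell to recover one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if s1[i - 1] == s2[j - 1]:
            out.append(s1[i - 1])
            i, j = i - 1, j - 1
        elif lcst[i - 1][j] >= lcst[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

def lcs_of_set(seqs: list[str]) -> str:
    """Reduce a set of sequences to one LCS by repeated pairwise LCS
    (adjacent pairing assumed; each round leaves n - 1 sequences)."""
    while len(seqs) > 1:
        seqs = [lcs_pair(seqs[k], seqs[k + 1]) for k in range(len(seqs) - 1)]
    return seqs[0]

print(lcs_pair("MFVFS", "MVFVS"))  # "MFVS" with this tie-breaking
print(lcs_pair("MVFVS", "MFVFS"))  # reversed order yields "MVFS"
```

With this particular tie-breaking rule the two example calls reproduce the two alternative LCSs mentioned in the text, which illustrates why different processes working on different sequence orderings can end up with different (equally long) LCSs.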
Let A be an array of strings with the sequences to be aligned, and n the number of rows (sequences) in array A, with A_i = a_i1, a_i2, . . ., a_im_i the i-th row in array A, m_i the length of the sequence in A_i, for i = 1, 2, . . ., n, and a_ij the j-th element (residue) in row A_i (sequence i), for j = 1, 2, . . ., m_i. Moreover, let R = r_1 r_2 . . . r_p be a string with the LCS of the sequences in array A, with p the length of string R. The algorithm initiates with j = 1 (pointing to the first column in array A) and k = 1 (pointing to the first element in string R). The element r_k in string R is compared with all elements a_ij in A for i = 1, 2, . . ., n. The resulting comparison can fall into one of the following three cases:

Case A. All of the elements a_ij in A match the element r_k in R. No gap is inserted in the sequences of array A, so identical residues are aligned, and k is increased, for k = 1, 2, . . ., p, in order to point to the next element r_(k+1) in string R.

Case B. None of the elements a_ij in A match the element r_k in R. No gap is inserted in the sequences of array A, so residues at this position are aligned, and k is increased, for k = 1, 2, . . ., p, in order to point to the next element r_(k+1) in string R.

Case C. Only some elements a_ij in A match the element r_k in R. An iterative procedure introduces a gap in the sequences of array A at position a_ij if a_ij ≠ r_k, for i = 1, 2, . . ., n.

In all the previous cases, j is increased in order to point to the next residue of the sequences in array A, for j = 1, 2, . . ., m. The iterative procedure is repeated until the last element r_p in string R is processed. The maximum length of the sequences in array A is calculated, and this length is established as the length of the initial MSA. Finally, gaps are inserted if needed at the end of sequences having a smaller length than the maximum length calculated, so that all sequences have the same length. As a result of the previous calculations, a matrix is created containing an initial MSA.

The procedure described above uses the LCS generated at each process, which could be different from the LCSs in the other processes. This approach allows various potential solutions to be running in separate processes.

E. MSA assessment
The accuracy of PaMSA, i.e. the quality of the alignment, is evaluated using the following five optimization functions (OFs): ID, which measures the identity percentage score; SY, which evaluates the similarity percentage score; SP, which calculates the sum-of-pairs score; PWS, which obtains a pairwise score of the sequences compared with the first sequence in the alignment; and NG, which counts the number of gaps in the alignment.

1) Identity percentage (ID): In our implementation, the identity percentage score among the sequences being aligned is calculated from the MSA array, where A is the array with the MSA as previously defined, r is the length of the aligned sequences in A, n is the number of sequences aligned, and a column j is counted only if all the a_ij are identical for i = 1, 2, . . ., n in the j-th column of the MSA, for j = 1, 2, . . ., r. In general, the higher the identity percentage score, the better the alignment.

2) Similarity percentage (SY): The similarity percentage score is calculated using the same formula used to calculate the identity percentage score. However, a column j is counted only if all the a_ij in the j-th column of the MSA are similar, i.e. not necessarily identical but imperatively different from a gap. In general, the higher the similarity percentage score, the better the alignment.
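The displayed formulas for ID and SY did not survive extraction, but the prose pins down the idea: count alignment columns in which every sequence carries the same residue (ID) or in which no sequence carries a gap (SY), expressed as a percentage of the alignment length r. The following Python sketch is a minimal illustration of that reading, not the PaMSA source; the treatment of all-gap columns in ID is an added assumption.

```python
GAP = "-"

def identity_percentage(msa: list[str]) -> float:
    """ID: percentage of columns whose residues are identical in all sequences."""
    r = len(msa[0])                      # aligned length (all rows share it)
    identical = sum(
        1 for j in range(r)
        if len({row[j] for row in msa}) == 1 and msa[0][j] != GAP
    )
    return 100.0 * identical / r

def similarity_percentage(msa: list[str]) -> float:
    """SY: percentage of columns that contain no gap in any sequence."""
    r = len(msa[0])
    gap_free = sum(1 for j in range(r) if all(row[j] != GAP for row in msa))
    return 100.0 * gap_free / r

msa = ["MKV-LT",
       "MKVALT",
       "MKV-LS"]
print(identity_percentage(msa))    # 4 of 6 columns fully identical -> ~66.7
print(similarity_percentage(msa))  # 5 of 6 columns gap-free -> ~83.3
```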
3) Sum of pairs (SP): The sum-of-pairs score is a metric for measuring MSA accuracy based on the number of correctly aligned residue pairs, where the score of all pairs of sequences in the multiple alignment is added to the overall score. The SP score is calculated from the MSA array, where A is the array with the MSA as previously defined, r is the length of the aligned sequences in A, n is the number of rows (sequences) in array A, and s(a_ki, a_li) is the score obtained by comparing the k-th row in the i-th column of the MSA with the l-th row in the same i-th column of the MSA, for k = 1, 2, . . ., n − 1 and for l = 2, 3, . . ., n. The pair score s is computed with a general residue-comparison rule. The value of the sum-of-pairs score depends on the number of sequences aligned, the length of the sequences aligned, and the similarity among the sequences aligned. Therefore, there is not a pre-established range of values for this score. The higher the sum-of-pairs score of a particular set of sequences, the better its alignment. It is possible to use a substitution matrix to compare the residues among sequences in order to obtain better alignments. The BLOSUM62 matrix is provided in our implementation, as it is the de facto standard in protein database searches and sequence alignments [16].

4) Pairwise score (PWS): The pairwise score of sequences was included in the evaluation of our algorithm. In our implementation, this pairwise score, obtained by comparing each sequence with the first sequence in the alignment, is calculated from the MSA array, where A is the array with the MSA as previously defined, r is the length of the aligned sequences in A, n is the number of rows (sequences) in array A, and s(a_1i, a_ji) is the score obtained by comparing the first row in the i-th column of the MSA with the j-th row in the same i-th column of A. The same comparison evaluation criteria as in SP are used. The value of the pairwise score depends on the number of sequences aligned, the length of the sequences aligned, the similarity among the sequences aligned, and the first sequence in the alignment. Hence, there is not a pre-established range of values for this score. In general, the higher the pairwise score of a particular set of sequences, the better the alignment.

5) Number of gaps (NG): The number of gaps is an additional score, which is calculated from the MSA array, where A is the array with the MSA as previously defined, r is the length of the aligned sequences in A, n is the number of rows (sequences) in array A, and an element a_ij is counted only if it is a gap. The value of the number of gaps depends on the number of sequences aligned, the length of the sequences aligned, and mainly on the similarity among the sequences aligned. Therefore, there is not a pre-established range of values for this score. The fewer the number of gaps of a particular set of sequences, the better the alignment.

After the first MSA is generated in each process, the alignment is evaluated using the implemented OFs. Each slave process evaluates its MSA and sends the scores of four of the OFs to the coordinator (the ID, the SY, the SP and the PWS scores), which receives the scores and determines which process has the best MSA for all four OFs (Fig. 2). The number of gaps (NG) score is calculated and displayed in the screen output, but it was left out of the selection criterion, as preliminary results suggested that the other four OFs were sufficient to evaluate the alignment.
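Since the SP display itself was lost in extraction, the following Python sketch shows one conventional reading of a sum-of-pairs score: every unordered pair of rows is compared column by column and the pair scores are accumulated. The pair_score function here is a stand-in with arbitrary match/mismatch/gap values; PaMSA compares residues with BLOSUM62, which could be substituted for it.

```python
from itertools import combinations

GAP = "-"

def pair_score(a: str, b: str) -> int:
    """Stand-in residue comparison; PaMSA would use BLOSUM62 here."""
    if a == GAP or b == GAP:
        return -1          # assumed gap penalty, for illustration only
    return 2 if a == b else 0

def sum_of_pairs(msa: list[str]) -> int:
    """SP: sum of column-wise pair scores over all unordered pairs of rows."""
    r = len(msa[0])
    total = 0
    for k, l in combinations(range(len(msa)), 2):
        total += sum(pair_score(msa[k][i], msa[l][i]) for i in range(r))
    return total

msa = ["MKV-LT",
       "MKVALT",
       "MKV-LS"]
print(sum_of_pairs(msa))   # 23 with the stand-in scores above
```

As the text notes, the absolute value of such a score has no fixed range; it is only meaningful when comparing alternative alignments of the same set of sequences.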
After the coordinator process receives the OF scores from the slave processes, it propagates the id of the process with the best scores to all the slave processes. If the scores of all four OFs of the MSA have not shown improvement in two consecutive iterations, or if a predefined number of iterations is reached, the algorithm ends and the process with the best alignment of the sequences provides the resulting MSA. Otherwise, the alignment is improved by iteratively applying a number of operators that move, delete or realign gaps in the sequences following specific rules.

F. Improvement of the MSA
In order to improve the MSA, sixteen operators were defined in the present work. In the current version of PaMSA, there exist two main groups of operators: the basic operators and the refinement operators, both shown in Table I. The main differences between the two groups of operators are that refinement operators can be applied even when only one of the two sequences has the gaps, and that some of them are applied only in the last iteration of the algorithm (i.e. when the number of generations has been reached or when there was no improvement in the alignment after two consecutive iterations), in contrast to basic operators, which are applied only when both sequences have gaps. The proposed operators perform an exhaustive search along the total length of all sequences with the aim of finding an opportunity to improve the alignment. The search is focused on the detection of gaps and identical or similar residues that are not totally aligned. The operators are always applied to pairs of sequences. At every iteration, operators are applied, when necessary, to each of the potential solutions running in the independent processes. An assessment method marks columns of sequences in the MSA when their elements are totally aligned, so that the algorithm will not apply the operators to those columns in future iterations. This strategy improves the performance of the algorithm.

1) Basic operators: There are nine basic operators (Table I), which mainly move gaps, trying to minimize their number by eliminating columns that only contain gaps. The mGapRF 3 operator moves three gaps to the right in the first sequence of a pair of sequences being compared in the alignment. This operator is applied in order to align identical residues. The mGapRS 3 operator acts in a similar way as the mGapRF 3 operator, but in this case the operator is applied to the second sequence of the pair of sequences being compared. In the same manner, the mGapRF 2 and the mGapRS 2 operators move two gaps to the right in the first and second sequence, respectively. These operators are also applied in order to align identical residues. Similarly, the mGapRF 1 operator moves a gap to the right in the first sequence of a pair of sequences with the aim of aligning identical residues. The mGapRS 1 operator moves a gap to the right in the second sequence of the pair of sequences being compared in order to align identical residues.

The mGapRF G and the mGapRS G operators move a gap to the right in the first or second sequence, respectively, of a pair of sequences. These operators are applied in order to reduce the number of gaps by aligning similar (i.e. non-identical) residues.
The rGaps operator is used to remove a column from the alignment when all the residues in the column are gaps. This operator is applied after the application of any of the other operators. Once the rGaps operator has been applied, a new assessment of the MSA is made in order to update the MSA scores.

2) Refinement operators: As can be seen in Table I, there are seven refinement operators. The mGapRF 3S and mGapRS 3S operators move three gaps to the right in the first and the second sequence, respectively, of a pair of sequences being compared. These operators are applied in order to align identical residues. Unlike the mGapRF 3 and mGapRS 3 operators, these refinement operators are applied even when only one of the sequences has the gaps. The mGapRF 2S and mGapRS 2S operators act similarly to the mGapRF 3S and the mGapRS 3S operators, but in this case the refinement operators realign only two gaps to the right.

The three remaining refinement operators, mGapLF 1, mGapLS 1, and mGapn, move a gap to the left in order to align identical residues in the alignment. Because these operators are the only ones that move gaps to the left, they are applied at the last iteration of the algorithm.

III. METHOD
The implementation of the PaMSA algorithm was developed on a computer cluster provided by Intel Corporation, which contained 10 nodes, each node with two Intel Xeon 5670 6-core 2.93 GHz CPUs, 24 GB of 1066 MHz DDR3 RAM, and two 274 GB 15K RPM hard drives. The operating system used was Red Hat Enterprise Linux 5 Update 4 with Perceus 1.5 Clustering Software and Server 5.3 running Intel MPI 3.2. An implementation of PaMSA can be downloaded from http://www.bioinformatics.org/pamsa. The Message Passing Interface (MPI) library was used in our implementation of the algorithm. MPI defines the syntax and semantics of a set of functions in a library designed to exploit the existence of multiple processors, and it provides the synchronization and communication needed among processes. Synchronous communication operations were used in this work to handle communication and synchronization among tasks. When a synchronous operation is invoked, a process sends a message and then waits for a response before proceeding with the process flow. Object-oriented and structured programming paradigms were applied using C++ as the programming language. The PaMSA algorithm was implemented on a cluster platform using the Linux operating system; however, PaMSA can be run in a nonparallel environment.

Results presented in this work were obtained from alignments performed on the Hybrid Cluster Supercomputer Xiuhcoatl of the General Coordination of Information and Communications Technologies (CGSTIC) at CINVESTAV, in Mexico City. This cluster contained 88 nodes, each node with 1056 Intel X5675 CPUs, 2112 GB of RAM, and 22000 GB in local hard disk drives.
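The coordinator/slave exchange described earlier (each slave reports its ID, SY, SP and PWS scores; the coordinator picks a winner and broadcasts its id) maps naturally onto MPI collective operations. PaMSA itself is written in C++ against the MPI library; the sketch below uses Python with mpi4py purely to illustrate the communication pattern, and the score tuple and the selection rule (a simple lexicographic comparison) are placeholders rather than PaMSA's actual criterion.

```python
# Run with e.g.: mpiexec -n 4 python pamsa_exchange_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # 0 is the coordinator, the rest are slaves

# Placeholder: each process would really evaluate its own candidate MSA here
# and produce its (ID, SY, SP, PWS) scores.
my_scores = (90.0 + rank, 95.0, 1200 + rank, 800 + rank)

# Every process (coordinator included) contributes its scores to rank 0.
all_scores = comm.gather(my_scores, root=0)

if rank == 0:
    # Stand-in for "best MSA on all four OFs": lexicographic comparison.
    best_id = max(range(comm.Get_size()), key=lambda p: all_scores[p])
else:
    best_id = None

# The coordinator propagates the winning id to all slaves.
best_id = comm.bcast(best_id, root=0)
print(f"process {rank}: best alignment so far is held by process {best_id}")
```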
Table II presents the 40 datasets of protein sequences that were used in the present work in order to analyze the performance of PaMSA and the other MSA methods tested. Each protein dataset was chosen according to its number of sequences, identity percentage, and average length. Datasets were organized in eight groups of five sequence clusters each, named from A to H (i.e. Group A, Group B, and so on). The groups of sequences were obtained from the UniProt Reference Clusters (UniRef) contained in the UniProtKB protein database. At this site, sequences are classified in groups, called clusters, according to their identity percentage; thus, similar sequences can be obtained through a database query. Sequences that belong to a specific cluster are called cluster members. The identity score shown is the minimum of all methods tested for each cluster. These protein datasets can be found at http://www.uniprot.org/uniref/.

The number of iterations of the PaMSA algorithm can be modified by the user as a parameter, the default value being five. A file with the resulting MSA was created with sequences in clustal format. The MSA output file has the same name as the input file but with the pamsa extension. Basic validations are implemented, such as verification of the existence of the input file with the sequences to be aligned, the creation of the output file, the correct introduction of the parameters given, and the verification of the FASTA format of the sequences to be aligned. PaMSA was compared against the following versions of the MSA programs: MUSCLE v3.7, Clustal W v2.0.10, T-Coffee v9.03r1318, and Parallel T-Coffee v1.913, all of them running in Linux.

IV. RESULTS AND DISCUSSION
In this section we present results obtained from alignments using PaMSA, as well as comparisons made against several methods commonly used for MSA, namely MUSCLE, Clustal W, T-Coffee, and Parallel T-Coffee, a parallel implementation of T-Coffee. Of particular note is the Parallel T-Coffee method, which runs on a cluster platform and uses the MPI library, just as our implementation of the PaMSA algorithm does. The variables used for evaluating the performance of the methods tested were the MSA accuracy (quality of the alignment) and the response time.

A. MSA accuracy results
The sum-of-pairs score was used in the present work for evaluating the quality of the alignments, as it is a simple and sensitive measure for assessing the accuracy of alignments and has been widely used [17]. The greater the sum-of-pairs score, the better the alignment obtained; thus, the alignment with the highest sum-of-pairs score is considered the most accurate (the best) MSA of all the alignments obtained. The sum-of-pairs scores for all dataset groups and MSA methods are presented in Table III. As can be seen, all algorithms achieved the optimal MSA (the alignment with the highest sum-of-pairs score) and 100% identity percentage when datasets of protein sequences from Groups A, B, and C were used (Table III). The datasets from these groups originally had 100% identity percentage among them; thus, the LCS found by the PaMSA algorithm corresponded exactly to the sequences to be aligned, making it simple in this case to find the MSA with a perfect identity percentage.
In the MSA accuracy results obtained from alignments using datasets of protein sequences from Group D (clusters with an identity percentage score within the range from 90% to 99%), the T-Coffee method achieved the best MSA in 4 out of 5 cases tested (Table III). The PaMSA, MUSCLE, Clustal W and Parallel T-Coffee methods obtained the best alignment in all datasets of this group, based on the sum-of-pairs scores.

Clusters of sequences with an identity percentage score approximately within the range from 80% to 89% were used in alignments with sequences from Group E. Datasets from this group have slightly dissimilar sequences. The sum-of-pairs scores obtained using the PaMSA algorithm were the highest in all cases tested (Table III), i.e. the PaMSA algorithm obtained the best alignment in this group of alignments. The MUSCLE, Clustal W and T-Coffee methods obtained the best MSA in 3 out of 5 cases tested, whereas Parallel T-Coffee achieved the best MSA in 4 out of 5 cases.

The MSA accuracy results obtained in alignments using Group F (with an identity percentage score approximately within the range from 70% to 79%) were similar to the results obtained with Group E: the PaMSA algorithm also obtained the best alignment in all the cases tested. Datasets from this group have more variable sequences than the previous groups. The T-Coffee and Parallel T-Coffee methods achieved the best MSA in 3 out of 5 cases. The MUSCLE and Clustal W methods obtained the best MSA in 4 out of 5 cases tested.

The PaMSA algorithm obtained less accurate alignments, according to the sum-of-pairs score, than the MUSCLE and Clustal W methods in at least four cases tested from Group G (clusters with an identity percentage approximately within the range from 60% to 69%) and Group H (clusters with an identity percentage approximately within the range from 50% to 59%). However, the MSA accuracy results obtained by the PaMSA algorithm were equal to or better than those obtained by the T-Coffee and Parallel T-Coffee methods using these groups of sequences.

In general, the results show that the MUSCLE method had the best MSA accuracy of the methods tested, as it obtained the best alignments (according to the sum-of-pairs score) in all but four of the 40 cases tested. The Clustal W method and the PaMSA algorithm were a close second place in accuracy, achieving the best alignment in 34 out of 40 cases tested. The Parallel T-Coffee method obtained the best alignment in 30 of the cases tested, against the 26 achieved by the T-Coffee method.

With the exception of MUSCLE, PaMSA and the other tested MSA methods had trouble finding accurate alignments when using datasets with an identity percentage lower than 70%. Nevertheless, even in this case PaMSA was able to find the best alignment in 4 out of 10 datasets.

B. Response time results of nonparallel methods
This section presents the execution time results obtained from alignments using PaMSA and three common nonparallel methods for MSA: MUSCLE, Clustal W and T-Coffee. For these alignments, PaMSA and the other three methods were executed in a nonparallel environment. It should be mentioned that the results shown are the best execution times achieved from a set of five runs. Alignments were made under the same conditions for all the methods compared, i.e. the same computer, environment, operating system, and timer.

Table IV presents the execution time results in seconds obtained from alignments for all dataset groups. As can be seen, in Group A the MUSCLE method achieved the best response times, whereas the PaMSA algorithm had better response times than the Clustal W and T-Coffee applications for all datasets in this group. The performance results obtained using Group B were similar to the results from alignments using Group A, i.e.
the MUSCLE method achieved the best response times and the PaMSA algorithm was the second best. The performance results obtained using Group C were similar to the results from the previous group, with the MUSCLE and the PaMSA methods in first and second place, respectively. From the execution time results obtained using Group D, there are no differences from previous results regarding the order of the best two methods, i.e. the MUSCLE method also achieved the best response times, whereas the PaMSA algorithm had better response times than the Clustal W and T-Coffee applications. The performance results obtained using Group E were similar to those of the previous alignments, with the exception of Dataset 22, with which PaMSA achieved the best response time. In the rest of the datasets from this group, the MUSCLE method achieved the best response times. The PaMSA algorithm once again showed better response times than the Clustal W and T-Coffee methods with this group. The performance results obtained using Group F were different from those of the previous alignments; in alignments using this group, PaMSA achieved the best response time when using Dataset 30, whereas this algorithm and the MUSCLE method reached a tie in best execution time in two instances. The PaMSA algorithm was again superior to the Clustal W and T-Coffee methods when testing this group of sequences. The response times obtained by the PaMSA algorithm using Group G were superior to those of the other methods tested in three out of five alignments, whereas the MUSCLE algorithm achieved the best response time in the other two cases. Finally, using Group H, the MUSCLE method obtained the best response time in all but one of the cases tested in this group of alignments, whereas the PaMSA algorithm was again superior to the Clustal W and T-Coffee programs.

As can be seen, in most of the MSAs with the datasets presented in Table II, the MUSCLE method achieved the best execution time results. However, the PaMSA algorithm was superior or equal to the MUSCLE method in some cases. On the other hand, the execution times achieved by the PaMSA algorithm were better (i.e. lower) than the results obtained using the Clustal W and T-Coffee programs in all the cases tested.

C. Response time results of parallel methods
In this section we present the execution time results achieved by comparing the PaMSA algorithm against Parallel T-Coffee, a parallel version of the T-Coffee method. Alignments using these two methods were executed in a cluster environment under the same conditions (cluster type, number of processes, MPI library, operating system, and timer). Table V shows the execution times in seconds of Parallel T-Coffee and PaMSA. The times shown are the best of five runs for each dataset.

As for the comparison of execution times of the PaMSA algorithm against the sequential MSA methods tested, PaMSA was run as a one-processor application in a nonparallel environment and the results were compared against those of MUSCLE, T-Coffee and Clustal W. In 80% of the tested cases the MUSCLE method achieved shorter response times. However, the PaMSA algorithm was faster than the MUSCLE method in 15% of the cases. On the other hand, the execution times achieved by the PaMSA algorithm were better than the results obtained by Clustal W and T-Coffee in all the cases tested. It can be concluded that the PaMSA algorithm was the second fastest of the methods under the nonparallel conditions tested.
As for the accuracy of the alignments, the results achieved with the PaMSA algorithm on clusters of very similar protein sequences (within a range from approximately 90% to 100% identity percentage score) were at least as accurate as the alignments obtained with the other methods tested. It can be concluded that the PaMSA algorithm, along with the MUSCLE, Clustal W, and Parallel T-Coffee methods, achieved the best overall MSA accuracy results when using these groups of sequences. In general, when aligning closely related sequences, all the tested methods obtained the best, or close to the best, alignments.

When using clusters of sequences with an identity percentage score of approximately 70% to 89%, PaMSA found the best alignment in all cases, according to the sum-of-pairs score, whereas MUSCLE, Clustal W, T-Coffee, and Parallel T-Coffee could not find the best MSA in at least three cases. It can be concluded that the results achieved by the PaMSA algorithm were better than those of the other methods tested with these groups of sequences. It is possible to assume that, when aligning more dissimilar sequences, not all methods can obtain the best alignment. Finally, when using clusters with approximately 50% to 69% identity percentage score, the PaMSA algorithm achieved less accurate alignments than the MUSCLE and Clustal W methods in 6 out of 10 datasets in both cases. However, the alignments obtained by the PaMSA algorithm were equal to or even better than the alignments obtained by the T-Coffee and Parallel T-Coffee methods in 8 out of 10 cases tested in these groups of sequences. According to our results, no single MSA method can always obtain the best alignment for all sets of sequences.

Future work will focus on further improvement of the accuracy of the alignments obtained by PaMSA using benchmark protein databases, such as BAliBASE, PREFAB and SABmark. Additional MSA methods, such as MAFFT, will also be considered for comparison. As for improvement in performance, more work remains to be done by studying and applying other parallel optimization techniques in order to obtain better response times. One of the main problems in the evaluation of MSA methods is that it is possible to obtain different MSAs having the same assessment score, making it difficult to discern which of them is the best, especially when aligning very dissimilar sequences. In this case, it is necessary to conduct a thorough analysis to achieve the best results in terms of accuracy. The long-term goal of the present work is to provide researchers with state-of-the-art algorithms and software tools that can help them advance in their field in a more efficient manner.
Table I. Operators defined in PaMSA (Type: BS = Basic, RF = Refinement):
mGapRF 3 - Moves three gaps in 1st sequence to the right (BS)
mGapRS 3 - Moves three gaps in 2nd sequence to the right (BS)
mGapRF 2 - Moves two gaps in 1st sequence to the right (BS)
mGapRS 2 - Moves two gaps in 2nd sequence to the right (BS)
mGapRF 1 - Moves a gap in 1st sequence to the right (BS)
mGapRS 1 - Moves a gap in 2nd sequence to the right (BS)
mGapRF G - Moves a gap in 1st sequence to the right (BS)
mGapRS G - Moves a gap in 2nd sequence to the right (BS)
rGaps - Removes an MSA column if all elements are gaps (BS)
mGapRF 3S - Realigns three gaps in 1st sequence to the right (RF)
mGapRS 3S - Realigns three gaps in 2nd sequence to the right (RF)
mGapRF 2S - Realigns two gaps in 1st sequence to the right (RF)
mGapRS 2S - Realigns two gaps in 2nd sequence to the right (RF)
mGapLF 1S - Realigns a gap in 1st sequence to the right (RF)
mGapLS 1S - Realigns a gap in 2nd sequence to the right (RF)
mGapn - Moves a residue in 2nd sequence n columns (RF)

Fig. 3. Parallel execution times of PaMSA and Parallel T-Coffee using datasets from Group A. The times shown are the best of five runs for each dataset.

Fig. 3 graphically shows the execution time results achieved by the PaMSA algorithm and the Parallel T-Coffee method when using datasets of protein sequences from Group A; similar results were found for the rest of the groups. In all the cases tested, execution times achieved by PaMSA were superior to the results obtained with the parallel version of T-Coffee. In order to confirm the superiority in performance of the PaMSA algorithm over the Parallel T-Coffee method, the speedup for all datasets in all groups was computed by dividing the execution time of the Parallel T-Coffee method by the execution time of the PaMSA algorithm. Results showed that the PaMSA algorithm had better response times than Parallel T-Coffee in all the cases tested, as seen in Table V. The PaMSA algorithm was at least 1.9 and up to 27 times faster than Parallel T-Coffee, depending on the number and length of the sequences to be aligned. A multi-factor ANOVA was done by group in order to statistically compare the execution times of the PaMSA and Parallel T-Coffee algorithms. Two factors were considered for the eight groups. Seven of the eight groups considered the algorithm and the number of sequences as factors, whereas in Group B the algorithm and the average length of sequences were considered as factors.

Fig. 4. Multi-factor ANOVA plot of means by group. The panel letters correspond to the protein sequence group being analyzed. Group B used the algorithm and the average length of sequences as factors, whereas the rest of the groups considered the algorithm and the number of sequences as factors.

TABLE III. SUM-OF-PAIRS SCORES OF RESULTING MSAS
TABLE IV. SINGLE-PROCESSOR EXECUTION TIME RESULTS IN SECONDS
TABLE V. EXECUTION TIME RESULTS OF PARALLEL T-COFFEE AND PAMSA IN SECONDS
TABLE VI.

Table VI shows the p-values obtained for each group. Based on their p-values, Table VI shows that in five of the eight groups (A, B, F, G, and H) there was a statistically significant difference between the execution times of the two algorithms.
The Effects of Functional Exercise Training on Obesity with Impaired Glucose Tolerance

Obese individuals with impaired glucose tolerance (IGT) are at risk for developing overt diabetes and cardiovascular diseases (CVD). This study aimed to examine the effects of 12 weeks of a functional exercise training (FET) programme in obese individuals with IGT. Sixteen male and female university staff, aged 50.4±1.3 years (43 to 59 yrs), with mean BMI ≥25 kg/m² (WHO Asian guidelines) and IGT were randomly divided into the functional exercise training (FET) group or the control (CON) group. Both groups underwent the baseline assessments, including anthropometric measurements, exercise capacity, oral glucose tolerance test (OGTT), and blood chemistry analysis. All testing was repeated at 12 weeks post-intervention. The FET group engaged in the FET programme, and the CON group carried out normal daily physical activity, including walking. After the intervention, the FET group showed significant changes in exercise capacity, body weight (BW), BMI, waist circumference, triglycerides, fasting plasma insulin (FPI), 2-hr glucose, and glucose AUC (p<0.05), while the CON group only exhibited an improvement in HDL-C (p<0.05). The study showed that the FET programme improves exercise capacity and alters cardiometabolic parameters. It can be an alternative form of exercise for managing obesity and improving glycaemic control in those at risk.

Introduction
Impaired glucose tolerance (IGT) is a major predictor of type 2 diabetes (Alberti, 2007) and is a cardinal sign of insulin resistance (DeFronzo & Abdul-Ghani, 2011). Those with impaired glucose tolerance (IGT) are at increased risk for developing overt type 2 diabetes and cardiovascular disease (CVD) (DeFronzo & Abdul-Ghani, 2011). Additionally, epidemiological studies have shown an association between physical inactivity and IGT (Tapp et al., 2006). Physical inactivity alters functional capacity and the normal metabolic action of insulin, including glucose transport, glycogen synthesis, and glucose oxidation (Venables & Jeukendrup, 2009). Previous data have shown that most individuals with IGT are overweight, and up to 80% are obese (Hawley, 2004). Thus, being obese with IGT can further accelerate the risk of frank diabetes (Rai, Wadhwani, Sharma, & Dubey, 2019). The American Diabetes Association (ADA, 2002) recommends that overweight individuals with IGT undergo some kind of lifestyle intervention to prevent the onset of type 2 diabetes.

Exercise training is often prescribed for blood glucose management (ADA, 2002; Diabetes Prevention Program, 2002). Swindell et al.
(2018) showed that exercise training improves glucose transporter 4 (GLUT4) translocation to the cell membrane, facilitating glucose transport into the cell. Regular exercise training lowers blood glucose and produces other benefits, such as increased fitness, weight reduction, improved physical function, and reduced risk for developing non-communicable diseases (NCD) (Rehn, Winett, Wisløff, & Rognmo, 2013). The investigators in the Diabetes Prevention Program (DPP) found that lifestyle intervention that included regular physical activity reduced the incidence of diabetes by 58% compared to the use of metformin in those with IGT (Sigal, Kenny, Wasserman, Castaneda-Sceppa, & White, 2006). The DPP study reinforced the importance of achieving ≥150 min/wk of physical activity at moderate intensity (e.g. walking) for preventing the onset of diabetes (DPP, 2002; Sigal et al., 2006). General aerobic activity such as walking can be monotonous for some people, leading to dropping out of activity participation (ACSM, 2018). Thus, adding a variety of exercise into the daily routine can add fun and promote social interaction that can make exercise more enjoyable.

We intended to investigate the effects of functional exercise training on exercise capacity, glucose metabolism, and metabolic profiles in obese individuals with IGT. Functional exercise is simple and economical, can be carried out at a gym or at home without utilizing many pieces of equipment, and can be an alternative form of exercise that complements general aerobic activity. This type of exercise trains the muscles to work together and prepares them for daily living activities (Silva-Grigoletto, Brito, & Heredia, 2014). Additionally, it strengthens the body's core and improves stability, which can result in better posture and balance (Lagally, Cordero, Good, Brown, & McCaw, 2009). This type of exercise simulates common movements that can be done at home or at work while using the upper and lower body simultaneously. Previous studies have shown that functional exercise training improved mobility in older adults (Whitehurst, Johnson, Parker, Brown, & Ford, 2005) and significantly improved physical fitness components in male college students (Shaikh & Mondal, 2012). However, little is known about its effects on the metabolic profiles of obese individuals.

This study aimed to investigate the effects of functional exercise training on exercise capacity, glucose metabolism, and metabolic profiles in obese individuals with IGT. We hypothesize that FET will produce favourable changes in exercise capacity, glucose metabolism, and metabolic profiles.
Participants Sixteen obese male and female support staff from Nakhon Ratchasima Rajabhat University, aged 50.4±1.3 years (43 to 59 yrs), with impaired glucose tolerance (IGT) were recruited to participate in the study; they were randomly assigned to the experimental (n=8) or control (n=8) group. Obesity was classified in accordance with the WHO Asian guidelines, under which a BMI ≥25 kg/m² is considered obese (WHO, 2000), and IGT was classified according to the American Diabetes Association (ADA, 2018) as a 2-hrs glucose ≥140 mg/dL and ≤199 mg/dL. The participants were contacted by the primary investigator and were invited to an orientation session where they signed informed consent, filled out a health questionnaire, underwent the screening process, and performed the oral glucose tolerance test (OGTT), exercise testing, and blood chemistry analysis. To be eligible for the study, the participants had to have impaired glucose tolerance, had to have abstained from any formal exercise for the previous six months, and had to be free from heart disease, hypertension, diabetes, and orthopaedic and neuromuscular problems. All testing was performed at baseline and at 12 weeks, similar to the previous studies by Whitehurst et al. (2005). Anthropometric measurement Body weight (kg) was assessed using a standard digital scale (Nagata BW-110, Taiwan). Height (cm) was measured using a standard stadiometer (Nagata BW-110, Taiwan). Body weight and height were measured to the nearest 0.01 kg and 0.01 cm, respectively. Body mass index (BMI) was calculated by dividing body weight in kilograms (kg) by height in metres squared (m²). Waist circumference (cm) was measured in the horizontal plane at the iliac crest. Exercise capacity The participants underwent the Astrand maximal cycle test (ACSM, 2018) to assess their exercise capacity (Cateye-EC1600 bicycle ergometer, Japan). Each participant was briefed on the testing procedure, had the seat height adjusted, was fitted with a wireless heart rate monitor (Polar model H7, Finland), and was allowed to warm up on the cycle ergometer for three minutes against a resistance of zero watts. Following the warm-up, the participant was instructed to pedal the cycle ergometer at 50 rpm for two minutes in the initial stage with a load of 100 watts (men) or 50 watts (women). After the initial stage, the workload was increased by 50 watts (men) or 25 watts (women) every three minutes until the participant reached volitional fatigue or was unable to maintain the instructed cadence. Testing was terminated in accordance with the standard guidelines (ACSM, 2018). Exercise capacity was expressed as maximal oxygen uptake (VO2max) and the metabolic equivalent (MET) value. 
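To make the capacity values concrete, the short sketch below converts a measured VO2max into the MET value used in the results. It assumes the conventional definition of 1 MET = 3.5 mL·kg⁻¹·min⁻¹ of oxygen uptake, and the example numbers are illustrative rather than data from this study.

```python
def vo2max_to_met(vo2max_ml_kg_min: float) -> float:
    """Convert maximal oxygen uptake (mL/kg/min) to METs.

    Assumes the conventional resting value of 3.5 mL/kg/min per MET;
    the excerpt does not spell out this conversion factor.
    """
    return vo2max_ml_kg_min / 3.5

# Hypothetical example: improving from 24.5 to 28.0 mL/kg/min is a gain of
# exactly 1 MET, the magnitude of change reported for the FET group.
baseline_met = vo2max_to_met(24.5)   # 7.0 METs
post_met = vo2max_to_met(28.0)       # 8.0 METs
print(f"Change: {post_met - baseline_met:.1f} MET")
```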
OGTT and Blood chemistry analysis After an overnight fast, a 75-g OGTT was performed on the participants, and blood samples were obtained at baseline plasma glucose and insulin and every 30 min interval for 120 min after an oral glucose load (Slentz et al., 2016).Glucose areas under the curve (AUC) was calculated using the trapezoidal principle.Early and total phase glucose tolerances were calculated as total area under the curve (tAUC) using the trapezoidal model (Matthews et al., 1985).The homeostasis model assessment of insulin resistance (HOMA) was calculated as described previously (Matthews et al., 1985;Vogeser et al., 2007).Blood samples taken at baseline were also analysed for HbA1C, total cholesterol (TC), triglycerides (TG), LDL-Cholesterol (LDL-C), HDL-Cholesterol (HDL-C).Insulin resistance was estimated by the homeostatic model assessment (HOMA-IR).Blood samples were analysed for glucose, HbA1C; the lipid profile was determined using hexokinase method was measured using a cobas6000 (c501) clinical chemistry analyser system and insulin was determined using electrochemiluminescence immunoassay; the ECLIA method was measured using a cobase411 insulin analyser.Blood samples were measured by the clinical laboratory (Lab Plus Professional Laboratory Ultimate Service, Theptarin Hospital, Thailand). Exercise Programme For the 12-week study, the participants were randomly assigned into two groups: functional exercise training (FET) and control (CON).The FET group engaged in the functional exercise training in a circuit manner; they had to complete three circuits of exercises in a session (Whitehurst et al., 2005).A circuit consisted of 12 exercises that had to be performed consecutively with 60 seconds of rest in between each exercise; the functional exercise programme details are described in Table 1 and Figure 1.Each exercise session consisted of a 10 minute warm up followed by 30 minutes of exercise session and concluded with a cool down (10 min).The participants engaged in supervised exercise sessions three times per week and performed the same exercise routine at home two times per week.All group exercise sessions were monitored and supervised by the primary investigator to ensure safety and proper technique.The task difficulty was increased by having participants balance on one leg, perform a choreographed movement, and add external hands weights.At every three-week increment, from week 4 to week 12, 2 water bottles filled with sand weighing 335 g, 500 g, and 750 g each, respectively, were added as an external weight to increase resistance while performing these exercises (illustrated in Figure 1).The participants were instructed to perform a warm up, cool down, and stretching for every exercise session. The participants in the CON group were instructed to continue their normal daily activity and were encouraged to engage in a walking routine on their own.All participants in both groups were educated on healthy diet.A text messaging group was set up for two-way communication to provide assistance and answer any questions.• The participants were instructed to perform the movement in a controlled manner. • A metronome was used to control the movement, and the pace was set at 100-110 beats/min. Final Testing All testing was repeated after 12 weeks of intervention.Due to the quick change in glucose metabolism after the cessation of exercise, OGTT and blood sampling were conducted within 36 hours of the final exercise bout. 
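As a rough illustration of the calculations described above, the sketch below computes the total glucose area under the curve with the trapezoidal rule from 0/30/60/90/120 min OGTT samples, and HOMA-IR from fasting glucose and insulin. The sample values are placeholders, and the constant 405 is the standard HOMA-IR denominator for glucose expressed in mg/dL; neither is taken from this study's data.

```python
import numpy as np

def glucose_tauc(times_min, glucose_mg_dl):
    """Total area under the glucose curve (mg/dL x min) by the trapezoidal rule."""
    return np.trapz(glucose_mg_dl, times_min)

def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    """HOMA-IR with glucose in mg/dL (divide by 405; use 22.5 if glucose is in mmol/L)."""
    return (fasting_glucose_mg_dl * fasting_insulin_uU_ml) / 405.0

# Illustrative OGTT profile (not data from the study)
times = [0, 30, 60, 90, 120]           # minutes after the 75-g glucose load
glucose = [102, 165, 180, 172, 150]    # mg/dL
print(f"tAUC   : {glucose_tauc(times, glucose):.0f} mg/dL*min")
print(f"HOMA-IR: {homa_ir(102, 12.4):.2f}")
```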
Statistical Analysis Baseline characteristics of both groups were analysed, and variables are presented as mean ± SD. The dependent t-test was used to detect intragroup differences over time for each variable. The extent of the change in each variable was calculated by subtracting the baseline value from the value after 12 weeks of intervention. The differences in variables between the two groups (FET and CON) were compared using the independent t-test. Statistical significance was set at p<0.05. All statistical analyses were performed using SPSS statistical software version 23 (IBM SPSS Inc., Chicago, USA). Baseline measurements Participants' characteristics in the FET and CON groups are presented in Table 2. The participants were similar in most variables at baseline. The FET group exhibited significantly higher FPG and glucose AUC at baseline than the CON group (p<0.05). Legend: The participants performed 12 exercises per circuit and completed 3 circuits of exercises (12 exercises equal 1 circuit). Each exercise was performed for 3 sets of 10 repetitions. Anthropometric variables and exercise capacity After 12 weeks of intervention, the FET group showed a significant decrease in BW (p<0.05), BMI (p<0.05), and waist circumference (p<0.05). A significant improvement in functional capacity, as shown by the increases in VO2max and MET (p<0.05), was observed in this group. Conversely, no changes were observed in the previously mentioned variables in the CON group (Table 3). The glucose AUC was significantly decreased with training at 12 weeks in the FET group (p<0.05) (Figure 2). Discussion This study shows that functional exercise training (FET) performed in a circuit manner resulted in improved exercise capacity, expressed as VO2max and MET. The participants in the FET group significantly increased their exercise capacity by 1 MET from baseline (p<0.05); on the other hand, the CON group did not show a significant improvement in this parameter. It is conceivable that this change occurred as a result of repetitive physical training that induced a physiological response in favour of cardiorespiratory endurance. Our findings are consistent with those of Whitehurst et al. (2005), who looked at the benefits of functional exercise training in older adults. Their findings showed that functional exercise performed in a circuit manner improved the timed walk test by 7.4% from baseline, indicating cardiorespiratory fitness improvement. The FET programme utilized large muscle groups for movement and was done continuously for a certain amount of time, which elevated the exercise heart rate and stimulated hemodynamic changes. When performed on a regular basis for 12 weeks, it caused physiological adaptation and improved fitness in this group. Improvement in exercise capacity translates into a better quality of life in those with risk factors (e.g. impaired glucose tolerance) and/or chronic medical conditions such as heart disease, diabetes, hypertension, or obesity (E. Teixeira-Lemos, Nunes, F. Teixeira, & Reis, 2011). A previous study by Myers et al. 
(2002) showed that an improvement in fitness over time yielded a better prognosis and a marked reduction in the risk of death from all causes.The result of this 12 weeks study provides evidence that the FET programme can result in fitness gain when compared to the CON group that carried out the usual walking routine.The absolute change of 1 MET may not appear substantial, but the benefit is clinically significant.A meta-analysis conducted by Lee et al. (2011) shows that for each MET increase in exercise capacity is associated with a 15% reduction in risk of all-cause mortality and a 13% reduction in the future risk of CVD and CHD events.Fit individuals have lower all-cause and CVD mortality risk than unfit counterparts, regardless of adiposity classification and medical conditions.Thus, the improvement in exercise capacity showed in our FET group will result in a better prognosis for these individuals. While the FET group's triglycerides concentration significantly reduced (p<0.05)at post-training, no significant change was observed in total cholesterol, HDL-C, and LDL-C.It is speculated that the change in triglycerides concentration may have been attributed to high energy expenditure during the exercise training.The FET group performed exercise in a circuit manner that requires major muscle groups to work in a coordinated fashion, which yielded high energy expenditure and higher fatty acid oxidation.Similarly, a study by Westcott (2012) showed that the reduction in triglyceride concentration is related to sufficient energy expenditure and previous level of physical activity, which is inconsistent with our findings.The participants in the FET group were sedentary upon entering the study; thus, engaging in a prescribed functional exercise training would have increased their activity level from sedentary to active, which may explain the observed reduction in triglycerides concentration. In contrast, the CON group exhibited significant changes in HDL-C (p<0.05) at the end of the 12 weeks.It is widely accepted that HDL-C is inversely correlated with heart dis-ease, and the improvement of HDL-C is related to the volume of physical activity and exercise (Durstine, Grandjean, Cox, & Thompson, 2002).The participants in the CON group were instructed to carry out their usual walking routine daily.It is possible that these individuals were walking in greater quantities, which resulted in the HDL-C change during the study.Our finding agrees with that of Koba et al. (2011), who showed that HDL-C change has a positive correlation with the amount of walking distance per week, and it increases in a dose-dependent manner. In the current study, the body weight, BMI, waist circumference, fasting plasma insulin, 2-hrs glucose, and glucose AUC of the FET group were significantly decreased (p<0.05) at 12 weeks.However, when the absolute changes in these parameters were compared between the two groups, the FET group shows significant reductions in body weight, BMI, waist circumference, 2-hrs glucose, and glucose AUC (p<0.05).The reduction in 2-hrs glucose and glucose AUC (Figure 2) is postulated to be related to the body weight reduction and the decrease in waist circumference.Our result is consistent with that of McNeilly et al. 
(2012), in which the research group discovered that weight loss through moderate exercise training resulted in a reduction in blood biomarkers for cardiovascular risks.In our study, weight loss induces changes in many cardiometabolic parameters and improved insulin sensitivity which helps to lower the glucose appearance in the blood.The Diabetes Prevention Programme (DPP) (2002) showed that a 7% reduction in body weight from baseline has a significant impact on the glucose metabolism in prediabetes.The data from O' Gorman et al. (2006) showed that acute exercise training improves GLUT-4 response, which facilitates the glucose transport into the cell, which lowers blood glucose.Our participants in the FET group performed exercise for 12 weeks, which could have improved the GLUT-4 effectiveness that would result in lower 2-hrs glucose and glucose AUC (p<0.05). Additionally, the fasting plasma insulin in the FET group significantly changed (p<0.05) at the end of the study, but the magnitude of change was not statistically significant compared to the CON group.It is speculated that functional exercise training exerted a certain effect on plasma insulin response.Our finding is supported by the study conducted by Rice, Janssen, Hudson, and Ross (1999), which concluded that physical training exerts a lowering effect on insulin concentrations in the plasma in obesity.Decreased plasma insulin concentration after physical training could be due to either decreased insulin secretion or an increase in peripheral clearance of insulin rate, or both (Eriksson et al., 1998;Pratley et al., 2000). Limitation The authors understand that the small sample size is a limitation of this study.It is difficult to find and recruit obese individuals with IGT that are not taking any medications or have other comorbidities.Despite the small sample size, the study was able to show the effects of 12 weeks of functional exercise training. Conclusion This study illustrates that 12 weeks of functional exercise training performed in a circuit manner is an effective means of inducing body weight change, increasing exercise capacity, and alters the cardiometabolic variables such as triglycerides HDL-C, 2-hrs glucose, and glucose AUC.It appears that the functional exercise training programme can be utilized as a cost-effective therapeutic means to help manage obesity and impaired glucose metabolism.Exercise training remains a cornerstone intervention for blood glucose management.The functional exercise training programme can be an alternative form of exercise for obese individuals. FIGURE 1 . FIGURE 1. Description of Functional Exercise Training Programme Table 1 . Description of Functional Exercise Training Programme Table 2 . Baseline characteristics of FET and CON groups Legend: * Statistically significant between-group baseline training (p<.05);BW-Body weight; Ht-Height; FPG-fasting plasma glucose; glucose AUC-glucose area under the curve; HbA1c-Glycosylate haemoglobin; FPI-fasting plasma insulin and HOMA-IR-homeostasis model assessment of insulin resistance Table 3 . Changes in body weight, body mass index; BMI, waist circumference, VO 2 max and lipid profiles at baseline and after 12 weeks of training in FET and CON groups Table 4 . Changes in metabolic and glycaemic at baseline and after 12 weeks of training in FET and CON groups Table 5 . 
Absolute changes in body weight, body mass index (BMI), waist circumference, VO2max, lipid profiles, and metabolic and glycaemic variables after 12 weeks of training between the FET and CON groups. FIGURE 2. A. Training-induced changes in glucose values at each time point during the OGTT; B. Change in glucose AUC following 12 weeks of training. *Statistically significant (p<.05).
v3-fos-license
2022-07-29T06:17:43.055Z
2022-07-26T00:00:00.000
251133232
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1422-0067/23/15/8240/pdf?version=1658913760", "pdf_hash": "5f5793da306f268a899f6630932d79298f6c5a52", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43682", "s2fieldsofstudy": [ "Biology" ], "sha1": "6b7a39dec13fc2cac33793af60c5ee9bd715306d", "year": 2022 }
pes2o/s2orc
Microglia Remodelling and Neuroinflammation Parallel Neuronal Hyperactivation Following Acute Organophosphate Poisoning Organophosphate (OP) compounds include highly toxic chemicals widely used both as pesticides and as warfare nerve agents. Existing countermeasures are lifesaving, but do not alleviate all long-term neurological sequelae, making OP poisoning a public health concern worldwide and the search for fully efficient antidotes an urgent need. OPs cause irreversible acetylcholinesterase (AChE) inhibition, inducing the so-called cholinergic syndrome characterized by peripheral manifestations and seizures associated with permanent psychomotor deficits. Besides immediate neurotoxicity, recent data have also identified neuroinflammation and microglia activation as two processes that likely play an important, albeit poorly understood, role in the physiopathology of OP intoxication and its long-term consequences. To gain insight into the response of microglia to OP poisoning, we used a previously described model of diisopropylfluorophosphate (DFP) intoxication of zebrafish larvae. This model reproduces almost all the defects seen in poisoned humans and preclinical models, including AChE inhibition, neuronal epileptiform hyperexcitation, and increased neuronal death. Here, we investigated in vivo the consequences of acute DFP exposure on microglia morphology and behaviour, and on the expression of a set of pro- and anti-inflammatory cytokines. We also used a genetic method of microglial ablation to evaluate the role in the OP-induced neuropathology. We first showed that DFP intoxication rapidly induced deep microglial phenotypic remodelling resembling that seen in M1-type activated macrophages and characterized by an amoeboid morphology, reduced branching, and increased mobility. DFP intoxication also caused massive expression of genes encoding pro-inflammatory cytokines Il1β, Tnfα, Il8, and to a lesser extent, immuno-modulatory cytokine Il4, suggesting complex microglial reprogramming that included neuroinflammatory activities. Finally, microglia-depleted larvae were instrumental in showing that microglia were major actors in DFP-induced neuroinflammation and, more importantly, that OP-induced neuronal hyperactivation was markedly reduced in larvae fully devoid of microglia. DFP poisoning rapidly triggered massive microglia-mediated neuroinflammation, probably as a result of DFP-induced neuronal hyperexcitation, which in turn further exacerbated neuronal activation. Microglia are thus a relevant therapeutic target, and identifying substances reducing microglial activation could add efficacy to existing OP antidote cocktails. Introduction Organophosphates (OPs) are a family of organic compounds that includes highly toxic chemicals widely used as pesticides, flame retardants, plasticizers, and to a lesser extent, warfare nerve agents, making OP poisoning a major public health issue worldwide. In recent years, several million intoxications have been reported annually, causing more than 200,000 deaths, primarily suicides [1,2]. OPs bind covalently to acetylcholinesterase (AChE) and irreversibly inhibit its activity, inducing a large accumulation of acetylcholine (ACh) at cholinergic synapses and, therefore, hyperactivation of acetylcholine receptors (AChR), resulting in the so-called cholinergic syndrome. At the neuronal level, AChR overstimulation causes a massive hyperexcitation of cholinergic neurons with large glutamate release and excitotoxicity, and eventually neuronal death [3]. 
In poisoned humans and preclinical models, acute OP exposure induces peripheral manifestations and impacts the central nervous system, with seizures that may worsen into status epilepticus (SE), a life threat if not quickly treated [4]. Besides direct neurotoxicity, OP poisoning can trigger long-term neuronal disorders, such as OP-induced delayed neuropathy (OPIDN), a complex syndrome associating cognitive and psychomotor deficits [5]. Existing countermeasures combine the AChR antagonist atropine with an AChE reactivator oxime such as pralidoxime (2-PAM), and a γ-aminobutyric acid (GABA) receptor agonist of the benzodiazepine family, such as diazepam. However, while such antidote cocktails do mitigate the acute toxicity of OPs, they must be delivered in the very first minutes after exposure and they do not alleviate all long-term neurological deficits [6]. More potent countermeasures with extended therapeutic and temporal windows are thus needed. Besides neuronal hyperexcitation, data from preclinical models, mostly rodents, have shown that exposure to OPs rapidly induces massive, sustained brain inflammation [7][8][9], a harmful condition that may be at least partly responsible for the long-term psychomotor comorbidities observed in intoxicated patients and animal models [10]. Specifically, it has been clearly established that neuroinflammation creates an environment that is not conducive to the maintenance of brain homeostasis and can promote epileptogenesis [11]. In line with these findings, Gonzales et al. [12] recently showed that microglial cells, the brain-resident macrophages which are key actors in brain inflammation, likely play an important role in the cognitive deficits observed 1 month post-acute OP intoxication in juvenile rats. Importantly, at the therapeutic level, the evidence suggests that agents able to mitigate the OP-induced neuroinflammation could be a promising additional treatment in future antidote cocktails to relieve both the immediate effects of OPs and their long-term consequences. We previously described a zebrafish model of acute OP intoxication using diisopropylfluorophosphate (DFP), a prototypic OP structurally similar to the G-class nerve agent sarin and widely used in toxicological research due to its moderate toxicity and low volatility [10,13]. This zebrafish model of acute DFP poisoning reproduces almost all the major neuropathological defects seen in exposed humans and preclinical models, including AChE inhibition, massive neuronal excitation leading to epileptiform activity, an imbalance of glutamatergic/GABAergic synaptic activity, and increased neuronal death [14]. Here, we used in vivo imaging and transgenic lines encoding fluorescent reporter proteins, combined with genetic microglia ablation method, to: (1) assess whether our zebrafish model of acute DFP poisoning faithfully reproduced the brain inflammatory response observed in rodent models, (2) study the role played by microglia in DFP-induced neuroinflammation, and (3) investigate the consequences of microglia-mediated inflammation on the subsequent functioning of neuronal networks in individuals acutely exposed to DFP. Our findings confirm that microglia are key players in DFP-induced neuroinflammation and, more importantly, that this inflammatory environment of the brain may further exacerbate DFP-induced excitability of neuronal networks. 
Our results therefore suggest that microglia are a novel therapeutic target to identify compounds mitigating OP-induced neuroinflammation, and that they could be used to improve the efficacy of existing antidote cocktails. DFP Exposure Induced Dramatic Phenotypic Remodelling of Microglia To investigate the consequences of acute DFP poisoning on the physiology of microglia, we first made use of our well-established zebrafish model of DFP-intoxication [14] and the zebrafish transgenic line Tg[mpeg1:mCherryF], which enables live imaging of these cells in (B ,B ) Magnification of the white-circled microglia from a control larva (B ), and corresponding 3D reconstruction (B ). (C ,C ) Magnification of the white-circled microglia from a DFP-treated larva (C ), and corresponding 3D reconstruction (C ). Scale bar: 10 µm. (D-I) Changes in microglia morphological parameters: sphericity (Sp) (scaled from 0, fully disordered morphology, to 1, perfect sphericity) (D), surface area (S) (E), volume (V) (F), mean branch number (NB) (G), total branch length (TL) (H), and mean branch length (ML) (I) in control (DMSO) (N = 13 embryos, n = 327 cells) and DFPtreated larvae (DFP) (N = 14 larvae, n = 294 cells). (J) Sholl analysis of microglia branch complexity in control (black) and DFP-exposed larvae (blue). Error bars on all graphs represent the standard error of the mean (SEM). Statistics: ***, p < 0.001; n.s., not significant. (K) Clustering of microglial cell populations in control (DMSO) and DFP-exposed larvae (DFP), using five of the previously described morphological parameters (Sp, NB, TL, ML, and S) (see Materials and Methods). Each column corresponds to a single microglial cell, and each parameter is scaled from black ('resting' state) to red ('activated' state); black dotted lines separate the different microglial populations (ramified, transitional, and amoeboid). To further characterize the consequences of acute DFP poisoning on microglial morphological changes, we performed a cluster analysis of these cells in control and DFP-treated larvae, based on the five morphological parameters that significantly changed following DFP exposure, namely Sp, S, NB, ML, and TL, as indicated above (see Materials and Methods). Results showed that microglial cells could be clustered into three distinct populations in controls ( Figure 1K). The largest cluster comprised 47.1% of the cells and included microglia showing highly branched morphology and low sphericity, likely corresponding to 'resting' microglia. The smallest cluster, corresponding to 14.7% of the cells, represented microglia with a low process number and a high sphericity, likely corresponding to M1type 'activated' microglia. The third cluster, which contained 38.2% of the cells, comprised microglia displaying both an intermediate branch number and an intermediate sphericity. We refer to these cells as 'intermediate' microglia. In contrast, only two main clusters were observed in DFP-treated larvae ( Figure 1K). The larger one, which contained 73.1% of the cells, included microglia resembling 'activated' microglia. The other cluster, comprising 25% of the cells, contained microglia showing the 'intermediate' phenotype as defined above. It is of note that fewer than 2% of the microglia showed a 'resting' phenotype in DFP-exposed larvae, compared to 47% in controls, suggesting that DFP caused a massive, brain-wide microglial activation. 
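The Materials and Methods are not reproduced in this excerpt, so the exact clustering algorithm behind the three microglial populations is not stated here. The sketch below shows one plausible way to carry out such an analysis, grouping cells with k-means on the five scaled morphological parameters (Sp, S, NB, ML, TL); the choice of k-means, the value k = 3, and the feature values are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows = individual microglial cells; columns = Sp, S, NB, ML, TL
# (sphericity, surface area, branch number, mean and total branch length).
# Values below are placeholders, not measurements from the study.
features = np.array([
    [0.35, 820.0, 9, 14.2, 128.0],   # ramified-looking cell
    [0.52, 640.0, 5,  9.8,  49.0],   # transitional cell
    [0.88, 310.0, 1,  4.1,   4.1],   # amoeboid-looking cell
    # ... one row per segmented cell
])

scaled = StandardScaler().fit_transform(features)   # put parameters on a common scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Fraction of cells per cluster, comparable to the ramified/transitional/amoeboid
# proportions reported for control and DFP-treated larvae.
for k in range(3):
    print(f"cluster {k}: {np.mean(labels == k):.1%} of cells")
```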
Because the above data indicate that microglia displayed deep morphological changes after 6 h of acute DFP exposure, we next investigated the dynamics of these changes using real-time confocal imaging on live 5 dpf Tg[mpeg1:mCherryF] larvae during 6 h of exposure to 15 µM DFP. In agreement with our previous data [16], prior to DFP addition, microglia were highly branched with several long processes that permanently scan their environment and neighbour cells, and during the first 2 h of DFP exposure, no significant morphological changes could be detected (Figure 2A,B). In contrast, from 2.5 h of exposure, clear remodelling of the cells was observed ( Figure 2C,D), which included an increased sphericity and a decrease in the length of the processes. After 3.5 h of exposure, almost all the cells displayed a phenotype resembling 'activated' microglia ( Figure 2E,F). Excerpts of these representative videos are shown in Supplemental Video S1 (https://urlz.fr/iNwl, accessed on 11 July 2022) for 5 dpf wild-type embryos and Supplemental Video S2 for 5 dpf DFP-treated embryos (https://urlz.fr/iNz4, accessed on 11 July 2022). We next tracked microglia in WT and DFP-treated embryos and measured the distance travelled by these cells (Supplementary Figure S1). Results show microglia travelled a greater distance in DFP-treated embryos than in WT larvae (Supplementary Figure S1A-C). addition, microglia were highly branched with several long processes that permanently scan their environment and neighbour cells, and during the first 2 h of DFP exposure, no significant morphological changes could be detected (Figure 2A,B). In contrast, from 2.5 h of exposure, clear remodelling of the cells was observed ( Figure 2C,D), which included an increased sphericity and a decrease in the length of the processes. After 3.5 h of exposure, almost all the cells displayed a phenotype resembling 'activated' microglia ( Figure 2E,F). Excerpts of these representative videos are shown in Supplemental Video S1 (https://urlz.fr/iNwl, accessed on 11 July 2022) for 5 dpf wild-type embryos and Supplemental Video S2 for 5 dpf DFP-treated embryos (https://urlz.fr/iNz4, accessed on 11 July 2022). We next tracked microglia in WT and DFP-treated embryos and measured the distance travelled by these cells (Supplementary Figure S1). Results show microglia travelled a greater distance in DFP-treated embryos than in WT larvae (Supplementary Figure S1A-C). DFP Exposure Induced Microglia-Mediated Overexpression of Inflammatory Cytokines Microglia phenotypic changes observed in DFP-exposed larvae were highly reminiscent of those of 'activated' M1-type microglia observed not only in human epileptic brains [18], but also in DFP-exposed rats [19]. To confirm that the remodelling of microglia observed in DFP-treated larvae does reflect an M1-like inflammatory type of activation of these cells, we next investigated, by qRT-PCR analysis of whole-body RNAs, the expression levels of transcripts encoding a set of pro-inflammatory (Il1β and Il8) and immuno-modulatory cytokines (Il4), before and at different time points during a 6 h exposure to either 1% DMSO or 15 µM DFP. 
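A minimal sketch of how the travelled distance reported above could be computed from time-lapse centroid tracks is given below; the coordinates and their units are made-up examples, not tracking data from the study.

```python
import numpy as np

def travelled_distance(track_xy) -> float:
    """Sum of frame-to-frame displacements (same units as the coordinates)."""
    track_xy = np.asarray(track_xy, dtype=float)
    steps = np.diff(track_xy, axis=0)
    return float(np.sum(np.linalg.norm(steps, axis=1)))

# Hypothetical centroid positions (micrometres) of one microglial cell
# over successive confocal frames.
track = [(10.0, 5.0), (11.2, 5.4), (13.0, 6.1), (16.5, 8.0)]
print(f"distance travelled: {travelled_distance(track):.1f} um")
```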
In close agreement with the results obtained in preclinical models of DFP intoxication [20][21][22], our data first revealed a massive expression of il1β (fold change (fc): 413 ± 67, p < 0.0001) and DFP Exposure Induced Microglia-Mediated Overexpression of Inflammatory Cytokines Microglia phenotypic changes observed in DFP-exposed larvae were highly reminiscent of those of 'activated' M1-type microglia observed not only in human epileptic brains [18], but also in DFP-exposed rats [19]. To verify that the massive overexpression of RNAs encoding pro-inflammatory cytokines observed in DFP-exposed larvae did reflect brain inflammation, we next investigated, by qRT-PCR analysis of RNAs extracted from dissected brains, the expression levels of transcripts encoding the same set of cytokines (Il1β, II8, and Il4) in larvae exposed for 6 h to either DMSO or DFP. Results confirmed that a 6 h DFP exposure induced massive expression of il1β (fc: 189 ± 37, p < 0.01) and il8 (fc: 42 ± 19, p < 0.05), and increased expression of il4 transcripts (fc: 2.9 ± 0.7, p < 0.05) in the brain of exposed larvae (Figure 4), confirming that DFP exposure induced bona fide brain inflammation. Figure 4. DFP exposure induced massive brain inflammation. Expression levels of transcripts encoding cytokines Il1β, Il8, and Il4, relative to that of reference tbp transcripts, in dissected brain RNAs from larvae exposed for 6 h to 15 µM DFP or DMSO (in each condition, N = 5 samples, n = 10 brains/sample). Error bars on all graphs represent the standard error of the mean (SEM). Statistics: *, p < 0.05; **, p < 0.01; n.s., not significant. Kinetics of Neuronal Activity in DFP-Exposed Larvae We previously showed that larvae exposed to 15 µM DFP displayed massive neuronal hyperactivation from 1 h to 1.5 h of exposure to DFP [14]. To further investigate this point, we studied, by qRT-PCR analysis of RNAs extracted from larvae at different time points of DFP exposure, the temporal expression profile of fosab, an immediate early gene (IEG) whose expression is an early, sensitive marker of neuronal activation, especially epileptiform seizures [23]. A significantly increased expression of fosab transcripts was detected in larvae exposed to DFP for 1 h, (fc: 2.89 ± 0.4, p < 0.01), which then gradually increased over the next 5 h (fc: 21.7 ± 2.9, p < 0.001) ( Figure 5). This result suggests, in agreement with our previous calcium imaging data and the results in rodent models [8, 14,21] that neuronal hyperactivation in zebrafish larvae is an early consequence of DFP poisoning that has already started as early as 1 h post-exposure. . DFP exposure induced massive brain inflammation. Expression levels of transcripts encoding cytokines Il1β, Il8, and Il4, relative to that of reference tbp transcripts, in dissected brain RNAs from larvae exposed for 6 h to 15 µM DFP or DMSO (in each condition, N = 5 samples, n = 10 brains/sample). Error bars on all graphs represent the standard error of the mean (SEM). Statistics: *, p < 0.05; **, p < 0.01; n.s., not significant. Kinetics of Neuronal Activity in DFP-Exposed Larvae We previously showed that larvae exposed to 15 µM DFP displayed massive neuronal hyperactivation from 1 h to 1.5 h of exposure to DFP [14]. 
To further investigate this point, we studied, by qRT-PCR analysis of RNAs extracted from larvae at different time points of DFP exposure, the temporal expression profile of fosab, an immediate early gene (IEG) whose expression is an early, sensitive marker of neuronal activation, especially epileptiform seizures [23]. A significantly increased expression of fosab transcripts was detected in larvae exposed to DFP for 1 h, (fc: 2.89 ± 0.4, p < 0.01), which then gradually increased over the next 5 h (fc: 21.7 ± 2.9, p < 0.001) ( Figure 5). This result suggests, in agreement with our previous calcium imaging data and the results in rodent models [8, 14,21] that neuronal hyperactivation in zebrafish larvae is an early consequence of DFP poisoning that has already started as early as 1 h post-exposure. Inflammatory Cytokines Expression in DFP-Treated Larvae without Microglia Two glial cell types mediate brain inflammation, including that induced by acute DFP exposure: microglial cells and astrocytes [24,25]. Accordingly, we next undertook to Figure 5. DFP exposure rapidly induced massive neuronal activation. Expression levels of fosab RNA relative to that of tbp RNA transcripts, from control (DMSO) and DFP-exposed (DFP) larvae, at different time points of exposure (in each condition, N = 8 samples, n = 7 larvae/sample). Error bars on all graphs represent the standard error of the mean (SEM). Statistics: **, p < 0.01; ***, p < 0.001. Inflammatory Cytokines Expression in DFP-Treated Larvae without Microglia Two glial cell types mediate brain inflammation, including that induced by acute DFP exposure: microglial cells and astrocytes [24,25]. Accordingly, we next undertook to assess the role played by microglia in the neuroinflammatory process induced by DFP exposure. To this end, we analysed the expression levels of the same three cytokine RNAs and tnfα, which encodes one of the main pro-inflammatory cytokines, in larvae fully devoid of microglia as the result of morpholino-oligonucleotide-mediated inactivation of the pU.1 gene, hereafter referred to as pU.1 morphants [26]. Results indicated that il1β, il8, and tnfα transcripts were still overexpressed in pU.1 morphants exposed for 6 h to DFP, albeit at markedly lower levels than observed in their wild-type counterparts (fc: 134 ± 44.3 vs. 413 ± 94, p < 0.01, fc: 15.7 ± 4.7 vs. 44 ± 6, p < 0.001, and fc: 4.8 ± 1.6 vs. 22.7 ± 4.3, p < 0.01, respectively), suggesting that while microglial cells are important players in DFPinduced neuroinflammation, other cells are also involved in the process. In contrast, no overexpression of il4 was detected in pU.1 morphants exposed to DFP (fc: 1.2 ± 0.2, p = 0.81) (Figure 6), supporting the hypothesis that microglia were the main cell type overexpressing this cytokine following DFP exposure. . Microglia are key players in DFP-induced inflammation. Expression levels of transcripts encoding cytokines Il1β, Il8, Il4, and Tnfα relative to that of reference tbp transcripts, from control larvae and pU.1 morphants exposed for 6 h to either 1% DMSO (DMSO) or 15 µM DFP (DFP) (in each condition, N = 7-8 samples, n = 7 larvae/sample). Error bars on all graphs represent the standard error of the mean (SEM). Only statistically significant differences between samples are shown: *, p < 0.05; **, p < 0.01; ***, p < 0.001. 
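The fold-change (fc) values quoted throughout are expression levels of each transcript relative to the tbp reference gene. A common way to obtain such numbers from qRT-PCR data is the 2^(-ΔΔCt) method sketched below; the method choice and the Ct values are assumptions, since this excerpt does not give the exact formula used.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method (Livak & Schmittgen)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to reference gene (e.g. tbp)
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                  # treated vs. control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values only: il1b vs. tbp in DFP-exposed and DMSO control larvae.
fc = fold_change_ddct(ct_target_treated=18.4, ct_ref_treated=22.0,
                      ct_target_control=26.9, ct_ref_control=22.2)
print(f"il1b fold change ~ {fc:.0f}x")
```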
DFP-Induced Neuronal Hyperactivation Was Markedly Reduced in Larvae without Microglia It has long been known that brain inflammation creates an environment that favours neuronal hyperexcitation and epileptogenesis [27]. Therefore, we next set out to evaluate the consequences of microglia activation and subsequent inflammation on the neuropathological processes induced by DFP poisoning. For this purpose, we used pU.1 morphants lacking microglia and showing reduced inflammatory response to DFP to study the consequences of the absence of microglia on DFP-induced neuronal activation as revealed by fosab transcript expression. DFP-Induced Neuronal Hyperactivation Was Markedly Reduced in Larvae without Microglia It has long been known that brain inflammation creates an environment that favours neuronal hyperexcitation and epileptogenesis [27]. Therefore, we next set out to evaluate the consequences of microglia activation and subsequent inflammation on the neuropathological processes induced by DFP poisoning. For this purpose, we used pU.1 morphants lacking microglia and showing reduced inflammatory response to DFP to study the consequences of the absence of microglia on DFP-induced neuronal activation as revealed by fosab transcript expression. The results indicated that fosab RNAs were still overexpressed in pU.1 morphants exposed to DFP, albeit at significantly reduced levels when compared to that observed in their wild-type counterparts (fc: 7.3 ± 1.5 vs. 22.2 ± 2.9, p < 0.001) (Figure 7), suggesting that DFP-induced neuronal hyperactivation was markedly reduced in larvae without microglia. the consequences of the absence of microglia on DFP-induced neuronal activation as re-vealed by fosab transcript expression. Discussion The first important finding of the present study is that acute DFP poisoning in zebrafish larvae rapidly triggered the activation of microglial cells and the synthesis of inflammatory mediators. Here we used in vivo imaging of microglia to describe the dynamics of microglia/macrophage activation after DFP exposure. These microglial morphological changes comprised a rounding of the cells and a decrease in both the number and length of their branches, associated with an increased distance travelled by microglial cell bodies. Interestingly, our cluster analysis further revealed the extent of this remodelling with the population of resting microglia, which decreased from 47.1% to 2% after 6 h of exposure, while that of activated microglia increased from 14.7% to 73.1% over the same period. Such phenotypic remodelling of microglia/macrophages has already been described, and is characteristic of cells committed to an M1-like macrophage activation observed in different brain injury situations, including OP poisoning and various forms of epilepsy [18,19]. We previously showed that zebrafish larvae exposed for 6 h to DFP displayed a marked neuronal hyperexcitation, likely due to a shift in the synaptic excitation/inhibition balance towards an excitatory state associated with an increased number of apoptotic neurons in the brain [14]. The deep remodelling of microglia observed in DFP-exposed larvae might thus reflect an increase in the phagocytic capacities of these cells to cope with the increasing neuronal death and synaptic pruning. 
Moreover, consistent with the deep remodelling of microglia, we also showed that DFP induced a massive expression of inflammatory cytokines Il1β, Tnfα, and Il8, confirming that DFP poisoning induced a massive and brainwide activation of microglial cells towards an M1-like inflammatory phenotype. These results are in close agreement with the data from rodent models of DFP poisoning, which showed that DFP exposure triggers a robust neuroinflammatory response resulting from the activation of both microglia and astrocytes [10,19,21,[28][29][30][31]. In preclinical rodent models, several teams have reported that DFP poisoning triggers a neuroinflammatory response as early as 1-2 h post-exposure [21,29], making this response an early event in the physiopathology of OP poisoning. Using real-time recording of microglial morphological changes in live transgenic Tg[mpeg1:mCherryF] larvae, we first showed here that phenotypic reprogramming could be detected after approximately 2.5 h of DFP exposure and was clearly seen after 3.5 h of exposure. This result was further refined by the qRT-PCR analysis of the expression of RNAs encoding inflammatory cytokine, which showed significant increases of il8 and il1β transcripts after 1 h and 2 h of DFP exposure, respectively. In the zebrafish DFP model, the increased expression of both il8 and fosab RNAs was detected after 1 h of exposure, making it difficult to establish a causal link between neuronal hyperactivation and neuroinflammation. Thus, as was shown in rodents, in the zebrafish acute DFP model, microglia-mediated neuroinflammation is an early event in OP poisoning that starts as early as 1 h post-exposure. Several authors have hypothesized that neuroinflammation of the brain observed after DFP exposure, and more generally OP poisoning, is the consequence of the excitotoxicity induced by the massive release of glutamate after the overstimulation of AChRs in brain neurons. Data have shown that the severity of seizures in rodents acutely exposed to DFP is positively correlated with the neuroinflammatory response [32]. Moreover, administration of either a low dose of anaesthetic urethane or diazepam to rats during early stages of DFP intoxication, 1 h or 10 min, respectively, markedly mitigated not only neuronal hyperactivation, but also microglia activation and astrogliosis [28,30]. More generally, it has long been known that epileptic seizures trigger a marked neuroinflammatory response that includes the differentiation of microglial cells towards an activated state and the production of inflammatory mediators [33,34]. In human patients with pharmaco-resistant epilepsy, post-mortem analysis of brain tissues revealed a significant microglial activation, which was directly correlated with seizure severity [18,34]. Thus, although the hypothesis of a direct effect of DFP on microglia activation cannot be formally ruled out, the massive microglia-mediated inflammation observed in DFP-exposed larvae is likely caused, directly or indirectly, by the overactivation of cholinergic neuronal networks in the brain. Using a genetic method, we produced larvae completely devoid of microglia and showed that they displayed a markedly reduced inflammatory response to DFP compared to that of their wild-type counterparts. In particular, we observed an approximately fourfold reduction in the expression levels of il1β, tnfα, and il8 transcripts, strong evidence that microglia play a major role in the inflammatory response following DFP poisoning. 
However, the inflammatory response observed in larvae without microglia, albeit reduced, also suggests that other cell types are involved in the process, likely activated astrocytes. Thus, as already described in DFP-exposed rats [19] and mice [35], our data suggest that DFP-induced neuroinflammation in zebrafish larvae is first mainly mediated by the activation of microglia. In contrast, the expression of RNAs encoding the immuno-modulatory Il4 cytokine, which was increased in the brain of larvae exposed for 5 h and 6 h to DFP, was not augmented in microglia-depleted individuals similarly exposed, suggesting that overexpression of this cytokine is mainly mediated by microglial cells in exposed larvae. This result also suggests that following DFP poisoning, microglia first respond through M1like activation and synthesis of inflammatory mediators, followed a few hours later by the expression of regulatory mediators, possibly reflecting an attempt by these cells to restore brain homeostasis. Previous results had already shown that in addition to inflammatory substances, production of anti-inflammatory mediators is also increased in microglia after epileptic seizures, demonstrating that the microglial response to seizures is not limited to the classic M1-like pro-inflammatory activation [36]. However, it is not known whether the same or distinct microglial populations are involved in the two types of responses. Future work will need to clarify this point. Using pU.1 morphants, which are devoid of microglia and displayed a significantly reduced inflammatory response to DFP intoxication, we showed that DFP-induced overexpression of fosab RNAs was significantly reduced in larvae without microglia. This result suggests that while neuronal activation was first a direct consequence of DFP-induced CNS AChR overstimulation, inflammatory mediators synthesized by activated microglia further exacerbated neuronal activation. Moreover, the relationship between epileptic seizures and brain inflammation is complex, since besides the neuronal activation-induced neuroinflammatory process mentioned above, inflammatory stimuli have also been identified as causative agents of epileptogenesis [37][38][39][40], suggesting a possible vicious circle involving the two processes. In particular, it has been shown that inflammatory Il1β is widely implicated in epileptogenesis [27,33,35] and, more importantly, that pharmacological inhibition of Il1β signalling with anakinra, a modified recombinant isoform of the human Il1R agonist, Il1Ra, was shown to drastically reduce seizure numbers in epileptic patients unresponsive to conventional anti-epileptic drugs [41][42][43][44]. In particular, data suggested that Il1β might play a role in the pathophysiology of epilepsy through increasing glutamatergic signalling [45] and N-methyl-D-aspartate (NMDA) receptor activity [46], and decreasing GABAergic transmission [47]. Statistics. Statistical analyses were performed using GraphPad Prism 8.4.3.686 (https:// www.graphpad.com/scientific-software/prism/, accessed on 11 July 2022). Data were first challenged for normality using the Shapiro-Wilk test. Data with a normal distribution were analysed with a two-tailed unpaired t-test with or without Welch's correction, depending on the variance difference of each sample. For the statistical analysis of the results obtained with the pU.1 morphants, treated or not, with DFP, Anova tests were used. Data not showing a normal distribution were analysed using a two-tailed Mann-Whitney test. 
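A compact sketch of the test-selection logic just described is given below, using SciPy. The Shapiro-Wilk, t-test, and Mann-Whitney calls mirror the workflow above; using Levene's test to decide whether Welch's correction is needed is an assumption, since the text only says the choice depended on the variance difference between samples, and the data arrays are placeholders.

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Shapiro-Wilk for normality, then Student/Welch t-test or Mann-Whitney U."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        # Levene's test used here (an assumption) to decide on Welch's correction
        equal_var = stats.levene(a, b).pvalue > alpha
        res = stats.ttest_ind(a, b, equal_var=equal_var)
    else:
        res = stats.mannwhitneyu(a, b, alternative="two-sided")
    return res.pvalue

# Placeholder fold-change samples for two conditions (not study data)
p = compare_two_groups([1.1, 0.9, 1.3, 1.2], [3.4, 2.8, 4.1, 3.9])
print(f"p = {p:.4f}")
```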
All graphs show mean ± SEM. Conclusions In conclusion, our results confirm that the zebrafish model of acute DFP poisoning precisely reproduces the key pathological features observed in rodent preclinical models, including AChE inhibition, epileptiform seizures, neuronal death, and microglia-mediated brain inflammation. In addition, we found that larvae lacking microglia displayed markedly reduced neuronal activation following DFP exposure, suggesting that microglia-mediated neuroinflammation further potentiates DFP-induced neuronal network hyperactivation. Microglia therefore appear as a key part of a vicious circle involving neuronal activation and neuroinflammation following DFP poisoning. These cells could thus be a therapeutic target to identify substances mitigating neuroinflammatory processes. They could thereby add to existing antidote cocktails and improve their efficacy. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
v3-fos-license
2024-04-21T15:13:01.269Z
2024-04-01T00:00:00.000
269254405
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2076-2615/14/8/1226/pdf?version=1713503374", "pdf_hash": "144c210d91f8ff7b09d0f842de8e497e99e49028", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43683", "s2fieldsofstudy": [ "Computer Science", "Agricultural and Food Sciences" ], "sha1": "d0d50ebd09043f1d6d3b64306109ff9e7dd834e4", "year": 2024 }
pes2o/s2orc
Improved YOLOv8 Model for Lightweight Pigeon Egg Detection Simple Summary The utilization of computer vision technology and automation for monitoring and collecting pigeon eggs is of significant importance for improving labor productivity and the breeding of egg-producing pigeons. Currently, research both domestically and internationally has predominantly focused on the detection of eggs from poultry such as chickens, ducks, and geese, leaving pigeon egg recognition largely unexplored. This study proposes an effective and lightweight network model, YOLOv8-PG, based on YOLOv8n, which maintains high detection accuracy while reducing the model’s parameter count and computational load. This approach facilitates cost reduction in deployment and enhances feasibility for implementation on mobile robotic platforms. Abstract In response to the high breakage rate of pigeon eggs and the significant labor costs associated with egg-producing pigeon farming, this study proposes an improved YOLOv8-PG (real versus fake pigeon egg detection) model based on YOLOv8n. Specifically, the Bottleneck in the C2f module of the YOLOv8n backbone network and neck network are replaced with Fasternet-EMA Block and Fasternet Block, respectively. The Fasternet Block is designed based on PConv (Partial Convolution) to reduce model parameter count and computational load efficiently. Furthermore, the incorporation of the EMA (Efficient Multi-scale Attention) mechanism helps mitigate interference from complex environments on pigeon-egg feature-extraction capabilities. Additionally, Dysample, an ultra-lightweight and effective upsampler, is introduced into the neck network to further enhance performance with lower computational overhead. Finally, the EXPMA (exponential moving average) concept is employed to optimize the SlideLoss and propose the EMASlideLoss classification loss function, addressing the issue of imbalanced data samples and enhancing the model’s robustness. The experimental results showed that the F1-score, mAP50-95, and mAP75 of YOLOv8-PG increased by 0.76%, 1.56%, and 4.45%, respectively, compared with the baseline YOLOv8n model. Moreover, the model’s parameter count and computational load are reduced by 24.69% and 22.89%, respectively. Compared to detection models such as Faster R-CNN, YOLOv5s, YOLOv7, and YOLOv8s, YOLOv8-PG exhibits superior performance. Additionally, the reduction in parameter count and computational load contributes to lowering the model deployment costs and facilitates its implementation on mobile robotic platforms. 
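For readers unfamiliar with the partial convolution underlying the Fasternet Block named in the abstract, the sketch below gives a minimal PyTorch rendering of the idea: a spatial convolution is applied to only a fraction of the channels while the remaining channels pass through untouched, which is what reduces parameters and computation. The 1/4 channel ratio and the tensor sizes are illustrative assumptions, not the exact configuration of YOLOv8-PG.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution: convolve only the first `dim // n_div` channels."""
    def __init__(self, dim: int, n_div: int = 4, kernel_size: int = 3):
        super().__init__()
        self.dim_conv = dim // n_div          # channels that are convolved
        self.dim_keep = dim - self.dim_conv   # channels passed through unchanged
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, self.dim_keep], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)

# Example: a 64-channel feature map; only 16 channels go through the 3x3 conv.
x = torch.randn(1, 64, 80, 80)
print(PConv(64)(x).shape)   # torch.Size([1, 64, 80, 80])
```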
Introduction As of today, pigeon farming has rapidly emerged as an industry, with pigeons now recognized as the fourth major poultry species alongside chickens, ducks, and geese [1].Pigeon eggs are rich in nutrients such as protein, iron, and calcium, and they are easily digestible and absorbable.Consuming pigeon eggs can improve skin quality and blood circulation, and they possess detoxifying properties, making them a high-quality nutritional product [2,3].However, pigeon eggshells are relatively soft and prone to breakage from being stepped on or pecked by pigeons.In the actual production and breeding process, when the phenomenon of egg laying is detected, pigeon eggs should be immediately placed into electrical incubation equipment and replaced with fake eggs to minimize the impact on the physiological habits of pigeons, ensuring production efficiency and preventing the occurrence of abandoned hatching [4].Therefore, the rapid and accurate identification of pigeon eggs is particularly important for the management of pigeon eggs. Currently, both domestically and internationally, there are numerous techniques for detecting various types of poultry eggs using computer vision technology.With the continuous development of technology, certain characteristics and advantages have been formed.In 2008, Pourreza et al. proposed grayscale thresholding of target regions for surface-defect detection on poultry eggs [5].By comparing the ratio of the projected area after thresholding to the target region with a threshold value, they achieved a detection accuracy of 99%.In 2010, Deng et al. used image-enhancement algorithms to highlight crack features and applied visual detection to egg-crack detection, achieving a high detection accuracy of 98% [6].In 2012, Lunadei et al. designed a multispectral image-processing algorithm to differentiate stains from normal egg colors, with a recognition speed of 50 ms and a recognition rate of 98% [7].In 2014, Wang extracted 24 physical parameters for egg-crack detection, achieving a detection accuracy of 96.67% [8].In 2017, Sunardi et al. [9] applied smartphones, thermal imaging cameras, and MATLAB for poultry egg recognition, achieving a recognition accuracy of 100% [9].In the same year, Ang et al. proposed a method combining robots to statistically count and collect eggs in free-range chicken farms, with a positioning error within 2 cm [10].In 2018, Ab Nasir A et al. designed an automatic egg-grading system with positioning and recognition accuracies exceeding 95% [11].In 2023, Li et al. used an improved YOLOv7 network (MobileOne-YOLO) for detecting fertilized duck eggs, significantly improving FPS performance by 41.6% while maintaining the same accuracy as YOLOv7 [12].With the development of computer vision and deep learning, the accuracy of many poultry egg-detection models has exceeded 95%, and they perform well in complex background environments.However, most methods only focus on accuracy and do not consider model parameter count and computational load [13]. In 2023, researchers developed machine vision solutions for egg detection on conveyor belts.Huang et al. utilized the CA attention mechanism, BiFPN, and GSConv to enhance YOLOv5 and combined it with byte-tracking algorithms to detect broken eggs [14].On the other hand, Luo et al. improved YOLOv5 using BiFPN and CBAM to detect leaky eggs [15].Both Huang and Luo et al. 
employed different approaches to enhance YOLOv5 for detecting egg defects.It is worth noting that their methods were installed above or at the end of the egg conveyor belt.However, pigeon eggs, with lower shell strength and greater fragility compared to eggs from other poultry such as chickens, ducks, and geese, are not typically transported using conveyor belts.Instead, they are inspected and collected manually, leading to significant manual labor and issues such as egg breakage and embryo death due to prolonged exposure in pigeon coops.To enhance the level of intelligence and automation in pigeon egg breeding, and to increase labor productivity, it is crucial to use pigeon egg-picking robots for detecting egg laying and collecting pigeon eggs.The key to the development of pigeon egg-picking robots lies in the development of precise and efficient pigeon egg-detection algorithms. Currently, research on egg detection has predominantly focused on poultry such as chickens, ducks, and geese, leaving pigeon egg recognition largely unexplored.Moreover, existing deep learning models based on image and video understanding mostly remain in the research stage.Many existing systems rely on cloud computing models, and few scholars have deployed algorithms on embedded devices to accelerate model deployment.Therefore, this study established a comprehensive database of pigeon eggs from Silver King pigeons, with multiple angles.It distinguished fake pigeon eggs by labeling and specifically adopted C2f-Faster-EMA (CFE) and C2f-Faster (CF) to replace C2f in the backbone and neck networks.Additionally, it introduced the dynamic upsampler Dysample and designed the EMASlideLoss classification loss function to improve the YOLOv8 object-detection algorithm.The proposed YOLOv8-PG algorithm model is efficient and lightweight, making it more suitable for deployment on edge devices and pigeon egg-picking robots, thereby contributing to the scientific nature of egg breeding processes for pigeons and reducing manual costs.Comparison with other common object-detection algorithms in pigeon egg recognition tasks shows that YOLOv8-PG outperforms most models in terms of both accuracy and efficiency. Data Acquisition From 1 April 2022 to 15 May 2022, video data of pigeon eggs from 150 pairs of breeding pigeons were collected daily.The Silver King pigeon breed was selected as the focus of the study.Custom brackets with Hikvision T12HV3-IA/PoE cameras from Hikvision, Hangzhou, China, were mounted on feeding machines in each row of pigeon coops to capture side-view videos of egg-laying activities.The collected videos were in RGB format.The speed of the cameras was approximately 1 m/min, synchronized with the speed of the feeding machines.Each pigeon coop had a width of 50 cm.Data collection was conducted for 30 min for every batch of 50 pairs of breeding pigeons.A total of 2832 original images were obtained by extracting frames from the collected videos.An illustration of the data collection setup is shown in Figure 1. 
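The 2832 images were obtained by extracting frames from the recorded videos; a minimal OpenCV sketch of such frame extraction is shown below. The sampling interval, file paths, and output naming are assumptions, since the text does not specify how frames were sampled.

```python
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, every_n_frames: int = 25) -> int:
    """Save every n-th frame of a video as a JPEG; returns the number of images written."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Hypothetical usage on one recorded coop-row video
print(extract_frames("row01_20220401.mp4", "frames/row01", every_n_frames=25))
```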
Dataset Annotation and Partition

In this study, pigeon eggs were categorized into two categories: real eggs (true) and fake eggs (false). To distinguish fake eggs, they were marked with a black permanent marker to create a cross symbol on their surface. The target extraction of both classes of pigeon eggs was performed on the collected image data, with examples shown in Figure 2. For the detection of real and fake pigeon eggs, the LabelImg image annotation tool was employed to create the dataset in COCO format, as depicted in Figure 3. A total of 2832 image samples were manually annotated. The training, validation, and testing dataset proportions were set at 8:1:2, resulting in dataset sizes of 1982, 283, and 567, respectively.

The following annotation guidelines were applied:
1. All annotated bounding boxes should accurately cover the target, closely fitting the target's edges in a minimal rectangular box format [16]. They should not exclude any part of the real or fake pigeon eggs, while also avoiding the inclusion of excessive background information.
2. Annotations for real and fake pigeon eggs should maintain consistency and strictly adhere to the requirements of the pigeon egg schema.
3. For partially occluded pigeon egg targets, the annotation should include the occluding object along with the target, ensuring that no clearly identifiable pigeon egg targets are missed.
4. If pigeon eggs are heavily obscured by pigeon cages, feathers, feces, or other breeding pigeons, making them difficult for the human eye to identify, then those targets should not be annotated.
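As a small illustration of the partition step described above, the following sketch performs a random split matching the reported subset sizes; the file layout, seed, and function name are assumptions for illustration and not the authors' code.

```python
# Hypothetical split of the annotated images into train/val/test subsets.
# Subset sizes follow those reported in the text (1982/283/567 of 2832 images).
import random
from pathlib import Path

def split_dataset(image_dir: str, sizes=(1982, 283, 567), seed: int = 0):
    images = sorted(Path(image_dir).glob("*.jpg"))
    assert sum(sizes) == len(images), "sizes must cover the whole dataset"
    rng = random.Random(seed)
    rng.shuffle(images)
    n_train, n_val, _ = sizes
    return {
        "train": images[:n_train],
        "val": images[n_train:n_train + n_val],
        "test": images[n_train + n_val:],
    }

# Example: splits = split_dataset("frames/"); len(splits["train"]) -> 1982
```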
YOLOv8 Network Model

The YOLO (You Only Look Once) series algorithms [17-19] are single-stage detection algorithms that balance detection speed with accuracy. YOLOv8, an anchor-free single-stage object-detection algorithm, has become one of the mainstream state-of-the-art (SOTA) models. It supports various computer vision tasks such as object detection, instance segmentation, and object tracking. YOLOv8 offers five scaled versions: YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x. These versions share the same principles but differ in network depth and width. Considering that the output channel numbers vary across different-scale models, a ratio parameter is used to control the channel count. The overall structure of the network includes the input (receives input images and converts them into a format that the model can process), the backbone network (extracts the target features from the image), the neck network (realizes image feature fusion), and the detection head network (the output component of the model, responsible for generating inspection results).
It is worth noting that, building upon the success of YOLOv5, YOLOv8 introduces new features and improvements to further enhance its performance and flexibility, including [20]:
1. In the backbone network and neck network, YOLOv8 incorporates the design concept of YOLOv7 ELAN [21]. It replaces the C3 structure of YOLOv5 with the C2f structure. The C2f module can combine advanced features with context information to enhance the gradient flow of the model and the feature representation capability of the network by adding additional jump connections, thus improving detection accuracy. The specific module structure is shown in Figure 4.
2. YOLOv8 replaces the detection head with a decoupled-head structure, separating the classification head from the detection head. Additionally, it switches from Anchor-Based to Anchor-Free detection.
3. Regarding the loss function, YOLOv8 separates the regression and classification tasks in object-detection prediction. For the regression task, it employs Distribution Focal Loss (DFL Loss) and Complete Intersection over Union Loss (CIoU Loss). For the classification task, it uses Binary Cross-Entropy Loss (BCE Loss).

YOLOv8-PG Model Improvement Strategy

Fasternet Block

To ensure the network has good detection performance, many improvements have focused on reducing the number of floating-point operations (FLOPs). However, the reduction in FLOPs simultaneously leads to frequent memory access by ordinary convolution operations. In this study, we replaced the Bottleneck in C2f with the Fasternet Block [22] to obtain C2f-Faster, thereby significantly reducing the parameter count and computational load of the network model.

The Fasternet Block utilizes a novel convolutional operation called PConv, which effectively extracts spatial features by reducing redundant computations and memory accesses. PConv applies filters only to a subset of input channels for feature extraction while keeping the remaining channels unchanged. Its structure is illustrated in Figure 5a. Here, h and w represent the height and width of the feature map, BN represents batch normalization, and ReLU represents a rectified linear unit. Assuming the input and output feature maps have an equal number of channels, denoted as c, k represents the kernel size and c_p represents the number of PConv channels. The FLOPs of PConv are calculated as h × w × k² × c_p². When c_p is c/4, the FLOPs of PConv become one-sixteenth of those of regular convolution. Additionally, the memory access of PConv is computed as h × w × 2c_p + k² × c_p² ≈ h × w × 2c_p, which is only a quarter of that of regular convolution.

Figure 5b illustrates the design of the Fasternet Block module. The Fasternet Block module consists of a PConv and two 1 × 1 Conv layers forming a residual block, with shortcut connections included to reuse important features. Normalization layers and activation layers are only applied after the intermediate 1 × 1 Conv layer, aiming to preserve feature diversity and achieve lower latency.
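To make the structure above concrete, the following is a minimal PyTorch sketch of a partial convolution and a FasterNet-style block written from the description in the text and in [22]; the split ratio of 1/4, the expansion factor, and the class names are illustrative assumptions rather than the authors' exact implementation.

```python
# Illustrative PConv + FasterNet-style block (a sketch, not the paper's code).
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Apply a k x k convolution to the first c_p = c // ratio channels only;
    the remaining channels are passed through untouched."""
    def __init__(self, channels: int, kernel_size: int = 3, ratio: int = 4):
        super().__init__()
        self.c_p = channels // ratio
        self.conv = nn.Conv2d(self.c_p, self.c_p, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        head, tail = x[:, :self.c_p], x[:, self.c_p:]
        return torch.cat((self.conv(head), tail), dim=1)

class FasterNetBlock(nn.Module):
    """PConv followed by two 1x1 convolutions, with a shortcut connection.
    BN and ReLU are applied only after the intermediate 1x1 convolution."""
    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.pconv = PConv(channels)
        self.pw1 = nn.Conv2d(channels, hidden, 1, bias=False)
        self.bn = nn.BatchNorm2d(hidden)
        self.act = nn.ReLU(inplace=True)
        self.pw2 = nn.Conv2d(hidden, channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pw2(self.act(self.bn(self.pw1(self.pconv(x)))))
        return x + y  # residual shortcut reuses input features

# Quick shape check: FasterNetBlock(64)(torch.randn(1, 64, 80, 80)).shape -> (1, 64, 80, 80)
```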
Efficient Multi-Scale Attention (EMA)

By incorporating attention mechanisms, algorithms can focus more on key areas or features in images and allocate more attention to them, thereby enhancing the performance of the algorithm in object detection. Due to the presence of dust, feces, feathers, and other debris in pigeon coops, there is significant interference with the detection of pigeon eggs, which prevents the original model from fully extracting features. In this study, the EMA attention mechanism [23,24] is introduced into the YOLOv8n network structure to enhance the model's focus on pigeon egg targets. This enhancement improves the model's ability to extract spatial features more effectively and reduces interference from the complex environment of pigeon coops. Specifically, by leveraging the flexibility and lightweight nature of EMA, it is incorporated into the Fasternet Block to design the Fasternet-EMA Block, as shown in Figure 6b.
The EMA module does not perform channel downsampling. Instead, it reshapes the channel dimension into a batch dimension using partial channel dimensions, avoiding downsampling through generalized convolution and preventing the loss of feature information. EMA adopts parallel substructures to reduce sequential processing in the network and decrease network depth. The structure of the EMA module is illustrated in Figure 6a. Here, h represents the height of the image, w represents the width of the image, c represents the number of channels in the image, and g represents the channel grouping.

Specifically, when presented with specific input feature maps, the EMA attention mechanism initially divides them into G sub-feature maps along the channel dimension to facilitate learning of different semantic features. The EMA module consists of three parallel branches, with two parallel branches located in the 1 × 1 branch and the third branch in the 3 × 3 branch. The 1 × 1 branch utilizes two one-dimensional global average pooling operations to encode channels along two spatial directions, while the 3 × 3 branch stacks single 3 × 3 kernels to capture multi-scale feature representations.
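The grouping and dual-branch pooling described above can be illustrated with the following PyTorch sketch, adapted from publicly available reference implementations of the EMA module; the grouping factor, the use of GroupNorm, and the cross-branch weighting step are assumptions of this sketch rather than details stated in the text.

```python
# Illustrative EMA attention module (sketch following common reference code,
# not necessarily identical to the authors' integration into the Fasternet Block).
import torch
import torch.nn as nn

class EMASketch(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        assert channels % groups == 0
        cg = channels // groups
        self.g = groups
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # 1-D pooling along width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # 1-D pooling along height
        self.conv1x1 = nn.Conv2d(cg, cg, kernel_size=1)
        self.conv3x3 = nn.Conv2d(cg, cg, kernel_size=3, padding=1)
        self.gn = nn.GroupNorm(cg, cg)
        self.agp = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        g = x.reshape(b * self.g, -1, h, w)              # G sub-feature maps
        # 1x1 branch: encode channels along the two spatial directions.
        x_h = self.pool_h(g)                             # (bg, cg, h, 1)
        x_w = self.pool_w(g).permute(0, 1, 3, 2)         # (bg, cg, w, 1)
        hw = self.conv1x1(torch.cat([x_h, x_w], dim=2))
        x_h, x_w = torch.split(hw, [h, w], dim=2)
        x1 = self.gn(g * x_h.sigmoid() * x_w.permute(0, 1, 3, 2).sigmoid())
        # 3x3 branch: multi-scale local context.
        x2 = self.conv3x3(g)
        # Cross-branch weighting via global pooling and a channel softmax.
        cg = g.shape[1]
        a1 = torch.softmax(self.agp(x1).reshape(b * self.g, cg, 1).permute(0, 2, 1), dim=-1)
        a2 = torch.softmax(self.agp(x2).reshape(b * self.g, cg, 1).permute(0, 2, 1), dim=-1)
        w12 = torch.matmul(a1, x2.reshape(b * self.g, cg, -1))
        w21 = torch.matmul(a2, x1.reshape(b * self.g, cg, -1))
        weights = (w12 + w21).reshape(b * self.g, 1, h, w).sigmoid()
        return (g * weights).reshape(b, c, h, w)

# Example: EMASketch(128)(torch.randn(1, 128, 40, 40)).shape -> (1, 128, 40, 40)
```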
Dysample

In object-detection tasks, upsampling operations are required to adjust the size of input feature maps to match the dimensions of the original image, allowing the model to effectively detect objects of various sizes and distances. Traditional upsampling methods typically rely on bilinear interpolation [25]. These methods have inherent limitations and may result in the loss of crucial image details. Moreover, traditional kernel-based upsampling processes entail a significant amount of computation and parameter overhead, which is not conducive to achieving lightweight network architectures [26]. In real-world pigeon-coop scenarios, pigeon egg images are relatively small, and issues such as pixel distortion may occur, leading to the loss of fine-grained details and difficulty in learning features during recognition tasks. To address this issue, this paper introduces Dysample [27], a highly lightweight and effective dynamic upsampler, aimed at enhancing the detection capabilities for low-resolution images or smaller pigeon egg targets, while reducing instances of false positives and false negatives. Dysample utilizes a point-based sampling method and a perspective of learning sampling for upsampling, completely avoiding time-consuming dynamic convolution operations and additional sub-networks. It requires fewer computational resources and can enhance image resolution without adding extra burden, thus improving model efficiency and performance with minimal computational cost.

The network structure of Dysample is illustrated in Figure 7. Its sampling set S consists of the original sampling grid (O) and generated offsets (G). The offsets are generated using a "linear + pixel shuffle" method, where the range of offsets can be determined by static and dynamic factors. Specifically, taking the static factor sampling method as an example, given a feature map of size c × h × w and an upsampling factor s, the feature map first passes through a linear layer with input and output channels of c and 2s², respectively. Then, it is reshaped using the pixel shuffle method into 2 × sh × sw, where 2 represents the x and y coordinates. Finally, the upsampled feature map of size c × sh × sw can be generated.
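A simplified point-sampling upsampler along these lines is sketched below; the offset scaling of 0.25, the normalization of offsets, and the class name are assumptions taken from common reference implementations rather than from this paper.

```python
# Simplified DySample-style upsampler (static scope), written from the
# description above; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DySampleSketch(nn.Module):
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        # "linear + pixel shuffle": c -> 2*s^2 channels, then rearrange to 2 x sh x sw
        self.offset = nn.Conv2d(channels, 2 * scale * scale, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        s = self.scale
        offsets = F.pixel_shuffle(self.offset(x) * 0.25, s)      # (b, 2, s*h, s*w)
        # Original sampling grid O in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, s * h, device=x.device),
            torch.linspace(-1, 1, s * w, device=x.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1).expand(b, -1, -1, -1)  # (b, s*h, s*w, 2)
        # Sampling set S = O + G: shift each output point by its learned offset.
        offsets = offsets.permute(0, 2, 3, 1) / torch.tensor([w, h], device=x.device)
        return F.grid_sample(x, grid + offsets, mode="bilinear", align_corners=True)

# Example: DySampleSketch(256)(torch.randn(1, 256, 40, 40)).shape -> (1, 256, 80, 80)
```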
EMASlideLoss

The problem of sample imbalance, wherein the quantity of easy samples is often significantly larger than that of difficult samples, has garnered widespread attention. In the SlideLoss loss function [28], the Intersection over Union (IoU) value between predicted boxes and ground truth boxes is utilized as an indicator to distinguish between easy and difficult samples. Due to the limited ability to discern difficult samples, the network model cannot effectively utilize the data during training. The weighting function f(x) is employed to assign a higher weight to difficult samples and a lower weight to easy samples, thereby ensuring that the loss function pays more attention to difficult samples. In the allocation rule, f(x) represents the sliding function operation, x denotes the Intersection over Union (IoU) between predicted boxes and ground truth, and µ represents the weight threshold.

Specifically, the SlideLoss method utilizes the average IoU value of all bounding boxes as the threshold µ, considering values below µ as negative samples and values above µ as positive samples. In this study, we employ the concept of the exponential moving average (EXPMA) [29] to optimize the parameter µ within the model, updating it as µ_t = β · µ_(t−1) + (1 − β) · θ_t, where θ_t represents all parameter weights obtained in the t-th update, µ_t denotes the moving average of all parameters in the t-th update, and β denotes the weight parameter.

The sliding average can be regarded as the average value of a variable over a certain period of time. Compared to direct assignment, the value obtained through the sliding average is smoother and less jittery in the graph, and it does not fluctuate significantly due to occasional abnormal values. The sliding average can enhance the robustness of the model on test data. Although the dataset used in this study is already quite extensive and rich, there is still a shortage in using these data to train the model due to the relatively small number of difficult samples. Therefore, this study proposes the EMASlideLoss loss function, which optimizes the SlideLoss function using the exponential moving average (EXPMA) approach to address the issue of sample imbalance and enhance the robustness of the model.
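The idea of combining an EMA-updated threshold with a slide weighting can be sketched as follows. The piecewise form of f(x) is not reproduced in the text above, so the version used here follows the SlideLoss formulation cited as [28] and should be read as an assumption; the decay value and the use of the batch-mean IoU as the update statistic are likewise illustrative.

```python
# Hedged sketch of an EMA-updated slide weighting for the classification loss.
import math
import torch

class EMASlideWeight:
    """Keep an exponential moving average of the IoU threshold mu and
    return per-sample weights that emphasise hard (near-threshold) samples."""
    def __init__(self, beta: float = 0.999, mu_init: float = 0.5):
        self.beta = beta
        self.mu = mu_init

    def update(self, ious: torch.Tensor) -> None:
        theta_t = ious.mean().item()                     # current batch statistic
        self.mu = self.beta * self.mu + (1.0 - self.beta) * theta_t

    def __call__(self, ious: torch.Tensor) -> torch.Tensor:
        mu = self.mu
        w = torch.ones_like(ious)                        # x <= mu - 0.1 keeps weight 1
        mid = (ious > mu - 0.1) & (ious < mu)
        hard = ious >= mu
        w[mid] = math.exp(1.0 - mu)                      # boost near-threshold samples
        w[hard] = torch.exp(1.0 - ious[hard])            # mild boost decaying with IoU
        return w

# Usage sketch: weighter.update(ious); cls_loss = (weighter(ious) * bce_per_box).mean()
```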
YOLOv8-PG Network Model

In summary, this paper has made improvements to the YOLOv8n model in the four aspects mentioned above. The overall network structure of the improved YOLOv8-PG model is illustrated in Figure 8. Specifically, the C2f-Faster-EMA module was designed for the backbone network, the C2f-Faster module was used to replace the C2f module in the neck network, and the Dysample upsampler was introduced to the neck network. Regarding the loss function, the EMASlideLoss classification loss function was designed.

Experimental Details

The model for detecting real and fake pigeon eggs was trained based on the PyTorch framework [30]. The training was conducted for 300 epochs with a batch size of 16. The initial learning rate was set to 0.01, and the weight decay coefficient was set to 0.0005. A warm-up training strategy was employed, with warm-up epochs set to 3 and warm-up momentum set to 0.8. To reduce memory usage and improve training speed, a mixed precision training strategy was adopted. Additionally, mosaic image augmentation was disabled in the last 10 epochs. For detailed experimental environment and hyperparameter settings, please refer to Tables 1 and 2, respectively.

Model Evaluation Index

To better evaluate the performance of the model, this experiment adopts the following metrics: F1-score (F1), mean average precision (mAP), model parameters (Params), and giga floating-point operations per second (GFLOPs). Among them, n is the number of detection classes, and two types of objects are detected in this experiment. TP represents the number of correct detections, and FP indicates the number of false detections. FN indicates the number of missed detections, and F1 represents the harmonic mean of precision and recall with a confidence threshold of 0.5. AP represents the accuracy of a certain category, and mAP represents the average accuracy over all categories. The standard forms of these quantities are written out below.

The metrics mAP50 and mAP75 correspond to different thresholds of Intersection over Union (IoU). Specifically, mAP50 is calculated using an IoU threshold of 0.50, while mAP75 uses a threshold of 0.75. On the other hand, mAP50-95 refers to the mean average precision calculated over a range of IoU thresholds from 0.50 to 0.95, with increments of 0.05. Generally, the mAP50-95 index is the most stringent as it considers a wider range of IoU thresholds, followed by mAP75, while mAP50 has the lowest threshold and less-strict requirements.
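The calculation formulas themselves did not survive extraction; the following are the standard definitions consistent with the quantities named above (our reconstruction, not a verbatim quotation of the paper).

```latex
\begin{aligned}
\mathrm{Precision} &= \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN},\\[4pt]
F1 &= \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},\\[4pt]
AP &= \int_{0}^{1} P(R)\,\mathrm{d}R, \qquad
mAP = \frac{1}{n} \sum_{i=1}^{n} AP_i .
\end{aligned}
```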
Ablation Experiment Results

To verify the effectiveness of each improvement on the YOLOv8n algorithm, we conducted ablation experiments on the test set of the same pigeon egg dataset, based on the original YOLOv8n model. The ablation experiments are as follows: A: Replaced C2f with C2f-Faster-EMA in the backbone network to reduce the interference of complex environments on pigeon egg detection. B: Replaced C2f in the neck network with C2f-Faster for lightweight processing. C: Improved the upsampling in the neck network using Dysample to enhance the detection ability of low-resolution and small-target pigeon eggs. D: Replaced the loss function with EMASlideLoss to enhance the robustness of the model. The experimental results are shown in Table 3, where "√" indicates that the model uses a particular module and "×" indicates that it does not. A: Replaced C2f with C2f-Faster-EMA in the backbone network; B: Replaced C2f with C2f-Faster in the neck network; C: Introduced Dysample in the neck network; D: Replaced the loss function with EMASlideLoss.

Table 3 shows: (1) The first group presents the experimental results of the baseline YOLOv8n, serving as the benchmark for the following four groups of experiments. Its F1 is 97.54%, and mAP50, mAP75, and mAP50-95 are 99.19%, 85.01%, and 68.86%, respectively. The parameter count is 3.006 M, and the GFLOPs value is 8.1 G. (2) The experiments from the second group to the fifth group progressively incorporate one improvement point at a time. After replacing the Bottleneck of C2f with the Fasternet-EMA Block module in the backbone network of YOLOv8n, the model's accuracy in various aspects improved, and both the computational complexity and parameter count decreased. Further introducing the Fasternet Block into the neck network C2f results in a slight decrease in accuracy, but the model becomes more lightweight, with a reduction of 0.344 M in parameters and 0.7 G in computations. After introducing the ultra-lightweight and effective dynamic upsampler Dysample, F1, mAP50-95, and mAP75 all improved, with negligible impact on parameters and computations. Finally, replacing the classification loss with EMASlideLoss does not increase the model's parameter count or computational complexity but mitigates the sample imbalance, leading to a further improvement in model accuracy. (3) The fifth group shows the results of adding all improvement points. Compared to the baseline model, the YOLOv8-PG model increased F1 by 0.76%, and the mAP50, mAP75, and mAP50-95 metrics improved by 0.14%, 4.45%, and 1.56%, respectively. Additionally, the computational complexity significantly decreased, with GFLOPs reduced from 8.1 G to 6.1 G and parameters reduced from 3.006 M to 2.318 M, representing reductions of 24.69% and 22.89%, respectively.
Experimental Comparison with Other Models To validate the superiority of the proposed algorithm, this study conducted comparative experiments with a series of object-detection algorithms, including Faster R-CNN, YOLOv5s, YOLOv7, YOLOv8n, and YOLOv8s.The experimental results demonstrate that the proposed algorithm achieves an mAP50 of 99.33%, surpassing Faster R-CNN by 9.86%.For the more stringent mAP75 metric, the proposed algorithm outperforms YOLOv5s, YOLOv8n, and YOLOv8s by 1.22%, 4.45%, and 2.06%, respectively.Compared to the YOLOv7 model, the proposed algorithm's mAP50-95 shows a slight decrease of 0.26%, but its parameter count and computational load are only 6.353% and 6.201% of the YOLOv7 model, respectively.The comparative experimental results of the algorithms are presented in Table 4.The experimental results demonstrate that the optimized YOLOv8n model proposed in this study maintains high detection rates and accuracy while reducing memory overhead and detection time.It consumes fewer memory resources compared to the original YOLOv8 model and mainstream object-detection algorithms, making it suitable for deployment on mobile terminals or mobile robotic platforms.Figure 9 visualizes the detection results of real and fake pigeon eggs for different models. Model Improvement Visualization The heatmap is a visualization technique used in object detection, which can display the distribution of intensity of the objects detected by the model in the input image.Brighter areas indicate higher attention from the model.To visually demonstrate the optimization effect of the proposed YOLOv8-PG model on the real and fake pigeon egg dataset, this study employs the Grad-CAM [31] (Gradient-weighted Class Activation Mapping) algorithm for visual analysis.Partial detection results before and after algorithm improvement are shown in Figure 10. From Figure 10, it can be observed that the pigeon farm environment is complex.When YOLOv8n is used for the detection of real and fake pigeon eggs, the background area introduces noise to the model, causing it to focus on some background areas with weaker focus.However, after adding the attention module to improve the model, the noise from the background area is significantly reduced.The model can accurately focus on the pigeon egg area, effectively enhancing the model's ability to extract features of pigeon egg targets in complex environments. 
Figure 11 illustrates the detection effects of the YOLOv8n and YOLOv8-PG models on real and fake pigeon eggs under different IoU thresholds, that is, the AP values of each category, presented in the form of heat maps. With the increase in IoU, the AP value of each category decreases gradually. When the IoU threshold is lower than 0.8, the AP value of both models is higher than 85% for the real pigeon egg target. The results show that the two models can detect real pigeon eggs well. For fake pigeon eggs, when the IoU is 0.7 to 0.8, the eggs are sticky and heavily occluded, and the YOLOv8n model cannot detect them effectively, while the YOLOv8-PG model can reduce the interference of the complex environment, and the detection accuracy is increased by 2.3%, 9%, and 9.42%, respectively.

Discussion

Since the proposal of detection models based on deep learning, they have been widely applied in various industries, with researchers making great efforts to design suitable models and continuously optimize them. Commonly used object-detection methods can be roughly divided into two categories: single-stage detectors and two-stage detectors. Xu et al. [32] deployed an improved Mask R-CNN, a two-stage detector, on mobile robots for egg detection, achieving a high accuracy of 94.18%, but it had a slow detection speed of only 0.76 FPS. Because they do not require a proposal generation stage, single-stage detectors can obtain detection results in a single pass, often resulting in higher processing speeds compared to two-stage detectors. With the iterations of the YOLO series of single-stage detectors, YOLOv8 utilizes anchor-free and decoupled heads to independently process objects, allowing each branch to focus on its own task. YOLOv8 prioritizes the balance between speed and accuracy, a crucial consideration for integrated system applications. Furthermore, YOLOv8 is more friendly towards detecting small objects, making it the chosen baseline model for our research.

Research has shown that incorporating attention mechanisms and introducing upsamplers [33-36] can effectively enhance model detection accuracy. Zeng et al. [37] proposed a YOLOv8 model based on the CBAM mechanism, which can effectively select key features of targets, achieving high-precision recognition of coal and gangue. Li et al.
[38] proposed an algorithm based on an improved YOLOv5s to achieve target detection and localization in tomato picking.This algorithm replaces upsampling with the CARAFE structure, which improves network sensitivity and accuracy while maintaining lightweightness.It is worth mentioning that different application environments and datasets require the customization, modification, and development of models based on single-stage detectors (such as YOLO).Even if the working principle of the model remains unchanged, meaningful architectural modifications must be proposed to perform sufficient customization of deep learning models.Considering the complex environment of pigeon coops and the deployment requirements of models, we improved YOLOv8 using C2f-Faster-EMA, C2f-Faster, and Dysample to reduce the interference of complex environments on the model and enhance the model's ability to detect low-resolution and small targets.Wang et al. [39] addressed the problem of imbalanced difficulty samples by introducing SlideLoss, but they used a fixed value as the threshold for discriminating difficult samples, which cannot improve the model's generalization ability.We optimized the threshold using exponential moving average (EXPMA) and proposed the EMASlideLoss loss function, effectively improving model performance and enhancing model robustness.Combining the above four improvements, this study proposes the YOLOv8-PG model, with F1, mAP50-95, and mAP75 values of 98.3%, 70.42%, and 89.46%, respectively, indicating that the model effectively recognizes real and fake pigeon eggs.The parameters and computation amount of the model are 2.318 M and 6.4 G, respectively, indicating that the model architecture is lightweight and has the potential to be deployed on embedded devices or mobile platforms. In recent years, scholars have conducted research on model deployment.Ju et al. [40] proposed the MW-YOLOv5s rice-recognition model and successfully deployed it on a weeding robot, meeting the practical requirements for both detection accuracy and speed.Yu et al. [41] utilized strategies such as SPD-Conv, CARAFE, and GSConv to propose the lightweight model SOD-YOLOv5n, with a model size of only 3.64 M. They successfully deployed the model on Android devices for the real-time detection and counting of winter jujubes.In the future, we plan to deploy the YOLOv8-PG model on robots for automated egg detection.However, the model still has some limitations.Firstly, the data collection method in this study involved installing cameras on feeders, resulting in a high installation height and wide field of view.However, the actual perspective of future intelligent eggpicking robots may be limited and more prone to occlusion issues.Secondly, the real and fake pigeon egg dataset in this study only collected images of pigeon eggs from the Silver King breed, which may introduce biases compared to eggs from other breeds.Therefore, in future research, we will further expand the dataset by adding images of pigeon eggs from different angles and different breeds to enhance the model's robustness.Additionally, to deploy the trained model on actual robots, this study will consider converting the trained PyTorch format weights to formats such as ONNX and TensorRT.Furthermore, we will utilize TensorRT for acceleration and test the model's performance on edge devices such as Jetson and Raspberry Pi using methods such as FP16 and INT8 quantization. 
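As a pointer to how the planned conversion step might look in practice, the snippet below uses the Ultralytics export API to produce ONNX and TensorRT (FP16) artifacts; the weight file name, image size, and choice of flags are assumptions for illustration, not the authors' released deployment pipeline.

```python
# Hypothetical export of trained YOLOv8-PG weights for edge deployment.
from ultralytics import YOLO

model = YOLO("yolov8-pg_best.pt")          # assumed path to the trained weights

# ONNX export (portable intermediate format for embedded runtimes).
model.export(format="onnx", imgsz=640, simplify=True)

# TensorRT engine with FP16 precision, e.g. for Jetson-class devices.
model.export(format="engine", imgsz=640, half=True)
```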
Conclusions

This paper addresses the high rate of pigeon egg breakage and the high cost of manual labor in pigeon egg breeding by proposing a pigeon egg-detection algorithm called YOLOv8-PG, based on a YOLOv8 design. This algorithm is capable of detection in complex environments with limited computational resources. Firstly, by combining the Fasternet Block with the EMA attention mechanism and introducing them into the backbone network of the algorithm, the model's feature-extraction capability for pigeon egg targets is enhanced. Secondly, introducing C2f-Faster into the neck network of the algorithm further reduces the weight of the model, reducing the model's parameter count and computational complexity. Additionally, the dynamic upsampler Dysample, based on point sampling, is introduced into the model to enhance its detection capability for low-resolution and small-target objects with minimal computational overhead. Finally, the SlideLoss loss function is optimized using the EXPMA concept, and EMASlideLoss is proposed to address the problem of sample imbalance, enhance model robustness, and improve algorithm performance.

The experimental results demonstrate that compared to the two-stage algorithm Faster R-CNN, the model designed in this study shows significant competitiveness in terms of detection accuracy. Compared to other YOLO algorithms in the same series, YOLOv8-PG reduces memory overhead and detection time while maintaining a high detection rate and accuracy. Relative to the baseline YOLOv8n, the YOLOv8-PG model shows improvements in the F1 score by 0.76% and in the mAP50, mAP75, and mAP50-95 metrics by 0.14%, 4.45%, and 1.56%, respectively. Additionally, there is a significant reduction in computational complexity, with GFLOPs decreasing from 8.1 G to 6.1 G and parameters decreasing from 3.006 M to 2.318 M.

Figure 1. Image data acquisition hardware platform.

Figure 3. Image annotation interface.
Figure 5. Fasternet Block and its key components: (a) Partial convolution; (b) Fasternet Block. * stands for the convolution operation.
Figure 7. Dysample network structure: (a) sampling-based dynamic upsampling; (b) sampling point generator in DySample. The input feature, upsampled feature, generated offset, and original grid are denoted by χ, χ′, G, and O, respectively. σ denotes the sigmoid function. sh represents the sampled height, sw represents the sampled width, and gs² represents the number of channels after the feature map passes through the linear layer.

Figure 10. Comparison of thermal map visualization results: (a) YOLOv8n; (b) YOLOv8-PG. Colors represent scalars of one order of magnitude, with hot tones (such as red or yellow) representing areas of high activity or importance, and cool tones (such as blue or green) representing areas of low activity or importance.

Figure 11. AP value heat map of each detection target.

Table 3. Comparison results of ablation experiments of different models.

Table 4. Performance comparison between various network models.
Comparing estimates of child mortality reduction modelled in LiST with pregnancy history survey data for a community-based NGO project in Mozambique Background There is a growing body of evidence that integrated packages of community-based interventions, a form of programming often implemented by NGOs, can have substantial child mortality impact. More countries may be able to meet Millennium Development Goal (MDG) 4 targets by leveraging such programming. Analysis of the mortality effect of this type of programming is hampered by the cost and complexity of direct mortality measurement. The Lives Saved Tool (LiST) produces an estimate of mortality reduction by modelling the mortality effect of changes in population coverage of individual child health interventions. However, few studies to date have compared the LiST estimates of mortality reduction with those produced by direct measurement. Methods Using results of a recent review of evidence for community-based child health programming, a search was conducted for NGO child health projects implementing community-based interventions that had independently verified child mortality reduction estimates, as well as population coverage data for modelling in LiST. One child survival project fit inclusion criteria. Subsequent searches of the USAID Development Experience Clearinghouse and Child Survival Grants databases and interviews of staff from NGOs identified no additional projects. Eight coverage indicators, covering all the project’s technical interventions were modelled in LiST, along with indicator values for most other non-project interventions in LiST, mainly from DHS data from 1997 and 2003. Results The project studied was implemented by World Relief from 1999 to 2003 in Gaza Province, Mozambique. An independent evaluation collecting pregnancy history data estimated that under-five mortality declined 37% and infant mortality 48%. Using project-collected coverage data, LiST produced estimates of 39% and 34% decline, respectively. Conclusions LiST gives reasonably accurate estimates of infant and child mortality decline in an area where a package of community-based interventions was implemented. This and other validation exercises support use of LiST as an aid for program planning to tailor packages of community-based interventions to the epidemiological context and for project evaluation. Such targeted planning and assessments will be useful to accelerate progress in reaching MDG4 targets. Background Although there are encouraging trends in some key countries, meeting Millennium Development Goal (MDG) 4 for reduction of child mortality will be challenging, given current trends. [1] Community-based intervention packages are not commonly implemented at large scale, although recent evidence demonstrates that they are effective for neonatal and child mortality reduction at moderate scale in various resource-constrained settings. [2,3] This has prompted calls for greater emphasis on community-level delivery, especially preventive interventions and integrated strategies. [2,4,5] Analysis of effectiveness of this type of programming is hampered by its cost and complexity. It is difficult to estimate the mortality impact of packages of interventions in realistic field settings, as well as effectiveness of component interventions within packages. [6] Projects implementing interventions under these conditions usually lack the resources necessary to carry out mortality impact evaluations. 
The Lives Saved Tool (LiST) produces mortality reduction estimates by modelling the mortality effect of increases in population coverage for key child health interventions. LiST calculates this by combining coverage change data with data on effectiveness of each intervention against common serious child illnesses, and country-specific cause of death profiles. This is explained in detail elsewhere. [7] By producing intuitive and equivalent outputs from otherwise disparate data, such as the percentage reduction in mortality rates and number of deaths averted, LiST facilitates comparisons that are otherwise difficult to make. Population based surveys in which mortality is directly measured are costly, difficult, and time-consuming, and LiST modelling could be an attractive alternative to estimate mortality reduction. In order to validate LiST-produced estimates of child mortality reduction in community-based NGO programming, a search was done of such projects with complete coverage data for their child health interventions and independent child mortality reduction estimates. One met criteria for inclusion. Search for community-based NGO projects Projects with data available for validation of LiST were sought based on the following criteria: 1. Study was of a community-based NGO child health project; 2. Baseline and endline population coverage indicators were available for at least two child health interventions; 3. Mortality data were available at least at baseline and endline and independently verified. A comprehensive search of the published literature had been run on PubMed by one of the authors (HP) for effectiveness of community-based interventions. The 3,000 articles from this search were reviewed and one project was identified that fit selection criteria, a USAID-funded child survival project implemented by World Relief in Mozambique from 1999-2003. [8] A search for similar projects not published in the peer-reviewed literature was then run on USAID's Development Experience Clearinghouse database (http://dec. usaid.gov) and Child Survival and Health Grants database (http://www.mchipngo.net). Five additional candidate projects were identified. Project documentation was reviewed and knowledgeable staff interviewed. None of these additional projects met inclusion criteria. Table 1 shows selected key characteristics of this project which was not a research project and had no control or comparison group. The project intervened on all major causes of under-five mortality in the area (neonatal conditions, malaria, pneumonia, diarrhea, and measles) except HIV/ AIDS. Direct mortality data were available from an independent retrospective mortality assessment carried out in 2004 by a research team from the Mozambican National Institute of Statistics, Ministry of Health, World Relief and other NGOs, and designed by collaborators from Johns Hopkins School of Public Health. This research team used a pregnancy history survey adapted from the birth history in the women's questionnaire of the 2003 Mozambique Demographic and Health Survey. Coverage data used for modeling in LiST The project collected population coverage data on seven LiST interventions pertaining to its areas of intervention. Coverage for an eighth LiST intervention (education for complementary child feeding) was not available, but data collected for increased food intake during previous pregnancy, another nutrition education intervention included in its nutrition education package (Table 2), was used as a proxy. 
These eight indicators cover all project technical intervention areas. Coverage data was collected at baseline and endline using a small-sample survey instrument known as the Knowledge, Practices, and Coverage (KPC) survey, based on DHS questions. Households were selected according to a standard cluster sampling methodology, with 30 independent clusters and 10 households in each cluster. Cluster selection was based on village-level population data, with probability of selection proportional to population size. The method used is explained in detail elsewhere. [9] The project target geographic area remained invariant from baseline to endline and comprised the entire district of Chokwe except Chokwe town (48 villages with an estimated population of 119,467 at baseline). The KPC survey collects data on multiple indicators important to the project, and the sample is designed to detect statistically significant baseline/endline differences of at least ± 16% (alpha = 0.05, beta = 0.20) if no sub-sampling is done and an indicator starts at a baseline of 50%. Ninety-five percent confidence intervals for project data used for LiST modelling are shown in Table 2. KPC surveys were carried out in October 1999 and July 2003. Project activities started in March 2000, so this is taken as the baseline year for LiST modelling. KPC surveys cover mothers/caretakers of children 0-23 months of age. The surveys were carried out by the project staff themselves. In order to minimize possible bias, interviewers were not assigned clusters in which they themselves were working in their day-to-day project activities. The data was checked for consistency by an independent team at ICF Macro before being entered in a publicly available database (http://www.mchipngo.net). The other child health indicators in LiST for which the project did not have data were reviewed. Coverage data were estimated for most of the interventions being implemented at the time. The values and sources of non-project data are shown in Table 3. Most data are from the 1997 and 2003 Demographic and Health Surveys (DHS). When 1997 data is used, its value is assumed by LiST to change linearly toward its 2003 value, and the 2000 value assigned by LiST is used as the baseline to estimate mortality reductions. (Table 1 also records that project outreach workers (socorristas) increased in number from 3 to 32, that access to trained providers of care for sick children increased from 65% to 99%, that the share of health providers trained in IMCI in the project area increased from 0% to 100%, and that the main health activities of other organizations in Gaza District during the project period included Oxfam-assisted distribution of ITNs to all women of fertile age and children under 5, NGO assistance to the MOH to train socorristas in community-based child health activities, and two national vaccination/polio eradication campaigns.)

LiST modeling

LiST is a cohort model of child survival from 0-59 months of age. Its structure and assumptions are described in detail elsewhere. [7,10] LiST provides estimates of the cause-specific child mortality impact of over 40 interventions with strong evidence of effect on child survival. The user must supply the values of changes in coverage for these interventions. LiST has country-specific baseline under-five and infant mortality rates and cause of death profiles needed for its calculations. These parameters can be manipulated by the user if desired.
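The linear change assumed for 1997 DHS coverage values, described earlier in this section, can be made explicit with a small sketch (ours, not part of LiST itself); it simply interpolates a 1997 value toward the 2003 value and reads off the year-2000 baseline.

```python
# Minimal sketch of the linear interpolation described above (not LiST code).
def interpolate_coverage(value_1997: float, value_2003: float, year: int) -> float:
    """Linearly interpolate an indicator between the 1997 and 2003 survey values."""
    frac = (year - 1997) / (2003 - 1997)
    return value_1997 + frac * (value_2003 - value_1997)

# Example (hypothetical values): an indicator at 40% in 1997 and 64% in 2003
# gives interpolate_coverage(40.0, 64.0, 2000) -> 52.0 as the 2000 baseline.
```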
The Child Health Epidemiology Reference Group (CHERG) meets periodically to weigh published evidence and determine which interventions to include in the model and what effect sizes to assign them. [11] The under-five mortality modeling is contained within the Spectrum platform, which models demographic trends given assumptions about population growth rates and prevalence of use of family planning methods. [12] Version 4.2 of the LiST tool was used for modeling and was downloaded from the Johns Hopkins Institute for International Programs web site. [7] The under-five and infant mortality rates used were those specific to the project area at baseline, as measured in the pregnancy history and described in detail elsewhere. [8] National cause of death profiles, population structure, and fertility data were used. All available coverage data, both from the World Relief Mozambique project and from other sources, were examined to determine which coverage indicators matched those in LiST. The authors discussed the indicator definitions and corresponding coverage data that best fit the interventions in LiST. Eight project indicators (Table 2) were mapped to LiST interventions. The fit between project indicators and LiST was exact for seven of the indicators. For one LiST indicator (complementary feeding) the project had no direct data, but it had intervened with a package of behavior change practices that included both maternal nutritional practices during pregnancy and child feeding practices. The project had data on the coverage for increased food intake during the last pregnancy, and this was used as a proxy for child complementary feeding practices. Of the other 21 indicators in LiST for interventions being implemented in Mozambique at the time, information was available from other sources for eight; LiST estimates the value of nine others from available data (e.g. LiST estimates coverage for syphilis screening from ANC coverage). Non-project data used for LiST modelling are summarized in Table 3. In summary, data were available for all but four of the indicators in LiST for interventions being implemented in Mozambique in the relevant time period.

Sensitivity analyses were run for the LiST estimates by varying all the parameters used in the model: coverage data were varied within the limits of the 95% confidence intervals, and the values assigned for intervention effectiveness, baseline mortality figures, and cause of death profiles were varied by ±10% as well.

An unpublished Fortran program written by one of the authors of Edward et al. takes as input the time period (beginning and end dates in months) and age group (minimum and maximum) for mortality estimation and calculates the death rate m(x) for this age group in the time period. Using the formula of Chiang [13] and the calculation of mean time lived in the interval by those dying in the interval, 1q0 and 5q0 (the probabilities of dying before ages 1 and 5, respectively) were calculated and the data plotted in a Lexis diagram. Standard errors were calculated assuming a Poisson distribution.
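As a side note on this conversion, Chiang's relation turns an observed age-interval death rate m into the probability of dying q over that interval. The sketch below illustrates the calculation under assumed values of m and of the mean time lived in the interval by those who die; all numbers are illustrative and are not taken from the project.

# Minimal sketch (not the authors' Fortran program) of Chiang's conversion
# q = n*m / (1 + (n - a)*m), where n is the interval length in years, m is the
# age-interval death rate, and a is the mean time lived in the interval by
# those dying in it. All numeric values below are illustrative assumptions.

def chiang_q(m, n, a):
    """Probability of dying in an n-year interval given death rate m."""
    return (n * m) / (1.0 + (n - a) * m)

m_infant = 0.09    # hypothetical deaths per person-year, ages 0-1
m_under5 = 0.045   # hypothetical deaths per person-year, ages 0-5

q1_0 = chiang_q(m_infant, n=1, a=0.3)   # 1q0; a = 0.3 years assumed for infants
q5_0 = chiang_q(m_under5, n=5, a=1.5)   # 5q0; a = 1.5 years assumed
print(f"1q0 = {q1_0:.3f}, 5q0 = {q5_0:.3f}")   # roughly 85 and 194 per 1,000 with these inputs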
Results

LiST modeled mortality estimates and the corresponding directly measured mortality estimates are shown in Table 4. The project had several measures of under-five and infant mortality reduction, derived both from a project-implemented community-based vital events registration system and from the independent pregnancy history survey. [8] The latter was felt to be the most accurate mortality measure for use as a comparison to the LiST estimates.

[Table note spilled into the text here: LiST interventions not being implemented to a significant extent in Mozambique at the time (coverage set to zero at baseline and final) were child ART, PMTCT, preventive postnatal care, kangaroo mother care, active early detection of maternal and neonatal complications, multiple micronutrient supplementation, oral antibiotic case management of severe neonatal infections, injectable antibiotic case management of severe infections in neonates, zinc for prevention/treatment of diarrhea, and rotavirus/Hib/pneumococcal vaccines.]

Accuracy and completeness of coverage data

We used survey data generated as part of standard program monitoring and evaluation activities to model mortality impacts using LiST. Although the available data were not collected as part of a research project, the coverage data input into LiST were of sufficient quality to generate relatively accurate estimates within the limits of the tool. A standard survey instrument was used; data were collected by professional project staff; the potential for bias was reduced by avoiding having interviewers collect information from villages where they worked; supervisory spot checks were performed for reliability of information; and data were reviewed for quality by technical support staff from ICF Macro on entry into the online child survival project database. Although project interventions targeted children 0-59 months old, which is the cohort modeled in LiST and whose mortality was measured directly in the pregnancy history, the coverage data used for LiST were collected for children 0-23 months old. The inaccuracy caused by this is likely to be small for the following reasons: (1) Even though the KPC measures are collected for children 0-23 months of age, the project in fact implemented interventions for the entire 0-59 month cohort of children, so we expect the coverage for 0-59 month olds to be substantially the same. (2) We expect that 79% of deaths in children 0-59 months occurred in 0 to 23 month olds, so coverage of 0-23 month olds is the most critical. This calculation was done using the Model Life Tables for Developing Countries published by the United Nations [14] for an area with the project's baseline U5MR and IMR. (3) LiST assigns the same effect size to the relevant age groups 0-23 months and 24-59 months for all modelled project interventions. As part of the sensitivity analysis presented in Table 5, the effect was examined of halving the coverage change among 24-59 month olds compared to that measured in the KPC for 0-23 month olds. This dropped the estimate of U5MR reduction by 4.8%. This project had several nutrition education interventions for well, sick, and malnourished children as well as pregnant mothers, but only an estimate for complementary feeding matched the nutrition interventions in LiST. The CHERG has not included other project interventions, like continued feeding during diarrheal episodes, in LiST because of a lack of published high-quality data needed to accurately estimate an effect size, even though they are likely to have an effect on child mortality.
Accuracy of modelled mortality estimates The accuracy of the U5MR reduction estimate (39% LiST; 37% Pregnancy History) was better than the IMR reduction estimate (34% LiST; 48% Pregnancy History). Both were within the 95% CI of the parameter, but LiST's underestimation of the reduction in IMR may be caused by the fact that the only nutritional intervention with probable infant mortality impact that could be modelled in LiST was complementary feeding. The results of a sensitivity analysis of the LiST model are shown in Table 5. LiST estimates of mortality reduction are calculated based on several inputs: The baseline mortality rate, cause of death profile, change in coverage for each of the interventions in the model, and their effect sizes. Table 5 shows the effect on LiST's estimates of the reductions in U5MR and IMR caused by changing one of the most critical examples of each of these parameters by 10%. The manipulations of the diarrhea and malaria parameters are shown in the table, as the project had large coverage changes for highly effective interventions for these causes of death. The modelled changes in mortality are more sensitive to changes in parameters that affect the calculation of the overall baseline mortality than they are to changes in the estimation of coverage or intervention effectiveness. This is not surprising, as the value of the baseline mortality affects the calculations for all interventions in the model. One of the potential strengths of LiST is its ability to simplify analysis of situations in which multiple interventions are implemented simultaneously. Yet to date there have been few published reports on the accuracy of LiST estimates for mortality reductions in areas where packages of community-based child health interventions are being implemented. The LiST validation with data from the evaluation of Accelerated Child Survival and Development programs is similar to this one [15] and to some extent the national level exercises with DHS data. [16] The current analysis shows that even in the context of relatively complex community-based NGO programming with interventions designed to affect less proximate determinants of child health like level of community organization and women's empowerment, LiST accurately estimated mortality changes. Limitations of validation analysis Although coverage data for project interventions was fairly complete and the time periods coincided well with the mortality estimates calculated from the pregnancy history, the main limitations of the current exercise are (1) that the data was not available from the project for 21 relevant LiST indicators, and had to be estimated mainly from consecutive DHS surveys and (2) the 95% confidence intervals are quite wide for the mortality estimates derived from the pregnancy history. There are cautions that must be kept in mind when using LiST. The accuracy of its estimates is dependent on having accurate information on the causes of death in the program area. National cause of death profiles are now available through CHERG, but there may be important variation from one region of a country to another. The outputs from LiST must also be interpreted in light of complementary considerations. For example, when used in planning LiST could mistakenly give the impression that mature interventions like vaccination that already have achieved high levels of coverage are not important, as simply maintaining high coverage yields no additional lives saved. 
LiST also does not take into account the mode of delivery, nor the fact that delivery of some interventions like antenatal care or vaccination establishes a platform that can serve for adding other interventions, like ITN or vitamin A distribution. Even with an awareness of these limitations and caveats, LiST can be a valuable aid in prioritizing choices for deployment of scarce resources. Although only a single project was identified for study, it is typical of integrated, community-based NGO programming: it was implemented under realistic field conditions in a resource-constrained setting comparable to that of other community-based NGO programming, and in the type of setting in which greater progress needs to be made to reach MDG4 targets.

Conclusions

A validation exercise has confirmed that, in a relatively routine field setting of an NGO child survival project implementing a package of community-based interventions in Mozambique, the Lives Saved Tool (LiST) provides a reasonably accurate estimate of under-five and infant mortality reduction when compared to independent, directly measured mortality estimates. These are the kinds of routine programming conditions that LiST attempts to simulate with its modeling. These findings support the use of LiST as a practical tool for estimating the mortality effect of NGO community-based child health programs that is less costly than direct mortality measurement. These findings also support the use of LiST as a planning tool for choosing among child survival interventions in an attempt to maximize mortality impact in pursuit of MDG4.

Role of the funding source

Several authors (JR, DP, LR) have been associated with USAID's Child Survival and Health Grants Program during manuscript preparation. Several others (MM, PE) were associated with World Relief, whose Vurhonga II project was funded by USAID through this mechanism. The study sponsors had no role in the study, data collection, or analysis. The corresponding author had final decision-making authority over interpretation of the results and the decision to submit this paper.
v3-fos-license
2018-11-15T18:38:58.456Z
2018-11-09T00:00:00.000
53248253
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0207344&type=printable", "pdf_hash": "d90657554780c1cda914900428177551903f1fbb", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43686", "s2fieldsofstudy": [ "Biology" ], "sha1": "a0380eec40db481c8a63af0bb8af382cb155cb21", "year": 2018 }
pes2o/s2orc
Analysis of bHLH genes from foxtail millet (Setaria italica) and their potential relevance to drought stress

Foxtail millet is a very drought-tolerant crop. Basic helix–loop–helix (bHLH) transcription factors are involved in many drought-stress responses, but foxtail millet bHLH genes have been scarcely examined. We identified 149 foxtail millet bHLH genes in a genome-wide analysis and performed Swiss-Prot, GO, and KEGG pathway analyses for these genes. Phylogenetic analyses placed the genes into 25 clades, with some remaining orphans. We identified homologs based on gene trees and Swiss-Prot annotation. We also inferred that some homologs underwent positive selection in foxtail millet ancestors, and selected motifs differed among homologs. Expression of eight foxtail millet bHLH genes varied with drought stress. One of these genes was localized to a QTL that contributes to drought tolerance in foxtail millet. We also performed a cis-acting regulatory element analysis on foxtail millet bHLH genes and some drought-induced genes. Foxtail millet bHLH genes were inferred to have a possible key role in drought tolerance. This study clarifies both the function of foxtail millet bHLH genes and drought tolerance in foxtail millet.

Introduction

The basic helix-loop-helix (bHLH) transcription factor family is a large gene super-family found in plant and animal genomes [1], and its members play key roles in a wide range of metabolic, physiological, and developmental processes [2][3][4][5]. bHLH family members have many different functions [6], and they each contain a core bHLH domain of approximately 60 amino acids, including a basic region (at the N-terminus) and an HLH region [7][8]. bHLH proteins can interact with each other and form homo-dimers or hetero-dimers that are promoted by the bHLH domains [1,9]. As a core transcription factor domain, the bHLH domain is involved in DNA binding [10], with bHLH domains or bHLH proteins binding to E-box (5′-CANNTG-3′) and G-box (5′-CACGTG-3′) cis elements and regulating gene expression [4,11]. Only a small number of plant bHLH transcription factors have been characterized functionally, far fewer than have been characterized in animals [6]. A previous study showed that bHLH transcription factors can act as transcriptional activators or repressors and are involved in the regulation of fruit dehiscence, anther and epidermal cell development, hormone signalling, and other similar processes in plants [12]. The plant bHLH protein PIF3 is a direct phytochrome reaction partner in the photoreceptor's signalling network [4] and is involved in controlling the expression of light-regulated genes [13]. Some bHLH transcription factors can interact with MYB transcription factors and WD40 or WDR proteins to form MYB-bHLH-WD40 (MBW) or MYB-bHLH-WDR (MBW) complexes, which can activate anthocyanin biosynthesis genes, resulting in anthocyanin pigment accumulation and fiber development in plants [14][15][16][17]. Some functions of unknown bHLH transcription factors as well as some new functions of known bHLH transcription factors have been gradually identified in different plant species. In the medicinal plant Catharanthus roseus, the bHLH transcription factor BIS2 is essential for monoterpenoid indole alkaloid production [18]. In Salvia miltiorrhiza, bHLH transcription factors are related to tanshinone biosynthesis [19].
Arabidopsis bHLH129 appears to regulate root elongation [20], while Arabidopsis bHLH109 is associated with somatic embryo induction [21]. Additionally, the Arabidopsis bHLH transcription factor PIF4 plays a major role in integrating multiple signals to regulate growth [22]. Research has shown that grasses can use an alternatively wired bHLH transcription factor network to establish stomatal identity [23], further enriching our understanding of plant bHLH transcription factors. Some plant bHLH transcription factors have also been recently reported to be related to responses to abiotic stresses such as drought and cold. For example, Feng et al. recently found that a novel tomato bHLH transcription factor, SlICE1a, could confer cold, osmotic-stress, and salt tolerance to plants [24]. Similarly, Eleusine coracana bHLH57 transcription factors are related to tolerance of drought, salt, and oxidative stresses [25]. bHLH122 plays an important role in drought and osmotic-stress resistance in Arabidopsis [26], where it regulates the expression of genes involved in abiotic stress tolerance [27]. In sheep grass (Leymus chinensis), many bHLH transcription factor family members were identified via RNA-seq to be responsive to drought stress [28]. Drought stress could affect plant growth, agricultural yields, and survival. Plants have evolved highly complex reactions to drought stress, and many genes are involved in drought stress [29,30]. Plant bHLH genes are likely very important in responses to drought stress [29,30]. Foxtail millet has been proposed as a new model organism for functional genomics studies of the Panicoideae and has the potential to become a new model organism for the study of drought stress responses because of its outstanding tolerance to drought stress [29][30][31]. We identified the foxtail millet bHLH transcription factors in a genome-wide survey and studied the expression of bHLH genes in foxtail millet in various tissues under drought stress conditions. Our purpose was to identify foxtail millet bHLH transcription factor family members, find candidates that may be relevant to drought stress, and improve the current understanding of drought tolerance mechanisms in foxtail millet. Data collection and identification of bHLH genes Whole genome sequences of foxtail millet (Setaria italica) were obtained from the 2012 Foxtail Millet Database (http://foxtailmillet.genomics.org.cn/page/species/index.jsp) [32]. The bHLH domain is conserved within bHLH proteins, and the HMM ID of the bHLH domain is (PF00010) in the pfam database (http://pfam.xfam.org/). The amino acid sequences of HMMs were used as queries to identify all possible candidate bHLH protein sequences in the foxtail millet genome database using BLASTP (E < 0.001). SMART online software (http://smart. embl-heidelberg.de/) was used to identify integrated bHLH domains in putative foxtail millet bHLH proteins. Candidate proteins without integrated bHLH domains were discarded. Swiss-Prot, GO and KEGG pathway annotation We performed Swiss-Prot function annotation analysis based on the UniProtKB/Swiss-Prot database (http://www.uniprot.org/), GO function annotation analysis based on the GO database (http://geneontology.org/page/go-database), and KEGG pathway annotation analysis based on the KEGG database (http://www.kegg.jp/kegg/ko.html). 
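The identification step described above (BLASTP of bHLH-domain queries against the foxtail millet protein set with E < 0.001, followed by a SMART check for an intact bHLH domain) is often scripted as a filter over BLAST tabular output. The sketch below assumes a standard BLAST+ tabular (-outfmt 6) results file; the file name is hypothetical and the SMART-confirmed ID set is left as a placeholder, so this is only an illustration of the filtering logic, not the authors' pipeline.

# Minimal sketch: keep BLASTP hits with E-value < 0.001 from a standard tabular
# (-outfmt 6) results file, then intersect with IDs whose bHLH domain was
# confirmed (e.g. via SMART). File name and confirmed-ID set are placeholders.
import csv

E_CUTOFF = 1e-3
candidates = set()

with open("bhlh_vs_setaria_proteome.blastp.tsv") as fh:   # hypothetical file
    for row in csv.reader(fh, delimiter="\t"):
        # outfmt 6 columns: qseqid sseqid pident length mismatch gapopen
        #                   qstart qend sstart send evalue bitscore
        subject_id, evalue = row[1], float(row[10])
        if evalue < E_CUTOFF:
            candidates.add(subject_id)

smart_confirmed = set()   # placeholder: IDs with an intact bHLH domain per SMART
final_bhlh = candidates & smart_confirmed
print(f"{len(candidates)} candidates, {len(final_bhlh)} with intact bHLH domains")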
Phylogenetic analysis

We aligned the foxtail millet bHLH protein sequences using the Clustal Omega online software (http://www.ebi.ac.uk/Tools/msa/clustalo) and constructed neighbor-joining (NJ) trees from the aligned sequences using MEGA 6.0. Support for inferred evolutionary relationships was calculated from 1000 bootstrap samples [33].

Selection pressure analysis

The codeml portion of the Phylogenetic Analysis by Maximum Likelihood (PAML) program (version 4.7) [35] was used to infer potential selective pressures. A comparison of site models M0-M3 was used to determine which kinds of selective pressure the genes underwent, and an M7-M8 comparison was used to identify sites shaped by positive selection [33,36].

Identification of foxtail millet bHLH genes within drought tolerance QTLs

QTLs for drought tolerance were identified from previous research by Qie et al. [37], and the physical locations of foxtail millet bHLH genes were collected from the foxtail millet genome database (http://foxtailmillet.genomics.org.cn/page/species/index.jsp). The bHLH genes that overlapped with QTLs were inferred to be the genes located within each QTL.

Plant material, stress treatments and RNA isolation

To induce drought conditions, 14-day-old foxtail millet cv. 'Yugu1' shoots were grown under a 20% polyethylene glycol 6000 (PEG 6000) treatment [29] for 0, 0.5, 6, and 12 h; the 0-h treatment was the control (CK) treatment, while the other treatments simulated droughts of various lengths. The 14-day-old foxtail millet shoots were also grown under a 100 mmol/L ABA treatment [38] for 0, 0.5, 6, and 12 h. RNA was isolated using the CTAB method, and we performed reverse transcription according to a previously described protocol [33].

Gene expression analysis

Quantitative RT-PCR (qRT-PCR) analysis was conducted as previously described [31]. Three replicates were carried out in this study, and t-tests were used to analyze significance. The qRT-PCR primers are provided in S1 Table. A heat map was generated based on RPKM values using MultiExperiment Viewer software. All the RPKM values and RNA-seq data were based on RNA data hosted by the foxtail millet genome database (http://foxtailmillet.genomics.org.cn/page/species/index.jsp) [32]. RPKM values less than 0.3 were considered to indicate unexpressed genes in this study [39].

Identification, annotation, and phylogenetic analysis of foxtail millet bHLH genes

The amino acid sequences of bHLHs were extracted from the foxtail millet genome database (http://foxtailmillet.genomics.org.cn/page/species/index.jsp) using BLASTP with amino acid sequences of bHLH domains (Pfam: PF00010) as queries. We identified 149 bHLH family members distributed among all nine chromosomes. We assayed their annotated functions based on the UniProtKB/Swiss-Prot database (http://www.uniprot.org/). All of these bHLHs were annotated based on the best-hit proteins (S2 Table). In a previous study, the function of some bHLHs from Arabidopsis had also been reported [12]. Swiss-Prot functional annotation revealed that most homologs of these Arabidopsis bHLHs can be found in foxtail millet, except for some members, including NAI1 (ER body formation); RHD6 and RSL1 (root hair formation); LHW (root development); PRE1, PRE2, PRE3, PRE4, and PRE5 (gibberellin signalling transduction); KDR (light signal transduction); and some orphans.
We also found some functional annotations of bHLH genes that were not identified by a previous study of Arabidopsis bHLHs [12], including LAX_ORYSJ transcription factor LAX PANICLE, WIT1_ARATH WPP domain-interacting tail-anchored protein 1, AIB_ARATH transcription factor ABA-INDUCIBLE bHLH-TYPE, MGP_ARATH Zinc finger protein MAGPIE, PP425_ARATH Pentatricopeptide repeat-containing protein, BH032_ARATH transcription factor AIG1, and Anthocyanin regulatory Lc protein (S2 Table). We used the full-length amino acid sequences of the 149 foxtail millet bHLHs for phylogenetic analysis, in which clades with relatively high bootstrap support (≥50) were considered. The phylogenetic tree revealed 25 clades (clades 1-25) in the foxtail millet bHLH family and some orphans (Fig 1). The identified orphan genes were consistent with previous findings by Feller et al., as were the divisions of the clades [12]. Nine foxtail millet homologs of the PIF subfamily members were also found based on Swiss-Prot annotation, and analysis of the KEGG pathway annotation showed that all foxtail millet homologs of PIF subfamily members could be mapped to a KEGG pathway. Several of these, including Millet_GLEAN_10009041 (Swiss-Prot ID, PIF1), were in clade 6. These genes were mapped to the plant hormone signal transduction (ko04075) and circadian rhythm-plant (ko04712) pathways, with corresponding KEGG annotations of PIF4 (K16189) and PIF3 (K12126).

Selection pressure and motif analysis of foxtail millet bHLH genes

The bHLH genes that were placed into the same clades and had the same annotation categories were considered homologs. We wanted to know whether the genes in one homologous group are functionally redundant or functionally divergent, so we performed a selection pressure analysis of some homologous groups. Molecular signatures of selection were categorized as purifying, positive, and neutral. The dN/dS value (ω) provides a measure of changes in selective pressures. Values of ω that are equal to, less than, or greater than one indicate neutral evolution, purifying selection, or positive selection on the target genes, respectively [40]. Motif divergence was observed in many homologous groups, such as the bHLH82, FIT, bHLH35, BIM2, bHLH51, and myc2 groups. Different motifs may indicate different functions or functional divergence [31]. Motifs of some homologs were in agreement, such as in the ILR3, bHLH30, and UNE12 groups (Fig 2). We analyzed the pI, grand average of hydropathicity, instability index, nuclear localization signals, and transmembrane domains of some homologous groups and found that functional divergence may also exist in homologs containing the same motifs. For example, some bHLH30 and ILR3 members contained nuclear localization signals but some did not (S4 Table). Previous research has indicated that some bHLH genes are duplicated [2,6]. Duplicated genes are the raw material for the evolution of new biological functions and thus play crucial roles in adaptation [47].
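As a brief aside on how the M7-M8 comparison above is typically evaluated: the two nested codeml site models are compared with a likelihood ratio test. The sketch below uses placeholder log-likelihood values rather than results from this study; the two degrees of freedom reflect the two extra parameters of M8 relative to M7 under the standard PAML convention.

# Minimal sketch of a likelihood ratio test between nested codeml site models
# (e.g. M7 "beta" vs. M8 "beta&omega"). The lnL values are placeholders only.
from scipy.stats import chi2

def lrt(lnL_null, lnL_alt, df):
    """2*(lnL_alt - lnL_null) compared against a chi-square distribution."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(stat, df)

lnL_M7 = -2451.73   # hypothetical log-likelihood of the null model (M7)
lnL_M8 = -2445.10   # hypothetical log-likelihood of the alternative model (M8)

stat, p = lrt(lnL_M7, lnL_M8, df=2)   # M7 and M8 differ by two parameters
print(f"2*dlnL = {stat:.2f}, p = {p:.4f}")
# A small p-value supports M8, i.e. a class of sites with omega > 1 (positive
# selection); the specific sites are then usually taken from codeml's Bayes
# empirical Bayes output.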
Expression profile of foxtail millet bHLH genes

The expression profiles of each identified foxtail millet bHLH gene were analysed among several tissues: root, leaf, stem, and spica. The expression levels of foxtail millet bHLH genes in the four tissues were based on the previously published RNA-seq data (http://foxtailmillet.genomics.org.cn/page/species/index.jsp), and the expression levels were captured as RPKM values. Most of these genes were expressed in at least one tissue, and only 20 genes (14.7%) were not expressed in any of the four tissues (Fig 3A and S5 Table). According to RPKM values, Millet_GLEAN_10029834, Millet_GLEAN_10037807, Millet_GLEAN_10010494, Millet_GLEAN_10006968, Millet_GLEAN_10018454, Millet_GLEAN_10022618, Millet_GLEAN_10016705, Millet_GLEAN_10001930, Millet_GLEAN_10005609, Millet_GLEAN_10019878, Millet_GLEAN_10023721, Millet_GLEAN_10023987, and Millet_GLEAN_10027159 were only expressed in spica tissue and not expressed in the other three tissues. Millet_GLEAN_10006645, Millet_GLEAN_10014239, Millet_GLEAN_10021795, Millet_GLEAN_10023722, Millet_GLEAN_10023723, Millet_GLEAN_10000529, Millet_GLEAN_10021329, Millet_GLEAN_10033765, and Millet_GLEAN_10034296 were only expressed in root tissue and not expressed in the other three tissues. In contrast, just one gene, Millet_GLEAN_10010503, was only expressed in leaf tissue and not expressed in the other three tissues. No gene was expressed only in stem tissue and not in the other three tissues. The expression of Millet_GLEAN_1002038 (Swiss-Prot ID, ILR3_ARATH transcription factor ILR3) was the highest in each of the leaf, stem, spica, and root tissues. Its homologs are involved in metal homeostasis, auxin-conjugate metabolism, and salicylic acid-dependent defence signalling responses in plants [12,[48][49]. In total, 116 foxtail millet bHLH genes were expressed in root tissue, 77 were expressed in leaf tissue, 115 were expressed in spica tissue, and 72 were expressed in stem tissue. Just 61 genes were expressed in all four tissues (Fig 3B). Conversely, 72 foxtail millet bHLH genes were not expressed in leaf tissue, 33 genes were not expressed in root tissue, 34 genes were not expressed in spica tissue, and 77 were not expressed in stem tissue (Fig 3B and S5 Table). This suggests that foxtail millet bHLH genes are biased towards expression in root and spica tissue.
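Since the expression calls above rest on RPKM values and on the RPKM < 0.3 cut-off described in the methods, a minimal sketch of that calculation is given below; the read counts, gene lengths, and library size are invented for illustration and are not the published data.

# Minimal sketch of the standard RPKM calculation and the RPKM < 0.3
# "unexpressed" cut-off used in this study. All numbers are illustrative.

def rpkm(read_count, gene_length_bp, total_mapped_reads):
    """Reads Per Kilobase of transcript per Million mapped reads."""
    return read_count * 1e9 / (gene_length_bp * total_mapped_reads)

total_mapped = 25_000_000  # hypothetical library size (mapped reads)

genes = {
    # hypothetical gene id: (mapped reads, gene length in bp)
    "bHLH_example_1": (480, 1800),
    "bHLH_example_2": (3, 2400),
}

for gene, (count, length) in genes.items():
    value = rpkm(count, length, total_mapped)
    status = "expressed" if value >= 0.3 else "unexpressed"
    print(f"{gene}: RPKM = {value:.2f} ({status})")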
Some foxtail millet bHLHs are related to drought stress

Because foxtail millet is remarkably tolerant to drought stress, it has substantial potential to become a new model organism for understanding this trait, which will become even more vital as climate change continues [29]. Previous studies have shown that some plant bHLH genes are involved in tolerance to drought stress [24]. To understand which foxtail millet bHLH members are involved in tolerance to drought stress, candidate genes that are related to drought stress tolerance are first required. In our study, we mainly focused on three kinds of foxtail millet bHLH genes: class A, genes located in QTLs that contribute to drought tolerance; class B, genes whose homologs in other plants were reported to be involved in drought tolerance; and class C, genes that respond to drought stress in foxtail millet. Six QTLs (LOD > 2.5) for drought tolerance have been identified in foxtail millet, including QGSI_D_7A, QCLD_D_1A, QLRND_D_7A, QCLD_D_1B, QCLR_D_6A, and QSR_D_1A [37]. In this study, we determined that only the QTLs QGSI_D_7A (chr7, 33,221,000-27,196,000), QCLD_D_1A (chr1, 29,834,000-32,947,000), and QLRND_D_7A (chr7, 30,571,000-21,648,000) contain bHLH genes in foxtail millet. QTL QCLD_D_1A is related to coleoptile length decreases in foxtail millet [37]. This QTL was estimated to contribute 7% of the observed phenotypic variance [37], and it also contained the bHLH genes Millet_GLEAN_10023797 (Swiss-Prot ID, bHLH95), Millet_GLEAN_10023798 (Swiss-Prot ID, bHLH95), and Millet_GLEAN_10035595 (Swiss-Prot ID, bHLH128). QTL QLRND_D_7A was related to a lateral root number decrease in foxtail millet [37]. It was estimated to contribute 10% of the observed phenotypic variance [37], and it contained the bHLH genes Millet_GLEAN_10002496 (Swiss-Prot ID, bHLH25) and Millet_GLEAN_10029582 (Swiss-Prot ID, bHLH113). QTL QGSI_D_7A was related to the germination stress tolerance index [37]. It was estimated to contribute 14% of the phenotypic variance [37], and it contained the bHLH genes Millet_GLEAN_10016232 (Swiss-Prot ID, ARLC_MAIZE Anthocyanin regulatory Lc protein), Millet_GLEAN_10037248 (Swiss-Prot ID, bHLH91), and Millet_GLEAN_10002496 (Swiss-Prot ID, bHLH25, which is also contained by the QTL QLRND_D_7A) (Table 2). By referring to previous RNA-seq data [29], we found that eight foxtail millet bHLH genes were involved in the response to drought stress (i.e., the 20% PEG 6000 treatment). Most class A and B genes did not respond to drought conditions, except for Millet_GLEAN_10035595 (Swiss-Prot ID, bHLH128; the function of bHLH128 is unknown). The other seven genes that did respond were Millet_GLEAN_10023721 (Swiss-Prot ID, bHLH25), Millet_GLEAN_10007270 (Swiss-Prot ID, bHLH35), Millet_GLEAN_10008844 (Swiss-Prot ID, UNE10; UNE10 is involved in the fertilization process), Millet_GLEAN_10005488 (Swiss-Prot ID, bHLH49), Millet_GLEAN_10036595 (Swiss-Prot ID, ORG2; ORG2 is involved in iron homeostasis), Millet_GLEAN_10007267 (Swiss-Prot ID, bHLH35), and Millet_GLEAN_10030390 (Swiss-Prot ID, bHLH49). Excluding Millet_GLEAN_10008844 and Millet_GLEAN_10036595, the functions of the other six genes' homologs in other plant species are unknown. Additionally, the identified functions of the homologs of Millet_GLEAN_10008844 and Millet_GLEAN_10036595 in other plant species are not thought to involve tolerance to drought stress, as shown by a previous study [12]. We analyzed the expression of the genes that respond to drought stress using qRT-PCR. The 14-day-old foxtail millet shoots were subjected to 20% PEG 6000 treatment for 0, 0.5, 6, and 12 h. The expression levels of the eight genes treated for 6 h and 12 h under PEG were significantly changed, and they all showed variation trends similar to those in Qi's RNA-seq data, except Millet_GLEAN_10036595 (Fig 4A) [29]. Many candidate genes among the foxtail millet bHLH members have been linked to drought stress tolerance in other species. Accordingly, bHLH genes are likely to play an important role in drought stress tolerance in foxtail millet, even though the homologs in other species of some candidates were not determined to be involved in tolerance to drought stress by previous studies, such as the homologs of bHLH128, bHLH35, bHLH25, and ORG2. We speculate that these candidates (homologs of bHLH128, bHLH35, bHLH25, and ORG2) may have evolved functions in foxtail millet that differ from those of their homologs in other species, and that these functions may contribute to drought stress tolerance, though they have perhaps not yet been assayed clearly. Our inference is based on the analysis of sequence motifs and a molecular signature of selection.
Divergence in motifs and positive selection between homologs or duplication pairs suggests that functional divergence may have occurred in these gene trees, and that genes with de novo functions were created through expansion of the foxtail millet bHLH family.

The role of bHLH in foxtail millet drought resistance

ABA-dependent and -independent signalling pathways appear to be involved in drought stress tolerance. However, previous studies ignored a direct link between AREB and bHLH (such as ICE and Myc) in drought stress tolerance. Rather, bHLH was only considered part of the DREB/CBF pathway, which may be involved in tolerance to drought and cold stress [52]. However, analysis of candidate promoters showed that most contained ABRE elements (cis-acting elements involved in abscisic acid responsiveness; S6 Table). This suggests that these candidates may be regulated by AREB. The 14-day-old foxtail millet shoots were subjected to exogenous ABA treatments of 0, 0.5, 7, and 12 h. The changes in expression of the eight genes under the ABA treatment were similar to the changes in their expression under the PEG treatment (i.e., drought stress) over the same treatment times (Fig 4B). This indicates that these foxtail millet bHLH genes may be regulated by ABA and that drought may regulate these genes through the ABA-dependent signalling pathway. Analysis of promoters revealed that most promoters of the candidate genes we identified contain MBS elements (MYB binding sites involved in drought inducibility; S6 Table) and that some contain G-box elements. Accordingly, MYB genes may regulate bHLH genes, and bHLHs may in turn regulate drought responses. Additionally, we comprehensively identified the promoters of 40 drought-responsive genes in common between foxtail millet and some monocot and dicot species [29,53-55], including the drought-responsive marker gene COR47 [29], which contains E-box and G-box elements (S7 Table). These genes may accordingly be regulated by bHLH genes. Thus, we propose the following hypothesis: when foxtail millet is under drought stress, some foxtail millet bHLH genes may be regulated by ABA-dependent signalling pathways, including AREB, MYB, or bHLH transcription factors; it is these genes that could affect downstream genes that are directly involved in drought stress responses.
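As an illustration of the kind of cis-element scan used above, the sketch below searches a promoter sequence for the E-box (5′-CANNTG-3′) and G-box (5′-CACGTG-3′) motifs with regular expressions; the promoter sequence is invented for demonstration and is not a real foxtail millet promoter.

# Minimal sketch of scanning a promoter for E-box (CANNTG) and G-box (CACGTG)
# motifs. The sequence below is invented for illustration only.
import re

promoter = ("ATGCCACGTGTTAGCAATTGCCATATGGCCATTTGAAACACGTGGG"
            "TTCAGCTGAACATGTGACCTATAAATAGGCC")   # toy sequence

motifs = {
    "G-box": r"CACGTG",
    "E-box": r"CA[ACGT]{2}TG",   # CANNTG, N = any base
}

for name, pattern in motifs.items():
    hits = [m.start() for m in re.finditer(pattern, promoter)]
    print(f"{name}: {len(hits)} hit(s) at positions {hits}")
# Note: every G-box also matches the E-box pattern, so G-box sites appear in both counts.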
v3-fos-license
2018-01-13T18:09:03.485Z
2016-05-18T00:00:00.000
20066359
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://eudl.eu/pdf/10.4108/eai.18-5-2016.151251", "pdf_hash": "db970d4c4a93c6cbd9f67b934c7ea804a72ff85e", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43687", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "db970d4c4a93c6cbd9f67b934c7ea804a72ff85e", "year": 2016 }
pes2o/s2orc
CEEDS : A Cost Effective Event Detection System for Energy Efficient Railway Bridge Monitoring with Wireless Sensor Network Railway Bridge Health Monitoring (RHM) is of prime importance as damages in bridges can lead to huge casualties. Wireless sensor network (WSN) has come up as a promising technology for health monitoring. WSN has severe energy and hardware constraints. In this paper, we propose an event detection system for WSN deployed for RHM. Our proposed system takes different constraints of sensor networks into consideration and efficiently uses the limited resources of sensors. The system keeps the sensors awake only during the time a train passes over the bridge and in sleep mode otherwise. The real time exponential moving average of the vibration signal of a sensor placed on the railway track is computed by our algorithm and the arrival of the train is detected if consecutive series of samples lie within two threshold bounds. Theoretical and experimental results indicate that our proposed system can considerably increase the service lifetime of sensor networks and aid in automating the RHM. Introduction Due to the thriving development of mobile communications and wireless networking in recent years, applications of wireless sensor network (WSN) for sensing and monitoring parameters in different fields have increased enormously [1].Akylidiz et al. [2] have stated that WSN consists of a large number of nodes that are deployed to sense various phenomena.Essentially, small nodes with sensing ability, limited computation and communication capability constitutes a WSN.It is mostly an autonomous system with severe energy constraints [3] [4] [5].The performance of WSN is governed by the hardware, wireless radio communication characteristics, the battery of the sensor nodes, time synchronization, etc. [2], [3].Applications of WSN extend from military movement tracking, environment monitoring, vehicle tracking, habitat monitoring, etc.In There are two operational paradigms of data collection [11] [12] in BHM using WSN, viz.continuous data-gathering and event-driven data collection.The former method periodically reports sensor data to a remote base station while the latter one sends data upon the occurrence of an intended event. An event is defined as any change in the state of the system from its normal undisturbed state.In a wireless sensor network, we need to effectively and accurately determine the occurrence of an intended event with a distinguishable characteristic in a specific time interval.We will refer to this process as event detection. Our intended field of application is long term railway bridge health monitoring using WSN [13].In our application, accelerometer sensors will be installed on the railway bridge and train induced vibration data will be collected by the sensors every time a train passes over the bridge.The data will be sent to a remote server for further processing and analysis.This analysis will lead to damage detection, damage localization and estimation of damage severity.In this way, the health of the bridge can be monitored for an extended period of time.Since we are focusing on long term monitoring, issues of long term operation of WSN need to be looked upon. 
In most countries with large railway network (e.g., Indian Railways, Russian Railways (RZD), etc.), the bridges are situated in remote locations where there is no source of electricity.Hence, the wireless sensors if installed on these bridges will have no power other than their own battery.It may be noted that for long term monitoring of bridges, these wireless sensors may need to operate for an extended period of time without human intervention.So, an energy efficient solution is very essential to increase the battery life of the sensors, which in turn will increase the service lifetime of the WSN as a whole.Usually, the longevity of the sensor nodes depends on the type of battery and the battery's ampere-hour rating.Sensors available commercially can operate approximately for a duration of 1 day to 2 months before their internal batteries need replacement or recharging.The longevity of the batteries depends upon the frequency of usage and some other factors like sampling rate and number of active channels [14].So, we need to optimize between various parameters like accuracy, resolution, sampling rate, operation period, reliability, etc. Power consumption of the network can be drastically reduced in event driven application scenario by keeping the network in operation only during an intended event, while the rest of the time, it will remain in sleep mode.In sleep mode, power consumption of the nodes is in the range of milliwatts /micro watts [14].In our application, keeping the sensors on the bridge in data collection mode only during the time interval when the train is on the bridge (i.e.once an event is detected) can save a lot of power and improve the longevity of the network. A cost effective event detection system should be simple in terms of its hardware and software requirements yet able to detect the event with accuracy and minimal time delay by utilizing the available resources efficiently.The event needs to be detected with minimum time lag, so the algorithm should have minimum time complexity.Event detection designed should not give false positives or false negatives. In this paper, we are proposing a cost effective automated event detection system for railway bridge monitoring which can extend the service lifetime of the wireless sensor network adhering to all the points mentioned above.In this system, two sets of sensors will be used for monitoring.One set of sensors called master sensors will be assigned the task of event detection, i.e. detection of a train approaching a bridge and another set of sensors called child sensors that are placed on the bridge will be activated by the master sensors only when the desired event is detected.The underlying algorithm for detecting the train arrival event is based on the concept of moving average and corresponding threshold bounds. Related Work Prior work has been done in the field of event detection for WSN.Researchers have either used data from a single sensor for event detection or they have designed distributed event detection schemes using data from multiple sensors.R. Bischoff et al. 
[15] have used a single MEMS accelerometer sensor to measure the acceleration of a bridge continuously.In their bridge monitoring system, data recording is initiated once the accelerometer data crosses a fixed threshold value.Similar works based only on few sensor data points for calculation of threshold and subsequent event detection can be found in [16].The underlying algorithm of these types of event detection system is based on magnitude difference.This is the simplest form of an algorithm, where the difference between amplitude of two time slots can be used to check if a threshold is crossed to trigger an event.The major drawback of these systems and algorithms is that the presence of noise in the signal at a particular time slot can raise the signal value above the threshold giving false positives.Hence systems relying on only one threshold values are not reliable. Short term energy method [17] is another method where the energy of the signal is used instead of the magnitude to detect an event.The ratio between the energy of two nearby windows forms the basis of its algorithm.However, this method is also prone to noise as in some cases, energy of background noise may subdue the energy of the actual event.This will lead to the generation of false positives.Slope of the signal envelope is used as the event indicator in SURF method proposed by Pauws [18].The slope is calculated using a quadratic polynomial.The estimation of the coefficients of a polynomial expression is quite time consuming, hence, this method is not suitable for time critical applications.Event detection can also be done by analysing the signal in the frequency domain.High-frequency content (HFC) method proposed by Maris and Bateman in [19] is one such detection method which is also prone to failure if affected by background noise. Distributed event detection schemes employ multiple sensors for the detection [20].In one such scheme proposed by Norman Dziengel et al. [21], different fusion techniques viz.feature fusion, classification fusion, cooperative fusion are used.However, their system is unable to differentiate between trained and non-trained patterns.In the case of event detection of train arrival, it is essential to differentiate between train signals used to train the system and noise signals. Some researchers have proposed region matching [22] and envelope matching technique [23] for detecting an event.Dimensionality reduction techniques like principal component analysis (PCA) have been used by Jayant Gupchup et al. to reduce the model for event detection.[24].K. Kapitanova et al. have used fuzzy logic to evaluate the threshold for event detection [25].However, rule-base in fuzzy logic starts increasing in size exponentially and storing fuzzy rule-bases in WSN which has limited power and memory resource is a challenge [26]. Artificial neural networks and ARIMA based statistical methods have been used successfully in many event detection scenarios [27].Both of them require the system to be trained before its actual operation and the training require huge amount of data.These methods are also time consuming and difficult to implement on low configuration hardware of the WSN.Some other popular techniques use classification algorithms like Naïve Bayes classifiers [28].These techniques are computationally complex and have high communication overhead which in turn affects the service lifetime of the WSN as a whole. 
Wireless Sensor Network Wireless sensor network may either be centralized or distributed.The centralized system uses star type of connectivity, whereas distributed system uses mesh connectivity.Figure 1 shows the network architecture of a star type of wireless sensor network [29].The WSN consists of wireless sensor nodes which form an ad hoc network within themselves or communicate with a centralized base station or data aggregator.The base station controls all the node connectivity and configures them to sense environmental parameters.The communication between the sensor nodes and the base station is achieved by using a wireless ZigBee protocol [30] [31], or IEEE 802.15.4-open communication architecture.Real time monitoring system [32] uses the communication between the base station and remote server using Code Division Multiple Access (CDMA) [30] or Global System for Mobile Communications (GSM) /General Packet Radio Service (GPRS) /3 rd Generation (3G) or Ethernet technologies.The network architecture shown in Figure 1 shows that the sensors N1 to NP and M1 to MP communicate with different base stations or data aggregators.The base stations using Internet Protocol (IP) network transmit the sensor data to the centralized server where further processing is done.The processed data is used to generate warnings or messages [33] [34]. The sensor nodes can send the data to base station either using low duty cycle operation or using synchronized sampling [5].However, in low duty cycle, reception of packets is not guaranteed.The important parameter that determines the accuracy of the received sensor signal is sampling frequency.More the sampling frequency, the more is the power consumption. Another parameter that determines the battery life of the sensor is the power level at which sensor transmit [3].This power level is determined by the node and base station firmware depending upon the distance between them or it can also be manually set up by the user.Since, the packet losses [12] are directly proportional to the distance between the base station and the sensor, one can easily understand that more power is required to transmit packets reliably to greater distances. As stated earlier, sensors send data to a data logger or base station.Figure 2 shows the basic workflow of WSN system.The base station is used to first configure the sensors and the network before actual data collection begins.The configurable options present in any data logger of a WSN are modes of sampling like synchronized sampling and low duty cycle, control options like stop nodes, sleep node and wake node, sampling frequency, power level for the different radio ranges, etc. Event driven Triggering option is also present in some advanced sensors, but with they have limited functionality.Once the network is configured, the system waits for the event and starts streaming data when an event occurs.The end of the event is followed by data transfer to the remote storage centre. Event Detection System As described in the introduction, railway bridge health monitoring is an event driven type of application where data is collected using WSN only when the intended event occurs.Figure 3 shows the event that the train is passing over the bridge. 
A WSN system installed on a Railway Bridge constitutes accelerometers and/or strain gauge sensor nodes placed at different members of the bridge as shown in Figure 4.A base station or data logger is placed in the radio range of the sensors just outside the bridge.Although the radio range is around 1km for Line of Sight (LOS) between sensor and base station, due to the reflection of the Radio Frequency (RF) signal among the bridge members for multiple number of times, effective range is reduced to about 500 m.Hence it is advisable to put the base station or data aggregator about 100 m to 500 m from the bridge. Sensors on the bridge needs to be put in synchronized sampling mode as mentioned in section 3.1 for collection of bridge parameters like acceleration, displacement, tilt, etc.The trivial method of data collection is to manually start synchronized sampling of the sensors each time a train comes and manually stop once the train leaves the bridge.Synchronized sampling denotes that signals from all sensors are transmitted to the data aggregator in fixed time division multiplexed slots over a single frequency with time stamps. However, to develop an automatic monitoring system, we can deploy the sensors on the bridge and start them in the continuous synchronized sampling mode and keep recording data.This will drain the batteries of the sensors unnecessarily because throughout the day, around 50-100 trains run over a bridge at maximum.Internal firmware based event driven sensor trigger can also be employed to automate data collection.This has a drawback that it only considers a single sample to detect if a threshold is crossed.This may lead to false triggering. There will also be wastage of the battery in case of false triggering if the threshold is crossed due to a person walking on the track or cattle crossing the track or heavy impact or wind.We can only say that a train is really approaching if the sensor signal crosses a predefined threshold bound for a sustainable amount of time. Proposed Technique and Real Time Implementation A proper threshold bound must be derived from experimental results obtained in the field rather than just simulations.For the purpose of obtaining field data, we have done experiments on an Open Web Steel Girder Railway Bridge situated over river Keleghai between Narayangarh-Bhakrabad stations on Howrah-Chennai section in India.One snapshot of our performed experiment is shown in Figure 5. Wireless accelerometer sensors were placed on the bridge.The locations of the sensors during those experiments are shown in Figure 6.We measured the vibration signal of the bridge corresponding to 10 running trains.Measuring the maximum and minimum value (among these 10 trains) of the moving average of vibration signals of a sensor on the railway track will enable us to calculate a threshold bound.Table 1 shows the details of 5 test runs.Our proposed event detection scheme constitutes of two different set of sensors and a central data logger. 
One set comprises the sensors placed on different members of the railway bridge, hereafter referred to as child sensors. The number of child sensors is governed by the span length and the number of spans of a bridge. The exact number of child sensors depends on the underlying algorithm for health monitoring. In our experiments, we deployed 3 child sensors on the bridge. The other set comprises two low-power accelerometer sensors placed at distances of 300 m and 400 m from the bridge on the railway track, hereafter referred to as master sensors. Both these sets of sensors connect to a data logger acting as a central hub in a star-type topology, as shown in Figure 7. The reason for the selection of the distances of 300 m and 400 m as the locations of the master sensors will be explained later.

The two master nodes work in event-driven sensor trigger mode. They turn on and start synchronized sampling once a predefined threshold (a firmware-based threshold set in the master node firmware) is crossed at the respective sensors. The master sensors send data to the data logger in real time. The microcomputer or microcontroller directly accesses the real-time samples and computes the average of samples over a fixed window size to determine the threshold at each time instant (tick). An exponential window averaging technique is applied from the start, and the average of the samples is updated as the window shifts periodically. Our algorithm simultaneously checks whether 90% of the samples in a window of size 128 fall within the upper and lower bounds of the threshold. If this criterion is satisfied at both master sensors within a predefined interval depending on the speed of the train, then we can infer that a train is approaching. The algorithm then instructs the base station to send a wake-up signal to all the sleeping child sensors on the bridge. Subsequently, the base station commands the sensors to start synchronized sampling, and the data are stored in the base station/data aggregator. The base station continuously monitors the value of vibration (acceleration) of some of the child sensors present at the leaving end of the bridge. Once the train has left the bridge, damped vibration will be detected. The acceleration value will subsequently reduce to zero after some time. At the instant the base station detects zero acceleration, it puts all the child sensors into sleep mode.

Many practical issues cropped up while implementing this technique. The speed of trains in Indian Railways is about 59 km/h to 93 km/h [35]. However, the train speed may extend up to 140 km/h when the train is driven by the newer high-speed WAP-7 locomotives [36]. Hence the detection mechanism must be able to detect trains with speeds varying throughout this range. Figure 8 shows the time-domain graph of the acceleration measured at a particular sensor node during the passage of the 1st Express train. We can observe that notable vibration is observed at time (ticks) = 1000, and it takes about 800 further samples before the acceleration crosses 2 G (the unit of acceleration used here, with G = 9.81 m/s2). This indicates that the train is on the bridge. The average value of this number of samples over the 5 test runs is around 1000 samples. We carried out the tests at a sampling frequency of 256 Hz, i.e. we receive 256 samples in 1 second. The large accelerometer vibrations correspond to the train on the bridge, while the small accelerometer readings correspond to the train entering and leaving the bridge.
If we divide 800 by 256 (about 3.1 seconds), we get the time duration from the instant the sensor actually detects the train to the instant the train reaches the bridge. Hence, we are left with only about 3 seconds if the master sensor responsible for waking up all the other sleeping sensor nodes is placed on the railway bridge itself. Wireless nodes available today may not be awakened so quickly and put into synchronized sampling/streaming mode. So, in the interest of proper detection and proper wake-up of all sensors, we propose to put two master sensor nodes 300-400 m ahead of the bridge on the railway track in the direction of the incoming train. The reason behind placing the two master sensor nodes 100 m apart is to make the system robust by not allowing the whole system to be falsely triggered if only one of the two master nodes gets triggered due to some external agent. Considering the minimum and maximum speeds of trains in Indian Railways (59-140 km/h, i.e. 17-39 m/s, as stated earlier), we observe that we have 7.7 s to 17.6 s from the instant the train crosses the 2nd master sensor to the instant the train reaches the bridge. This is the reason for the selection of the distances of 300 m and 400 m for the placement of the master nodes. The train will pass from one master sensor to the other within a time interval of 2.8 s to 5.8 s. Our algorithm will check whether the moving averages of both sensor signals lie within their threshold bounds within this interval and trigger the system if this criterion is satisfied. This will again reduce the chances of false triggering.

In a scenario where up-line and down-line trains pass over the same railway track through the bridge, 2 pairs of master nodes will be installed, one at each end of the bridge. Each pair will take care of the sleeping and waking of the sensors. The pair of master sensors present at the side of the incoming train will wake up the child sensors, and the other pair placed at the other end of the bridge will take care of putting the child sensors into sleep mode when no relevant data are being sampled by the child sensors. The two pairs of master sensors will exchange their respective roles when the train comes from the opposite side. The direction from which the train approaches the bridge can easily be found out by detecting which master sensors receive the excitation signal first.

The proposed algorithm uses an exponential moving average for smoothing the data and capturing the trend unique to train-induced vibrations. The amplitude of the EMA will rise significantly only when sustained vibrations occur. All noise signals have different trends, and they do not follow the trend of the train signal. In our proposed scheme, the upper and lower bounds of the threshold are fixed by taking a 10% margin of safety above and below the maximum and minimum EMA signals of the recorded trains, respectively.
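The detection rule just described can be summarised in a short sketch. The smoothing factor, calibration traces, and threshold values below are illustrative assumptions, and the sensor and base-station interfaces are represented by plain functions rather than any particular vendor's API.

# Minimal sketch of the proposed detection rule (illustrative values only):
# 1) calibrate upper/lower EMA bounds from recorded train runs with a 10% margin,
# 2) flag a master sensor when >= 90% of a 128-sample EMA window lies inside the bounds,
# 3) declare "train approaching" when both master sensors are flagged within the
#    time window implied by the expected train speed range.
import numpy as np

WINDOW = 128        # samples per decision window
ALPHA = 1.0 / 128   # EMA smoothing factor (assumed)

def ema(signal, alpha=ALPHA):
    out = np.empty(len(signal), dtype=float)
    s = float(signal[0])
    for i, x in enumerate(signal):
        s = alpha * x + (1.0 - alpha) * s
        out[i] = s
    return out

def calibrate_bounds(train_runs, margin=0.10):
    # Bounds from the peak EMA of each recorded train run, widened by 10%.
    peaks = [ema(np.abs(np.asarray(run))).max() for run in train_runs]
    return min(peaks) * (1.0 - margin), max(peaks) * (1.0 + margin)

def window_triggered(window_ema, lo, hi, frac=0.90):
    # A master sensor is flagged when >= 90% of the EMA window lies in the band.
    inside = (window_ema >= lo) & (window_ema <= hi)
    return inside.mean() >= frac

def train_detected(t_flag_sensor1, t_flag_sensor2, min_gap=2.8, max_gap=5.8):
    # Both master sensors must be flagged, separated by a travel time (seconds)
    # consistent with the expected train speed range.
    if t_flag_sensor1 is None or t_flag_sensor2 is None:
        return False
    return min_gap <= abs(t_flag_sensor2 - t_flag_sensor1) <= max_gap

# Example with synthetic calibration runs (three fake acceleration traces):
runs = [np.random.default_rng(k).normal(1.5, 0.2, 5000) for k in range(3)]
lo, hi = calibrate_bounds(runs)
live_ema = ema(np.abs(np.asarray(runs[0])))
print(window_triggered(live_ema[-WINDOW:], lo, hi))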
Selection Criteria for Exponential Smoothing Average

Moving averages are popularly used for smoothing and forecasting trends in a time series. They are commonly applied in the stock markets, where a trend can be extracted from fluctuating prices. A moving average also smooths the data and reduces the effect of noise in the signal. Some of the most commonly used time-series smoothing methods are the simple moving average (SMA), the exponential moving average (EMA) and the double exponential moving average (DEMA). When any moving average is applied to the absolute value of the train signals, it shows a trend unique to them, as shown in the simulation results, although some variation is observed due to different loading characteristics and different train speeds.

Simple Moving Average (SMA)

The simple moving average (or arithmetic mean) is the unweighted mean of the previous n data points:

SMA_t(n) = (x_t + x_{t-1} + ... + x_{t-n+1}) / n,

where SMA_t(n) is the simple moving average of window size n at time t and the index k runs from 1 to n over the previous samples x_{t-k+1}. There are, however, two problems associated with the SMA [37]. (i) The SMA only considers the data included in the selected window (e.g., an SMA with a window size of 10 takes into account only the last 10 samples and simply ignores all samples prior to that window). (ii) The SMA allocates equal weights to all the data in the selected window, whereas it is generally argued that recently sampled data should carry more weight than older data (the most recent observation should get a little more weight than the 2nd most recent, the 2nd most recent a little more than the 3rd most recent, and so on), which in turn reduces the average's lag behind the actual signal.

Exponential Moving Average (EMA)

The simplest form of the exponential moving average is given by

s_t = α · x_t + (1 − α) · s_{t−1},

where α is the smoothing factor, 0 < α < 1, x_t is the current observation and s_t is the current smoothed value. Thus, the current smoothed value s_t is an interpolation between the previous smoothed value s_{t−1} and the current observation. Here, α is a measure of how closely the interpolated value follows the most recent observation, and 1/α can be taken as the effective window size.

The EMA solves both problems associated with the SMA. Firstly, more weight is allocated to the recent data samples, thereby reducing the lag; the weights decrease exponentially towards the past samples. Secondly, every past observation still contributes to the current value, which reduces the chance of a sudden jump in the average due to noise. For these reasons, the EMA performs better than the SMA.

Simulation Results

All the simulations were performed in MATLAB 2013a. The simple and exponential moving averages were computed on the vibration data recorded previously by the master sensor placed on the railway track 300 m ahead of the bridge; this experimental data is shown earlier in Figure 8. The real-time data were recorded at a sampling rate of 256 Hz using an accelerometer sensor. Three different window sizes of 64, 128 and 256 were selected for both moving averages. The simulation results with variable window sizes for the moving averages are shown in Figure 9, Figure 10 and Figure 11.
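As a complement to the MATLAB simulations, the short Python sketch below contrasts a causal SMA and the EMA defined above on a synthetic vibration burst. The synthetic signal, the 0.5 G crossing level and the choice of α = 1/n are illustrative assumptions and do not reproduce the recorded bridge data.

```python
import numpy as np

def sma(x, n):
    """Causal simple moving average of the rectified signal (window size n)."""
    kernel = np.ones(n) / n
    return np.convolve(np.abs(x), kernel, mode="full")[: len(x)]

def ema(x, n):
    """Exponential moving average of the rectified signal with alpha = 1/n."""
    alpha, out = 1.0 / n, np.zeros(len(x))
    for t, sample in enumerate(np.abs(x)):
        out[t] = sample if t == 0 else alpha * sample + (1 - alpha) * out[t - 1]
    return out

# Illustrative comparison on a synthetic burst (not the recorded field data).
fs = 256                                   # Hz, matching the paper's sampling rate
t = np.arange(0, 10, 1 / fs)
burst = np.where((t > 4) & (t < 7), 2.0 * np.sin(2 * np.pi * 30 * t), 0.0)
signal = burst + 0.05 * np.random.randn(len(t))

for n in (64, 128, 256):                   # the three window sizes studied
    t_ema = np.argmax(ema(signal, n) > 0.5) / fs
    t_sma = np.argmax(sma(signal, n) > 0.5) / fs
    print(f"window {n:3d}: EMA crosses 0.5 G at {t_ema:.2f} s, SMA at {t_sma:.2f} s")
```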
Sensitivity and Specificity of Proposed Algorithm

The proposed algorithm has been tested by applying it to a set of experimentally collected train signals and noise signals. We computed four parameters, namely sensitivity or true positive rate (TPR), specificity or true negative rate (TNR), false positive rate (FPR) and false negative rate (FNR), from the confusion matrix shown in Table 2. The test set consisted of 50 train signals and 40 noise signals, i.e. 90 test cases in total. The elements of the confusion matrix are as follows:

- True positive: total number of train signals correctly predicted: 49
- False positive: total number of noise signals incorrectly predicted as train: 1
- True negative: total number of noise signals correctly predicted: 39
- False negative: total number of train signals incorrectly predicted as noise: 1

The parameters computed from the confusion matrix are:

- Sensitivity/True positive rate (TPR) = TP/(TP + FN) = 49/50 = 98%
- Specificity/True negative rate (TNR) = TN/(TN + FP) = 39/40 = 97.5%
- False positive rate (FPR) = FP/(FP + TN) = 1/40 = 2.5%
- False negative rate (FNR) = FN/(FN + TP) = 1/50 = 2%

Comparison with Existing Schemes

We performed simulations of existing detection algorithms on our experimentally collected dataset and compared their true positive rate, false positive rate and total computation time with our proposed algorithm. Table 3 depicts the comparison results. The higher true positive rate and lower false positive rate of our proposed scheme, as reported in Table 3, are due to the fact that our algorithm does not rely on a single threshold, unlike the other methods. Moreover, the use of upper and lower threshold bounds also increases the accuracy of detection. Techniques like region matching and envelope detection are inefficient at tracking the arrival of a train, since the envelopes of the train signals are hardly similar; they depend on variable factors such as the weight and speed of the train. Methods like fuzzy-logic-based event detection schemes are accurate, but their real-time implementation is not viable. Methods like ARIMA and artificial neural networks are again difficult to implement on low-cost hardware platforms. Hence, we have not provided any quantitative assessment of these methods.

Experimental Results and Observations

A series of field trials was conducted at Bridge No. 168 of the Narayangarh-Bhakrabad section, West Bengal, India. Master sensors were placed on the railway track 300 m and 400 m away from the bridge in the direction of the incoming train. 10 child sensors were placed on different members of the bridge for the collection of vibration signals. Figure 13 shows the placement of one master sensor on the railway track. Figure 14 shows the signal of a child sensor collected using our event detection system. Three observations can be made from the figure:

- t = −500 to t = 0: no signal is recorded by the sensor, i.e. the sensor is in sleep mode.
- t = 0: the sensor starts recording data. This shows that our event detection system had detected an incoming train and put the sensor into synchronized sampling mode for data collection.
- t = 2250: the sensor has stopped recording data and has gone back into sleep mode, as the train has left the bridge.

The above experimental results and observations validate our event detection system.
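The timeline above (sleep until t = 0, record until t = 2250, then sleep again) can be mirrored by a small state machine at the base station. The sketch below is a hypothetical illustration of that wake/sleep logic; the quiet-level and quiet-duration parameters are our own assumptions rather than values reported in the paper.

```python
SLEEP, SAMPLING = "sleep", "sampling"

class ChildSensorController:
    """Minimal base-station logic for waking and sleeping the child sensors.

    The wake decision is assumed to come from the master-sensor trigger
    described earlier; the sleep decision mimics the rule of waiting for the
    vibration at the leaving end of the bridge to decay to effectively zero.
    """
    def __init__(self, quiet_g=0.02, quiet_seconds=5, fs=256):
        self.state = SLEEP
        self.quiet_g = quiet_g                   # assumed "effectively zero" level in G
        self.quiet_needed = quiet_seconds * fs   # consecutive quiet samples required
        self.quiet_count = 0

    def on_train_detected(self):
        if self.state == SLEEP:
            self.state = SAMPLING                # wake-up command + synchronized sampling
            self.quiet_count = 0

    def on_leaving_end_sample(self, accel_g):
        if self.state != SAMPLING:
            return
        self.quiet_count = self.quiet_count + 1 if abs(accel_g) < self.quiet_g else 0
        if self.quiet_count >= self.quiet_needed:
            self.state = SLEEP                   # train has left; put child sensors to sleep
```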
Figure 15 shows a series of varying noise signals collected by the master sensor while a person was walking on the track (from t = 0 to t = 600) and hitting the track with a hammer, stones, etc. (from t = 601 to t = 1000). The exponential moving average computed on the noise signal is also shown in the figure. It is observed that the EMA of the noise signal lies between 0 and 0.05 G, whereas the lower threshold bound shown in Figure 12 is around 0.5 G (G is the unit of acceleration; G = 9.81 m/s2 in SI units). It is therefore clear that the lower threshold bound is never crossed by the EMA of the noise. During this experiment, the child sensors were not awakened at any point, showing that the algorithm is robust to noise signals.

Selection of EMA and Window Size (Effect on Time Delay)

From the graphs in Figures 9, 10 and 11, we can infer that the SMA lags behind the EMA, as described in Section 5.1; the graphs thus validate the use of the EMA in our proposed algorithm. We can also infer that 128 is the ideal window size when the sampling rate is 256 Hz. The curve is steepest for a window size of 64, and the slope gradually decreases with increasing window size, so the time lag is smallest for a window of 64 because the threshold is reached earliest. However, a window size of 64 means that the average is computed over only a quarter of a second (64/256 = 1/4). For a window size of 256, the time lag is maximum (1 second), which delays the decision; on the other hand, the average is then computed over a full second, which eliminates the possibility of false triggers. Considering these factors, we finalized 128 as the optimum window size, which preserves the accuracy of the event detection while keeping the time lag small. Thus the window size should be half of the sampling rate.

Selection of Upper and Lower Threshold Bounds

All train signals have a similar trend and generate similar vibration signals, as described in Section 4.1, but some subtle variations among the signals are present due to different loading conditions and the varying velocities of different kinds of trains. To fix the threshold, a series of train vibration signals was collected and the minimum and maximum values of the exponential moving averages of these signals were taken as the lower and upper threshold bounds, respectively. A 10% safety margin below the minimum and above the maximum was applied so that trains other than the observed ones can also be detected. Both the lower and upper threshold bounds are compared with the EMA of the incoming signal. The algorithm checks whether 90% of the EMA samples in any window of size 128 lie within these two bounds, and the system is triggered when this criterion is met. Even if the EMA of some noise signal momentarily lies within these bounds, it is unlikely that about 100 or more EMA values within a window of size 128 would continue to lie there.

Protection against False Trigger

The possibility of false triggering is also addressed by our proposed approach in two ways. On one hand, computing the exponential moving average ensures that events such as people walking on the track, cattle crossing the track, or a sudden impact do not cause the average to cross the lower threshold bound. On the other hand, the base station wakes up the sensors on the bridge only when a window of the moving average for both master sensors lies within the upper and lower threshold bounds in the predefined time interval. It is highly unlikely that a person or an animal would stand on, or cause an impact simultaneously on, two master nodes which are kept 100 m apart. Thus, our event detection scheme is highly accurate and resistant to false triggering.
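The threshold-bound rule described above can be expressed in a few lines. The sketch below assumes a set of EMA traces recorded from previous train passages; the synthetic traces used in the demonstration are placeholders, not field data.

```python
import numpy as np

def calibrate_bounds(train_ema_traces, margin=0.10):
    """Lower/upper threshold bounds from recorded train EMA traces (10% safety margin).

    train_ema_traces: iterable of 1-D arrays, each holding the EMA of one
    recorded train passage over the portion where train excitation is present.
    """
    lo = min(float(np.min(trace)) for trace in train_ema_traces)
    hi = max(float(np.max(trace)) for trace in train_ema_traces)
    return lo * (1.0 - margin), hi * (1.0 + margin)

# Illustrative only: three synthetic EMA traces standing in for recorded passages.
traces = [0.5 + 0.3 * s * np.abs(np.sin(np.linspace(0, np.pi, 512))) for s in (0.9, 1.0, 1.2)]
lower, upper = calibrate_bounds(traces)
print(f"lower bound = {lower:.2f} G, upper bound = {upper:.2f} G")
```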
Prolongation of Sensor Battery Lifetime

The proposed event detection scheme increases the service lifetime of the sensor nodes. We consider a specific scenario to evaluate the performance of our scheme. The battery (material: lithium polymer) capacity of the 3-axis (3-channel) accelerometer sensor node used in the experimentation is 250 mAh. The average current consumption and the corresponding battery life are as follows:

- Sampling mode (Mode 1): a sensor node sampling at 256 Hz with 3 active channels consumes 12.719 mA. Life of a fully charged battery if continuously operated in Mode 1: 250 mAh / 12.719 mA = 19.65 h.
- Sleep mode (Mode 2): a sensor node in sleep mode consumes 0.135 mA. Life of a fully charged battery if continuously operated in Mode 2: 250 mAh / 0.135 mA = 1851.85 h, i.e. about 77 days.

In general, 10 to 20 trains pass over a bridge in a day. With our scheme, the sensor wakes up, samples and transmits bridge vibration data for a duration of about 1 minute per train and then goes back to sleep. The daily consumption of a sensor node using our proposed scheme is therefore:

- Total time in Mode 1 (sampling) for 20 trains: 20 min = 0.33 h
- Total time in Mode 2 (sleep): 23 h 40 min = 23.66 h
- Total charge consumed in 1 day: 12.719 × 0.33 + 0.135 × 23.66 = 7.39 mAh

The life of a fully charged sensor node using our proposed scheme is thus 250 / 7.39 = 33.82 days = 811.78 h, an increase in the lifetime of the sensor node of 811.78 / 19.65, i.e. about 41 times.

Cost Effectiveness of Proposed Algorithm

Our proposed system is cost effective in two different respects. Firstly, apart from the data logger and the sensors, our system consists only of a low-cost (typically $10-$40) microcomputer/microcontroller. This device performs local processing of the sensor data and transmits it to a remote server for further analysis; it also accesses the real-time samples of the master sensors and runs the event detection algorithm. Secondly, our proposed algorithm is cost effective in terms of its time complexity: the total computation time is 0.006 s, as reported in Table 3. It can also be noted that the proposed algorithm, consisting of an exponential moving average and a comparison against a pair of threshold bounds, can be implemented on low-cost, constrained hardware platforms (with limited RAM and ROM).

Conclusion

The contribution of our work is relevant to the application of wireless sensor networks in railway bridge health monitoring. A cost-effective event detection system is proposed which maintains both accuracy and low delay, and reduces battery consumption by keeping the nodes in operation only when a train is on the bridge; the monitoring system thus becomes highly energy efficient. In future work, there is scope to train and update the threshold bounds according to the trains which run throughout the day.

This system can be used in other fields of application with some modifications. The concepts used in our proposed scheme may be employed for event detection using WSNs in other applications. The idea of differentiating the sensors into master and child nodes may be applied in other scenarios, where the master sensor is used for data recording and transmission to a data logger and the child sensors are awakened on the basis of these data. The combination of a simple technique such as the exponential moving average with dual threshold bounds can easily be reused in event detection algorithms for different application scenarios.

Figure and table captions (the corresponding images are not reproduced in this text):
Figure 1. Network Architecture of Wireless Sensor Network
Figure 3. Railway Bridge demonstrating event: train passing over bridge
Figure 4. Instrumentation Layout of Railway Bridge
Figure 6. Snapshots of sensors located on the bridge members
Figure 7. Network topology of proposed scheme showing star type connectivity between data logger, master sensors and child sensors
Figure 8. Acceleration obtained by accelerometer during passage of a train
Figure 12. Exponential Moving Average for 5 Train running instances and 1 Noise Signal
Table 1. Details of experimental setup for collection of sensor data at sample rate 256 Hz
Table 2. Confusion Matrix for Event Detection
Table 3. Comparison of Proposed Method with Existing Schemes
v3-fos-license
2018-04-27T04:32:21.546Z
2018-02-12T00:00:00.000
4897927
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/omcl/2018/8267560.pdf", "pdf_hash": "cba2aec7a13d3158835e2641898cbaf7f2e6bf59", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43688", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "a53b9a4417f03cc8b5dbe21bf22053fdd18146ca", "year": 2018 }
pes2o/s2orc
Noninvasive Real-Time Characterization of Renal Clearance Kinetics in Diabetic Mice after Receiving Danshensu Treatment Danshensu (DSS) is an active ingredient extracted from the root of the Danshen that could ameliorate oxidative stress via upregulation of heme oxygenase- (HO-) 1. Little is known about the treatment effects of DSS on kidney function in diabetic mice. Therefore, the primary aim of the present study was to characterize the renal clearance kinetics of IRdye800CW in db/db mice after DSS treatment. The secondary aim was to measure several biomarkers of renal function and oxidative stress (urinary F2-isoprostane, HO-1 in kidney and serum bilirubin). Fourteen db/db diabetic mice were randomly assigned into two groups and received either DSS treatment (DM + DSS) or vehicle treatment (DM). A third group that comprised of db/+ nondiabetic mice (non-DM control) received no DSS treatment and served as the nondiabetic control. At the end of a 3-week intervention period, serum and urinary biomarkers of renal function and oxidative stress were assessed and the renal clearance of IRdye800CW dye in all mice was determined noninvasively using Multispectral Optoacoustic Tomography. The major finding from this study suggested that DSS treatment in db/db mice improved renal clearance. Increased expression of HO-1 after DSS treatment also suggested that DSS might represent a potential therapeutic avenue for clinical intervention in diabetic nephropathy. Introduction The hyperglycemic and hyperinsulinemic conditions in diabetes are major risk factors promoting lipid peroxidation [1][2][3] and impair kidney function [4][5][6]. Growing evidence indicates that heme oxygenase-(HO-) 1 and unconjugated bilirubin are potent antioxidants with therapeutic potential in diabetes [7][8][9]. Many bioactive compounds extracted from natural medicinal herbs/fruits, including Danshensu (DSS) and Paeonol, may hold beneficial antioxidant and antiapoptotic effects, mediated via activation of factor-erythroid 2-related factor 2 (Nrf2)/HO-1 signaling [10]. DSS, an active ingredient extracted from the root of the Danshen (Salvia miltiorrhiza), has been used for the treatment of cardiovascular disease [11,12]. Also, the renoprotective effect of DSS has previously been linked with the suppression of oxidative stress [13], inflammation, and fibrosis [14], in addition to a reduction in lipid peroxidation by scavenging free radicals and preventing thiol oxidation [15,16]. Moreover, the combined prescription of DSS with Rheum rhabarbarum is a well-recognized, effective, and safe traditional Chinese medicinal regimen for treating chronic kidney disease [17] and suppressing oxidative stress [18,19]. Insulin glomerular filtration rate currently represents the gold standard assessment method of renal function. However, with recent advances in photoacoustic imaging, assessment of renal function in small animals (including the assessment of IRdye800CW renal clearance) can now be determined noninvasively using Multispectral Optoacoustic Tomography (MSOT). MSOT is an emerging technique that captures photoacoustic signals from chromophoric spectra or molecules that are distributed within tissues [20]. With the development of new imaging probes [21], photoacoustic imaging has now been applied to visualize the anatomy, function, and blood oxygenation in different organs [22,23]. Yet, the assessment of DSS on renal clearance kinetics in a diabetic mice model has not been investigated to date. 
Inadequate HO-1 expression has been demonstrated in obese diabetic mice [24], and the systemic induction of HO-1 can improve insulin sensitivity, decrease inflammatory cytokine expression, and increase circulating adiponectin [25,26]. Also, the induction of HO-1 within renal structures normalized blood pressure, protected against oxidative injury, and consequently improved renal function in spontaneously hypertensive rats [27]. Bilirubin is generally considered as the by-product of heme catabolism. However, new evidence suggests that it may also possess physiological significance. Despite the uncertainty of its physiological importance, unconjugated bilirubin has demonstrated potent antioxidant capacity in vitro and ex vivo [28,29]. An argument for a physiological role of bilirubin is further supported by reduced bilirubin concentrations in patients who had chronic kidney disease [30]. Similarly, individuals with elevated serum bilirubin have decreased prevalence of kidney complications in diabetes [9]. These findings, therefore, support that HO-1 and bilirubin might protect the kidney from oxidative stress by acting as an antioxidant [31][32][33]. The abrogation of Nrf2/HO-1-dependent signaling cascade has been largely implicated in chronic/acute kidney injury, cardiac/endothelial dysfunction, and cerebral ischemia [34]. Many researchers have demonstrated that DSS-mediated tissue protection against chronic kidney disease occurs via cytoprotective and prosurvival Nrf2/HO-1 and PI3K/Akt signaling pathways [10,35]. However, whether overexpression of HO-1 is implicated in the DSS treatment effect in diabetic renal function remains unknown. In this regard, the present study aimed to (1) characterize the renal clearance kinetics of IRdye800CW dye in db/db mice after DSS treatment and (2) quantify the expression of several biomarkers for renal function and oxidative stress in db/db mice with and without DSS treatment. Materials and Methods 2.1. Animals and Intervention. Female 10 wk old diabetic homozygous (db/db) mice and nondiabetic heterozygous (db/+) mice on a C57BLKS/J background were housed in the Central Animal facilities, Hong Kong Polytechnic University, in a 12 h light/dark cycle and under tight control of temperature and humidity. The db/db homozygotes exhibit persistent hyperphagia and obesity with spontaneously developed elevated leptin, glucose, and insulin concentrations [36]. All mice received regular laboratory chow and tap water ad libitum during the study. After 1 week of acclimation, all diabetic mice were randomly divided into two groups (n = 7/group): DM and DM + DSS, while heterozygote nondiabetic mice (n = 6) were assigned to a non-DM control group. During the intervention period of 3 weeks, all mice were treated according to the following schedule: the non-DM control group received no treatment, the DM group received i.p. vehicle treatment while the DM + DSS group received DSS (HPLC ≥ 98%, dissolved in water, Nanjing Zelang Pharmaceutical Technology Co. Ltd.) at a dose of 10 mg/kg i.p. daily. The kidney absorption level of DSS was found to be at around 69 μg/g of tissue via i.p. method [37]. Experimental protocols were performed in accordance with the approved license granted under the Department of Health and approved by the Animal Subjects Ethics Sub-Committee (ASESC) of Hong Kong Polytechnic University. 2.2. Fasting Glucose, Body Weight, and Urinary Samples. 
At the start and the end of the study, fasting blood glucose was assessed using a glucometer (Bayer Contour TS), and the body weight of the mice was assessed using an electronic scale. Daily urinary samples were collected for four days before the end of the study using individual metabolic cages, for the determination of F2-isoprostane (IsoPs), microalbumin, and creatinine excretion of each mouse. The 24 hr urinary concentration of IsoPs was determined by a commercial ELISA method (item number 516351, Cayman Chemical, Ann Arbor, Michigan, USA), while the levels of urinary albumin and creatinine were determined using a clinical chemical analyzer (AU480; Beckman Coulter, Brea, CA, USA).

2.3. Serum and Kidney Samples. After overnight fasting, the mice were sacrificed and blood samples were collected via cardiac puncture at the end of the study. Serum concentrations of creatinine and bilirubin (total, conjugated, and unconjugated) were assessed using clinical chemistry (AU480; Beckman Coulter, Brea, CA, USA). The concentration of fasting serum insulin was assayed by a commercial ELISA method (catalogue number 32270; Li Ka Shing Faculty of Medicine, the University of Hong Kong, Hong Kong). The cortex of the kidney was carefully dissected for the analysis of HO-1, p-Akt, and t-Akt expression using western blot. The total protein concentration was determined using a Bio-Rad Protein Assay Kit II (Bio-Rad, catalog number 500-0002). The blots were incubated with primary antibodies overnight, including HO-1 antibody (Cell Signaling Technology, Beverly, MA, USA), pan-Akt (Cell Signaling Technology, Beverly, MA, USA), and Phospho-Akt Thr308 (Cell Signaling Technology, Beverly, MA, USA). After washing, blots were incubated with horseradish peroxidase- (HRP-) conjugated secondary antibody (Santa Cruz Biotechnology). Finally, protein expression was determined by a microplate reader (Bio-Rad Laboratories, Richmond, CA) and quantified using ImageJ software (IJ 1.46r).

2.4. Measurement of Renal Clearance of IRdye800CW Dye Using MSOT. On the last day of the intervention period, all mice were anesthetized using isoflurane in oxygen [3-4% per liter of 100% oxygen for induction and 1.5% per liter of 100% oxygen during maintenance], with hair removed from the chest to the lower abdomen as per previously published experimental protocols [38]. In brief, mice were placed into a water chamber within the MSOT (inVision 128 MSOT imaging system, iThera Medical, Munich, Germany) in a prone position, and the kidney region was then scanned continuously at a rate of 10 Hz using a multispectral protocol for 10 minutes after injection of 200 μl (20 nmol in 0.9% saline) of IRdye800CW (LI-COR, USA) via the tail vein (Figure 1). IRdye800CW is a small molecule that is rapidly excreted by the kidneys in unmetabolised form [39]. After multispectral decomposition of the IRdye800CW signals over the anatomical background, the time points of the mean peak signal intensity (Tmax) over the renal cortex and renal pelvis regions of the right kidney were determined, and the time difference between Tmax-Pelvis and Tmax-Cortex was calculated as the "Tmax delay," which represents the efficiency of IRdye800CW dye clearance [38].
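For readers who wish to reproduce the readout, the following sketch outlines how the Tmax delay could be computed from frame-by-frame mean ROI intensities exported from the MSOT system. The function name, the optional smoothing step, and the input arrays are illustrative assumptions and not part of the published acquisition protocol.

```python
import numpy as np

def tmax_delay(cortex_signal, pelvis_signal, frame_rate_hz=10.0):
    """Tmax delay (s): time of peak mean ROI intensity in the pelvis minus that in the cortex.

    cortex_signal, pelvis_signal: 1-D arrays of the mean IRdye800CW signal in the
    renal-cortex and renal-pelvis regions of interest, one value per MSOT frame.
    A light moving-average smoothing is applied before locating the peak; this is
    our own choice, not part of the published protocol.
    """
    def smooth(x, n=5):
        return np.convolve(x, np.ones(n) / n, mode="same")

    t_cortex = np.argmax(smooth(np.asarray(cortex_signal, dtype=float))) / frame_rate_hz
    t_pelvis = np.argmax(smooth(np.asarray(pelvis_signal, dtype=float))) / frame_rate_hz
    return t_pelvis - t_cortex
```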
2.5. Statistical Analyses. The assumptions of normality and homogeneity of variance were first assessed. ANOVA with multiple post hoc LSD adjustments or the Kruskal-Wallis H test with multiple post hoc Dunn adjustments was used to compare differences among the three groups, where applicable. Paired t-tests were used to test for significant differences between the start and end fasting glucose concentrations in each group. All data are expressed as means ± SD. All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) version 22 for Windows, and the significance level was set at p < 0.05.

Results. A summary of all measured variables collected from serum, urine, and MSOT in the present study can be found in Table 1 (in Table 1, all data are presented as mean ± SD; * denotes p < 0.05 compared to the non-DM control group and # denotes p < 0.05 compared to the DM group). The results indicated that all db/db mice exhibited hyperglycemia and hyperinsulinemia and were more obese (Figure 2) when compared to db/+ mice at baseline and after 3 weeks. However, the fasting insulin concentration at the end of the study in the DM group (3.50 ± 1.14 nmol/l) was significantly greater when compared to the DM + DSS (2.22 ± 1.02 nmol/l, p = 0.035) and non-DM control (0.37 ± 0.16 nmol/l, p = 0.007) groups, suggesting that DSS treatment might improve insulin resistance in the db/db mice. On the contrary, there was no significant change in fasting glucose or body weight between baseline and 3 weeks in any group, except that the fasting blood glucose concentration tended to increase in the DM + DSS group (from 19.60 ± 1.35 mmol/l at baseline to 24.97 ± 2.39 mmol/l after 3 weeks, p = 0.098) (Figure 3).

DSS Treatment Failed to Reduce ACR and Serum Creatinine Level but Improved the Tmax Delay (Renal Clearance) in db/db Mice. Both the DM and DM + DSS groups demonstrated an increased urinary albumin : creatinine ratio (ACR) (Figure 4(a)) and serum creatinine (Figure 4(b)) when compared to the non-DM control group, which is consistent with a previous study [40]. From the graphs shown in Figure 5, the Tmax delay determined by MSOT was longer in a db/db mouse without DSS treatment (Figures 5(a) and 5(c)) when compared to another db/db mouse with DSS treatment (Figures 5(b) and 5(d)). Collectively, the mean value of the Tmax delay was significantly longer in the DM group when compared to the DM + DSS (p = 0.001) and non-DM control (p < 0.001; Figure 4(c)) groups, suggesting improved renal clearance after DSS treatment in the DM + DSS group.

DSS Treatment Did Not Increase Serum Bilirubin or Significantly Reduce Urinary F2-Isoprostane Concentrations in db/db Mice. In the present study, the total bilirubin, unconjugated bilirubin, and conjugated bilirubin levels in the three groups (Figure 6) were similar, and the result was comparable to a previously reported study [41]. Urinary IsoP concentrations were higher in the diabetic groups than in the non-DM controls but were not significantly reduced by DSS treatment (Figure 7).

Upregulation of HO-1 Expression in the Kidney of Diabetic Mice after 3 Weeks of DSS Treatment. Finally, we analyzed the renal cortex for expression of HO-1 and the p-Akt/t-Akt ratio. Significantly increased expression of HO-1 (~2-fold) was noted in the DM + DSS group when compared to the DM (p = 0.029) and non-DM control (p = 0.016) groups (Figure 8(a)). Although the p-Akt/t-Akt ratio was also significantly increased (~3-fold, p = 0.011) in the DM + DSS group when compared to the non-DM group, the mean difference of the p-Akt/t-Akt ratio between the DM + DSS and DM groups remained insignificant (p = 0.125; Figure 8(b)). The corresponding western blot data of HO-1 and AKT are presented in Figure 8(c).

Discussion. DSS Treatment and Diabetic Status.
db/db mice spontaneously develop hyperinsulinemia due to mutation in the leptin receptor, which leads to impaired function of beta cells of the pancreatic islets. At 4 weeks of age, hyperglycemia, hyperinsulinemia, and insulin resistance are observed [42]. In the present study, although there was no observable change in the fasting glucose level in diabetic mice after DSS treatment, fasting insulin concentrations in the DSS treatment group was decreased when compared to nontreated diabetic group. This finding agreed with a previously published study [13], suggesting the possibility of improved insulin sensitivity mediated by DSS. DSS Treatment and Renal Clearance. Significant reduction in renal function was evidenced in diabetic mice of the present study, as indicated by higher ACR and serum creatinine when compared to the nondiabetic group. However, the DSS antioxidant treatment failed to ameliorate the serum creatinine level, probably due to the difference in the injection approach and hence a lower daily effective dosage of DSS employed in the present study when compared to other published studies [43,44]. ACR and serum creatinine are conventional and clinically relevant parameters for the assessment of kidney function and are significantly correlated with oxidative stress due to inactivation of NO [45]. However, proteinuria and changes in circulating creatinine concentrations or clearance have their limitations in regard to sensitivity and are typically modulated in moderate and late stages of renal disease. Therefore, we applied a novel, noninvasive measurement of renal clearance kinetics to determine the impact of DSS on renal function in diabetic animals, using the same methodology suggested by Scarfe's group [38]. This noninvasive examination technique provides a clear, sensitive, and specific optical signal from the target tissue with the utilization of IRDye800CW. Our results of Tmax delay in our diabetic mouse model were similar to the previous work that studied the acute effect of adriamycin-induced nephropathy on Tmax delay [38]. However, our results on Tmax delay have the following limitations. Firstly, it should be noted that Tmax delay mainly assesses the hyperfiltration of IRdye800CW and does not account for tubular reabsorption of metabolites in the kidney and variations in hourly production of creatinine. Second, IRdye800CW could bind to plasma proteins and lead to underestimation of the true "Tmax delay" in the present study [38]. DSS Treatment and Lipid Peroxidation. DSS treatment was previously reported to ameliorate oxidative stress and lipid peroxidation via Akt/Nrf2/HO-1 [46,47]. Lipid peroxidation is elevated in patients with diabetes, especially in those with increased HbA1c, LDL cholesterol, total cholesterol, and triglycerides [48]. In obese and diabetic patients, the accumulation of lipids and advanced glycation end products in plasma or organs represents an important source of lipid peroxidation, which further leads to DNA damage, protein/enzyme oxidation, and release of proinflammatory cytokines [49][50][51]. Many previous studies have shown that urinary IsoPs are a reliable biomarker of lipid peroxidation and could act as an indicator of oxidative stress [52,53]. In the present study, diabetic mice exhibited higher levels of urinary IsoPs when compared to nondiabetic controls, which agrees with previous findings [54,55]. However, the 3-week period of DSS treatment failed to significantly reduce the urinary IsoP concentration in db/db mice. 
At present, only a few studies have investigated the effect of Salvia miltiorrhiza (containing DSS) treatment on IsoPs [56,57], with most results indicating that DSS-containing herbs could attenuate IsoPs in nondiabetic murine models. Therefore, our study is the first report to investigate the effect of DSS specifically on IsoP in db/db mice. DSS Treatment and HO-1 Expression. We postulated that DSS is a potential druggable adjuvant in ameliorating diabetic nephropathy via induction of HO-1 synthesis. Previous studies have indicated that the HO system may act as a crucial mediator of cellular redox homeostasis by degrading heme, generating the antioxidant bilirubin, and releasing free iron (bound by ferritin) especially in the renovascular system [27,58,59]. Through activation of the nuclear factorerythroid 2-related factor-2-(Nrf2-) targeting antioxidant response element (ARE)/heme oxygenase-1 (HO-1) signaling cascade, DSS has attenuated acute kidney injury [35]. The induction of HO-1 further activated adiponectin synthesis/ release, which in turn improved cellular redox status, diminished apoptotic signaling kinase-1 expression, and protected from oxidative stress via activating p-Akt/Akt signaling [59][60][61]. In the present study, although DSS treatment was associated with increased expression of HO-1 in the kidney of db/db mice when compared to DSS-treated db/db mice, the levels of total and unconjugated bilirubin in the blood were only mildly elevated, suggesting an argument against HO-1-mediated protection via bilirubin in our diabetic mice model. According to a previous study, downregulation of Akt could attenuate the antioxidant effects of HO-1 [62]; however, our data demonstrate that DSS could only mildly elevate the p-Akt : t-Akt ratio. Therefore, the failure of DSS-induced overexpression of bilirubin and Akt suggests other key players might be involved in mediating the beneficial effects of HO-1, such as carbon monoxide (CO) production or improved heme clearance. In this context, further studies on the effect of DSS treatment on CO production and heme clearance are warranted. This study has several limitations. First, our team failed to collect enough blood for the baseline measurement of all selected biomarkers in the present study. Therefore, we only measured the serum fasting glucose at baseline, which required minimal blood volume. Second, the tail vein cannot be recovered within 3 weeks after the injection of dye; therefore, we could not complete the baseline measurement of renal clearance. In summary, this study suggests that DSS might represent a potential viable preventative/treatment worthy of further investigation in patients with, or at risk of developing, diabetic nephropathy. Although HO-1 is known to ameliorate diabetic nephropathy [63], its effect in db/db mice remained poorly understood. In the present study, DSS treatment significantly improved renal clearance in db/db mice and was associated with upregulation of HO-1/Akt signaling pathways. However, the exact mechanism concerning how DSS mediates HO-1 activity and preserves renal physiological function remains unknown and requires further study. Conflicts of Interest All the authors declare no conflict of interest. Supplementary Materials A video showing the signal intensity of IRdye800CW accumulated in the kidneys of a mouse after postinjection of IRdye800CW for 15 minutes. 
The region of interest in blue represents the renal cortex of the right kidney while the region of interest in purple represents the renal pelvis. The strength (in arbitrary units) of the photoacoustic signals from both the anatomical background and IRdye800CW dyes were denoted as different grey scale and green color on the right side of the panel. (Supplementary Materials)
v3-fos-license
2023-06-04T15:05:55.183Z
2023-06-01T00:00:00.000
259052812
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3390/uro3020017", "pdf_hash": "7aab8309d110c023da131f0f49fe019a3ecdba4a", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43689", "s2fieldsofstudy": [ "Medicine" ], "sha1": "63492ab12b4877997f258c7b213f59612f92844d", "year": 2023 }
pes2o/s2orc
Current Evidence on the Use of Hyaluronic Acid as Nonsurgical Option for the Treatment of Peyronie's Disease: A Contemporary Review

Peyronie's disease is a condition characterized by the formation of fibrous plaques in the tunica albuginea, which can cause pain, curvature, and erectile dysfunction. Preclinical studies have demonstrated the potential benefits of hyaluronic acid in treating Peyronie's disease, including antifibrotic, anti-inflammatory, and proangiogenic effects, although more research is needed to fully understand its mechanisms of action. Clinical studies have shown promising results, with hyaluronic acid injections leading to improvements in plaque size, penile curvature, and erectile function, and being well tolerated by patients. The findings suggest that HA injections could be a viable and safe treatment option for Peyronie's disease, particularly in the early stages of the disease. However, more research is needed to determine the optimal dosage and treatment duration for HA injections, and to confirm its efficacy in the stable phase of Peyronie's disease. Overall, hyaluronic acid is a potentially effective therapy for Peyronie's disease, with the ability to inhibit fibrosis and promote angiogenesis, and low risk of adverse effects, making it an attractive option for patients who are unable or unwilling to undergo surgery.

Introduction

Peyronie's disease (PD) is a connective tissue disorder that affects up to 10% of men worldwide, with a peak incidence between the ages of 45 and 60 years [1]. PD is characterized by fibrous plaque formation in the tunica albuginea of the penis, which leads to penile curvature, deformity, and penile pain during erections. The exact etiology of PD is still unclear, although it is thought to involve a combination of genetic, vascular, and mechanical factors [2]. Regardless of the etiology involved, there is currently no definitive cure for PD, and the available treatment options are limited and sometimes associated with adverse effects. Surgery is usually considered as a last resort for patients with severe PD and significant functional impairment [3]. Nonsurgical treatments for Peyronie's disease include a variety of options, such as oral antioxidant therapies or intralesional injections. Antioxidants, such as vitamin E and coenzyme Q10, have been suggested to potentially reduce oxidative stress and inflammation, which are believed to play a role in the progression of Peyronie's disease. However, there is limited research specifically addressing the use of antioxidants in Peyronie's disease [4]. Regarding intralesional injections, collagenase clostridium histolyticum (CCH) is an example of a nonsurgical option that has gained attention in recent years [5]. CCH is an enzyme that breaks down collagen, the protein responsible for the fibrous plaques that develop in Peyronie's disease. It is injected directly into the plaque and has been shown
It plays an important role in maintaining the structural integrity and function of the extracellular matrix by regulating cell proliferation, differentiation, and migration [7]. HA has also been shown to have anti-inflammatory, antioxidant, and analgesic effects, which may be beneficial in the management of PD [8]. Despite the potential benefits of HA in PD, the current evidence on its efficacy and safety is still limited and inconclusive. Several preclinical and clinical studies have investigated the use of HA in PD, creating new perspectives for the nonsurgical treatment of PD [9]. For example, a recent review by Schifano et al. (2021) reported that although some studies have shown promising results with HA for PD, the overall quality of the evidence is low and more high-quality studies are needed to establish its efficacy [10]. In this review, we aim to critically evaluate the current evidence on the use of HA in the management of PD. By synthesizing the available data, we hope to provide valuable insights into the potential use of HA as a therapeutic option for PD, evaluate other nonsurgical therapies for different phases of PD, and identify the key research gaps and future directions in this area. Materials and Methods This narrative review was conducted by searching electronic databases (PubMed, Embase, and Cochrane Library) using the following keywords: "Peyronie's disease", "hyaluronic acid", "intralesional injection", "erectile function", "penile curvature", and "sexual satisfaction". Only studies published in English from 1990 to 2023 were included. The search yielded a total of 35 studies, and their titles and abstracts were screened to exclude irrelevant studies. The full text of the remaining studies was then reviewed to identify relevant studies that met the inclusion criteria. In addition, reference lists of relevant studies and review articles were searched for additional studies that were not identified through the electronic search. Data from the included studies were extracted and synthesized in a narrative format. Mechanisms of Action of Hyaluronic Acid in Peyronie's Disease HA is a naturally occurring glycosaminoglycan that has been found to play a role in tissue repair and regeneration. In PD, HA has been proposed as a potential treatment option due to its ability to modulate the inflammatory response and promote tissue healing. This section will review the proposed mechanisms of action of HA in PD, including its effects on fibrosis, inflammation, and angiogenesis. Several studies have suggested that HA can inhibit fibrosis by suppressing the production of collagen and other extracellular matrix components [11]. Fibrosis is a pathological process characterized by the excessive deposition of extracellular matrix (ECM) proteins, including collagen, in response to tissue injury or chronic inflammation. It can lead to the formation of fibrotic plaques in the tunica albuginea of the penis, resulting in penile curvature and erectile dysfunction in PD patients. The molecular pathways underlying the inhibitory effects of HA on fibrosis are not fully understood, but some studies have proposed several mechanisms. HA has been shown to interact with cell surface receptors, such as CD44 and RHAMM, to modulate cellular functions and signaling pathways involved in fibrosis [12]. Moreover, HA can inhibit the TGF-β/Smad signaling pathway, which is a major pathway involved in the production of ECM proteins, by downregulating the expression of TGF-β receptors and Smad proteins [13]. 
Regarding the anti-inflammatory effect, HA is involved in two basic mechanisms that determine its biological functions. Firstly, it acts as a structural molecule by modulating the tissue hydration, osmotic balance, and physical properties of ECM, where it creates a hydrated and stable space for the maintenance of cells, collagen and elastin fibers, and other ECM components. Secondly, HA acts as a signaling molecule when interacting with its binding molecules. The effects of HA are dependent on its molecular weight, location, and specific cell factors such as receptor expression, signaling pathways, and cell cycle. HA and its associated proteins can promote or inhibit inflammation, cell migration, activation, division, and differentiation, depending on these factors [14]. In addition to its effects on fibrosis and inflammation, HA has been shown to promote angiogenesis. The mechanism is not fully understood, but several studies have suggested that it may involve the upregulation of growth factors and cytokines that stimulate angiogenesis. One possible mechanism is the activation of vascular endothelial growth factor (VEGF), a potent angiogenic factor that promotes the growth of new blood vessels [15]. Clinical Studies on the Use of Hyaluronic Acid in Peyronie's Disease Clinical studies on the use of HA in PD have shown promising results. Five studies have been designed to evaluate the effect of HA in PD. Gennaro et al. assessed the efficacy of injectable HA as a local therapy for the acute phase of PD. A total of 83 PD patients received 30 penile infiltrations with 20 mg HA over the course of 6 months, while 81 PD patients were left without any therapy. Follow-up examinations were undertaken after the conclusion of therapy and 12 and 24 months later. All treated PD patients exhibited a reduction in plaque size, an improvement in penile curvature, and an improvement in penile stiffness, with an average rise of 21.1% in the IIEF score at the 12-month follow-up. The stability of these benefits was maintained during the 24-month follow-up. The authors found that intralesional injections of HA into the penile tissue are an effective treatment for PD [16]. Therefore, Zucchi et al. designed a prospective, single-arm, self-controlled, interventional, multicenter pilot study to evaluate the efficacy of intralesional injections of HA in patients with early phase of PD. Sixty-five patients received a ten-week cycle of weekly intraplaque injections of HA and were reassessed two months following the conclusion of treatment. The primary outcome measures were plaque size, penile curvature, the IIIEF-5 score, the VAS score for sexual pleasure, and the Patient's Global Impressions of Improvement (PGI-I) score. Post-treatment improvements in plaque size, penile curvature, IIEF-5 score, and VAS score were statistically significant. Total PGI-I questionnaire improvement was 69%. Intralesional therapy with HA can reduce plaque size, penile curvature, and overall sexual pleasure, and appears to be most appropriate in the early (active) phase of illness [17]. In 2017, another study aimed to compare the efficacy of intraplaque injection of verapamil (ILVI) and HA in treating early onset PD. Sexually active men aged 18 and above were randomly assigned to receive either ILVI or HA injections weekly for 12 weeks. The primary outcome measured was the change in penile curvature, with secondary outcomes including changes in plaque size and IIEF-5 score. 
The study found that there was no significant difference in plaque size or IIEF-5 score between the two groups. However, there was a significant decrease in penile curvature in the HA group compared to the ILVI group. Additionally, patients in the HA group reported a greater improvement in the PGI-I score. Overall, the study suggested that intralesional HA may be more effective than ILVI in treating PD in terms of penile curvature and patient satisfaction [18]. Cocci et al. compared the effectiveness and safety of intralesional injections of HA versus verapamil in patients with PD in the acute phase. A total of 244 patients were included in the study, with 125 receiving HA injections and 119 receiving verapamil injections. After 8 weeks of treatment, penile curvature decreased more significantly in the HA group than the verapamil group. Additionally, the HA group had greater improvements in the IIEF-15 score and VAS than the verapamil group. The study suggests that intralesional HA injections could be an effective and safe treatment option for patients with acute-phase PD [19]. Another Italian group presented the first prospective, randomized phase III clinical study comparing the efficacy of a combination of oral administration and intralesional injection of HA to intralesional injections alone in individuals with an active phase of PD. Two groups of patients were randomly assigned. Group A received the oral administration of HA in addition to weekly intralesional injections of HA for 6 weeks, while Group B only received weekly intralesional injections for 6 weeks. In comparison to Group B, Group A saw a much greater decrease in penile curvature and a greater improvement in IIEF-5 and PGI-I scores. The research finds that oral administration in conjunction with intralesional HA therapy is more effective at enhancing penile curvature and overall sexual pleasure than intralesional HA treatment alone [20]. Safety and Adverse Effects of Hyaluronic Acid for Peyronie's Disease As with any medical treatment, it is important to consider the safety and potential adverse effects of HA for the treatment of PD. Fortunately, HA has a well-established safety profile with minimal side effects. In fact, intralesional injection of HA may be considered at minimal risk of adverse events (AE). One of the most common adverse effects reported with HA injections is minor bruising or redness at the injection site, but these effects are also typically mild and resolve quickly [16]. These AE have been reported in only one study, whereas in the others, no significant adverse effects were reported nor have injection-site ecchymosis/hematomas been recorded [17][18][19][20]. There have been no reports of serious adverse events associated with the use of HA for PD. However, as with any medical intervention, there is a potential risk of infection. In order to minimize this risk, it is important to follow proper sterile techniques during the injection procedure [16][17][18][19][20]. Nonsurgical Alternatives to Hyaluronic Acid for Peyronie's Disease To date, many nonsurgical options are present and may be used. Interferon-2b intraplaque injections are one of the possibilities. Interferon-2b (IFN-2b) has been shown to decrease fibroblast proliferation and formation of collagen and other extracellular matrix (ECM) proteins by boosting collagenase levels and decreasing metalloproteinases, which inhibit collagenase [21]. 
These properties of IFN-α-2b have led to its widespread use in the treatment of hypertrophic scars, liver fibrosis, and other fibrotic conditions resulting from fibroblast dysregulation [22]. In a study by Stewart et al. [23], intralesional treatment with interferon-2b resulted in a greater than 20% decrease in penile curvature and a total response rate of 91%, regardless of the location of the PD plaque. Likewise, Trost et al. [24] observed equivalent results after intralesional injections of IFN-2b in patients with a curvature of less than 30 degrees, without noticing any changes in penile vascular parameters. Sokhal et al. reported in a prospective trial a substantial improvement in plaque volume and penile curvature after intralesional IFN-2b therapy [25]. Another enzyme that has been tested and proved effective in the last decade is collagenase clostridium histolyticum (CCH). CCH is a combination of class-I and class-II clostridial collagenases (AUX-I and AUX-II) that exhibit similar and complementary substrate specificity, making it effective in breaking down the fibrotic composition of PD plaques and collagen types I and III [26]. Two large phase III RCTs, the Investigation for Maximal Peyronie's Reduction Efficacy and Safety Studies (IMPRESS) I and II, conducted in 2010, demonstrated a significant improvement in curvature deformity and in the PD bother domain score of the PD questionnaire (PDQ) [27]. Consequently, in 2013, the Food and Drug Administration (FDA) approved the intralesional injection of CCH to treat adult patients with PD, a palpable plaque, and penile curvature ≥30° [28]. Since then, many studies have evaluated the efficacy of CCH injections and several injection protocols have been standardized, although no studies have been designed to compare these protocols [5,29,30]. Verapamil has been used for a number of decades, after its effectiveness was first shown in animals. It has been demonstrated that intralesional injection of verapamil into a rat model of Peyronie's disease results in a reduction in plaque size, penile curvature, and levels of collagen and elastin [31]. In the earliest human trial of intraplaque verapamil injection, later re-evaluated by other researchers, 14 participants were given injections every 2 weeks for a period of 6 months, with the dose eventually reaching 10 mg per injection. All of the participants reported softening of the plaque, and the penile constriction and curvature were both reported to have improved by 43 percent [32]. In a follow-up study of 38 men who had completed the full treatment course of 10 mg intralesional injections every other week, for a total of 12 injections, pain was eliminated in almost all cases. Additionally, 76% of the men had a subjective improvement in curvature, 72% reported improvement in their ability to engage in penetrative intercourse, and 54% of the men showed a measured reduction in penile curvature [33]. These encouraging findings led researchers to conduct a larger trial with a total of 140 male participants, each of whom received intralesional injections of 10 mg of verapamil. According to the findings of the study, 62% of the men had a reduction in curvature of 17 degrees or more, 83% had an improvement in penile narrowing, 80% saw an improvement in rigidity distal to the plaque, and 71% saw an improvement in sexual function [34].
On the other hand, the effects of numerous additional investigations with intralesional verapamil have not been proven to be as strong [35,36]. It is important to note that Cavallini and colleagues later found that the optimal concentration for verapamil was 10 mg in 20 mL of injectable solution, which maximized penile curvature improvement, the size of the plaque, IIEF, and pain [36]. Potential Novel Intralesional Treatments At this point in time, the focus is on mesenchymal stem cells (MSC), which are being closely examined. Because of the potential of MSC in reducing fibrosis, there has been an uptick in interest in investigating their use in the treatment of PD. In rat models of tunica albuginea fibrosis, a subtype of mesenchymal stem cells known as adipose-derived stem cells (ADSC) has been studied. The first animal trial to evaluate the efficacy of MSC treatment for PD comprised injecting ADSCs into the tunica albuginea. This led to a considerable improvement in erectile function, as well as an inhibition of the expression of type III collagen and elastin [37]. ADSCs with and without human IFN-b2b expression were injected into the tunica albuginea of a rat model for PD in a later investigation that was conducted by Gokce and colleagues. Regardless of whether or not IFN-b2b was present, the findings indicated a substantial improvement in erectile function as well as an attenuation of PD-like alterations [38]. In spite of these encouraging results in animal models, there is currently insufficient information regarding the efficacy of stem cell treatment for PD in people, which prevents its endorsement for clinical usage. Discussion Peyronie's disease is a debilitating condition characterized by the formation of fibrous plaques in the tunica albuginea, which can cause pain, curvature, and erectile dysfunction. While surgical intervention has traditionally been the curative option, the use of intraplaque injections of different substances has always been tested as a viable treatment option. One such substance is verapamil, a calcium channel blocker that has been shown to have antifibrotic and anti-inflammatory effects. Several studies have investigated the use of intralesional verapamil injections in patients with PD and have found it to be effective in reducing plaque size and improving penile curvature and erectile function. Another substance that has been investigated for intralesional injections is interferon alpha-2b, a cytokine that has been shown to have antifibrotic and immunomodulatory effects. Studies have suggested that intralesional interferon alpha-2b injections may lead to improvements in plaque size and penile curvature, as well as in penile pain and erectile function. Collagenase, an enzyme that breaks down collagen, has also been used in intralesional injections for PD. The idea behind this treatment is to break down the fibrous plaque that is causing the curvature and other symptoms. Several studies have shown that collagenase injections can lead to improvements in plaque size and penile curvature, as well as in penile pain and erectile function. The use of HA as a noninvasive alternative has gained attention in the last 30 years. Preclinical studies have demonstrated the potential benefits of HA in treating PD. Such results are promising, but there is still much to learn about its mechanisms of action. 
The antifibrotic, anti-inflammatory, and proangiogenic effects have been partially demonstrated; however, future studies should further investigate the molecular pathways involved in the inhibitory effects of HA on fibrosis. Regarding the anti-inflammatory effects, it is interesting to note that HA has both structural and signaling roles in the body, and these roles can be influenced by various factors such as molecular weight, location, and cellular factors. The ability of HA to modulate tissue hydration and maintain the physical properties of the ECM is important for the maintenance of healthy tissues. On the other hand, the signaling role of HA in inflammation and cell behavior highlights its potential for therapeutic applications in various conditions. However, more research is needed to elucidate the signaling pathways involved in HA's anti-inflammatory effects, as well as to optimize the delivery methods and the type of molecule that should be used for the treatment of PD. Not many clinical trials have been conducted on the use of HA in patients suffering from PD; however, all of them have shown promising results. The studies included patients who received intralesional injections of HA over varying periods of time and measured outcomes such as plaque size, penile curvature, and erectile function. Overall, the studies found that HA injections led to improvements in these outcomes and were well tolerated by patients. Additionally, two studies compared the effectiveness of HA injections with another treatment, verapamil, and found that HA injections were more effective in reducing penile curvature and improving patient satisfaction. One study also investigated the combined use of oral and intralesional HA and found that this approach was more effective than intralesional HA injections alone. Clearly, there are some limitations in the literature as a whole, in particular the sample sizes of the studies. Several studies included a relatively small number of participants, which may limit the generalizability of the results to the broader population of individuals with PD. Moreover, some studies lack control groups, which makes it difficult to determine whether the observed improvements were a result of the treatment or of other factors such as natural disease progression or placebo effects. In addition, there is considerable variability in treatment protocols: the studies used a wide range of protocols, with different dosages (when reported) and injection frequencies, which makes it difficult to compare results across studies or to determine the optimal treatment approach. The findings suggest that HA injections could be a viable and safe treatment option for PD, particularly in the early stages of the disease. Regarding the stable phase of PD, the absence of studies precludes any conclusions, and further research is needed to confirm these results and to determine the optimal dosage and treatment duration for HA injections, either in active or in chronic PD. Conclusions In conclusion, hyaluronic acid has shown promising results as a nonsurgical treatment option for Peyronie's disease, both in preclinical and clinical studies. Its ability to inhibit fibrosis and promote angiogenesis makes it a potentially effective therapy. Moreover, the low risk of adverse effects makes it an attractive option for patients who are unable or unwilling to undergo surgery.
While more research is needed to fully understand its efficacy and optimal dosage, the current evidence suggests that hyaluronic acid is a safe and effective treatment for Peyronie's disease.
v3-fos-license
2020-02-20T09:05:29.370Z
2020-02-01T00:00:00.000
211234148
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.cureus.com/articles/26583-treatment-of-a-large-skull-defect-and-brain-herniation-in-a-newborn-with-adams-oliver-syndrome.pdf", "pdf_hash": "d082c5112c5149a1d50c97c1303788335d05c877", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43690", "s2fieldsofstudy": [ "Medicine" ], "sha1": "5db98ebeb811de6f9fcef193f4818268334fffa9", "year": 2020 }
pes2o/s2orc
Treatment of a Large Skull Defect and Brain Herniation in a Newborn With Adams-Oliver Syndrome Adams-Oliver syndrome (AOS) is a rare congenital disorder characterised by a wide variety of clinical expression ranging from the occurrence of aplasia cutis congenita (ACC), transverse limb defects, and cutis marmorata telangiectica to extensive lethal anomalies. In this article, we present the conservative and surgical management of a male newborn infant diagnosed with AOS. Surgical treatment included wound management, the removal of protruding brain, and treatment of cerebrospinal fluid (CSF) leakage. After spontaneous reepithelization of the wounds, conservative treatment was chosen instead of reconstruction with an occipital flap; this was continued until the total healing of the dermal defect after eight months, during which the patient was continuously treated with antibiotics. At 17 months, the child was in good physical condition with a three-month development delay in comparison with infants of his age and no evidence of neurological deficit. Introduction Adams-Oliver syndrome (AOS) is a rare congenital disorder characterised by a wide variety of clinical expressions. This ranges from the occurrence of aplasia cutis congenita (ACC), transverse terminal limb defects (TTLD) and cutis marmorata telangiectatica congenita (CMTC, 19%) to extensive lethal anomalies to the central nervous system (CNS, 23%) and congenital heart defects (23%) [1,2].This syndrome was first described in 1945 by Adams and Oliver [3]. Most cases of AOS are assumed to be autosomal dominant with reduced penetrance and variable expression, however, in some cases there appeared to be an autosomal recessive inheritance pattern. The common occurrence of cardiac and vascular anomalies suggests a primary defect of vasculogenesis, although the molecular basis of this disorder still remains unknown [4][5][6][7][8]. Six causative genes (NOTCH1, DLL4, DOCK6, ARHGAP31, EOGT, and RBPJ) have been identified [9][10][11]. Several cases of ACC have previously been described; some of them were associated with the AOS. We report a unique case of a male newborn infant with an exceptionally large congenital scalp and skull defect exposing the dura, and herniation and active bleeding of the brain, owing to AOS. Furthermore, the patient had small TTLDs (minimal brachydactyly). Management of skull defects resulting from cutis aplasia remains controversial, probably because of the low prevalence, which has been estimated at 1 per 10.000 live births [12]. Both surgical intervention and conservative management, or a combination of the two have been described in the literature [12][13][14][15][16][17][18][19]. As recent literature has highlighted the potential, serious risks of nonsurgical management of large extended congenital skull defects [12], we decided to report our case in favour of a conservative approach. Case presentation A male newborn infant was born through an acute caesarean section at 39 weeks gestation at another medical hospital. He was the second child of phenotypically normal, non-consanguineous parents. The pregnancy was complicated by intrauterine growth restriction (IUGR) (<P10). Antenatal ultrasound at 32 and 35 weeks' gestation at the University Hospital in Maastricht revealed a delayed growth of the head compared to the growth of the body. Brain structures could not be evaluated very well due to the low position of the head behind the pubic bone. 
Family history was negative for scalp defects, cardiac malformations or anomalies involving extremities. Physical examination instantly after birth revealed a large scalp defect, 14 x 10 cm, over the vertex from the frontal bone extending to the parietal bones on both sides. There was a matching underlying skull ossification defect and dura defect, allowing visualization of the brain, only covered by a thin, translucent membrane. A continuously bleeding prolaps of a parasagittal parietal part of the brain was visible through a tear in the thin membrane. The bleeding source was probably the superior sagittal sinus. Further inspection revealed a (minimal) brachydactyly of the first four digits of hands and feet, and hypoplastic nails. The skin showed a cutis marmorata. Neurological examination revealed a hypotonic status on the left side of the body and less spontaneous movements on that side. The combination of ACC, TTLD,s and cutis marmorata led us to the diagnosis of AOS. Investigation and treatment The newborn was transferred from the hospital in Roermond to our academic hospital immediately after birth, using a mobile neonatal intensive care unit. Immediately after arrival, a computed tomography (CT) scan verified the extensive defect of the scalp and skull, the herniation of brain tissue, and showed active bleeding along the falx cerebri and superior sagittal sinus ( Figure 1). The CT scan also showed a traumatic fracture and molding of the existing skull. Under continuous blood transfusion, a multidisciplinary team of paediatric anaesthesiologists, neurosurgeons, and plastic surgeons evaluated the defect and the general condition and prognosis of the infant, and decided to perform surgery on the scalp and skull defect three hours after birth. The occipital prolapse of the brain measured approximately eight cubic centimetres and was amputated up to the level of the defect. The dural defect was closed with a dural graft implant (DuraformTM, Johnson&Johnson, Codman, NJ, USA) and Surgicel Tabotamp* (Johnson&Johnson-Ethicon, NJ, USA). The skin defect was covered with Integra® Dermal Regeneration Template (Integra Life Sciences Corp., New Jersey, USA). The entire defect was wrapped with betadine antiseptic gauzes, followed by sterile gauzes and a bandage. Postoperatively, the infant was taken to the neonatal intensive care unit and underwent sterile dressing changes twice a week and was put on an intravenous prophylactic antibiotics scheme. At approximately two weeks of age, a magnetic resonance imaging (MRI) scan of the brain was made because of a progressive herniation of the right parieto-occipital area. The MRI showed thrombosis of the sagittal sinus. Fortunately the patient did not develop venous infarctions or signs of increased intracranial pressure. Surgical treatment was started with re-removal of the brain protrusion, and Spongostan* (Johnson&Johnson-Ethicon, NJ, USA) was placed into the open ventricle, followed by suturing in an EthisorbTM Dura Patch (Johnson&Johnson, CODMAN®, NJ, USA) and temporary covering with TachoSil® Surgical Patch (Baxter international Inc., Deerfield, Il, USA). At the age of four weeks, the infant had another episode of cerebrospinal fluid (CSF) leakage and thereby a second, small defect of the right parieto-occipital area, and brain prolapse, which caused epileptic seizures. The neurosurgeon reoperated and the brain prolapse could be reduced without resection. 
The defect was covered with a Dura Patch and TachoSil, and the CSF gap was closed with new sutures. Postoperatively, no signs of hydrocephalus developed. After a month of conservative treatment with dressing changes every two days, a positive pressure isolation room (pathogen free), antibiotics, antimycotics (Daktarin), and intensive care, a fourth operation was performed for debridement of the wound and to change the TachoSil. During the entire period, the patient was treated in an almost upright position to prevent pressure on the wound. Acetazolamide (Diamox) was administered in a low dosage because of the three episodes of recurrent CSF leakage. Due to rejection of the Integra®, the last operation was performed, in combination with a delay procedure for a planned pedicled occipital flap by the plastic surgery team, to allow definitive closure of the areas with Dura Patches after 4 to 6 weeks. Further conservative treatment with gauzes and Fucidin cream twice a week was introduced to bridge the period before definitive transposition of the occipital flap. However, after three weeks of bandage changes, the edges of the wound appeared to have reepithelialised spontaneously. It was decided to continue the conservative management plan instead of reconstruction with the occipital flap. Conservative treatment was continued until complete healing of the dermal defect occurred after eight months. During this period, systemic broad-spectrum antibiotics were continuously administered (i.e. amoxicillin/clavulanic acid, flucloxacillin, ceftazidime, cefazolin, and gentamicin) as well as topical application of fucidin ointment. As soon as there was any sign of fungal infection, miconazole cream was added to the topical antibiotic treatment. Further conservative management consisted of a positive pressure environment, sterile wound dressing changes, and counselling for the parents and child. Outcome and follow-up At the present age of 17 months, the child is in good physical condition. Clinical evaluation reveals stable growth retardation (P<5) and a three-month development delay in comparison with infants of his age. Neurologically, the child is moving symmetrically, and there is no evidence of neurological deficit. On physical examination, a central skull defect, covered by completely healed skin, is still palpable, located at the area of the initial tear of the thin membrane, where the dura patch was introduced (Figure 2). Interestingly, the rest of the scalp defect shows spontaneous ossification. Until the bony skull defect is completely healed, the patient will wear a helmet to prevent any accidental injury. FIGURE 2: Postnatal defect and at 17 months of age. Left image: postnatal defect with brain protrusion. Right image: healed skin after conservative treatment at 17 months of age. Discussion To our knowledge, this case is the most extensive case of aplasia cutis with underlying skull defect described in the literature. Management of skull defects resulting from cutis aplasia remains controversial. Both surgical intervention and conservative management, or a combination of the two, have been described [12,13]. The suggested treatment strategy depends on the dimension of the skull defect and the overall condition of the infant. Surgical management is not considered to be a standardised treatment for ACC. Ideally, we try to achieve the following objectives as mentioned by Albright et al.
[20]: protection of the underlying brain and dural venous sinuses by keeping the lesion moist using sterile saline-soaked gauzes, which prevents infection and avoids desiccation and cracking of exposed tissue overlying the dural venous sinus; eventual healing or repair of any underlying skull defect; coverage of the head with hair-bearing scalp; minimisation of scar tissue; avoidance of surgical complications/iatrogenic trauma; minimisation of hospital days; and minimisation of the total treatment course. In this unique case, we have demonstrated that it is possible to complete treatment conservatively even in the case of a large congenital skull defect caused by ACC. We followed a stepwise approach in which we first excised the brain herniation and covered the membrane with a Dura Patch and a dermal skin substitute (Integra®). Due to rejection of the Integra®, we planned a delayed, pedicled occipital flap. However, conservative management led to early reepithelialisation, skin growth, and ossification of the thin membrane. Therefore, the pedicled flap did not have to be used. In order to reach a satisfactory result with conservative measures, and to avoid serious complications such as meningitis, hyponatremia with seizures, brain herniation, or massive haemorrhage, treatment should be carried out under strict conditions. We recommend a positive pressure isolation room, dressing changes every second day, topical and systemic antibiotics until complete skin healing is reached, antimycotics, and intensive care conditions. Diamox can be considered to help prevent CSF leakage. An important aspect of the treatment approach was the continuous administration of antibiotics to avoid potentially fatal consequences associated with cerebral infection. A multidisciplinary approach is preferred to provide the best (conservative) treatment for a large skull defect in cutis aplasia congenita, and to avoid serious complications. Conclusions To our knowledge, this case is the most extensive case of ACC with underlying skull defect described in the literature, and this article reports the possibility of completing treatment conservatively even in the case of a very large (14 x 10 cm) skull defect. An important aspect of the treatment approach was the continuous administration of antibiotics to avoid potentially fatal consequences associated with cerebral infection. A multidisciplinary approach is preferred to provide the best (conservative) treatment for a large skull defect in cutis aplasia congenita and to avoid serious complications. Additional Information Disclosures Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
v3-fos-license
2022-11-10T16:49:35.861Z
2022-11-01T00:00:00.000
253430398
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2309-608X/8/11/1173/pdf?version=1667816144", "pdf_hash": "cefa6be2b12cf5290e8ad1955ef8c7c731f0f9a1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43691", "s2fieldsofstudy": [ "Biology" ], "sha1": "89f175ca2bd9ed07fb57cc15b30ea074e9954cfe", "year": 2022 }
pes2o/s2orc
In Vitro Antifungal Activity of LL-37 Analogue Peptides against Candida spp. Fungal infections have increased in recent decades with considerable morbidity and mortality, mainly in immunosuppressed or admitted-to-the-ICU patients. The fungal resistance to conventional antifungal treatments has become a public health problem, especially with Candida that presents resistance to several antifungals. Therefore, generating new alternatives of antifungal therapy is fundamental. One of these possibilities is the use of antimicrobial peptides, such as LL-37, which acts on the disruption of the microorganism membrane and promotes immunomodulatory effects in the host. In this study, we evaluated the in vitro antifungal activity of the LL-37 analogue peptides (AC-1, LL37-1, AC-2, and D) against different Candida spp. and clinical isolates obtained from patients with vulvovaginal candidiasis. Our results suggest that the peptides with the best ranges of MICs were LL37-1 and AC-2 (0.07 µM) against the strains studied. This inhibitory effect was confirmed by analyzing the yeast growth curves that evidenced a significant decrease in the fungal growth after exposure to LL-37 peptides. By the XTT technique we observed a significant reduction in the biofilm formation process when compared to yeasts untreated with the analogue peptides. In conclusion, we suggest that LL-37 analogue peptides may play an important antimicrobial role against Candida spp. Introduction Candidiasis is one of the most medically important mycoses worldwide with different clinical manifestations; one of the most frequent is the invasive candidiasis, responsible for about 75% of opportunistic yeast infections in hospitalized patients with inherent risk factors [1]. Although there are about 200 species of Candida described, only some can cause infection and, in some cases, present reduced susceptibility to antifungals as well as a recognized intrahospital environment infectivity. The main etiological agent of invasive candidiasis is C. albicans, representing about 50% of cases. However, the prevalence of non-albicans Candida species such as C. glabrata in the United States and northwest Europe and C. parapsilosis in Latin America, southern Europe, India, and Pakistan is increasing [2]. Candidiasis is a common cause of morbidity and mortality. In the United States, nosocomial infection by Candida spp. is the fourth most common cause of hospital admission [2]. (AC-2) peptide, 24 amino acids long, begins with glycine and presents acetylation at the amino terminal domain and amidation at the carboxyl-terminal position. Finally, the DL 37-2 (D) peptide with 25 amino acids has a modification in the amino terminal position that turns this analogue peptide into a D enantiomer. It is important to note that the positive charge, peptide structural variations (for example, D enantiomer), and the short peptide chains could decrease the active sites where the endoproteases (released by the microorganisms as a defense mechanism) give the analogue peptides an intrinsic resistance to microorganisms [18]. Therefore, the LL-37 analogue peptides mentioned were selected in this study in order to evaluate their possible antifungal effect against different reference species of Candida as well as 20 strains with clinical importance that cause vulvovaginal candidiasis. 
Synthesis and Purification of Antimicrobial Peptides The human cathelicidin LL-37-derived peptides (AC-1, AC-2, LL37-1, and D) with the amidated C-terminal portion were obtained in Peptide 2.0 (Chantilly, VA, USA). Analyses with a high-performance liquid chromatography system (HPLC) and mass spectrometry (MS) performed by the manufacturer showed that the analogue synthetic human cathelicidin was 98% pure. Peptides derived from the LL-37 bioinformatic design were investigated on an anti-BP server (http://www.imtech.res.in/raghava/antibp/index.htmL) (accessed on 20 January 2019) using the APD database (http://aps.unmc.edu/AP/main.php) (accessed on 20 January 2019). To carry out the in silico experiments, we analyzed 15 different fragments and then selected the four peptides with the best antimicrobial profile: LL37-1 (GRKSAKKIGKRAKRIVQRIKDFLR) and AC-2 (GRKSAKKIGKRAKRIVQRIKDFLR), both with 24 amino acids, with the difference being that the AC-2 peptide presented acetylation in the terminal amino group; AC-1 (RKSKEKIGKEFKRIVQRIKDFLR) with 23 amino acids; and D ((d-PHE) GRKSAKKIGKRAKRIVQRIKD (d-F) LR) with 25 amino acids. Microorganisms Standard strains of C. albicans (SC5314 and ATCC 10231), C. parapsilosis ATCC 22019, C. krusei ATCC 6558, and C. tropicalis ATCC 750 were challenged. An azole-resistant strain of C. albicans 256 was identified by MALDI-TOF and then incorporated into the study. Additionally, 20 clinical isolates of C. albicans from patients with vulvovaginal candidiasis from Bogotá, Colombia, were analyzed. All strains were preserved in 10% glycerol at −80 • C. Three days before the experiments started, each fungal strain was sub-cultured in Sabouraud dextrose agar (Becton, Dickinson and Company; Sparks, NV, USA) and maintained at 37 • C for 24-48 h. Subsequently, isolated colonies of each strain were sub-cultured in brain-heart infusion liquid medium (BHI, Becton Dickinson, New Jersey, NJ, USA) and shaken at 100 rpm for 24 h at 37 • C in order to recover exponentially growing yeasts. Susceptibility Assay of Candida Planktonic Cells The minimum inhibitory concentration (MIC) was determined by the liquid medium microdilution technique, described in the M27-S4 document (Clinical and Laboratory Standards Institute (CLSI), 2012). The MIC was defined as the lowest necessary concentration of the LL-37 analogue peptides (AC-1, AC-2, LL37-1, and D) capable of inhibiting the different Candida species' growth (C. albicans SC5314 and ATCC 10231, C. parapsilosis ATCC 22019, C. krusei ATCC 6558, and C. tropicalis ATCC 750) and the clinical strains evaluated in this study. Yeasts were re-suspended in RPMI 1640 medium (BiowHITTAKER ® , Lonza, Belgium) supplemented with (3-(n-morpholino) propanesulfonic acid (MOPs, Sigma-Aldrich, Missouri, MO, USA) at a 0.5 McFarland scale representing 1 × 10 8 colony-forming units per milliliter (CFU/mL). Antimicrobial-derived peptides of LL-37 were added at 100 µM, 50 µM, 25 µM, 12.5 µM, 6.25 µM, 3.12 µM, and 1.5 µM concentrations, diluted in RPMI medium, and placed into sterile 96-well microtiter polystyrene plates (Corning Incorpo-rated, New York, NY, USA) until adjusting the volume to 100 µL. Fluconazole (FLZ) (Pfizer, New York, NY, USA) at an initial concentration of 64 µg/mL and amphotericin B (AMB) (Sigma-Aldrich, Missouri, MO, USA) at an initial concentration of 16 µg/mL were used as antifungal controls since they are frequently used in the treatment of different mycoses including candidiasis. 
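The read-out of this broth microdilution protocol can be illustrated with a short computation: build the two-fold dilution series and call the MIC as the lowest peptide concentration whose blank-corrected optical density stays below a chosen inhibition threshold relative to the untreated growth control. The Python sketch below is only an illustration of that arithmetic; the 90% inhibition cut-off, the function names, and the example OD values are assumptions for demonstration and are not taken from this study (the applicable CLSI document defines the exact endpoint criterion).

```python
# Minimal sketch: reading an MIC from a two-fold broth microdilution series.
# The concentration series matches the one used above (100 µM down to ~1.5 µM);
# the OD values and the 90% inhibition cut-off are illustrative assumptions.

def dilution_series(start_um=100.0, n_steps=7):
    """Two-fold dilution series starting at `start_um` µM."""
    return [start_um / (2 ** i) for i in range(n_steps)]

def mic(concentrations_um, od_by_conc, od_growth_control, inhibition=0.90):
    """Lowest concentration whose OD is reduced by at least `inhibition` vs. the control.

    `od_by_conc` maps concentration (µM) -> blank-corrected OD (e.g. at 492 nm).
    Returns None when no tested concentration inhibits growth sufficiently.
    """
    cutoff = (1.0 - inhibition) * od_growth_control
    inhibitory = [c for c in concentrations_um if od_by_conc[c] <= cutoff]
    return min(inhibitory) if inhibitory else None

if __name__ == "__main__":
    concs = dilution_series()  # [100.0, 50.0, 25.0, 12.5, 6.25, 3.125, 1.5625] µM
    # Hypothetical 48 h read-out for one strain/peptide combination:
    od = {100.0: 0.02, 50.0: 0.03, 25.0: 0.03, 12.5: 0.05,
          6.25: 0.31, 3.125: 0.62, 1.5625: 0.78}
    print("MIC =", mic(concs, od, od_growth_control=0.80), "µM")  # -> MIC = 12.5 µM
```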
As growth control, we utilized different Candida yeasts without any antifungal treatment or the LL-37 analogue antimicrobial peptides. After 48 h of samples' incubation, the 492 nm optical density measurement was carried out using a FC multiscan spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA). Determination of Growth Phases Using the LL-37-Derived Peptides Growth curves were performed using different Candida species: C. albicans ATCC 10231, C. albicans SC5314, C. krusei ATCC 6558, and C. parapsilosis ATCC 22019. These yeasts were re-suspended in RPMI 1640 medium supplemented with MOPs in a scale of 0.5 McFarland or its equivalent in optical density from 0.08 to 0.1, which represents 1 × 10 8 CFU/mL, utilizing an FC multiscan spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA) to measure the samples at a wavelength of 600 nm. Subsequently, in sterile 100-well microtiter polystyrene plates (Honeycomb, Thermo Fisher Scientific, Inc., Waltham, MA, USA), 150 µL of MOPs-supplemented RPMI medium and 150 µL of each yeast inoculum were added separately to determine the yeasts' growth in the presence of the LL-37-derived peptides (AC-1, LL37-1, AC-2, and D) at concentrations of 10 µM, 5 µM, 2.5 µM, 1.25 µM, and 0.62 µM. For this experiment, the amphotericin B (Sigma-Aldrich, USA) was used as antifungal control in an initial concentration of 40 µg/mL. Samples were incubated and analyzed in a BioScreen C piece of equipment (Thermo Labsystems Type FP-1100-C, Waltham, MA, USA) at a constant temperature of 37 • C with continuous shaking for 48 h. All measurements were carried out in an automated way every hour through the presence of turbidity at a wavelength of 600 nm. To increase the reproducibility, each assay parameter was performed in triplicate. Scanning Electron Microscopy (SEM) C. albicans ATCC 10231 was treated with a concentration lower than the MIC of each antimicrobial peptide, analogous to LL-37 for 24 h at 37 • C. Afterward, the samples were fixed in 2.5% glutaraldehyde for 3 h at room temperature. The samples were then applied on a polylysine-coated coverslip, serially dehydrated in alcohol, and subsequently observed in a scanning electron microscope and focused ion beam FE-MEB LYRA3 of TESCAN (Brno, Czech Republic), which has an integrated X-ray energy dispersive spectroscopy (EDS) microanalysis system (energy dispersive X-ray spectroscopy). Statistical Analysis Statistical analysis was performed using GraphPad Prism version 7.05 (GraphPad Software, San Diego, CA, USA). Statistical comparisons were carried out by the analysis of variance (one-way ANOVA) followed by a Tukey-Kramer post hoc test. The p-values of <0.05 indicated statistical significance. Antifungal Susceptibility in Planktonic Cells of Candida spp. The susceptibility of C. albicans ATCC 10231 and SC5314, C. parapsilosis ATCC 22019, C. krusei ATCC 6558, C. tropicalis ATCC 750, and the clinical isolates herein studied, which were exposed to different concentrations of the LL-37 analogue peptides, can be observed in Table 1. The AC-1 peptide demonstrated antifungal activity due to the high susceptibility of most of the strains challenged. Species such as C. parapsilosis, C. krusei, and C. tropicalis presented an MIC of 0.15 µM. However, the clinical isolates and the C. albicans SC5314 strain presented a lower susceptibility from the AC-1 peptide with an MIC up to 10 µM. Regarding the AC-2 and LL37-1 peptides, a high susceptibility of the C. 
albicans strains with MICs from 0.07 to 5 µM was evidenced; only a few clinical strains had a higher range (10 µM). Finally, a promising result was obtained from the D peptide against C. tropicalis and C. albicans ATCC 10231 with MIC values equivalent to 0.15 and 1.25 µM, respectively. The least promising effect of the D peptide was against C. krusei ATCC 6558 with an MIC of 10 µM. On the other hand, fluconazole, used as a control, showed inhibition in most of the reference strains with the exception of C. krusei due to its intrinsic resistance to this antifungal. Some clinical strains also showed low sensitivity to fluconazole. However, we highlight the significant antifungal effect of LL-37 analogue peptides against clinical isolates from patients with vulvovaginal candidiasis that were less sensitive to the antifungal drug used as control. Our results showed that all the strains included in this study were susceptible to amphotericin B, with the highest MIC being 4 µg/mL in the case of C. albicans 256 and some clinical isolates, as can be observed in Table 1. Determination of Yeast Growth Phases Using LL-37 Analogue Peptides Candida spp. were exposed to different concentrations of AC-1, AC-2, LL37-1, and D antimicrobial peptides, aiming to analyze the fungal growth curves. In all experiments, fluconazole was used as antifungal control. Our results showed that the yeast susceptibility profiles were variable, as observed in the previous section. The growth curves of the C. albicans ATCC 10231 strain demonstrated a promising antifungal effect by the four LL-37-derived peptides, with a significant decrease in growth at concentrations of 10, 5, 2.5, 1.25, and 0.62 µM (Figure 1). In the case of C. albicans 256 and the clinical isolate with a high azoles resistance profile, the AC-1 and LL37-1 peptides showed the most efficient antimicrobial activity at all concentrations used. Although the enantiomer D did not show an excellent performance, it was able to reduce the yeast growth at 2.5, 5, and 10 µM. On the other hand, the AC-2-derived peptide did not present any growth inhibitory effect on C. albicans 256 (Figure 2). Figure 2. Growth curves of C. albicans 256. Four LL-37-derived peptides (AC-1, AC-2, D, and LL37-1) were tested at different concentrations (10, 5, 2.5, 1.25, and 0.62 µM). C. albicans 256 cells without exposure to antimicrobial peptides were used as a control. Fluconazole (64 µg/mL) was used as antifungal control. Data represent three independent experiments. Statistical significance *** p < 0.001, **** p < 0.0001 when compared to control. When the peptides derived from LL-37 were tested against the C. albicans SC5314 reference strain, we saw that the most promising peptides were D (10 µM) and AC-1 (10 µM), which showed an important decrease in yeast growth with a significance of p < 0.05, as shown in Figure 3. The growth inhibition induced by these four LL-37 analogue peptides was also significant in all concentrations used (10, 5, 2.5, 1.25, and 0.62 µM) against C. krusei ATCC 6558 (Figure 4). In the same way, when C. parapsilosis ATCC 22019 was challenged, the growth inhibition induced by the AC-1 peptide was significant at 10 µM, 5 µM, and 2.5 µM concentrations. The other analogue peptides (AC-2, D, and LL37-1) caused a significant decrease in growth at concentrations of 10, 5, 2.5, 1.25, and 0.62 µM, as shown in Figure 5. It is important to highlight the promising antifungal effect of LL-37 analogue peptides, including azole-resistant strains such as the fluconazole-resistant C. albicans 256 and some clinical isolates from women with vulvovaginal candidiasis who presented high MICs to fluconazole (Table 1). Effect of LL-37 Analogue Peptides in the Biofilm Formation Process C. albicans ATCC 10231 yeasts treated with the LL-37-derived peptides showed a significant reduction in metabolic activity, evidenced by biofilm formation, compared to untreated yeasts. At 20, 10, 5, and 2.5 µM concentrations, the four analogue peptides herein studied showed a statistically significant decrease in the biofilm formation (p < 0.01). Additionally, at concentrations of 1.25 and 0.62 µM, the metabolic activity declined. Although less evident, it had a significance of p < 0.05, as can be observed in Figure 6. Additionally, yeasts treated with different concentrations of amphotericin B used as a control showed a significant decrease in biofilm formation compared to the untreated control group (Figure 6). Figure 6. Effect of LL-37 analog peptides on C. albicans ATCC 10231 biofilm formation. Colorimetric reaction was read spectrophotometrically at 490 nm. Statistical difference (** p < 0.01) was observed in yeasts treated with 20, 10, 5, and 2.5 µM of the derived peptides. At 1.25 and 0.62 µM concentrations, the statistical significance was **** p < 0.0001 compared to the growth control (yeasts without exposure to the antimicrobial peptides). Scanning Electron Microscopy (SEM) C. albicans ATCC 10231 yeasts treated with LL-37 analog peptides showed structural alterations, such as cell wall rupture (Figure 7B1), bud cell detachment (Figure 7C1), and pseudohyphal inhibition. Yeasts not treated with the analogous peptides (control group) presented a more homogeneous structure, and even the formation of pseudohyphae was observed (Figure 7). Discussion In this study, we showed the in vitro antifungal activity of four analogue peptides to human cathelicidin LL-37 against yeasts of the genus Candida that reflects a possible therapeutic alternative, favoring advances in the rational design of new peptides as possible therapies to treat superficial and deep mycoses. The derived peptides herein analyzed (AC-1, AC-2, LL37-1, and D) showed inhibitory activity in vitro against different Candida species and even against 20 clinical strains obtained from patients with vulvovaginal candidiasis. The differences observed in the inhibitory processes of each peptide are possibly related to some intrinsic characteristics of the antimicrobial peptides, such as their positive charge (generally +2 to +9) [20] and their cationic nature, which allows these peptides to bind ideally to the anionic charges of cell membranes [21]. On the other hand, the peptides' hydrophobicity provides them with the ability to insert into microbial membranes causing structural damage.
Finally, the amphipathicity or duality presented by these peptides, as they contain apolar and polar regions, results in an increased antimicrobial activity [22,23]. Regarding the synthetic variants of cathelicidin LL-37, the AC-1 peptide showed an outstanding antifungal effect at the five concentrations utilized against Candida albicans 256 and at 10 µM for Candida albicans SC5314. The AC-1 peptide has an acetylated N-terminal domain in its chemical structure that allows it to insert itself with more affinity in the fungal cell membrane, avoiding an adequate lipid packing and, therefore, increasing its lytic capacity [24]. The AC-2 peptide showed an important antifungal activity against different Candida spp. strains, as observed in C. tropicalis ATCC 750, C. krusei ATCC 6558, C. parapsilosis ATCC 22019, and C. albicans ATCC 10231. Probably, the action mechanism of AC-2 is different from that of AC-1 since, within its chemical structure, it has one glycine more and it is acetylated and amidated in the carboxyl-terminal position, which confers protection against the microorganism's proteolytic systems. Our results corroborate this information since the LL-37 analogue peptides such as LL-37-1 strongly inhibited yeast growth, with MICs observed from 0.07 µM for C. tropicalis ATCC 750 to 5 µM for C. albicans clinical isolates. It should be noted that when the Candida albicans 256 strain was exposed to the LL37-1 peptide, the inhibitory effect was evidenced in the five concentrations analyzed. Possibly, the fact that LL37-1 is amidated in the C-terminal position provides it a protective effect (temporary) and increases its stability against microorganism exonucleases, enhancing its biological activity [26]. Peptide D presented an outstanding antifungal effect, with MICs between 0.15 and 5 µM against different strains of C. albicans 256 resistant to azoles, C. tropicalis ATCC 750, C. parapsilosis ATCC 22019, and some of the clinical isolates. Peptide D has a structural change in the right region of carbon-α (chiral carbon), classifying itself as a positively charged D enantiomer, which allows it to easily bind to the anionic charges of microbial membranes, causing the microorganism's structural destabilization and promoting the D peptide stability in proteolytic systems, thus enhancing its antimicrobial effect [27]. The present work shows the inhibitory effect of the LL-37 analogue peptides (LL37-1, AC-1, AC-2, and D) in small concentrations (0.62 µM) against strains with a broad resistance profile to azoles such as C. albicans 256 and clinical strains from patients with vulvovaginal candidiasis as well as different Candida species' control strains. It should be noted that the purpose of proposing new antimicrobial peptides as possible therapeutic candidates consists of the search for minimum concentrations, in which the risk of cytotoxicity for the host cells could consequently be reduced [25]. Additionally, the peptides analyzed in this work presented an important inhibitory effect on the yeast growth in the planktonic state (range of action from 0.62 µM to 20 µM). Additionally, they strongly decreased the biofilm formation process, which is one of the main virulence factors of different Candida species, due to the biofilm limiting the antifungals' penetration through the extracellular matrix and preventing the host's proper immune response functioning [28]. 
Inhibiting biofilm formation is undoubtedly a very important step to control Candida infection since the ability to form biofilm leads to a fungal successful persistence that is associated with high mortality rates [29]. AC-1, LL37-1, AC-2, and D peptides showed significant antifungal activity against yeast of the Candida genus. It is important to highlight the antifungal effect of these peptides, even in strains resistant to fluconazole, as well as clinical isolates that cause recurrent vulvovaginal candidiasis that have reduced susceptibility to this azole. However, more detailed studies are needed to better understand the interaction of these analogue peptides with pathogenic yeasts. It is crucial to continue with the search for these types of peptides with antifungal properties, optimizing biophysical parameters, maximizing their antimicrobial effect, and minimizing the toxicity to host cells. On the other hand, the analogue peptides may be interesting therapeutic candidates to control infections caused by multiresistant strains to different conventional antifungals. Finally, scanning electron microscopy (SEM) analysis showed the negative effect of LL37 analogue peptides (AC-1, AC-2, LL37-1, and D) on C. albicans yeasts. Important morphological alterations on the cell wall, among others, have been described by some authors studying different antimicrobial peptides against this yeast with promising results [30] Through this methodology it was possible to verify the antifungal effect of the analogous peptides of LL37. However, in the future, it will be necessary to carry out other complementary assays to better understand the site and mechanism of action of peptides on fungal yeasts. Some research groups have analyzed the LL37 peptide cytotoxicity [31]. However, no data on the specific cytotoxicity of the analogous peptides herein studied have been published so far. Coworkers are currently carrying out studies focused on the possible toxicity of these peptides in human red blood cells and in the fibroblast cell line L929 with promising results after 24, 48, and 72 h after peptide exposure (data in process). Other formulations could be developed in the future using delivery systems, such as nanoparticles, in order to improve the antifungal effect of the antimicrobial peptides herein studied. Peptides immobilized in nanoparticles are a promising option in fungal infections' control since this could potentiate the effect of some conventional antimicrobial or antifungal molecules and present a synergistic or additive effect in the infection control [32,33]. It would be interesting to address, in subsequent studies, the possible synergistic or additive effects that the mentioned peptides may present if together they deliver antifungal drugs currently used in clinics. Finally, we suggest that peptides derived from human cathelicidin LL-37 are potential therapeutic candidates due to their rapid mechanism of action and in vitro efficiency in yeast control and are projected as a therapeutic option against candidiasis, a frequent and important mycosis due to the high morbidity and mortality it causes worldwide.
v3-fos-license
2017-01-15T08:35:26.413Z
2017-01-01T00:00:00.000
17655498
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-6643/9/1/66/pdf", "pdf_hash": "562d528586dfb718713ca54326fa54ccbc5c1607", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43694", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Medicine" ], "sha1": "562d528586dfb718713ca54326fa54ccbc5c1607", "year": 2017 }
pes2o/s2orc
Validation of an Online Food Frequency Questionnaire against Doubly Labelled Water and 24 h Dietary Recalls in Pre-School Children The development of easy-to-use and accurate methods to assess the intake of energy, foods and nutrients in pre-school children is needed. KidMeal-Q is an online food frequency questionnaire developed for the LifeGene prospective cohort study in Sweden. The aims of this study were to compare: (i) energy intake (EI) obtained using KidMeal-Q to total energy expenditure (TEE) measured via doubly labelled water and (ii) the intake of certain foods measured using KidMeal-Q to intakes acquired by means of 24 h dietary recalls in 38 children aged 5.5 years. The mean EI calculated using KidMeal-Q was statistically different (p < 0.001) from TEE (4670 ± 1430 kJ/24 h and 6070 ± 690 kJ/24 h, respectively). Significant correlations were observed for vegetables, fruit juice and candy between KidMeal-Q and 24 h dietary recalls. Only sweetened beverage consumption was significantly different in mean intake (p < 0.001), as measured by KidMeal-Q and 24 h dietary recalls. In conclusion, KidMeal-Q had a relatively short answering time and comparable validity to other food frequency questionnaires. However, its accuracy needs to be improved before it can be used in studies in pre-school children. Introduction Diet is just as important in children as in adults, and possibly even more so because the habits developed in the early years often persist throughout the lifespan [1]. Being able to measure diet in pre-school children is important since childhood obesity often continues into adulthood [2]. An unhealthy diet is a contributing risk factor for some of the most common non-communicable diseases, such as cardiovascular disease, diabetes and cancer [3]. In order to investigate the relationships between diet and disease it is imperative to be able to easily and accurately measure energy and food intake, especially in large epidemiological settings. Traditional methods to assess energy and food intake are 24 h dietary recalls, dietary history, diet records and food frequency questionnaires (FFQ). However, these are burdensome for both the participants and researchers and their accuracy is limited [4]. Thus, the development of easy-to-use and accurate methods to assess the intake of energy, foods and nutrients in pre-school children are needed. Participants and Study Design The MINISTOP trial was based in the county Östergötland in Sweden. A total of 40 parent couples and their children agreed to participate in a validation of dietary intake [11], body composition [12], and physical activity methods at the final follow-up assessment, which began in February 2015 when their children were 5.5 years of age. Details of the recruitment and population have been published previously [11,12]. The age, weight, height, body mass index (BMI), as well as the parental age, BMI and education were comparable between this sample and those in the whole MINISTOP trial (n = 315). Two children had missing information and in total 38 (18 from the intervention group and 20 from the control group) 5.5-year-olds participated in this validation. This study was conducted according to the guidelines laid down by the Declaration of Helsinki; all procedures involving human subjects were approved by the Research and Ethics Committee in Stockholm, Sweden (2013/1607-31/5; 2013/2250-32), and informed consent was obtained from all parents.
The MINISTOP trial is registered as a clinical trial (https://clinicaltrials.gov/ct2/show/NCT02021786). Protocol Parents of the children collected two urine samples at home and brought them to the measurement session at the Linköping University Hospital. The weight and height of the children were recorded when they were wearing minimal clothing and no shoes. Thereafter, the child received a dose of stable isotopes mixed with fruit juice to measure their TEE during the subsequent two-week period. The parents were instructed to collect urine samples on days 1, 5, 10 and 14 after dosing and to note the time of sampling. Within the same two-week period the intake of food and drink was assessed using 24 h dietary recalls. After the measurement at the hospital, all parents received an e-mail with a link to the online FFQ KidMeal-Q and were instructed to fill it in directly after the visit. Energy Expenditure TEE has been measured as previously described [11]. Briefly, each child was given an accurately weighed dose of stable isotopes, 0.14 g 2 H 2 O and 0.35 g H 2 18 O, per kg body weight. Five urine samples were collected (on days 1, 5, 7, 10 and 14), stored and analysed for isotope enrichments as previously described [11]. CO 2 production was calculated according to Davies et al. [13], assuming that 27.1% of the total water losses was fractionated [13,14]. TEE was calculated by means of the Weir equation [15], assuming a food quotient of 0.85 [16]. The mean change in body weight from day one to 14 was 0.07 ± 0.32 kg. KidMeal-Q KidMeal-Q is an online meal-based FFQ designed for pre-school children aged three to six. This FFQ measures the child's dietary intake over the past couple of months and includes between 42 and 86 food items, drinks and dishes, depending on the number of follow-up questions. The following pre-defined frequency categories were used: for breakfast food items as well as fruit (1 time/day, 2 times/day or more, 1-2 times/week, 3-6 times/week, 1-3 times/month), for dishes, snacks as well as sweets (1-2 times/week, 3-6 times/week, 7 times/week or more, 1-3 times/month), and for vegetables (1 time/day, 2 times/day, or 3 times a day or more). See the Supplementary Materials for the questions provided in KidMeal-Q. For each of the following food groups, five photos of portion sizes were included: (1) rice, potatoes and pasta; (2) meat, chicken, fish and vegetarian substitutes; and (3) vegetables (raw or cooked).The photos were used to calculate portion sizes for cooked dishes and vegetables. For other food items, standard portions were used. EI was calculated from reported intakes of food items and dishes by linkage to the food composition database provided by the National Food Administration [17] by means of KidMealCalc (Epiqcenter, Stockholm, Sweden), a software developed and validated for this purpose. The grams of fruits, vegetables, fruit juice, sweetened beverages, candy, ice cream and bakery products were then summarized. These foods were selected as they represent healthy and unhealthy food habits relevant for childhood obesity [11]. 24 h Dietary Recalls Four 24 h dietary recalls were performed over the phone in the two-week period following the measurement at the hospital, as published previously [11]. The days used in the 24 h dietary recalls were scheduled with the parents when they were at the hospital for measurements. Briefly, each parent was asked to recall the foods and beverages their child consumed. 
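The final step of the doubly labelled water calculation described above, converting CO2 production into TEE with the Weir equation under an assumed food quotient of 0.85, can be written out in a few lines. The sketch below uses the commonly cited abbreviated form of the Weir equation; the example CO2 production value and the function name are hypothetical, and the preceding isotope-enrichment and fractionation steps are not reproduced here.

```python
# Minimal sketch of the last step of a doubly labelled water calculation:
# TEE from daily CO2 production via the abbreviated Weir equation,
# assuming the respiratory quotient equals a food quotient of 0.85.
# The example rCO2 value is hypothetical.

KCAL_TO_KJ = 4.184

def tee_weir(vco2_l_per_day, food_quotient=0.85):
    """Total energy expenditure (kJ/day) from CO2 production (L/day)."""
    vo2_l_per_day = vco2_l_per_day / food_quotient            # VO2 = VCO2 / FQ
    ee_kcal = 3.941 * vo2_l_per_day + 1.106 * vco2_l_per_day  # abbreviated Weir equation
    return ee_kcal * KCAL_TO_KJ

if __name__ == "__main__":
    print(f"TEE = {tee_weir(250.0):.0f} kJ/day")  # roughly 6000 kJ/day for 250 L CO2/day
```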
Information on the type of food products used in mixed dishes and the cooking method was recorded. The portion sizes were reported by the parents using household measurements (decilitres, tablespoons or teaspoons). Words such as slices or pieces were used for other foods such as bread, candy or potatoes. The reported intakes were converted into grams using a standardized weight table provided by the Swedish Food Agency [18] and the grams of fruits, vegetables, fruit juice, sweetened beverages, candy, ice cream and bakery products were then summarized. EI and nutrients were calculated from reported intakes of foods and beverages by linkage to the food composition database [17]. Statistics Values are given as means and standard deviations (SD). Significant differences between mean values were identified using paired samples t-tests and the Wilcoxon Signed Rank test for parametric data (EI, TEE and selected nutrients) and non-parametric data (food groups). Pearson or Spearman correlations were used to evaluate relationships between variables. The Bland and Altman procedure [19] was used to compare EI using KidMeal-Q to TEE measured via DLW. Thus, the difference (y) between EI and TEE was plotted versus the average of the two estimates (x). The mean difference with ±2SD (limits of agreement) were then calculated. To test for a relationship between x and y in the Bland and Altman plot, linear regression was used. Significance (two-sided) was accepted when p < 0.05. Analyses were performed using SPSS version 23 (IBM, Armonk, NY, USA). The classification capacity of KidMeal-Q was assessed using TEE. This was done by ranking EI (KidMeal-Q) and TEE (DLW) in a sequence. Thus, the children with the lowest EI and TEE had the lowest number and the difference between this child and the second in the sequence was the smallest possible. This principle of the smallest possible difference was maintained for all children, producing a sequence with gradually increasing values. The children were then divided into tertiles (low, medium and high) with increasing values. The classification capacity for KidMeal-Q was then evaluated as the number of children placed in the same (0), in the next higher (+1) or lower (−1) and in the second next higher (+2) or lower (−2) group. Results The descriptive characteristics and the energy expenditure for the 38 children (22 boys and 16 girls) are displayed in Table 1. There were no significant differences in anthropometric measures between boys and girls and therefore all analyses are presented for boys and girls combined. There was a wide range for weight and energy expenditure for the children. The parents were highly educated, with between 65%-75% of the parents having a university degree. On average it took the parents of the participating children 13.2 ± 6.2 min to complete KidMeal-Q. The mean EI calculated using KidMeal-Q was statistically different (p < 0.001) from TEE assessed via DLW. The mean EI was 4670 ± 1430 kJ/24 h and TEE was 6070 ± 690 kJ/24 h. Figure 1 displays the Bland and Altman Plot for EI assessed using KidMeal-Q to TEE measured using DLW. The limits of agreement were wide and a significant association was found for the average and difference (r = 0.711, p < 0.001). A significant trend was found, showing that lower EIs were underestimated to a greater extent. In comparison to TEE, KidMeal-Q underestimated EI in 84.2% (n = 32). A significant correlation was found between EI (KidMeal-Q) and TEE (DLW), r = 0.320 (p = 0.05). 
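The Bland and Altman comparison summarised above amounts to a few lines of arithmetic: compute, for each child, the difference EI minus TEE and the pairwise average, report the mean difference with ±2 SD limits of agreement, and regress the differences on the averages to test for a trend. A minimal sketch follows; the arrays are placeholders rather than the study data, and NumPy and SciPy are assumed to be available.

```python
# Minimal sketch of a Bland-Altman comparison of EI (KidMeal-Q) against TEE (DLW).
# The two arrays below are placeholders, not the study data.
import numpy as np
from scipy import stats

def bland_altman(ei_kj, tee_kj):
    ei, tee = np.asarray(ei_kj, float), np.asarray(tee_kj, float)
    diff = ei - tee                        # y: difference between the two methods
    avg = (ei + tee) / 2.0                 # x: average of the two estimates
    bias = diff.mean()                     # mean difference
    sd = diff.std(ddof=1)
    loa = (bias - 2 * sd, bias + 2 * sd)   # limits of agreement (±2 SD)
    trend = stats.linregress(avg, diff)    # tests for a relationship between x and y
    return bias, loa, trend.rvalue, trend.pvalue

if __name__ == "__main__":
    ei = [3900, 4200, 5100, 4700, 6100, 3500, 4900, 5600]    # placeholder kJ/24 h
    tee = [5800, 6000, 6300, 5900, 6500, 5700, 6100, 6400]
    bias, loa, r, p = bland_altman(ei, tee)
    print(f"bias = {bias:.0f} kJ; limits of agreement = {loa[0]:.0f} to {loa[1]:.0f} kJ; "
          f"trend r = {r:.2f}, p = {p:.3f}")
```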
When dividing the children into tertiles (low, medium and high) for EI and TEE 42.1% (n = 16) were classified correctly, 47.4% (n = 18) were classified plus or minus one group, and 10.5% (n = 4) were classified plus or minus two groups. Table 2 shows the mean intakes and the correlations for the seven foods and drinks assessed using KidMeal-Q and 24 h dietary recalls. Only sweetened beverage consumption was significantly different in mean intake (p < 0.001) as measured by KidMeal-Q and 24 h dietary recalls. Significant correlations were observed for vegetables, fruit juice and candy between the two methodologies. Table 3 displays the mean intakes and correlations for selected nutrients estimated using KidMeal-Q and 24 h dietary recalls. For the percentage of energy obtained from the macronutrients no significant differences were observed, however a significant difference in percent energy from sucrose (p < 0.001) was found. Significant differences were also found for fibre and calcium (both p < 0.001). Significant correlations were found for the majority of the selected nutrients. EI from KidMeal-Q and the 24 h dietary recalls were correlated (r = 0.532, p = 0.001). Discussion KidMeal-Q is an interactive and user-friendly questionnaire with a relatively short answering time and has comparable validity to other corresponding epidemiological tools. KidMeal-Q underestimated EI in the majority of children. However, in regards to the seven investigated food groups only one significant difference was found (sweetened beverages) for the mean intakes assessed using KidMeal-Q and 24 h dietary recalls. KidMeal-Q is quick and simple for parents to respond to, which increases user-friendliness and the likelihood of completion [20]. On average a FFQ takes between 30 to 60 min [21], while KidMeal-Q took only a quarter to a half of that time. A systematic review and meta-analysis conducted by Edwards et al. [20] found that response rates are inversely related to the questionnaire length, and found an even stronger relationship in extremely short questionnaires. Time is not the only factor that affects user-friendliness; the wording and layout of the questions also plays a large role [22]. The interactive design of KidMeal-Q allows for easy navigation throughout the questionnaire through prompts, error messages and letting participants skip irrelevant questions, all of which increases completion rates [6,23]. The short answering time and interactive design of the questionnaire allows it to be used on a large number of people as well as in epidemiological studies, where dietary habits are not part of the main research question and only play a small part in the study. As found with other FFQs [9,[24][25][26][27] KidMeal-Q had wide limits of agreement, demonstrating that it is not a valid tool on an individual level. In comparison with TEE from DLW KidMeal-Q underestimated EI by 23%. One FFQ in pre-school children overestimated EI in comparison to TEE from DLW by 59% [28] and two others underestimated EI by 3% [26] and 5% [25]. Similar to KidMeal-Q, two of the FFQs [26,28] were conducted under unsupervised conditions, whereas that by Collins et al. [25] was conducted in a supervised setting, possibly allowing for the questionnaire to better predict EI through allowing parents to ask questions and clarify statements. The more questions in the Dutman et al. 
FFQ, and their greater level of detail [26] (reflected in its longer answering time of 25 min), may have led to more accurate reporting of EI in comparison to KidMeal-Q. KidMeal-Q also differed from the aforementioned FFQ in terms of how the parents reported their child's food intake, online versus pen and paper, which may likewise have contributed to the observed differences. The underestimation of EI could possibly be due to the questionnaire itself; for instance, the portion sizes provided were perhaps too small and thus led to the observed underestimation. It is important to note that KidMeal-Q underestimated sweetened beverages on average by 82 grams per day. This amount corresponds to approximately 1370 kJ, which is a considerable amount of energy. As FFQs have been shown to both under- and overestimate EI, more research needs to be conducted to improve the accuracy of these tools. Further work should focus on examining the provided portion sizes as well as gaining additional understanding of parental reporting of dietary data. Specifically for KidMeal-Q, a revision of the questions regarding sweetened beverages is required. Even though correlations are not optimal for evaluating methods, they are often used when comparing dietary assessment methods. In this study, a significant moderate correlation was found between EI from KidMeal-Q and TEE measured with DLW, with similar results being found in other studies [9,24]. However, the correlation was lower than those reported by Kroke et al. [27] and Dutman et al. [26], but higher than those of Collins et al. [25] and Perks et al. [29]. Four of the six FFQs studied [9,24,27,29] were conducted in an adult or youth population and were traditional paper-based questionnaires, except for that of Christensen et al. [9]. A stronger correlation was found in the Dutman et al. [26] study; however, this may be attributed to the fact that they extensively reviewed their FFQ results and contacted parents about peculiar answers, which should provide more accurate estimates of EI. KidMeal-Q demonstrated a decent ranking ability compared to DLW, which is similar to other studies [9,26,27]. The correlations for the seven investigated food groups are also similar to those found in previous studies in pre-school children [30,31]. A strength of this validation study was the use of DLW as a reference method, which is considered the gold standard for assessing TEE and recommended for use when validating EI [32]. Furthermore, the use of 24 h dietary recalls allowed us to assess KidMeal-Q's ability to evaluate certain food groups, which is of great importance in epidemiological studies. This study was limited by the fact that TEE was the average of 14 days, while KidMeal-Q assessed dietary habits over the past couple of months; however, the day-to-day variation in TEE is low [33,34], so we do not think this has largely influenced our results. We were unable to obtain four 24 h dietary recalls from all participants; however, when we re-ran the analyses including only children with four 24 h dietary recalls (n = 26), our conclusions remained the same. Furthermore, this nested validation study was conducted at the final follow-up within the MINISTOP trial, and the parents in the intervention group were given advice on how to make their child's diet more healthy, which could have affected how they answered the FFQ. 
We do not believe this is an issue, as there were no significant differences in EI as measured by KidMeal-Q or the 24 h dietary recalls, TEE, or the food groups between the children in the intervention and control groups. Additionally, the majority of the 24 h dietary recalls covered weekend days, as the MINISTOP trial targeted the home environment. However, as we have stated previously [11], we do not believe this is a major issue because the majority of Swedish parents with a child of this age work and would have their child in daycare, so when they filled out the FFQ they would most likely be reporting their child's food habits from the home environment. This study also had a relatively small sample size (n = 38), and only four 24 h dietary recalls were applied. Finally, the fact that the parents were on average more highly educated than the general Swedish population may limit the generalizability of the results. In conclusion, the online FFQ KidMeal-Q has been demonstrated to be interactive and user-friendly. It has a relatively short answering time and comparable validity to other FFQs. However, more work is needed to further improve the questionnaire's accuracy before it can be used in studies in pre-school children. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript: DLW, doubly labelled water; EI, energy intake; FFQ, food frequency questionnaire; SD, standard deviation; TEE, total energy expenditure.
Patient Engagement Practices in Clinical Research among Patient Groups, Industry, and Academia in the United States: A Survey Objective Patient-centered clinical trial design and execution is becoming increasingly important. No best practice guidelines exist despite a key stakeholder declaration to create more effective engagement models. This study aims to gain a better understanding of attitudes and practices for engaging patient groups so that actionable recommendations may be developed. Methods Individuals from industry, academic institutions, and patient groups were identified through Clinical Trials Transformation Initiative and Drug Information Association rosters and mailing lists. Objectives, practices, and perceived barriers related to engaging patient groups in the planning, conduct, and interpretation of clinical trials were reported in an online survey. Descriptive and inferential statistical analysis of survey data followed a literature review to inform survey questions. Results Survey respondents (n = 179) valued the importance of involving patient groups in research; however, patient group respondents valued their contributions to research protocol development, funding acquisition, and interpretation of study results more highly than those contributions were valued by industry and academic respondents (all p < .001). Patient group respondents placed higher value in open communications, clear expectations, and detailed contract execution than did non–patient group respondents (all p < .05). Industry and academic respondents more often cited internal bureaucratic processes and reluctance to share information as engagement barriers than did patient group respondents (all p < .01). Patient groups reported that a lack of transparency and understanding of the benefits of collaboration on the part of industry and academia were greater barriers than did non–patient group respondents (all p< .01). Conclusions Despite reported similarities among approaches to engagement by the three stakeholder groups, key differences exist in perceived barriers and benefits to partnering with patient groups among the sectors studied. This recognition could inform the development of best practices for patient-centered clinical trial design and execution. Additional research is needed to define and optimize key success factors. Introduction Tens of thousands of patient groups and voluntary health organizations exist in the United States [1]. This sector is large, diverse, continually evolving, and therefore difficult to track. Some organizations are well-established nonprofits (e.g., American Cancer Society, March of Dimes); others are relatively new. Some groups focus on diseases that affect large numbers of people, such as diabetes and cancer. Others target rare or "orphan" diseases such as cystic fibrosis [2]. Today, patient groups are facilitating clinical research by moving beyond traditional roles of patient recruitment and education to influencing funding decisions, informing research priorities, collaborating with industry, and contributing money for research and patient care [3]. Patient-powered registries and research networks developed by patient organizations are rapidly evolving as a means to contribute to research that leads to significant improvements in patient engagement, care, and health [4,5]. Patient groups are establishing tools and resources to fulfill unmet needs and provide more sophisticated ways to tailor patient group engagement in the research process. 
Examples include the Fox Trial Finder for Parkinson's treatment acceleration with a novel volunteer/patient engagement model [6]; JDRF trials connection for type 1 diabetes [7]; Crohnology.com for sharing experiences with Crohn's and colitis [8]; and other examples of proactive guidance on benefit/risk frameworks [9] and drug development roundtables [10]. Although no single formula exists for how best to engage with patient groups, two reports by the Institute of Medicine and the Clinical and Translational Science Awards task force on community engagement provide useful guidance for partnering with such organizations [11,12]. These reports stress the need for "meaningful engagement" in setting research priorities, governance of comparative effectiveness research programs, framing of research questions and protocols, monitoring of trials, and interpreting and disseminating results. While there is wide agreement on the importance of incorporating the "voice of the patient" into the clinical research continuum, disagreements remain about the goals of patient engagement beyond the elements of recruitment and retention (a common motivation in industry) and whether there is a funding/sponsor mandate (a common motivation in academia). Patient groups have become more sophisticated in their approach to shaping the research agenda for their disease conditions. There are positive case studies within rare disease networks that involve patient groups who are phenotypically similar establishing networks, and major stakeholders vested in solutions who are collaborating to bring about effective therapies [13][14][15][16]. Across the clinical trials enterprise, there seems to be a natural appreciation of the rationale for involving patient groups earlier in the process. This is in line with other similar patient-centered movements in healthcare, such as the rise of patient-reported outcomes and PCORnet, the National Patient-Centered Clinical Research Network [5]. It is also consistent with increased focus from the U.S. Food and Drug Administration (FDA) [17] as well as the 21 st Century Cures Act [18]. Stakeholders need to identify which attributes of patient groups lead to greater partnerships with research sponsors, and all participants in the clinical trial process must embrace the real value of collaboration that should lead to efficiencies and cost savings while producing more relevant outcomes for patients. The goal of our study, therefore, is to provide a snapshot of the different perceptions among stakeholders in the clinical trials enterprise about the importance and value of engaging patient groups. We set out to conduct this foundational work and describe the clinical trial services provided by patient groups as well as potential barriers to successful interactions with industry and academia. Participants and Methods This study was approved by the Duke University School of Medicine Institutional Review Board. Potential participants from patient groups, industry, and academic institutions were identified through rosters and mailing lists of the Clinical Trials Transformation Initiative (CTTI) [19], Drug Information Association, and other stakeholders in the clinical trials enterprise, such as Health Research Alliance and Clinical Research Forum. Individuals were emailed an electronic Qualtrics software (Provo, UT) survey link on May 7, 2014, from CTTI program staff and encouraged to forward the email to their constituencies as a means to increase reach -a method known as snowballing. 
Snowballing enhances reach but limits the ability to quantify an exact response rate. Survey administration was anonymous, as no identifying information was collected. Two reminder emails were sent during the first 2 weeks of the initial survey mailing. Survey questions (S1 Appendix Survey) were developed by the authors who represented the three stakeholder groups and informed by a literature review summarizing the available published medical and grey literature (e.g., white papers, government reports) from the past 5 years. Keyword searches and MeSH terms were used, including industry outreach, patient advocacy, clinical trials and research, patient group, and patient involvement. The literature review yielded 22 publications that were referenced in this manuscript. The survey included four domains: (1) importance or value of patient groups in research; (2) clinical trial services provided by patient groups; (3) negative impacts and barriers to relations; and (4) interactions between patient groups and industry and academia. Each domain included several Likert scale, multiple-choice, and "check all that apply" items. The survey also included questions related to the respondent's affiliation in the patient group, industry, or academic institution. Data Analysis Plan Descriptive statistics were used to examine the characteristics of the organizations represented by the study participants. Chi-square tests were used to compute differences in reporting of frequency of clinical trial services provided by patient groups among the study participants. ANOVA was applied to assess for the differences among study participants in mean scores of the importance or value of patient groups in research. Chi-square tests were used to calculate differences in reporting of frequency of negative impacts to relations between the two pairings: (1) industry and patient group and (2) academia and patient group. Independent t-tests examined the difference between these two pairings in mean scores of satisfaction with relations, engagement priority, and importance in establishing partnerships. A two-sided significance level of 0.05 was used for all statistical tests. Results A total of 179 respondents completed the survey: 24% (n = 43) from industry, 42% (n = 75) from academia, and 34% (n = 61) from patient groups ( Table 1). The majority of industry respondents (72%) included those with a primary focus in pharmaceutical development; 67% were from organizations with more than 500 employees, and 58% indicated more than 5 therapies on the market. Industry respondents cited 32 unique job titles that are dedicated primarily to patient engagement activities within their respective companies. Of the survey respondents in academia, 97% were from nonprofit institutions, 80% were from institutions with an NIH Clinical and Translational Science Award, and 71% had initiated contact with a patient group. Of the patient group survey respondents, 49% were affiliated with a group that was established more than 20 years ago, 72% had a single disease focus, 85% cited having a medical or scientific advisory board, 46% reported having an annual budget between $500,000 and $9,999,999, and 13% reported having a budget of more than $100,000,000. Importance or Value of Patient Groups in Research The perceived importance or value of patient groups in research was rated across research development, study design, study execution, and dissemination of results as shown in Table 2. 
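As a minimal sketch of the comparisons described in the Data Analysis Plan above, the following Python code applies a chi-square test to the reporting frequency of a clinical-trial service across the three sectors and a one-way ANOVA to the mean importance scores. The data frame layout, column names and 0/1 service indicators are hypothetical assumptions; the study worked from its own survey export.

```python
import pandas as pd
from scipy import stats

def compare_service_frequency(df, service_col, sector_col="sector"):
    """Chi-square test of whether reporting of a given clinical-trial service
    (0/1 indicator column) differs across the three stakeholder sectors."""
    table = pd.crosstab(df[sector_col], df[service_col])
    chi2, p, dof, _expected = stats.chi2_contingency(table)
    return chi2, p, dof

def compare_importance_scores(df, score_col, sector_col="sector"):
    """One-way ANOVA on mean Likert importance/value scores across sectors."""
    groups = [g[score_col].dropna() for _, g in df.groupby(sector_col)]
    f_stat, p = stats.f_oneway(*groups)
    return f_stat, p

# A two-sided p < 0.05 is taken as significant, as in the survey analysis.
```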
There were significant differences in the mean scores reported across the three groups; in all cases, patient group respondents reported a greater importance or value in their contributions to research than did academic and industry respondents. The areas of most concordance among industry, academia, and patient group participants were in patient group contributions to improving patient retention (mean scores 4.0/4.1/4.5, respectively; p = .02) and accelerating Clinical Trial Services Provided by Patient Groups As reported in Table 3, there was some consistency in the reporting of services provided by patient groups to industry and academia in the conduct of clinical trials across the three participant groups, including a >50% response rate across the two categories "Patient recruitment and retention" and "Educating patients and their families/caregivers about research." However, patient group respondents cited providing several services at higher rates than industry or academic respondents reported utilizing. Other areas cited by patient group respondents at a higher frequency included participation in clinical trial design (frequencies 9/13/23; p = .02), support during interactions with third-party payers regarding research (frequencies 8/9/22; p = .003), tissue banking (frequencies 1/10/18; p = .001), providing funds for research (frequencies 3/17/30; p < .001), and publicizing and disseminating study results (frequencies 5/27/39; p < .001). Regarding the dissemination of study results, patient groups cited providing services in greater frequency than industry and academia reported receiving. These services included the organization of scientific conferences (frequencies 1/13/19; p = .001), communication with the press (frequencies 1/14/20; p = .001), dissemination on a website (frequencies 3/19/48; p < .001) or in a newsletter (frequencies 2/19/44; p < .001) or through social media (frequencies 3/15/39; p < .001), and presentation of results at a scientific conference (frequencies 2/7/26; p < .001). Negative Impacts to Relations Table 4 examines the negative impacts or barriers to successful engagement among patient groups, industry, and academia in the conduct of clinical trials. The greatest disagreement between patient group respondents and non-patient group respondents in perceptions of negative impacts had to do with the presence of internal bureaucratic processes, patient group lack of understanding of the benefits of partnering with industry and academia, an unwillingness to share information, a lack of interest in the disease, a lack of understanding by industry and academia of the benefits of partnering with patient groups, and a lack of transparency or openness on the part of the other entity (all p < .05). Additional differences in the perception of barriers between academia and patient group participants were reported in the negotiation of intellectual property and indirect costs (both p < .01). Also, most academic respondents (65%) cited opportunities to gain funding from national programs as an important factor in engaging with patient groups, yet one-third received no patient engagement training and experienced internal resistance or lack of buy-in that impeded their ability to engage with patient groups. Perceptions of Intergroup Interactions Industry and patient group respondents reported moderate satisfaction with their relations and a "medium" priority for engagement (i.e., non-significant p>.05) ( Table 5). 
However, academic respondents cited higher satisfaction with relations than did patient groups (4.1/3.3; p < .001). Patient group respondents reported greater importance in the need for open communications, clear expectations, and detailed contract execution in establishing effective partnerships than did industry and academic respondents (all p < .05). In addition, patient groups reported a greater importance of the need for financial benefit to both parties than did industry respondents (frequencies 2.8/3.7; p = .002). Discussion This study demonstrates real differences among stakeholder groups in perceptions of the value of patient group engagement with academia and industry around clinical trials, a finding that may represent a significant barrier to engagement that was not identified by the individual stakeholder groups independently. Differences in perceptions may lead to miscommunication and mismatched expectations for these partnerships and should be recognized in the development of tools or guidelines meant to streamline interactions with patient groups. Developing a methodology for assigning a value to the contributions of patient groups in the CTE in absolute terms may also be useful for aligning stakeholders on the issue of valuation. Most patient group participants reported their ability to provide services in traditional areas such as patient recruitment and retention, patient/family education, and dissemination of study results. Our findings are largely consistent with a recent survey of 201 disease advocacy organizations that reported providing assistance with patient recruitment, data collection, financial support, and study design [20]. However, industry and academic participants reported significantly less receipt of services related to dissemination of study results than that reported by patient groups. A possible explanation is that the research teams were not notified of the publicity efforts; hence, this may be an area of opportunity to enhance industry and academic perceptions of patient group value and contributions and thereby enhance meaningful engagement across the sectors. In addition, our findings related to patient group services reflect the expanding role of patient groups. For example, social media such as Facebook and Twitter are increasingly being used to raise awareness and recruit patients into trials; however, their effectiveness is largely anecdotal [21]. That there is both alignment and difference among stakeholders on perceived barriers to interacting with patient organizations is arguably the most important policy implication to arise from this study. This suggests that, in order to inform the development of best practices, further work is needed to understand which barriers actually have the greatest effect on these relationships. In an emerging field, it is often difficult to know with whom to engage, as demonstrated by the large number of patient engagement job titles reported by our sample. 
While industry and academia reported moderate rates of internal barriers to engagement, patient groups were more likely to cite external factors, such as a lack of transparency, openness, or understanding of the benefits on the part of industry and academia. In terms of the importance of establishing partnerships between patient groups, industry, and academia, the high mean scores in open communications and clear expectations reported by the three stakeholder groups reflect the anecdotal evidence. For example, Gallin et al. [2] stress the importance of effective communication, agreement in shared goals, and establishment of appropriate governance structures and processes including oversight of conflicts of interest, scientific rigor, and program evaluation. Another study documented five best practices: (1) vision alignment, (2) resource alignment, (3) partnership structure, (4) management models, and (5) open and frequent communication [22]. It is notable that academic respondents rated their satisfaction with relationships significantly higher than did patient groups. Additional research is needed to understand the factors that contribute to this difference as a means to improve patient group satisfaction. The strength of our study is in the two-pronged approach to developing the survey questions: (1) questions were informed by a literature review and (2) questions were developed by an author team representing the three stakeholder groups (patient groups, industry, and academia). Study limitations include potential sample bias, as industry respondents may not fully represent a broad spectrum of therapeutic areas, and patient group respondents were largely from more established organizations. In addition, the snowballing method of recruitment may encourage like-minded respondents and may miss clusters of individuals who are not networked with the individuals sampled. Therefore, results may not be generalizable to other, less invested, individuals and groups. In addition, the literature review of patient engagement using MeSH terms revealed little formalized literature and studies to build on. Therefore, we considered it important to employ a three-way stakeholder engagement survey that would reveal more than anecdotal evidence on patient engagement. Last, potential differences may exist between engaging individual patients and organized patient groups. Conclusion Important consistencies and differences exist in perceptions of the value that patient group engagement adds to the clinical trial process. Despite reported similarities between approaches to engagement among industry, academia, and patient groups, key differences exist in perceived barriers and benefits of partnering and engagement that have implications in shaping policy. This recognition could inform the development of best practices. Additional research is needed to define and optimize key success factors for engagement between patient groups, academia, and industry around clinical trials.
Multimorbidity and the inequalities of global ageing: a cross-sectional study of 28 countries using the World Health Surveys Background Multimorbidity defined as the “the coexistence of two or more chronic diseases” in one individual, is increasing in prevalence globally. The aim of this study is to compare the prevalence of multimorbidity across low and middle-income countries (LMICs), and to investigate patterns by age and education, as a proxy for socio-economic status (SES). Methods Chronic disease data from 28 countries of the World Health Survey (2003) were extracted and inter-country socio-economic differences were examined by gross domestic product (GDP). Regression analyses were applied to examine associations of education with multimorbidity by region adjusted for age and sex distributions. Results The mean world standardized multimorbidity prevalence for LMICs was 7.8 % (95 % CI, 7.79 % - 7.83 %). In all countries, multimorbidity increased significantly with age. A positive but non–linear relationship was found between country GDP and multimorbidity prevalence. Trend analyses of multimorbidity by education suggest that there are intergenerational differences, with a more inverse education gradient for younger adults compared to older adults. Higher education was significantly associated with a decreased risk of multimorbidity in the all-region analyses. Conclusions Multimorbidity is a global phenomenon, not just affecting older adults in HICs. Policy makers worldwide need to address these health inequalities, and support the complex service needs of a growing multimorbid population. Electronic supplementary material The online version of this article (doi:10.1186/s12889-015-2008-7) contains supplementary material, which is available to authorized users. Background The theory of epidemiological transition is grounded on the observed shift of disease burden from communicable to non-communicable disease (NCD) causes [1]. Whilst the debate about the role of population ageing in epidemiological transition continues, the demographic transition to older populations is also occurring across all regions, albeit with different patterns, determinants and rapidity. It has been shown that the ageing of populations is ongoing in both developed and developing countries although, the growth rate of older adults in low-and middle-income countries will remain significantly higher than in most high-income countries (HICs) for many decades [2]. Multimorbidity is usually defined as the presence of two or more chronic diseases within an individual [3]. Although chronic disease factors are considered drivers of multimorbidity, the observed increase in multimorbidity is also related to both the demographic and epidemiologic transition. As the global population continues to grow in size, and becomes increasingly aged, there is an expectant increase in multimorbidity prevalence. Tackling multimorbidity as part of NCD burden remains one of the key challenges faced by the global community. In particular, health systems need to examine its socio-economic determinants in order to provide the most equitable health care to their populations and to drive NCD prevention. Despite the growing recognition of the prevalence of multimorbidity amongst older adults, global prevalence studies have largely remained single-disease focused [4]. Few studies have reported national level estimates. 
Population prevalence studies in Spain and Germany suggest that multimorbidity prevalence is approximately 60 % for people aged 65 years and above [5,6]. While the focus on older adults is common, multimorbidity also affects younger adults [7]. A study in Australia reported a multimorbidity prevalence of approximately 4 % in adults aged 20-39 years, 15 % in the 40-59 age group, and 39 % in those aged 60 and older [8]. There are also contrasting associations by age and sex. Multimorbidity in HICs is reportedly more prevalent for individuals of higher ages, female sex, low income, and low education [9][10][11]. The outcomes of multimorbidity have been well documented in HICs, with multimorbidity being associated with reduced quality of life, decreased functional capacity, and reduced survival [12][13][14]. Studies have also shown the burden of multimorbidity and its relation to rising healthcare utilisation, cost and expenditure [15,16]. A comparison of the relationship between multimorbidity and socio-economic status (SES) show contrasting results for high, middle and low income countries. In Scotland, a high income country, multimorbidity has been found to be associated with lower SES [17]. In Bangladesh, a low income country -however, the wealthiest quintile of the population had an increased prevalence of multimorbidity [18]. And in studies examining its association with education, multimorbidity was more prevalent in those with lower educational levels in Canada (a HIC) [11]; while multimorbidity was less common among educated and employed persons in South Africa (an upper-middle income country) [19]. There have been no studies examining the age and socioeconomic distribution of multimorbidity (MM) in LMICs. The present study aims to establish the prevalence of MM in a range of LMICs, and to examine the variations of MM by age and education (as a proxy for SES). Study samples Publically available data from the WHO World Health Survey (WHS) was used, which is publicly available from the WHO. The World Health Surveys consists of crosssectional national studies, each of which follow a multistage clustering design to draw nationally representative samples of adults aged 18 years and older. The details of the survey procedures are described elsewhere [20,21]. Seventy-one countries participated in the WHS between 2001 and 2004. Sample sizes varied between countries depending on feasibility and cost. Individual participants aged 18 years or above were randomly selected for interview. All surveys were implemented as face-to-face interviews; except for two countries, which used phone and mail-in interviews. Of the seventy-one countries that participated in the WHS, eighteen countries were excluded from the analyses, as they did not complete the long version of the questionnaire covering chronic condition status; these were mostly countries from Western Europe. Countries were also excluded if the response rate to the chronic health questions was less than 90 % (eleven countries) or if they did not include post-stratification weights (six countries). A minimum of four countries were randomly selected from each region for further analysis, resulting in a total of twentyeight of the remaining thirty-seven countries. Since the research questions aimed to address the differences between LMICs the majority of countries sampled were LMICs. Due to low response rates in certain regions, such as Africa, countries from Eastern Europe & Central Asia were oversampled. 
We included one high income countries for comparison. In total, six countries were randomly selected from Africa; five countries from South-East Asia; four from South Asia; eight from Eastern Europe & Central Asia; four from Central & South America; and, one from Western Europe. Sampling weights were applied, as well as poststratification weights to account for non-response. Measures and variables In the WHS, chronic disease morbidity was defined by self-report, based on a set of six doctor diagnosed conditions. The self-reported conditions were assessed based on responses to the question, "Have you ever been diagnosed with…?" Previous studies have used different operational definitions of multimorbidity. Methodological differences, such as the number of chronic conditions to include in the count, result in a wide variability in prevalence estimates [7]. To prevent further discordance, multimorbidity is defined here as the presence of two or more chronic diseases, which is the most commonly used definition in prevalence studies [22]. A binary variable for multimorbidity was created on the presence of two or more of the six conditions: arthritis, angina or angina pectoris (a heart disease), asthma, depression, schizophrenia or psychosis, and diabetes. The individual level socio-demographic variables of interest were age, sex and highest level of education completed. The residence of the individual, defined as living in either an 'urban' or 'rural' area, was also used in the description of the country characteristics. Two different age groupings were generated for different analyses: first, three age groupings for those 18-49 years, 50-64 years and 65+ years; and then by two groups for those younger than 55 (18-54 years) and those aged 55 years or older. The former was done to examine stratum specific differences, and the latter to examine generational differences. To examine generational differences, 55 years was taken as a cut point, representing a mid-way point within the WHS study population. Level of education was used as a measure of countrylevel socioeconomic status (SES). 'Highest education level obtained' was collapsed from seven to four categories: (1) university or any higher education; (2) secondary school; (3) primary school; and, (4) less than primary school (including no formal education). Inter-country socioeconomic differences were examined by using country estimates for GDP per capita. These were obtained from the United Nations Statistical Division records for 2003. Countries were then grouped according to the cut-offs for low-middle-and high-income based on the World Bank classification figures in 2003 [23]. Statistical analysis Survey estimates were used to calculate prevalence measures and extract nationally representative samples, accounting for non-response. To obtain valid comparisons across the countries, age-standardised multimorbidity prevalence rates were calculated using the direct method with the WHO Standard Population (2000-2025) [24]. For the descriptive analyses, mean percentages were taken as an average across populations and normality of the distributions was tested using the Shapiro-Wilk test. We used non-parametric regression to produce a line of best fit, when comparing national estimates of multimorbidity with GDP. Individual countries were weighted by the survey size to produce regional estimates for comparisons of multimorbidity by age and education. 
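A minimal sketch of the variable construction and the direct age standardization described above is given below. The condition column names, the survey-weight column and the standard-population shares are placeholders for illustration; the actual analysis used the WHO Standard Population (2000-2025) weights and Stata survey estimation.

```python
import numpy as np

CONDITIONS = ["arthritis", "angina", "asthma", "depression", "schizophrenia", "diabetes"]

def add_multimorbidity(df):
    """Flag respondents reporting two or more of the six chronic conditions (0/1 columns)."""
    df = df.copy()
    df["multimorbid"] = (df[CONDITIONS].sum(axis=1) >= 2).astype(int)
    return df

def direct_standardized_prevalence(df, std_weights, age_col="age_group", w_col="svy_weight"):
    """Direct age standardization of multimorbidity prevalence.

    std_weights maps each age group to its share of the standard population
    (placeholder values; substitute the WHO Standard Population 2000-2025).
    Age-specific prevalences use the survey/post-stratification weights.
    """
    standardized = 0.0
    for age_group, share in std_weights.items():
        sub = df[df[age_col] == age_group]
        if len(sub) == 0:
            continue
        prevalence = np.average(sub["multimorbid"], weights=sub[w_col])
        standardized += share * prevalence
    return standardized
```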
Significance testing of the comparisons among independent samples was done by t-test or ANOVA while for those whose distributions deviated from the normal one -by the Wilcoxon rank-sum (for two variables) and Kruskal-Wallis (for more than two variables) tests. 'Prevalence ratios' of multimorbidity by education were calculated with the reference category being primary school education completion. Univariable models were fitted to analyse the association of both sex and age with multimorbidity. For the multivariable analyses, data were pooled at regional level. A random effects logistic regression model was fitted for the regional analysis, to account for the hierarchical nature of the data within countries and regions. Odds ratios (OR) and 95 % confidence intervals (CI) are presented, with p < 0.05 taken as statistically significant, unless stated otherwise. All analyses were done using Stata version 12. Confidence intervals have been calculated based on recommendations for crude and age-specific rates [25]. Results Individual country characteristics are described in Table 1 Individual morbidity estimates suggest that arthritis is the most common condition across the WHS countries, with mean prevalence of 12.0 % (95 % CI, 11.8 -12.2). The mean prevalence for depression, angina, asthma, diabetes and schizophrenia, respectively, were 6.7 %, 7.5 %, 5.0 %, 4.0 % and 0.9 % [see Additional file 1]. Multimorbidity prevalences by country are shown in Table 2. Both age-specific prevalences and age standardized prevalence are shown for each country. The mean world standardized prevalence for LMICs was 7.8 % (95 % CI, 6.5 -9.1) and the range was 1. Figure 1 shows national levels of multimorbidity by country GDP per capita. There was a positive association between multimorbidity prevalence and GDP per capita (from GDP per capita of $200 -$10,000). Above $10,000 the line flattens: Spain had a relatively low multimorbidity prevalence given their high GDP per capita. Figure 2 shows the prevalence ratios of multimorbidity across socioeconomic groups, stratified into younger and older adults. Amongst the younger adults, across all regions, there was a distinct negative socioeconomic gradient, with the highest burden on the least educated. In Western Europe there appeared to be a wider variation between SES categories, compared to SE Asia and Africa. Amongst older adults, there was less variation between SES categories, compared to the younger adults. However, there was still a distinct negative gradient in Western Europe, with the highest burden on the least educated. South-East Asia on the other hand has a positive gradient, with the highest burden on the most educated. Both univariable and multivariable analyses are shown in Tables 3 and 4. Univariable and multivariable analyses at the country level are shown in Table 3, showing the sociodemographic correlates of age, sex and education. Age was significantly associated with multimorbidity in all countries. Sex was significantly associated with multimorbidity in all but seven countries. Multimorbidity was associated with education in the univariable analyses, but was not significant when adjusted for both age and sex, except for certain education categories in Bangladesh, Brazil, Hungary, Mauritius, Namibia and Spain; which were all consistent with an inverse relationship. Similar to the country level, age and sex were both significantly associated with multimorbidity in all regions (Table 4). 
When adjusted for age and sex, the lowest education category was significantly associated with a higher risk of multimorbidity in Africa and Western Europe, and higher education categories were significantly associated with a decreased risk of multimorbidity in South Asia and Western Europe. Adjusted for age, sex, country and region, the 'all region' model suggests an overall negative education gradient. Discussion The subject of multimorbidity is of growing interest, in part due to the ageing of all populations. Internationally, there is still limited evidence on the prevalence and social determinants of multimorbidity, particularly in LMICs. This is the first study to describe global patterns of multimorbidity and to compare prevalence across different countries including LMICs. There are a few notable findings. Firstly, despite the variation in multimorbidity prevalence, the mean world standardized prevalence for LMICs was 7.8 % (95 % CI, 6.5 - 9.1), so even in LMICs the multimorbidity prevalence was quite high. Secondly, multimorbidity prevalence was positively associated with country GDP per capita. The relationship was, however, non-linear; our one HIC, Spain, had low multimorbidity relative to its per capita GDP. These results suggest an influence of other factors which may include, but are not limited to, more freedom to make better lifestyle choices and better social conditions [26]. In comparison to Spain, the Eastern European countries have relatively high multimorbidity prevalence. Historically, Eastern Europe has had poorer population health outcomes relative to its western counterparts following the fall of communism in 1990. Such health outcomes were markedly influenced by exposure to risk factors, such as tobacco smoking and alcohol consumption [27][28][29]. (Fig. 2 caption: the socioeconomic gradient of multimorbidity by region, for adults aged <55 (a) and ≥55 (b). The lightest shade represents the first category (higher education achieved); the darkest shade represents the final category (less than primary school education). Multimorbidity prevalence ratios are based on the prevalence in the third category, set at 1.) Thirdly, multimorbidity was significantly associated with age across all countries including LMICs. This finding has been reported consistently across several studies [9,17,[30][31][32][33][34]. Fourthly, multimorbidity as defined here is also not limited to older adults, but affects younger adults in LMICs. This association of multimorbidity with age, however, might reflect the type of conditions included in the disease count and their age of onset [35]. Fifthly, trend analyses of multimorbidity and education suggest a transgenerational difference: a transition to a more negative education gradient is observed for younger adults compared to older adults in LMICs. Our 'all region' model also suggests an inverse relationship between multimorbidity and education. These findings are consistent with what has been found in other studies in HICs [17,32]. Finally, there are notable gender differences in multimorbidity, with female sex being associated with higher multimorbidity. This is a common observation in morbidity studies, often attributed to greater use of health services and disease diagnosis [33], though other studies also suggest a role for additional factors, including behavioural and psychosocial ones [34,36]. 
Other studies suggest that clustering patterns of multimorbidity differ for male and females; for example, the cardiometabolic cluster was reportedly more common in males. This occurrence could be due to known differences in physiology, such as the protective effect of female hormones on CVD [37]. One of the study aims was to examine the variations of multimorbidity by SES here with education as a proxy. Our descriptive analyses of education show that both regional differences and generational differences exist for adults with multimorbidity. In Western Europe and Eastern Europe & Central Asia, there was wider variation in prevalence ratios between SES categories, compared to other regions. And for adults aged <55 years, the gradient was always negative, with one exception of older adults in South-East Asia. This suggests that in South-East Asia there might have been an intergenerational reversal in the socioeconomic gradient of multimorbidity. Such results have also been found in studies on obesity where transitional economies are experiencing a reversal in socioeconomic gradient thus resulting in a similar gradient to HICs [38]. The global-level multivariable analyses show a negative association of multimorbidity with education. Results from Western Europe (Spain) suggest a significantly negative education gradient of multimorbidity in HICs. In Africa, there is also a significantly negative education gradient in multimorbidity. The education gradient in Africa, despite most countries in this region being LMICs, is similar to the Western Europe region. These results are contrary to the Bangladesh study, which sampled 850 individuals (60 years and above) in a rural area and reported a direct association of multimorbidity with SES [18]. The SES index in their study, however, was based on household assets. Alternative measures of SES may lead to different results. One study in rural Uganda reports maternal education to be a better predictor of health; whereas other studies explore the use of permanent income [39,40]. Strengths and limitations This study provides novel data on multimorbidity prevalence in nationally representative population samples using a consistent set of methods measures across multiple countries. Being the first of its kind, one of its major strengths is the availability and comparability of the data across all a wide range of countries using the World Health Surveys which were developed for this reason. The study has few limitations which, even if not undermining its contributions and potential impact, should be also mentioned. Firstly, prevalence estimates were based on a limited set of conditions [7]. The chronic conditions included in the WHS were chosen to reflect health system coverage [41]. The conditions had to be amenable to self-report and reflect a known burden or prevalence globally. The choice of conditions should correspond to those with greater prevalence in older populations (prevalence for asthma, for instance, is more typically higher in older children and younger adulthood). Secondly, the study presents crosssectional data from 2003. Further investigations should use current or recent data, as well as longitudinal data, to ascertain changing patterns over time. Thirdly, only countries with a greater than 90 % response rate to health status questions on chronic disease were sampled, which meant that a number of lower income countries, where response rates were low, were excluded from the analyses. 
There was also low representation from HICs, as these countries largely did not complete the chronic disease questions. As such, the use of Spain only, to represent Western Europe, was a limitation. Fourthly, these results were based on self-reported measures, which may result in disease underreporting and potential bias [42][43][44]. One study notes that self-reporting leads to underreporting, particularly amongst the poor, which dampens the gradients [45]. It may be that health literacy and service access impact prevalence based on self-report for countries at different levels of economic development. Self-reported diagnosis can be further validated by auxiliary symptom-reporting questions included in the survey, such as the Rose questionnaire used for angina, or through clinical assessment [46]. National GDP is generally correlated with healthcare system investment and potentially healthcare access, which might affect the interpretation of the results. Spain, however, had low multimorbidity relative to national GDP despite having relatively good healthcare system access. In order to understand the relationship between a country's development and multimorbidity as an appropriate health outcome, further studies are needed, with a fuller accounting of confounding, modifying and mediating elements. Finally, the use of education as a proxy for SES has been debated despite its wide use in population health research [47,48]. There is evidence to suggest that after conditioning for the effect of socioeconomic status, measured by household income or assets, education has an independent and substantial effect on health outcomes [49]. Conclusion Multimorbidity is common in LMICs and significantly associated with age. There is an inverse country association of multimorbidity with education, which indicates an inequity of disease burden. The negative gradient of multimorbidity with education is already occurring and is more marked in the younger generation. It may reflect the proliferation of several key risk factors for these chronic conditions, including unhealthy behaviours. The recent UN World Summit addressed the common risk factors of NCDs to be tackled with urgent priority, namely tobacco use, unhealthy diet, harmful use of alcohol and physical inactivity [50]. Weak health systems and governance will not be able to support the care needs resulting from the complexities of a multimorbid population. Better coordination and support through informed policy and planning of health care systems is needed to support the transition required for health systems to address future care needs. Furthermore, there is a need to increase activities and expand measures to reduce the modifiable risk factors that are driving multimorbidity prevalence.
Detection of Decreasing Vegetation Cover Based on Empirical Orthogonal Function and Temporal Unmixing Analysis Vegetation plays an important role in the energy exchange of the land surface, biogeochemical cycles, and hydrological cycles. MODIS (MODerate-resolution Imaging Spectroradiometer) EVI (Enhanced Vegetation Index) is considered as a quantitative indicator for examining dynamic vegetation changes.This paper applied a newmethod of integrated empirical orthogonal function (EOF) and temporal unmixing analysis (TUA) to detect the vegetation decreasing cover in Jiangsu Province of China. The empirical orthogonal function (EOF) statistical results provide vegetation decreasing/increasing trend as prior information for temporal unmixing analysis. Temporal unmixing analysis (TUA) results could reveal the dominant spatial distribution of decreasing vegetation.The results showed that decreasing vegetation areas in Jiangsu are distributed in the suburbs andnewly constructed areas. For validation, the vegetation’s decreasing cover is revealed by linear spectral mixture from Landsat data in three selected cities. Vegetation decreasing areas pixels are also calculated from land use maps in 2000 and 2010. The accuracy of integrated empirical orthogonal function and temporal unmixing analysis method is about 83.14%. This method can be applied to detect vegetation change in large rapidly urbanizing areas. Introduction Information on vegetation change has practical significance for revealing surface spatial variation and evaluating the regional ecological quality [1][2][3].Vegetation indices are effective quantitative indicators of vegetation health spatial distribution and key parameters to study in landscape ecology, climate change, and soil erosion in various researches of surface processes [4][5][6].MODIS EVI dataset is utilized to examine regional vegetation changes due to its excellent presentation of vegetation information and anti-interference against the soil background and atmosphere [7]. In China, land use change is mainly characterized by urbanization [8,9].Land use and land cover changes are primarily identified based on the repeated acquisition of remote sensing datasets.Proposed approaches for multitemporal analysis include (1) images classification [10], (2) wavelet decomposition [11], (3) a multitemporal dataset which is transformed by principal component (PC) analysis (then the resulting component could reflect various changes [12]), (4) spatial statistical analysis which calculates the quantitative analysis of the changing scope, strength, and trend [13], (5) change vector analysis which can calculate the change type and intensity [14], and (6) temporal unmixing modeling [15].These changing analytical methods have their own characteristics and emphases, but, for multitemporal images, the most important aspect is to remove noise and determine the dominant dimensions [16].In this study, prior information on increasing/decreasing vegetation spatial coverages is calculated by empirical orthogonal function (EOF). 
The empirical orthogonal function is usually employed to model the spatial-temporal patterns of the sea surface temperature [17], dynamical atmospheric [18], sea-level rise [19], and shoreline variability [20].The empirical orthogonal function has been applied on the night lights dataset and MODIS EVI for characterization and modeling of the changing extent, intensity, and distribution [21].Temporal unmixing is used to model the spatial distribution of crop types [22,23], forest [24], and sea ice imagery [25].Compared with other approaches [23,26], the empirical orthogonal function can describe vegetation change trend without ancillary information in this research.The integrated of empirical orthogonal function and temporal unmixing method is first provided to model the spatial-temporal patterns of crop types [16].Spectral mixture analysis is used to monitor vegetation change eliminating the background influence, but it is not suitable for a large area [27].Because of the fast urbanization in China, new construction results in widely decreasing vegetation.In this study, the empirical orthogonal function aims to take the decreasing vegetation curves as a prior for temporal unmixing models in Jiangsu Province.The combination method using empirical orthogonal function (EOF) and temporal unmixing analysis (TUA) is introduced to quickly detect decreasing vegetation areas in a large area. This study aims to evaluate the integrated empirical orthogonal function and temporal unmixing to detect the changing vegetation area and apply the approach in Jiangsu Province, a rapidly urbanized province in southern China [28].The theories of empirical orthogonal function and temporal unmixing and the application results in Jiangsu are presented first.Next, contrasting Landsat data are used to validate the accuracy and consistent spatial distribution of decreasing vegetation with MODIS EVI by empirical orthogonal function and temporal unmixing method.At last, this analysis also identifies strengths and uncertainties of the combined empirical orthogonal function and temporal unmixing method. Study Area and Datasets. 
Jiangsu Province is located at 116°18′–121°57′E, 30°45′–35°20′N, with an area of 10.26 million hectares that accounts for 1.1% of the total terrestrial area of China. The plains area is 7.06 million hectares and the water area is 1.73 million hectares. The elevation of more than 90% of the area of Jiangsu Province is lower than 50 meters. Jiangsu has a warm-temperate to north-subtropical transitional climate (Figure 1). Jiangsu Province's comprehensive economic strength has been at the forefront in China. After the open-door policy was issued in China in 1978, the urban area and the growth rate increased significantly in Jiangsu. In 1990, Jiangsu had an urban population of 14.59 million and a rural population of 53.08 million. By contrast, in 2012 there was an urban population of 49.90 million and a rural population of 29.30 million. In 2012, GDP per capita in Jiangsu reached $11,113.3 compared with the national average of $6251.87 [29]. The urbanization rate of Jiangsu was 63% in 2012, and more than 80% of the urban growth area occurred outwards from the pre-growth urban fringes at the expense of rural land [28]. Due to urbanization, the arable land area per farmer decreased to less than 335 m² [8]. Urban sprawl has environmental impacts, such as enhancing the urban heat island and increasing carbon emissions, affecting the quality of life in urban areas [30]. Therefore, during this growth process, the timely and effective monitoring of vegetation is important.

Datasets.

The 16-day MODIS EVI (MOD13Q1) composites with a 250 m spatial resolution were downloaded from the USGS website (http://glovis.usgs.gov/). The EVI temporal profiles span February 2000 to December 2012 (296 images). EVI is less susceptible to cloud and haze contamination than NDVI [31]. The EVI time series were mosaicked, reprojected, and resampled to 1000 m to display the dominant vegetation change trend. One validation dataset consists of Landsat TM images from 2000, 2002, 2006, and 2009 processed by the linear spectral unmixing method. The three net spectral endmembers (substrate, vegetation, and dark) are provided and validated from earlier research results [32]. We used the spectral unmixing model of the ENVI software. Another dataset used to validate the accuracy of the temporal unmixing analysis method is the land use maps of 2000 and 2010. The land use map product is from the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences. We also used ArcGIS software to generate 60 random points and compared the land surface types (agriculture, grass, forest, water, and impervious surface) in Google Earth and in the land use maps of 2000 and 2010. The accuracy of the land use map is about 91%.

Empirical Orthogonal Function.

The empirical orthogonal function method decomposes the original data into the product of a temporal function and a spatial function [18,33]. The curves of the empirical orthogonal function represent temporal patterns, which are the eigenvectors of the covariance matrix from the principal component transform of the original data. PCs represent the spatial weights of the corresponding empirical orthogonal function curves. In this research, the empirical orthogonal function curves are the vegetation increasing/decreasing change curves. PCs display the spatial distribution of the corresponding empirical orthogonal function curves. ENVI software provides a principal components function, and the eigenvectors from its statistics file are the EOFs.
In the empirical orthogonal function method, the original data field X(s, t), sampled at spatial points s = 1, ..., M and at times t, is divided into the product of a temporal function (the EOFs, T_k(t)) and a spatial function (the PCs, Z_k(s)) [34,35]:

X(s, t) = Σ_k T_k(t) Z_k(s).

Suppose that X has a large projection on the first N vectors in the spatial field:

X(s, t) = Σ_{k=1}^{N} T_k(t) Z_k(s) + ε(s, t),

where ε(s, t) is the residual error when X is expressed by N vectors, M is the spatial dimension number, and t is the time index. The variance of the total error is the sum of the variances of the individual pixel errors:

E = Σ_s Σ_t E[ε(s, t)²],

where E[ε(s, t)²] is the expectation of the squared error. The constraint condition is that the spatial patterns are orthonormal,

Σ_s Z_k(s) Z_l(s) = δ_kl,

and the minimisation of E subject to this constraint is carried out with a Lagrangian, where λ is the Lagrangian constant. Moreover, the empirical orthogonal function method aims to reduce the dimensionality with a minimum loss of information while maintaining the majority of the variation produced by independent processes and capturing the essential features [36,37].

In this analysis, the empirical orthogonal function curves with decreasing trends are temporal patterns that represent the reduction of vegetation cover. In conventional empirical orthogonal function analysis, the PCs and EOFs are interpreted separately in terms of spatiotemporal processes, and the EOFs only represent statistically unrelated modes of variance. In the context of this study, much more attention is paid to the decreasing EOF curves related to vegetation reduction, which provide prior information for the temporal unmixing model. The EOF curves and PCs can be obtained with the ENVI software.

Temporal Unmixing Analysis.

Temporal unmixing analysis is an extension of linear spectral unmixing. The concept of the temporal unmixing model is that each pixel is a linear combination of temporal endmembers and their corresponding fractions [22]. The fraction of each endmember should be equal to or greater than zero, and the sum of the fractions in one pixel should equal one. Accurate endmembers and temporal dimensions are the keys to the temporal unmixing model. Endmembers lie at the extreme positions of the feature space and represent different fundamental processes. The selection of endmembers is crucial for the temporal unmixing model; here, endmembers are selected by the geometric vertex method [38,39]. The EOF curves provide the vegetation increasing/decreasing trend as prior information. From the EOF curves with a decreasing trend, the pixels with a decreasing trend can be found in the corresponding PC, and temporal decreasing-vegetation endmembers can be extracted from the corresponding PC scatter plot.

The temporal unmixing used here differs from traditional temporal unmixing [40] in two respects. First, the approach selects only one decreasing-vegetation endmember for the temporal unmixing analysis. Second, the EOF curves provide the decreasing-vegetation prior information for the temporal unmixing [16]. In this research, the integration of the empirical orthogonal function and temporal unmixing is useful for identifying the processes of decreasing and increasing vegetation cover. The temporal unmixing analysis is completed in the ENVI software. The technical flow chart of the research, covering the detection of the decreasing vegetation trend and the validation of its accuracy, is shown in Figure 2.
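For readers who want to reproduce the two steps outside ENVI, the following is a minimal sketch rather than the authors' workflow. It assumes the EVI stack has already been reshaped to a (time × pixel) matrix; the EOFs are obtained by SVD without mean removal (so the first mode captures the mean EVI level, as described in the Results), and a single-pixel unmixing is solved as a non-negative, sum-to-one least-squares problem. All variable names are placeholders.

```python
import numpy as np
from scipy.optimize import nnls

def eof_decompose(evi):
    """EOF decomposition of a (time x pixel) EVI matrix via SVD.
    No mean removal, so the first mode corresponds to the mean EVI level."""
    U, s, Vt = np.linalg.svd(evi, full_matrices=False)
    eofs = U                              # temporal patterns (one curve per mode, columns of U)
    pcs = s[:, None] * Vt                 # spatial weights of each temporal pattern
    var_explained = s**2 / np.sum(s**2)   # fraction of variance per mode
    return eofs, pcs, var_explained

def unmix_pixel(pixel_series, endmembers):
    """Fully constrained temporal unmixing of one pixel time series.
    endmembers: (time x n_endmembers) matrix of temporal endmember curves.
    Fractions >= 0 via NNLS; sum-to-one enforced with a heavily weighted extra row."""
    n_t, n_em = endmembers.shape
    weight = 1e3
    A = np.vstack([endmembers, weight * np.ones((1, n_em))])
    b = np.concatenate([pixel_series, [weight]])
    fractions, _ = nnls(A, b)
    return fractions
```

Calling eof_decompose on the 296-date EVI stack would, for example, return var_explained values that can be compared with the mode-by-mode percentages reported in the Results below.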
Results

The first curve of the empirical orthogonal function has the primary eigenvalue, contributing approximately 89.25% of the variance (Figure 3(a)). The variances of the other curves decrease continuously. The variances of the second to the seventh eigenvectors of the empirical orthogonal function account for 2.04%, 1.09%, 0.97%, 0.44%, 0.41%, and 0.28%, respectively. It is important to acquire the vegetation decreasing/increasing trends as prior information from the eigenvectors that account for large variance. The amplitudes of the first ten empirical orthogonal function curves can be quantified in the time domain (Figure 4). The first ten empirical orthogonal function curves are temporal eigenvectors of the EVI variance structure. The first curve has relatively low amplitude because it is the mean value of EVI with no variance. The second and third curves have annual and biannual peaks. Distinctly, the fourth curve shows an increasing trend. The fifth curve also displays periodic cycles with biannual peaks. The sixth and seventh curves have decreasing trends before 2006 and gradually increase afterward.

The temporal empirical orthogonal function curves provide prior information for the temporal unmixing model. The fourth, sixth, and seventh curves are related to the vegetation change trend, but the fourth curve accounts for more variance than the sixth and seventh curves. The fourth curve can reveal increasing vegetation cover changes, so the pixels opposite to the fourth curve are related to vegetation decrease.

In this analysis, the third PC and the fourth PC are used as the two axes of a feature space whose apexes provide endmembers that can describe the details of the vegetation trend change (Figure 5(a)). The EOF provides prior information for the temporal unmixing. The fourth EOF has the increasing trend, so there are pixels with an increasing trend in the fourth PC; further, the vegetation decreasing pixels lie in the opposite direction to the vegetation increasing endmembers in the fourth PC. In the scatter plot, the pixels in the top position are related to vegetation increase, as the fourth EOF curve provides prior information. Conversely, the vegetation decreasing endmembers can be found in the bottom vertex pixels. Scatter plots of PC3 and PC4 were used to select endmembers (Figure 5(a)) representing decreasing vegetation cover between 2000 and 2012 (Figure 5(b)). The decreasing vegetation endmember reveals the vegetation reduction processes from 2000 to 2012. It is taken as the average representation of vegetation decrease in Jiangsu for the temporal unmixing model, because the third and fourth PCs are used to select the decreasing vegetation endmembers.

The spatial distribution of the decreasing vegetation endmember obtained by the temporal unmixing model is shown in Figure 5(c). It can be observed that the decreasing vegetation endmember is mainly located in the suburbs. In Suqian City, the decreasing vegetation cover is displayed in the suburbs. In Nanjing City, the decreasing vegetation is shown in the suburbs and in the southern new area. In Taizhou City, the decreasing vegetation is in the suburbs and in the southern part. In the middle part of Jiangsu Province, the decreasing vegetation endmember lies along the Yangtze River. In the southern part of Jiangsu, in Suzhou City, the decreasing vegetation endmember is not only at the urban edge but also has a scattered distribution, because of the high-speed economic development in Suzhou. During the 13 years of development of Jiangsu Province, the rate of vegetation decrease in the south has been much faster than in the north.

Validation Based on Landsat Dataset by Linear Spectral Unmixing.

The vegetation fractions in 2000, 2006/2002, and 2009 in Suqian, Nanjing, and Suzhou correspond to the blue, green, and red channels in Figure 6. The dark areas indicate no vegetation from 2000 to 2009. The blue areas indicate vegetation cover in 2000 but no vegetation in 2006/2002 and 2009, which clearly exhibits the vegetation change processes. The typical urbanization process around the old city center led to vegetation decrease in the suburbs of Suqian City (Figure 6(a)). Suqian is in the north of Jiangsu Province. Comparing Figure 6(a) with Figure 5(c), the linear spectral unmixing and temporal unmixing methods both display the same vegetation decreasing area in the suburbs of Suqian City.
The decreasing vegetation cover area in the middle of Nanjing is located in the suburbs due to urban expansion (Figure 6(b)).At the same time, the vegetation reduction in the southern part is due to new construction.Contrasting Figure 6(b) with Figure 5(c), the empirical orthogonal function and temporal unmixing methods detect the same decreasing vegetation area with linear spectral unmixing method. Vegetation reduction in Suzhou presents a star-scattered pattern.Suzhou City is in the southern region of Jiangsu Province.Urbanization and economic development are the main reason for vegetation decreasing.The spatial changes of the vegetation fractions (Figure 6(c)) are consistent with the decreasing vegetation distribution of the empirical orthogonal function and temporal unmixing method (Figure 5(c)).Both display decreasing vegetation of star-scattered patterns and similar spatial distribution in the suburbs of Suzhou City. Validation Based on Land Use Map. Decreasing vegetation areas are calculated from land use map from 2000 to 2010 (Figure 7).The blue part is vegetation decreasing area from empirical orthogonal function and temporal unmixing method.The red part is vegetation decreasing area from the land use map in 2000 and 2010.Vegetation decreasing area is larger in the south due to faster economic development than that in the north of Jiangsu. According to decreasing vegetation pixels coincidence from the land use map and temporal unmixing analysis, the accuracy of empirical orthogonal function and temporal unmixing analysis is 83.14% (Table 1).Vegetation decreasing area from empirical orthogonal function and temporal unmixing analysis is 6956 km 2 .Vegetation decreasing area from the land use map during 2000 and 2010 is 7111 km 2 .As a result, the spatial coincidence is 5912 km 2 . Strengths and Uncertainties. The combination method of empirical orthogonal function and temporal unmixing was first mentioned [16] for identifying and representing phenology spatiotemporal patterns.The approach used the number of phenology dimensions based on empirical orthogonal function and modeled the vegetation phenology distribution by temporal unmixing analysis.Here, we pay attention to vegetation decreasing and increasing eigenvectors from empirical orthogonal function and select the vegetation decreasing endmembers based on temporal feature space.The two approaches both take statistical empirical orthogonal function results as prior information but emphasize different vegetation changes.The approaches described in [16] aim to describe the temporal phenology endmembers and spatial distribution, whereas the approach here aims to display the spatial distribution of decreasing vegetation.The advantage of this combined method is using the EOF as prior information for temporal unmixing analysis and using the vegetation decreasing endmembers to unmix the spatial distribution of vegetation decreasing area.In another research [41], the approach enables the detection of different types of changes occurring in time series, including the dates of changes occurring within seasonal and trend components.This research here only pays attention to the decreasing and increasing vegetation trend and does not emphasize the accurate phenology dates of vegetation changes. 
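The 83.14% accuracy quoted in the validation above follows directly from the reported areas: it is the spatially coincident decreasing-vegetation area divided by the decreasing area derived from the land use maps. The snippet below is only an illustrative check; the complementary ratio against the TUA-detected area is an added, hypothetical measure, not one reported in the text.

```python
coincident_km2, landuse_km2, tua_km2 = 5912.0, 7111.0, 6956.0
accuracy = coincident_km2 / landuse_km2        # ~0.8314, i.e. about 83.14%
unconfirmed = 1.0 - coincident_km2 / tua_km2   # share of TUA-detected area not confirmed by the land use map (~15%)
print(round(accuracy * 100, 2), round(unconfirmed * 100, 2))
```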
In this analysis, the unmixing processes corresponding to the spatial distribution are selected manually.Manual selection could allow for the consideration of stable endmembers.Compared with other methods, this method highlights the important benefit to quickly detect decreasing vegetation over large areas without classification and auxiliary [21]. Further Application. Further research is necessary to apply the empirical orthogonal function and temporal unmixing method to different study areas to detect boundary sensitivity of the endmembers when the decreasing vegetation endmembers are selected.In a research [16], pixels with a strong trend of vegetation increase and decrease are identified due to the annual cycle of rising and falling of water.In this study, we have focused on the spatial distribution of decreasing vegetation.Future work may improve the accuracy of decreasing vegetation endmembers.Here, Landsat data at a 30 m spatial resolution can serve to illustrate the spatial mapping accuracy. Decision makers could use MODIS EVI by empirical orthogonal function and temporal unmixing to quickly detect the spatial extent of decreasing vegetation and it could help in land use planning.Vegetation plays an important part in the land surface characterization, climate change modeling, and biogeochemical cycles.During the processes of urbanization in China, urban expansion is the significant driver for changes toward decreasing vegetation [42].However, monitoring vegetation over large areas at regular intervals is expensive.The combination approach of empirical orthogonal function and temporal unmixing analysis to detect decreasing vegetation could be seen as a preliminary tool.Furthermore, the vegetation fraction by linear spectral unmixing could be utilized to focus on the plots.The new combination method of empirical orthogonal function and temporal unmixing analysis contributes to vegetation cover mapping and monitoring. Conclusions Empirical orthogonal function analysis uses principal temporal and spatial patterns to present the original dataset.In this study, much more attention is paid to the increasing and decreasing vegetation eigenvectors which provide prior information for the temporal unmixing analysis. Here temporal unmixing analysis identifies the spatial distribution of decreasing vegetation endmembers.This approach extracts decreasing vegetation endmembers from temporal principal components to model the spatial distribution.In Jiangsu Province, the decreasing vegetation mainly is distributed in the suburbs due to urbanization. The Landsat dataset by linear spectral mixture is used for analysis in consistency of decreasing vegetation distribution with the integrated empirical orthogonal function and temporal unmixing analysis.The three components linear spectral unmixing provide estimates of vegetation fraction and the vegetation decreasing patterns.The decreasing vegetation in Suzhou displayed star-scattered pattern around the old city.The decreasing vegetation in Suqian is located in the suburbs.The decreasing vegetation in Nanjing is in the suburbs and new constructed area in the south.The empirical orthogonal function and temporal unmixing method display the same spatial extent of decreasing vegetation with linear spectral unmixing based on the Landsat dataset.Compared with vegetation changes from land use map in 2000 and 2010, the accuracy of the integrated empirical orthogonal function and temporal unmixing method is about 83.14%. 
Figure 1: Location of Jiangsu Province and land use classification in 2010 (the land use product is from the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences).

Figure 4: Amplitudes of the first ten empirical orthogonal function curves quantified in the time domain. Importantly, the fourth curve has an increasing trend, which is opposite to the decreasing vegetation trend.

Figure 5: Feature-space representation of the third and fourth PCs for temporal unmixing analysis. The decreasing vegetation endmembers are found in the southern part of the PC space (a). Decreasing vegetation endmember (b). Spatial distribution of the decreasing vegetation area estimated by temporal unmixing analysis in Jiangsu (c).

Figure 6: Vegetation fraction maps of Suqian City (a), Nanjing City (b), and Suzhou City (c) in Jiangsu Province. Vegetation endmember fractions in 2000, 2006/2002, and 2009 are shown as blue, green, and red channels. The dark areas indicate no vegetation from 2000 to 2009. The blue areas indicate vegetation cover in 2000 but no vegetation in 2006/2002 and 2009, which clearly exhibits the vegetation change processes. The whole area shown is the city, and the red rectangles mark the central urban areas.

Figure 7: The blue part is the vegetation decreasing area from the empirical orthogonal function and temporal unmixing analysis. The red part is the vegetation decreasing area from the land use maps of 2000 and 2010. Decreasing vegetation areas are mainly distributed in the suburbs.

Table 1: EOF and TUA method accuracy analysis.
v3-fos-license
2018-04-03T00:46:06.928Z
2017-08-21T00:00:00.000
6518763
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-017-09131-2.pdf", "pdf_hash": "a3c3cbf6d27d896b5db024adbd87d0688f2cfd86", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43701", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "sha1": "04d3a6091d9f355bff69cf22267c1581efb4c520", "year": 2017 }
pes2o/s2orc
Dose-related liver injury of Geniposide associated with the alteration in bile acid synthesis and transportation Fructus Gardenia (FG), containing the major active constituent Geniposide, is widely used in China for medicinal purposes. Currently, clinical reports of FG toxicity have not been published, however, animal studies have shown FG or Geniposide can cause hepatotoxicity in rats. We investigated Geniposide-induced hepatic injury in male Sprague-Dawley rats after 3-day intragastric administration of 100 mg/kg or 300 mg/kg Geniposide. Changes in hepatic histomorphology, serum liver enzyme, serum and hepatic bile acid profiles, and hepatic bile acid synthesis and transportation gene expression were measured. The 300 mg/kg Geniposide caused liver injury evidenced by pathological changes and increases in serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP) and γ-glutamytransferase (γ-GT). While liver, but not sera, total bile acids (TBAs) were increased 75% by this dose, dominated by increases in taurine-conjugated bile acids (t-CBAs). The 300 mg/kg Geniposide also down-regulated expression of Farnesoid X receptor (FXR), small heterodimer partner (SHP) and bile salt export pump (BSEP). In conclusion, 300 mg/kg Geniposide can induce liver injury with associated changes in bile acid regulating genes, leading to an accumulation of taurine conjugates in the rat liver. Taurocholic acid (TCA), taurochenodeoxycholic acid (TCDCA) as well as tauro-α-muricholic acid (T-α-MCA) are potential markers for Geniposide-induced hepatic damage. Therefore, knowledge of Geniposide-induced liver injury and its hepatotoxic mechanism are needed to allow the safe clinical use of the CMM. Bile acids play essential roles in regulating cholesterol, triglyceride, and glucose homeostasis 14 . The primary bile acids (PBAs), such as chenodeoxycholic acid (CDCA) and cholic acid (CA), are synthesized from cholesterol in hepatocytes. Rodents also synthesize α-muricholic acid (α-MCA) and β-muricholic acid (β-MCA) 15 . Secondary bile acids (SBAs) including lithocholic acid (LCA), ursodeoxycholic acid (UDCA) and deoxycholic acid (DCA) are derived from PBAs by microbial flora in the large intestine 16 . PBAs and SBAs can be transformed into conjugated bile acids (CBAs), including t-CBAs and glycine-conjugated bile acids (g-CBAs). Approximately 95% of the bile acids excreted into the bile duct from hepatocytes are reabsorbed in the terminal ileum and returned back to the liver for further biliary secretion 17 . Some liver diseases and drug-induced liver injuries can disturb the synthesis and clearance of hepatic bile acids potentially resulting in alteration of the composition and concentration of bile acids in liver and sera. The consequential bile acid accumulation in liver can result in hepatotoxicity and even lead to cirrhosis and hepatic necrosis 18,19 . Hence, bile acids have been considered as biomarkers of hepatic diseases 20,21 . FXR and various hepatic transporters such as the Na + -dependent taurocholic cotransporting polypeptide (NTCP), BSEP, multidrug resistance associated protein 2 & 3 (Mrp2, Mrp3) play pivotal roles in regulating bile acid homeostasis via regulation of synthesis, transportation of bile acids 18,[22][23][24][25][26][27][28][29] and their proper function of this excretion is critical to prevent bile acid mediated hepatotoxicity. 
In the present study, we explored Geniposide induced hepatotoxicity in rats and its effect on bile acid levels and metabolism, to search for potential markers and elucidate the mechanism associated with Geniposide-induced liver injury. Results Physical effects and liver weights. Manifestations, including diarrhea, weakness and weight loss, and one death rat were observed only at the 300 mg/kg Geniposide dose and relative liver weight (g/100 g body weight) were increased after 3 days (data not shown). There were no abnormal signs in the rats in 100 mg/kg Geniposide group. Geniposide caused liver injury at high dose level. After rats 300 mg/kg Geniposide treatment, the serum concentration of ALT, AST, and ALP increased significantly (p < 0.05) ( Table 1). In addition, γ-GT and cholesterol (CHO) were increased at both 100 and 300 mg/kg doses (p < 0.05) ( Table 1). A decrease of total bilirubin (TBIL) was noted with Geniposide at the 100 mg/kg (p < 0.001) ( Table 1), but not 300 mg/kg (p = 0.2) dose. Histological findings included hepatocyte swelling with degeneration or necrosis, fat droplets in hepatocytes, and lymphocytes infiltration in the 300 mg/kg Geniposide group (Fig. 2c). Histological abnormalities were not observed in the 100 mg/kg group (Fig. 2b). Therefore, the high dose of Geniposide caused liver injury in rats. Multivariate statistical analysis of bile acids in sera and livers. Representative UPLC-MS/MS chromatograms of bile acids detected in the sera and livers are shown in Supplementary Fig. S1. Sixteen bile acids, including 5 t-CBAs, 5 g-CBAs and 6 unconjugated bile acids (UCBAs) were quantified in control and Geniposide-treatment groups ( Fig. 3a-f). An initial principal component analysis using the bile acid data alone revealed a partial segregation of treatment groups and controls, and this separation was further enhanced by a partial least-squares discriminant analysis (PLS-DA) as shown in Fig. 4a,b. To avoid overfitting, permutation tests with 100 iterations were performed to validate the model 30 and the validation plots indicated the original model were valid. These data indicated that model was of modest quality and provided accurate predictions. Analysis of the animal latent variable 1 (LV1) scores for both serum (Fig. 4a) and liver ( Fig. 4b) showed that bile acid levels in Geniposide treated groups differed from control at both the 100 mg/kg (p < 0.05) and 300 mg/kg (p < 0.001) dose. The variable importance in projection (VIP) values were used to identify the potential markers (Fig. 4c,d), and a VIP value above 1.0 was used as a cut off to select potential markers 31 . Using this criteria, we identified the bile acids TCA, TCDCA, T-α-MCA in sera and TCA, TCDCA, taurohyodeoxycholic acid (THDCA), hyodeoxycholic acid (HDCA), T-α-MCA in liver as potential markers. Geniposide affected sera bile acid compositions. TBAs, UCBAs and CBAs (including t-CBAs and g-CBAs) in serum of each rat were calculated respectively. As shown in Fig. 5a, UCBAs accounted for the largest portion of TBAs in rat sera in all groups, and no difference in serum TBAs were detected between the control and either the low or high dose of Geniposide, although an increase in TBAs was weakly indicated in the high dose versus control groups (p = 0.081) (Fig. 5c). Nevertheless, t-CBAs but not g-CBAs were clearly increased (p < 0.05) (Fig. 5b) after rats were treated with high, but not the low dose of Geniposide. 
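The PLS-DA/VIP screening described above (markers selected where VIP > 1.0) can be sketched with scikit-learn. This is an illustrative sketch rather than the SIMCA-P analysis used in the study: the bile-acid matrix X and the dose-group labels y are random placeholders, and the VIP formula is the standard one.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable importance in projection for a fitted PLSRegression model."""
    T = pls.transform(X)        # latent-variable scores, (n_samples, n_components)
    W = pls.x_weights_          # (n_features, n_components)
    Q = pls.y_loadings_         # (n_targets, n_components)
    p, h = W.shape
    ssy = np.array([(T[:, k] ** 2).sum() * (Q[:, k] ** 2).sum() for k in range(h)])
    w_norm = W / np.linalg.norm(W, axis=0, keepdims=True)
    return np.sqrt(p * (w_norm ** 2 @ ssy) / ssy.sum())

# Placeholder data: 24 rats x 16 bile acid concentrations, dose group coded 0/1/2
X = np.random.rand(24, 16)
y = np.repeat([0, 1, 2], 8).astype(float)
pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls, X)
candidate_markers = np.where(vip > 1.0)[0]   # indices of bile acids passing the VIP > 1 cutoff
```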
Specifically, the high dose of Geniposide elevated the amounts of T-α-MCA, TCDCA, TCA and β-MCA (p < 0.05 vs control group) by 132%, 177%, 418% and 145%, while decreasing HDCA by 71% (p < 0.05) (Table 2). Notably, an increase of TCDCA and a decrease of HDCA were also observed in the 100 mg/kg Geniposide group (p < 0.05) (Table 2). These results indicated that treatment with Geniposide for 3 days could cause different changes in bile acid composition depending on the dose.

Geniposide affected liver bile acid compositions. Hepatic TBAs, CBAs and UCBAs results are shown in Fig. 5d-f. The t-CBAs accounted for the greatest portion of hepatic TBAs in Geniposide-treated and control groups, with UCBAs and g-CBAs representing minor components (Fig. 5d). Treatment with the low dose of Geniposide did not affect hepatic TBAs. As in sera, the high dose of Geniposide elevated liver TBAs and t-CBAs, but not g-CBAs and UCBAs (Fig. 5e,f). The levels of hepatic TBAs and the sum of t-CBAs in the Geniposide 300 mg/kg group were 75% and 82% higher than controls, respectively. Treatment with the high dose of Geniposide also increased multiple t-CBAs, including T-α-MCA, TCA, TCDCA (36.1, 47.0, 4.27 μg/g in the control group vs 62.0, 127, 11.2 μg/g in the Geniposide high-dose group, respectively). THDCA and HDCA, however, were decreased by high-dose Geniposide treatment. Decreases of THDCA and HDCA were also noted in the low-dose group (Table 2). Together these results indicated that a high dose of Geniposide can cause the accumulation of bile acids in the liver, mostly t-CBAs, which could be related to liver injury.

Geniposide impact on hepatic bile acid transport and metabolism and gene expression. To understand the mechanism of the effect of Geniposide on bile acid metabolism associated with hepatotoxicity, we used quantitative real-time PCR to analyze the gene expression of a nuclear bile acid receptor (FXR), an enzyme for bile acid synthesis, cholesterol 7α-hydroxylase (CYP7A1), and the atypical nuclear receptor SHP. As shown in Fig. 6a, the expression of FXR mRNA was suppressed by high-dose Geniposide (p < 0.01), but potentially upregulated by the low dose (p = 0.1). Figure 6b shows that both the high dose and the low dose of Geniposide suppressed SHP mRNA expression (p < 0.001). The expression of CYP7A1 mRNA was suppressed at the 100 mg/kg dose (p < 0.05) but unaffected at the 300 mg/kg dose of Geniposide (p = 0.3) (Fig. 6c). Multiple changes in genes involved in bile acid transport were also observed. As shown in Fig. 6d, the high dose of Geniposide inhibited the expression of BSEP mRNA (p < 0.01). The expression of NTCP mRNA was down-regulated by the high dose of Geniposide, but ... secretion, and compensatory mechanisms induced by the low dose may be overwhelmed by Geniposide-induced liver injury at the high dose.

Figure 4: The black circles represent the control, while the red and blue circles represent the Geniposide 100 mg/kg and 300 mg/kg groups, respectively, as indicated on the plots. LV1 scores from the PLS-DA score plots in sera and liver are presented, respectively. The VIP plots of the PLS-DA highlight the discriminatory species in sera (c) and liver (d). *p < 0.05, ***p < 0.001, compared with the control group.

Discussion and Conclusion

The occurrence of hepatotoxicity cases linked to CMM has raised serious concerns regarding CMM safety 32. CMM taken at the doses recommended by the Chinese Pharmacopoeia generally do not cause liver injury, but increasing the dosage of some CMM may lead to hepatotoxicity 33.
Moreover, the concentration of active ingredients in an herb can be diverse due to differences in growth areas, harvest time, processing method, and so on. Hence, even if people consume the same amount of an herb, the intake of active ingredients could differ. A major active constituent of FG, Geniposide, is a critical marker of FG quality 3. In the present study, we found that Geniposide could cause distinct liver injury in rats at a dose of 300 mg/kg, without measurable hepatotoxicity at 100 mg/kg. Other studies have also revealed hepatotoxicity at high dosages of Geniposide (≥280 mg/kg) 7,33, supporting dose-dependent Geniposide-induced hepatotoxicity. Geniposide has been reported to have various pharmacological effects, being especially protective against hepatic injury caused by alcohol, high fat diet or carbon tetrachloride at the dose range 25-100 mg/kg in rats 10,34. While Geniposide causes hepatotoxicity at doses several times higher than the doses used to elicit these pharmacological effects in experiments, the potential for patients to be exposed to high doses of Geniposide in the clinic should be a concern, since the minimum content of Geniposide in FG is established at 1.8% but no upper level is defined by the China Pharmacopeia 3. Moreover, the content of Geniposide in FG is influenced by several factors, such as growing area, processing procedure, and even collection time [11][12][13], and the highest content is ~6%, which is 3-4 times the minimum standard. So, even when the same dose of FG is taken by patients, this could represent substantial differences in Geniposide exposure. The highest daily dose of FG for humans recommended in the China Pharmacopeia 3 is 10 g, which may be equivalent to 180 mg to 600 mg (3 mg/kg to 10 mg/kg for a 60 kg human) of Geniposide, corresponding to the range of Geniposide content in FG (1.8% to 6%). According to the dose conversion method between animal and human 35, the Geniposide doses of 100 and 300 mg/kg used in rats in this study can be converted to estimated human equivalent doses (HED) of 16 and 48 mg/kg, respectively, and the HED is divided by a factor of 10 to obtain the pharmacologically active doses for humans (1.6 and 4.8 mg/kg). The pharmacologically active dose for humans (4.8 mg/kg) is within the daily dose range of Geniposide from FG in humans mentioned above. Thus, hepatotoxicity due to Geniposide at 300 mg/kg may be relevant for humans. There is therefore a possible risk of hepatotoxicity for patients when FG has a high content of Geniposide. Based on this study, we suggest that the quality control standard for the content of Geniposide in the herb FG should have both upper and lower limit values to prevent hepatotoxic events. The mechanism of Geniposide-induced hepatotoxicity has not been elucidated, although oxidative stress has been postulated 7,8. After treatment with 300 mg/kg Geniposide, serum ALP and γ-GT were obviously increased, both of which have been used as markers of cholestasis 36. The increase of serum ALP and γ-GT could be a side effect of many medications 17, as they are general reporters of liver damage.

Table 2: Concentrations of bile acids in sera and liver after rats were treated with Geniposide for 3 days. Data are presented as mean ± SD concentrations in sera and liver measured using UPLC-MS/MS of 7-8 rats. *p < 0.05, **p < 0.01, ***p < 0.001, compared with the control group for the same bile acid.
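The human-equivalent-dose arithmetic quoted in this section can be reproduced with the standard body-surface-area conversion, which scales an animal dose by the ratio of Km factors (rat Km ≈ 6, human Km ≈ 37). The snippet below is an illustrative sketch of that published method, not code from the study itself.

```python
# Human equivalent dose (HED) by the body-surface-area (Km factor) method:
#   HED (mg/kg) = animal dose (mg/kg) * (animal Km / human Km)
RAT_KM, HUMAN_KM = 6.0, 37.0   # standard Km factors for rat and 60 kg adult human
SAFETY_FACTOR = 10.0           # factor used in the text to estimate a pharmacologically active dose

for rat_dose in (100.0, 300.0):
    hed = rat_dose * RAT_KM / HUMAN_KM   # ~16 and ~49 mg/kg, close to the reported 16 and 48 mg/kg
    active = hed / SAFETY_FACTOR         # ~1.6 and ~4.9 mg/kg
    print(f"rat {rat_dose:5.0f} mg/kg -> HED {hed:4.1f} mg/kg -> active ~{active:3.1f} mg/kg")
```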
Therefore, we performed further tests on the bile acids in sera and livers, and found that there were significant changes in the compositions of serum and liver bile acids following treatment with Geniposide 300 mg/kg. Our results revealed that disturbances in bile acid formation or secretion may be involved in Geniposide-induced hepatotoxicity. Bile acids are endogenous molecules that normally regulate cholesterol homeostasis, lipid solubilization and metabolism 37 . Abnormally high concentrations of bile acids, such as occurring cholestasis, can result in intrahepatic accumulation of toxic bile acids leading to hepatic damage by producing pathophysiological effects including mitochondrial dysfunction with overgeneration of reactive oxygen and nitrogen species 19,38,39 . Moreover, even minor liver damage can cause the perturbation of serum and hepatic bile acids 40 . Various liver disorders such as nonalcoholic fatty liver disease (NAFLD), drug-induced liver injury could increase the levels of bile acids in liver 41 . Therefore, bile acids are considered as highly sensitive markers for liver injury and liver dysfunction, and used as potential biomarkers in drug-induced liver injury 42 . In this study, we investigated the bile acid profiles in both sera and livers of the rats with or without Geniposide treatment. Multivariate discriminant analyses 43 showed clear differences between high-dosage Geniposide (300 mg/kg) and control group, but weak difference between the control group and the low-dosage Geniposide (100 mg/kg) group. Our study revealed that Geniposide-induced hepatic injury was associated with the change of bile acids in sera and livers. Concurrent with liver injury, TBAs, especially the dominant types of bile acids 41 and the t-CBAs, markedly increased in the livers after rats were treated with high-dose of Geniposide. Among t-CBAs, TCDCA and taurodeoxycholic acid (TDCA) have postulated as inducers of cholestasis that significantly elevate serum levels of ALT and AST in rats 44 . Additionally, TCA, TCDCA and TDCA are substantially elevated in acetaminophen-induced acute liver failure patients 45 . Strong correlations were noted between hepatic necrosis and the bile acids TCA and TDCA in an acetaminophen-induced rat liver injury model 46 . Here, TCA, TCDCA and T-α-MCA were increased in both sera and livers, and were the strongest bile acid discriminators of dose, suggesting them as valuable serum potential markers for Geniposide-induced liver injury in rats. Correlation coefficients (r) between variables of bile acids and ALP, γ-GT in serum, which are commonly used biomarkers in evaluating drug-induced choletasis 36 , were calculated 47 (Supplementary Table S1). The correlation analysis suggested significant positive correlations between concentrations of major t-CBAs (T-α-MCA, TCDCA, TCA, TDCA), partial UCBAs (β-MCA, CDCA, CA) and GCA in serum and ALP, γ-GT. In addition, ALP, γ-GT positively correlated significantly with the concentrations of major t-CBAs (T-α-MCA, TCDCA, TCA) and CA in liver. Therefore, the results revealed the concentrations alteration of t-CBAs in particular could have a relationship with high dose Geniposide-induced liver injury. Bile acid homeostasis is tightly regulated via a feedback loop operated by FXR and SHP 48 . The hepatic FXR induces SHP in liver leading to inhibition of CYP7A1, the rate-limiting enzyme in bile acid synthesis 24,37 . The loss of FXR and SHP can rapidly result in cholestasis and liver injury 48 . 
As we observed in this study, high-dose of Geniposide (300 mg/kg) significantly down-regulate the expression of FXR and SHP mRNA, and SHP downregulation was observed at the lower dose as well (Fig. 6e). However, increased bile acid production was only weakly suggested (p = 0.3) by increased CYP7A1 expression with 300 mg/kg Geniposide exposure, suggesting other mechanisms must be at work to elevate hepatic bile acid concentrations to promote liver injury. Disruption in bile acid export could also lead to their elevations in the liver and we found that multiple hepatocytes transporters were involved in Geniposide-induced bile acid increase and liver injury (Fig. 6e). BSEP is the major transporter for the secretion of bile acids from hepatocytes into bile 49 , and BSEP inhibition is a known risk factor for drug-induced cholestatic hepatotoxicity thought to play an important role in the development of liver injury 50 . Geniposide at 300 mg/kg down-regulated BSEP mRNA expression in the liver which would support the accumulation of bile acids in hepatocytes. The transport of bile acids across the basolateral membrane of the hepatocytes is mainly mediated by the NTCP. Geniposide at 300 mg/kg also suppressed hepatic NTCP mRNA expression which could be a negative feedback mechanism to reduce bile acid entry in response to elevated hepatocyte bile acid concentrations 49 . Mrp2, located in the canalicular membrane of hepatocytes, transport bile acids from the hepatocytes into the bile 49 . Mrp3 is localized to the basolateral membrane of the hepatocytes mediating the export of bile acids. Geniposide was shown to up-regulate the expression of Mrp2 mRNA and Mrp3 mRNA in rat livers, significantly at doses 100 mg/kg (on Mrp2) and 100, 300 mg/kg (on Mrp3). The up-regulation of Mrp3 could be a compensatory action for bile acid efflux when BSEP-mediated biliary excretion is impaired 51 , to reduce the accumulation of bile acids in hepatocytes. The elevation of Mrp2 could facilitate hepatic bile acids into the canaliculus, and thus reduce the risk of liver injury. In addition, Mrp2 mediates the export of bilirubin conjugates from hepatocytes 52 , consistent with the Geniposide-induced bilirubin decrease in this study. In comparison, Geniposide had more vigorous effect in up-regulation of Mrp3 genes at dose of 100 mg/kg rather than that at dose of 300 mg/kg. One possibility for this observation would be an accumulating hepatocyte damage that is reducing the livers ability to sustain an effective compensatory defense via Mrp2 and Mrp3 induction, and is consistent with the higher levels of bile acids and liver injury observed at high Geniposide dose. In conclusion, high dose Geniposide can cause liver injury which is associated with, and potentially linked to increase of bile acid concentrations in hepatocytes. These changes appeared weakly associated with an increase of bile acid synthesis due to CYP7A1 dysregulation, with strong suppression of FXR and SHP. Clear dose dependent impacts on hepatic bile acid excreting gene expression were identified. While reductions in bile acid excretion through the primary route regulated by BSEP associated with low-dose Geniposide appeared to be effectively compensated for by shifts in NTCP, Mrp2 and Mrp3 expression, these systems could not prevent hepatic bile acid accumulation as well as liver injury caused by high dose of Geniposide. 
Based on the results, we assume that high dose of Geniposide-induced rat liver injury was likely cholestatic, and TCA, TCDCA and T-α-MCA are potential serum potential markers for Geniposide-induced liver injury in rats. Chemicals (Toronto, Canada). glycochenodeoxycholic acid (GCDCA), glycodeoxycholic acid (GDCA) and glycocholic acid (GCA) were purchased from Nanjing Shenglide Technology Co., LTD (Nanjing, China). TCDCA, THDCA, CA, UDCA, HDCA, CDCA and DCA were purchased from national institutes for food and drug control (NIFDC, Beijing, China). Methods Animals and experimental procedure. Specific pathogen free male Sprague-Dawley rats (24) provided by Vital River Laboratory Animal Technology Co. Ltd. (Beijing, China) were received at 10 wks of age, with body weights ranging from 200-220 g. Animals were housed in an environmentally-controlled animal facility with room temperature 23 ± 3 °C, relative humidity of 40~70%, air ventilation of approximate 15 times/hr and a 12-hour light/dark cycle. The animals were allowed filtered tap water and the fixed-formula rat granular feed ad libitum. The rats were randomly divided into Geniposide 0, 100, 300 mg/kg 35,53 groups. Rats were dosed by a gastric gavage once daily for three consecutive days while the rats in control group received an equal volume of pure water. Twenty-four hours after the last administration, all rats were anesthetized with sodium phenobarbital by intraperitoneal injection under the condition of fasting overnight and the blood samples were collected from the abdominal aorta and euthanized by exsanguinations, and then livers were dissected. The sera were prepared by centrifugation at 3,000 rpm for 15 min after coagulating at room temperature for analysis of biochemical parameters and bile acids. A portion of liver was preserved in neutral buffered formalin for histopathological examination, while the remaining portion was stored at −80 °C for further analysis of bile acids by LC-MS/MS and gene expression by quantitative real-time PCR. Histopathological examination. Liver samples were routinely fixed with neutral buffered formalin, and embedded in paraffin. Four micron thick sections were cut and stained with HE. The histomorphology was examined under the light microscopy (Olympus, Japan). Analysis of bile acids in serum and liver by UPLC-MS. A 100 μL aliquot of serum sample was added to washed and activated SPE columns (Waters Oasis HLB 1cc, 10 mg). While in the SPE reservoir, the serum was spiked with 5 μL anti-oxidant solution (0.1 mg/ml solution BHT/EDTA in 1:1 MeOH: water) and diluted to 1 column volume with 5% MeOH w/0.1% acetic acid (v/v. Samples were loaded by gravity and washed with 1 column volume of 30% MeOH w/0.1% acetic acid (v/v). Sample extracts containing bile acids were eluted into 2 mL vials containing 10 μL of 20% glycerol solution in MeOH using 0.2 mL MeOH, followed by 0.5 mL acetonitrile (ACN), followed by 0.7 mL ethyl acetate. Solvents were removed under nitrogen and the residual 2 μL glycerol was redissolved with 100 μL of 100 nM 1-cyclohexylureido, 3-dodecanoic acid (CUDA; Cayman Chemical, Ann Arbor MI, USA) internal standard (in 50:50 MeOH: ACN) to tubes. Samples were filtered at 0.1 µm by centrifugation through Durapore PVDF membranes (Millipore) for 3 min at 6 °C at 4500 g (rcf) and stored at −20 °C for less than 1 wk prior to LC-MS/MS. 
The pulverized liver (15 mg) was placed into a tared and cleaned polypropylene tube, spiked with 5 μL anti-oxidant solution, and mixed with 500 μL MeOH, followed by 30 sec of vortex-mixing. After centrifugation at 10,000 g for 5 min at room temperature, the supernatant was collected, spiked with glycerol, dried and then reconstituted in 100 μL CUDA, filtered and stored as described above. The quality control samples were kept at −80 °C and the calibration samples were kept at 4 °C until analyzed. A Waters Acquity UPLC System coupled with an API 5500 QTRAP mass spectrometer (AB Sciex) was used for the quantification of bile acids. The UPLC system consists of a binary pump, a continuous vacuum degasser, a thermostated auto-sampler and a column compartment. Chromatographic separation of bile acids was carried out on an ACQUITY UPLC BEH column (2.1 × 100 mm, 1.7 μm) (Waters Corp., Milford, US). The mobile phase was made up of 0.1% formic acid in water (A) and 0.1% formic acid in acetonitrile (B). The gradient elution was as follows: 90%A (0-0.5 min), 90-75%A (0.5-1.0 min), 75-60%A (1.0-11.0 min), 60-5%A (11.0-12.5 min), 5%A (12.5-14.0 min), 5-90%A (14.0-14.5 min), and 90%A (14.5-16.0 min). The flow rate of the mobile phase was 0.4 mL/min and the injection volume was 5 μL. The mass spectrometer was operated in ESI negative mode with the multiple reaction monitoring (MRM) function for quantitation 54,55, and more details on the MRM conditions are shown in Supplementary Table S2. The temperature of the ion source was set at 600 °C. The total chromatographic run was divided into several periods, and the ion dwell times and transitions for all compounds were set appropriately. Data were manipulated with SIMCA-P software, Version 12.0. Instrument responses were calibrated with a mixture of 16 bile acids in methanol. The linear regression parameters obtained for each bile acid are shown in Supplementary Table S3. The accuracy was evaluated by the analysis of carbon-stripped serum spikes at low and high concentrations (Supplementary Table S4).

Quantitative Real-time PCR analysis. Total hepatic RNA was extracted using a total RNA kit (OMEGA, Georgia, U.S.A) according to the manufacturer's instructions. An aliquot of 1 μg RNA was used for reverse transcription with an oligo-dT primer (TOYOBO, OSAKA, Japan). Quantitative real-time PCR was performed using the Roche 480 instrument (Roche, Mannheim, Germany) and SYBR Green PCR Master Mix (Roche, Mannheim, Germany) for the target genes with the corresponding primers (Sangon Biotech, Beijing, China) (Supplementary Table S5). Quantification was performed by the ΔΔCT method. The quantity of mRNA was normalized to the internal standard GAPDH.

Statistical analyses. The data are expressed as the mean (M) ± standard deviation (SD). All data are independent samples. Statistical analysis of measurement data was performed using Student's t test, and the Pearson correlation coefficient (r) was obtained using correlation analysis with SPSS statistical software, version 16.0. Bile acid data were also used to perform the PLS-DA using SIMCA-P v.12.0 (Umetrics, San Jose, US). The data of the Geniposide treatment groups were compared with the control group, and a p-value of <0.05 was considered statistically significant.
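As a sketch of the ΔΔCT quantification mentioned above (relative expression normalized to GAPDH and calibrated to the control group), the snippet below assumes simple per-sample Ct values; it is an illustrative calculation with hypothetical numbers, not the authors' analysis script.

```python
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """2^(-ddCt) relative expression for one treated sample.
    ct_target, ct_gapdh         : Ct values of the gene of interest and GAPDH in the treated sample
    ct_target_ctrl, ct_gapdh_ctrl : mean Ct values of the same genes in the control group
    """
    d_ct_sample = ct_target - ct_gapdh              # normalize to the internal standard
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_sample - d_ct_control              # calibrate to the control group
    return 2.0 ** (-dd_ct)

# Hypothetical example: a target gene's Ct rises by ~1.5 cycles relative to GAPDH in a treated rat,
# corresponding to roughly a 2.8-fold down-regulation.
print(relative_expression(24.5, 18.0, 23.0, 18.0))   # ~0.35
```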
v3-fos-license
2015-07-06T21:03:06.000Z
2013-02-08T00:00:00.000
15715031
{ "extfieldsofstudy": [ "Geography", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://discovery.ucl.ac.uk/1388242/1/10.1080-19475683.2012.758175.pdf", "pdf_hash": "93903ac4785ad7b20038bbadd6be358433ec8f48", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43703", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "93903ac4785ad7b20038bbadd6be358433ec8f48", "year": 2013 }
pes2o/s2orc
Evolution and entropy in the organization of urban street patterns The street patterns of cities are the result of long-term evolution and interaction between various internal, social and economic, and external, environmental and landscape, processes and factors. In this article, we use entropy as a measure of dispersion to study the effects of landscapes on the evolution and associated street patterns of two cities: Dundee in Eastern Scotland and Khorramabad in Western Iran, cities which have strong similarities in terms of the size of their street systems and populations but considerable differences in terms of their evolution within the landscape. Landscape features have strong effects on the city shape and street patterns of Dundee, which is primarily a shoreline city, while Khorramabad is primarily located within mountainous and valley terrain. We show how cumulative distributions of street lengths when graphed as log–log plots show abrupt changes in their straight-line slopes at lengths of about 120 m, indicating a change in street functionality across scale: streets shorter than 120 m are primarily local streets, whereas longer streets are mainly collectors and arterials. The entropy of a street-length population varies positively over its average length and length range which is the difference between the longest and the shortest streets in a population. Similarly, the entropies of the power law tails of the street populations of both cities have increased during their growth, indicating that the distribution of street lengths has gradually become more dispersed as these cities have expanded. Introduction The dynamics of urban morphology has been explored from many different perspectives (e.g. Harris 1985;Berechman and Small 1988;Yongmei and Junmei 2004;Benguigu, Blumenfeld-Lieberthal, and Czamanski 2006;Batty 2008Batty , 2010, but for a better understanding of city structure and the complexities of this dynamics, it is crucial to quantify the different physical properties of those structures. To advance this, we argue here that street networks are among the most important city structures, and recently, there have been many studies of street patterns and city growth based on socio-economic data (Batty 1971;Wegener 1994;Makse, Havlin, and Stanley 1995;Hillier 1999;Barredo Kasanko, McCormick, and Lavalle 2002;Berling and Wu 2004). There have been also been many analyses of street patterns based on network science which, in the last decade, has become highly significant Scellato et al. 2006;Jiang 2007;Barthelemy and Flammini 2008;Masucci et al. 2009) while related studies such as those by Xie and Levinson (2007), Lammer et al. (2006), Jiang (2009) and Levinson and Huang (2012) amongst others focus on the structural properties of road *Corresponding author. Email: mohajeri.nahid.09@ucl.ac.uk networks from the point of view of traffic and engineering. Furthermore, Marshall (2005) explains how different layouts and patterns of streets contribute to better urban design, addressing how design aspects of urban transportation might increase the functionality of cities. By contrast, research on street patterns and urban dynamics based on physical data and how they relate to physical concepts is much less developed and remains in its infancy. In particular, with few exceptions (Mohajeri 2012;Mohajeri and Gudmundsson 2012), there has been little attempt to analyse city growth and street patterns in relation to landscape using entropy concepts. 
There is a clear need for rigorous quantitative methods that explain (1) how city geometry changes over time as a function of its size and of external landscape constraints and (2) how changes in city geometry affect the associated street patterns that determine how energy is distributed within the city in terms of the flow of people and materials. One rigorous quantitative method which has found extensive use in systems theory and indeed in spatial interaction modelling is entropy analysis. Entropy statistics, which measure the variation of a phenomenon with respect to its frequency across a given range, allow us to quantify changes in geometry as a city grows and help in assessing the plausible mechanisms for the formation and evolution of city structure. In this article, our main aim is to use entropy analysis to show how the properties of street patterns, focusing on street lengths, vary within a city and how this variation partly reflects external landscape constraints. The second aim is to investigate the degree to which landscape constraints, such as coastlines, mountain ranges and major rivers, control the shape of cities by providing constraints on their growth. This article is empirically grounded in that it focuses on the associations between landscape, city evolution and street patterns of two case studies, namely the cities of Dundee in Scotland and Khorramabad in Iran (Figures 1-2).

Case study exemplars: the geographical background

We need first to justify the selection of these two cities, which are quite different in terms of both their physical and cultural contexts. Both cities have clear boundaries. Their overall shape is partly controlled by their landscape, primarily the coastline of the Firth of Tay in the case of Dundee (Figure 1) and mountains and valleys in the case of Khorramabad (Figure 2). This is somewhat different from cities, such as Paris or Chicago, whose landscape morphologies do not have such strong physical features. The availability of historical data and GIS data sets for the street networks of both cities also makes it possible to carry out a detailed analysis of their street networks.

Figure 2 (notes): Its geometry is largely controlled by landscape constraints, primarily mountain slopes or fronts and valleys. The two rose diagrams summarise the general trend of all 8481 streets using weighted and non-weighted data.

An additional set of reasons is the striking difference in their history of evolution. Dundee has been developing gradually over several hundred years; its inner part dates back to medieval times and is thus historically important (Ferguson 2005; Watson 2006). The city, in fact, dates back to at least the twelfth century and has a current population of 143,390 (General Register Office for Scotland 2009). It is located along the north coast of a fjord, the Firth of Tay Estuary, in Eastern Scotland (Figure 1) and is Scotland's fourth largest city. The city has a roughly elliptical boundary, part of which is determined by the shoreline of the Tay. By contrast, the greater part of Khorramabad, in Western Iran, is a very young city, mostly less than 60 years old. The city population is about 334,000 (Iranian Statistical Centre 2007). It is located in the province of Lorestan and is surrounded by prominent landform features such as mountainscapes that form part of the Zagros range (Figure 2). It thus provides an excellent example of a rapidly expanding city subject to strong landscape constraints.
There exist detailed maps of the city since 1955, at which time it occupied only the small, narrow (bottleneck) part of the present city ( Figure 2). The south and southeast and north and northwest parts of the city extend to form 'wings' within its valley. The wings are connected by a narrower pass through the mountains, which functions as a natural bottleneck that constrains traffic between the two parts of the city. At its narrowest, the width of this bottleneck is only 1.1 km. The overall shape of Khorramabad is broadly that of a crescent, with somewhat irregular boundaries that are clearly constrained by the valley of the same name, its flanking mountains and the narrow pass. Data sources Transport network data sets for the United Kingdom were available from the Integrated Transport Network (ITN) layer (provided by the Ordnance Survey), downloadable from the UK EDINA Digimap website (Digimap: http:// www.edina.ac.uk/). This layer consists of the road network, road routing information and other transport information. Street data sets and their statistical information within the city for Dundee were obtained from the Digimap source and imported into GIS (Arcview Version 9.3; www.esri.com) while then the ITN layers were converted to GIS-shapefile format. Dundee City Council and the National Library of Scotland also provided the historical data for previous street networks in Dundee, and all the historical maps were digitised for import into ArcGIS. The National Iranian Cartographic Centre (2005) provided the GIS-shapefiles for the network data sets of the city of Khorramabad. In addition, the CAD/GIS master plan of the city is used so as to obtain the most recent street networks of city (Ministry of Housing and Urban Development 2005). The historical maps of Khorramabad were digitised through GIS, the maps being scanned from master plan studies of the city (Ministry of Housing and Urban Development 2005). Google Earth was also used for capturing appropriate remote-sensing images of two cities which display the main physical features and act as a backcloth to the street line analysis. As our analysis will focus on the variation in street-segment lengths and trends (orientation or azimuth) and street spacing, a street segment is defined as the distance from one junction to the next while spacing is defined as the shortest distance between street centrelines. External constraints considered here include landscape factors such as coastlines (Dundee city) and mountain ranges (Khorramabad) as such constraints largely define the boundary shape of each of the case study cities. Digital terrain models were used to supplement such as those used in Google Earth. Delineation of city boundaries There is no definitive agreement on how to define a city boundary and methods vary depending on the application (Benguigui, Blumenfeld-Lieberthal, and Czamanski 2006;Pont and Haupt 2010). Here the boundaries are determined from aerial imagery (Google Earth and various aerial photographs) on the basis of changes in land cover and clear geomorphological features. In the case of Dundee, the estuary shoreline forms a natural boundary to the south and southeast, and in the north and northwest, there is a very clear transition from urban to agricultural land use. In the case of Khorramabad, the agricultural field patterns and the steep and sharp mountain slopes clearly separate both cities from their surroundings and allow clear delineation of boundary polygons. 
Directional statistics The distribution of street orientations is presented using rose diagrams (Swan and Sandilands 1995;Smith, Goodchild, and Longley 2009), constructed using the program GEOrient (http://www.geoorient.com/). Two sets of analyses were performed; first, using non-normalised (non-weighted) data, where short streets and long streets have equal weight in the rose diagram, and second, using data normalised (weighted) in proportion to the length of the shortest street. In this case, more weight is given to longer streets, in proportion to their lengths. Power law size distributions Power law size distributions are very common in artificial (man-made) and natural processes and structures, particularly in the heavy tails of many distributions, which often account for the majority of size or volume of the range of objects in question. The populations of cities, the intensities of earthquakes, word frequencies in literature and the frequencies of family names all give rise to power law-like distributions (e.g. Schroeder 1991;Peitgen, Jurgens, and Saupe 2004;Newman 2005). Skew distributions in general and power-law distributions in particular imply that the number of small events, processes or objects of a particular type is large in comparison with the number of large events, processes or objects of the same type. In general, systems where competitive processes are at work usually determine this sorting of small from large, which often accords with evolution, where the dynamics of the system is key. When applied to a cumulative frequency (probability) distribution, a power law has the form P(≥x) = C x^(−D) (Equation (1)), where P(≥x) is the number of objects with a size larger than x, C is a constant of proportionality and D is the scaling exponent. In the case of a distribution of street lengths, P(≥x) is the number of streets with a length larger than x, C is a constant and D is the scaling exponent. To determine whether data sets follow a power law distribution, the traditional and standard procedure is simply to plot the logarithms of the values (x) and their cumulative frequencies P(≥x) as log(P(≥x)) = log(C) − D log(x). A straight line on the log-log plot is then usually regarded as a general indication that a power law can account for the variation (Newman 2005;Jiang 2007, 2009;Clauset, Shalizi, and Newman 2009), but in reality a straight line is hardly ever observed over the entire range of the values or sizes of x; there is normally a cut-off at the smallest perceivable size (Newman 2005). Thus, the distribution generally corresponds to a power law only over a certain range, for example, in its heavy tail or short tail, depending on what transform of the distribution is being examined. To validate size distributions as a power law, the maximum likelihood method generates the most acceptable statistics used to compare the power law fit with other candidates such as the log-normal, exponential and stretched exponential. Details of this estimation procedure are given in Section 5. Entropy Entropy, commonly denoted by the symbol S, is a fundamental thermodynamic concept. In classical thermodynamics, an infinitesimal entropy change, dS, is defined as dS ≥ δQ/T (Equation (2)), where δQ is the energy (heat) received or absorbed by the system under consideration, and T is the absolute (Kelvin) temperature (of the source) at the time when that energy/heat is received.
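To make the cumulative form in Equation (1) concrete, the following Python sketch builds the empirical complementary cumulative distribution of a set of street lengths and plots it on log-log axes, which is the 'traditional and standard procedure' described above. The street-length array is synthetic and the 140 m marker is only a placeholder for the kind of slope break discussed in Section 4; none of this is the authors' code.

```python
import numpy as np
import matplotlib.pyplot as plt

def empirical_ccdf(lengths):
    """Return (x, P(>=x)): the rank-ordered complementary cumulative distribution."""
    x = np.sort(np.asarray(lengths, dtype=float))
    n = x.size
    p = (n - np.arange(n)) / n          # for the i-th smallest value (0-based), n - i values are >= it
    return x, p

# Synthetic street lengths (metres) standing in for a real GIS export.
rng = np.random.default_rng(0)
lengths = 3 + 30 * rng.pareto(1.5, 5000)    # heavy-tailed toy data

x, p = empirical_ccdf(lengths)
plt.loglog(x, p, ".", markersize=2)
plt.axvline(140, linestyle="--")            # candidate break in slope (~140 m, cf. Section 4)
plt.xlabel("street length x (m)")
plt.ylabel("P(length >= x)")
plt.show()
```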
The equality sign applies to reversible processes - the inequality sign to irreversible processes - and the units of dS are given in energy (joules) over absolute temperature (K), or J K^−1. A thermally isolated system cannot receive any heat from an environment, in which case δQ = 0 and, from Equation (2), dS ≥ 0, which may be regarded as one version of the second law of thermodynamics. It implies that for any change in such a system, its entropy either stays the same (a reversible change) or increases (an irreversible change). In cases where the system is not isolated, its entropy may decrease as it imports energy from its surrounding parts. However, the entropy of the system and its surroundings must increase if the system and its surroundings are isolated, hence self-contained. As defined above, this traditional variety of entropy does not have an immediate application to street patterns in terms of their evolution, at least as we have used it here, but physical entropy also has a basis in probability theory through statistical mechanics. When related to a probability, the concept of entropy can be used in analysing the frequency distribution of streets using the following expression, known as the Shannon-Gibbs entropy formula, which gives the entropy for a general probability distribution (Dill and Bromberg 2003;Blundell and Blundell 2006) as S = −k Σ_i P_i ln(P_i), with the sum taken over the t occupied bins (Equation (3)), where k is a constant that is usually taken as the dimensionless number 1 when dealing with frequency distributions (Ben-Naim 2008;Volkenstein 2009). For a power law distribution of street lengths, t is defined as the number of classes or bins that contain streets in the frequency distribution, that is, the number of bins of street lengths with nonzero probabilities of streets, and P_i is the frequency or probability of a set of streets belonging to the i-th bin, that is, the probability of the i-th class or bin (Dill and Bromberg 2003;Volkenstein 2009). When calculating the entropy using Equation (3), it is usual to include only those bins where the probability of finding a street is greater than zero (thus, each included bin contains at least one street). Equation (3) is analogous to the Shannon entropy equation, which lies at the basis of information theory (Jaynes 1957) and is here applied to frequency distributions (Wang et al. 2003;Rao et al. 2004;Drissi, Chonavel, and Boucher 2008;Navarro, Aguila, and Asadi 2010;Chen 2012). By definition, we also have Σ_i P_i = 1 (Equation (4)); that is, the sum of the probabilities over all the bins equals one. Given that the probabilities are always between 0 and 1 (Equation (4)), and the natural logarithm of numbers between 0 and 1 is negative, the minus sign in Equation (3) ensures that entropy must always be positive. The probabilities, as applied to streets in a population, are a measure of the chances of randomly selected streets from the population of street lengths falling into a particular bin. The calculated entropy of the population depends on the shape of the probability distribution. For example, if the distribution is uniform, that is, all the bins occupied by streets have the same heights (frequencies), so that the probability of streets belonging to any of the bins is equal, then the entropy reaches its maximum value (Kondepudi and Prigogine 1998;Stamps 2004;Nelson 2006;Desurvire 2009;Volkenstein 2009).
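As a minimal illustration of Equation (3) applied to street lengths, the sketch below bins a length sample, keeps only the non-empty bins, converts counts to probabilities and evaluates the sum with k = 1. The bin width and the toy data are arbitrary choices, not values taken from the article.

```python
import numpy as np

def street_length_entropy(lengths, bin_width=20.0):
    """Shannon-Gibbs entropy (Equation (3), k = 1) of a binned street-length sample."""
    lengths = np.asarray(lengths, dtype=float)
    edges = np.arange(lengths.min(), lengths.max() + bin_width, bin_width)
    counts, _ = np.histogram(lengths, bins=edges)
    counts = counts[counts > 0]            # keep only bins containing at least one street
    p = counts / counts.sum()              # probabilities; they sum to one (Equation (4))
    return -np.sum(p * np.log(p))          # Equation (3) with k = 1

rng = np.random.default_rng(1)
toy_lengths = rng.lognormal(mean=4.0, sigma=0.8, size=3000)   # toy street lengths in metres
print(f"entropy = {street_length_entropy(toy_lengths):.3f}")
```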
The entropy of an isolated system in a given macrostate where all the probabilities are equal may be derived from Equation (3) and is given by the Boltzmann equation, namely S = k ln(t) (Equation (5)), where, again, t is the number of nonzero bins in the probability or frequency distribution. A city, however, is not and cannot in any sense be treated as an isolated system, since it always exchanges materials, energy, information and people with its surroundings. A street network as a part of a developing city is thus not isolated (it may be either closed or open). It thus follows that the bins or classes (microstates in statistical mechanics) for a street network are not equally probable. It is of interest to examine how this measure changes as a city develops, for it is a signature of how evenly spread the distribution of street lengths is, and this reflects the extent to which the city is evolving and changing. Street patterns and size distribution 4.1. Dundee The trends for the whole city (with 9616 street segments, Figure 1a) and those within sub-regions along its estuarine shoreline (6004 street segments, Figure 3) were analysed and presented as rose diagrams, using both normalised and non-normalised data. The sub-regions (Figure 3) were chosen according to three criteria, namely: (1) the number of streets should be similar in all the subareas (800 < N < 900); (2) all the subareas should be of a similar size; and (3) the subsets should reflect the variation in alignment of the shoreline. From Figure 1, it is evident that there are two main street trends in Dundee: one aligned roughly north-south, the other roughly east-west. These are broadly shoreline-perpendicular and coast-parallel, respectively. The extent to which the shore-parallel trends closely follow variation in shoreline direction is strikingly evident from Figure 3. It is notable that, progressing along the shoreline from west to east, the northerly trending streets remain orthogonal to the shoreline and thus become north-northwest trending towards the eastern part of the city. Greater variability in street direction at the eastern and, especially, at the western margins of the city is partly attributable to many streets being roughly perpendicular to the curved landward boundary of the city at these localities. The change in trend towards the city centre is presumably because this is the oldest part, where the city originated and where the segments tend to be more irregular (see the non-weighted rose 3 in Figure 3). Coast-parallel street segments tend to be longer than the coast-perpendicular segments. This is presumably because the city originated with the first harbour on the estuary shore and subsequently grew preferentially along this shoreline. Cumulative distributions (Equation (1); Figure 4a) are used to explore the power law properties of street lengths. Log-log plots (Figure 4b and c) suggest that the street length distributions are consistent with composite power laws that have different scaling exponents for different street-length ranges. From purely visual inspection, a clear break in straight-line slope occurs at a street length of around 140 ± 20 m, at which point the scaling exponent changes from 0.917 to 2.582. This implies the existence of two distinct street populations. That composed of streets with lengths from 3 m to 140 m primarily consists of local streets, including private lanes, alleys and cul-de-sacs (Headicar 2009).
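One way to reproduce the two straight-line slopes quoted above (0.917 and 2.582 either side of roughly 140 m) is an ordinary least-squares fit to log10 P(>=x) against log10 x on each side of a candidate break length. The sketch below does exactly that on synthetic data; it is only a visual-fit aid, not the maximum-likelihood procedure of Section 5, and the break value is supplied by hand.

```python
import numpy as np

def ccdf(lengths):
    x = np.sort(np.asarray(lengths, dtype=float))
    p = (x.size - np.arange(x.size)) / x.size     # P(length >= x), rank-based
    return x, p

def scaling_exponent(x, p, lo, hi):
    """Least-squares slope of log10 P(>=x) versus log10 x over lo <= x < hi,
    returned as a positive exponent D (Equation (1))."""
    mask = (x >= lo) & (x < hi)
    slope, _ = np.polyfit(np.log10(x[mask]), np.log10(p[mask]), 1)
    return -slope

rng = np.random.default_rng(2)
short_streets = rng.uniform(3, 140, 7000)              # toy 'local street' population
long_streets = 140 * (1 + rng.pareto(1.6, 2500))       # toy 'collector/arterial' tail
x, p = ccdf(np.concatenate([short_streets, long_streets]))

break_m = 140.0
print("D below break:", round(scaling_exponent(x, p, 3, break_m), 3))
print("D above break:", round(scaling_exponent(x, p, break_m, x.max()), 3))
```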
The other population (streets with lengths from 140 m to 2248 m) is comprised primarily of local roads and collectors (that are commonly wider than local roads and feed the traffic from local streets to arterial roads). Khorramabad The analysis of 8481 streets for Khorramabad again indicates the existence of two dominant trends (Figure 2b), although these are much weaker than in the previous case of Dundee. The greater variation in street orientation for Khorramabad is partly due to the broadly crescent-shaped city boundary, which is more directly constrained by landscape topography. To further explore this, the city was divided into five similarly sized (average N = 1692) subareas ( Figure 5) based on different time periods of the city growth. Weighted and non-weighted rose diagrams for these subareas exhibit more clearly bi-directional orientations that, for Dundee, are clearly aligned with constraining external boundary (the coastline). Results for all the street lengths are shown in Figure 6, which provides cumulative plots of all the streets lengths exceeding a given length against the lengths any, its very clear landscape constraints have on street density and functionality. Street density is the reciprocal of spacing and is defined as the number of streets per unit length of a transverse roughly perpendicular to the mean trend of the streets. Spacing is defined as the shortest distance between the central lines (or middle parts) of adjacent streets. Variation in the street spacing or density indicates how the capacity for traffic transport may change within a city (Mohajeri 2012). For example, a narrow valley may form a bottleneck where the street spacing would be expected to decrease or the density to increase, to maintain uniform capacity for traffic flow along the city (Figure 5-right). In each subarea ( Figure 5-left), the spacing along two roughly orthogonal lines or transverses was measured; one, marked by a, is parallel with the dominating (easterly) main trend of streets; the other, b, is parallel with the subordinate main trend. The two transverses are roughly perpendicular to the trends of the streets for which spacing are determined. Several points emerge from the results in Table 1. First, the street spacing follows approximately normal distributions. The standard deviations vary but are much smaller for the easterly streets (i.e. those crossing lines b). Second, the spacing is also, on average, much less for the streets crossing lines b than for those crossing lines a as revealed in Table 1. The mean spacing in the subareas or subpopulations for the streets crossing lines b varies from 27.82 m to 60.84 m, with an average mean spacing value of about 50.71 m. By contrast, the mean spacing for the streets crossing lines a in the same subareas varies from 52.13 m to 123.26 m, with an average mean spacing of about 106.15 m. Thus, the mean spacing of the streets that cross lines a, and are thus parallel with the axis of the elongated, crescent city, is roughly twice that of the streets crossing the lines b. The easterly trending streets, that is, those crossing lines b, have a much higher density (much less spacing) than the streets crossing lines a. Third, the minimum spacing occurs in subarea/subpopulation 3: shown in bold (Table 1, Figure 5), namely at the narrowest part, or the bottleneck, of the city. 
The low average street spacing, or high street density, in this subarea is a further indication that the external landscape influences not only the overall shape of the city but also its internal street pattern. Critical testing of power law distributions Although the practice of attributing good straight-line fits of log-log distributions to the existence of an underlying power law process is widespread, it is entirely possible that other distributions may provide a better statistical fit and more closely represent the underlying generative process. In short, much of the analysis of power law relationships is based on visual analysis of the log-log plot rather than any serious consideration of other possible relations that also show good visual fits but are very different from the simplest power law case. [Table 1 caption: Number of street-spacing measurements, mean spacing, standard deviations and minimum and maximum spacing in subareas/subpopulations 1-5, Khorramabad, along transverses/profiles a and b in each subarea.] Therefore, following the methods advocated by Newman (2005) and Clauset, Shalizi, and Newman (2009), maximum likelihood estimators (MLE) with goodness-of-fit tests based on the Kolmogorov-Smirnov statistic and likelihood ratios were used to evaluate the power law behaviour apparent from visual examination of the street network data. This method permits estimation of the scaling exponent (α), and also the lowermost or minimum value (x_min) down to which the distribution follows a power law. Following Clauset, Shalizi, and Newman (2009), a quantity x obeys a continuous power law distribution if it is drawn from a probability density function or PDF such that p(x) = C x^(−α) (Equation (6)), where C is a normalisation constant based on the minimum value or lower bound of the power law (x_min) and α is the scaling exponent. Generally, a power law fit to empirical data does not apply for all x ≥ 0. There must be some lower bound or minimum value for the power law fit. Often, a power law fit applies only to data larger than x_min, i.e. to the tail of the distribution. It follows that a definition of the power law distribution, using normalisation, is p(x) = ((α − 1)/x_min)(x/x_min)^(−α) for x ≥ x_min (Equation (7)). Using Equation (7), we can estimate α thus: α = 1 + n[Σ_i ln(x_i/x_min)]^(−1) (Equation (8)), where x_i, i = 1 . . . n, are the observed values of x such that x_i ≥ x_min, α is the slope of the line in the power law domain, n is now the number of data values used in the calculations (excluding those with values below x_min), and x_min is the lower bound for the power law fit to apply. It may be helpful to explore the complementary cumulative distribution function (CDF) of a power law distribution function. The shape or form of the CDF normally shows less fluctuation than that of the PDF, in particular in the tail of the distribution (Newman 2005). The cumulative distribution function P(x) in relation to the probability distribution Pr is defined as P(x) = Pr(X ≥ x). For the continuous case, the formula is P(x) = (x/x_min)^(−(α − 1)) (Equation (9)) (Clauset, Shalizi, and Newman 2009), where x_i, i = 1 . . . n, are the observed values of x such that x_i ≥ x_min. In the present analysis, x_min is chosen so as to make the cumulative distributions of the measured data and the best-fit power law as similar as possible for x_i ≥ x_min. There are a variety of methods for quantifying the distance between two distribution functions, but for non-normally distributed data, the most common method is the Kolmogorov-Smirnov or KS statistic, which is the maximum distance between the distribution function (CDF) of the data and that of the fitted model.
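The continuous maximum-likelihood estimator of Equation (8) is short to code. The sketch below follows the Clauset, Shalizi, and Newman form for a fixed, user-supplied x_min (selecting x_min itself is sketched after Equation (10) below); the data are drawn from a synthetic power-law tail so the recovered exponent can be checked against a known value.

```python
import numpy as np

def alpha_mle(data, x_min):
    """Continuous MLE of the power-law exponent for the tail x >= x_min (Equation (8)).
    Returns (alpha, standard error, number of tail observations)."""
    tail = np.asarray(data, dtype=float)
    tail = tail[tail >= x_min]
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / x_min))
    sigma = (alpha - 1.0) / np.sqrt(n)        # large-n standard error (Clauset et al. 2009)
    return alpha, sigma, n

# Synthetic tail with a known exponent, for a sanity check of the estimator.
rng = np.random.default_rng(3)
true_alpha, x_min = 2.5, 120.0
sample = x_min * (1.0 - rng.random(4000)) ** (-1.0 / (true_alpha - 1.0))   # inverse-CDF draw
alpha_hat, sigma, n_tail = alpha_mle(sample, x_min)
print(f"alpha = {alpha_hat:.3f} +/- {sigma:.3f} from {n_tail} tail values")
```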
This is defined as D = max |S(x) − P(x)| over x ≥ x_min (Equation (10)). Here, S(x) is the CDF of the data for the observations with values larger than or equal to x_min, and P(x) is the CDF for the power law that best fits the data in the region x ≥ x_min. Our estimate of x_min is then the value of x_min that minimises D in Equation (10). When x_min and α have been calculated, we can find the goodness-of-fit between the data and the power law. A goodness-of-fit test generates a P-value that quantifies the plausibility of the hypothesis that the data fit a power law. It should be noted that a large P-value does not necessarily mean that a power law is the best model for the data. First, there may be other models or distributions that match (fit) the data equally well or better over the observed range of x. Second, for a small number of data, it is very difficult to rule out a power law model; even if the calculated P-value is large, the power law fit may be spurious. To explore these points further, power law models for the street network data are compared with alternative models using a likelihood ratio test. For each alternative model (fit), if the calculated likelihood ratio is significantly different from zero, then its sign indicates whether the alternative is favoured over the power law model. To do so, we calculate the logarithm of the likelihood ratio (R), which has a positive or negative sign depending on which distribution is better, or is zero if the model fits are equally good. More specifically, positive values of the log-likelihood ratio indicate that the power law model is favoured over the alternative. However, the sign of R alone is not sufficient to determine which model provides the better fit because, like other quantities, the ratio is subject to statistical fluctuations. To make an objective judgement as to whether the observed value of R is sufficiently far from zero, we need to know the size of the expected statistical fluctuations, that is, the standard deviation σ of R. To estimate σ, we use the method of Clauset, Shalizi, and Newman (2009), which gives a P-value that tells us whether the observed sign of R is statistically significant. Using the maximum likelihood method for testing the appropriateness of power law models can, however, be problematic (Newman 2005;Clauset, Shalizi, and Newman 2009). For example, it is very difficult to decide between log-normal and power law models because, for realistic ranges of x, the two models are very similar. It is therefore unlikely that any test would be able to discriminate between these models unless the data set is very large. Also, in many cases, the results from comparing power laws with other distributions based on calculations of P-values and likelihood ratio tests do not help us to decide which model fits better with the data. When a decision cannot be made using a quantitative approach, the final decision as to which model best fits the data may have to be based on our intuition. The value of such a judgement about the best-fitting model for a data distribution can be greatly improved by considering the likely physical basis or theoretical factors that generate, or contribute to the generation of, the data. More specifically, we should consider physical, that is, non-statistical, arguments that might favour one model fit over the alternative models.
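A compact sketch of the x_min selection step built around Equation (10): for each candidate x_min the exponent is re-estimated with Equation (8), the maximum distance between the empirical tail CDF and the fitted power-law CDF is recorded, and the x_min minimising that distance is kept. Function names are mine, and the bootstrap that turns the KS distance into a goodness-of-fit P-value (as in Clauset, Shalizi, and Newman 2009) is deliberately omitted, so in practice one would check results against the published routines rather than rely on this sketch alone.

```python
import numpy as np

def ks_distance(tail, x_min, alpha):
    """Maximum |S(x) - P(x)| between the empirical and fitted tail CDFs (Equation (10))."""
    tail = np.sort(tail)
    S = np.arange(1, tail.size + 1) / tail.size          # empirical CDF of the tail
    P = 1.0 - (tail / x_min) ** (-(alpha - 1.0))         # fitted power-law CDF
    return np.max(np.abs(S - P))

def fit_xmin(data, min_tail_size=50):
    """Scan candidate x_min values and return the (x_min, alpha) pair that
    minimises the KS distance, following Clauset, Shalizi, and Newman (2009)."""
    data = np.asarray(data, dtype=float)
    best_d, best_xmin, best_alpha = np.inf, None, None
    for x_min in np.unique(data):                        # candidates in increasing order
        tail = data[data >= x_min]
        if tail.size < min_tail_size:
            break                                        # too few points left in the tail
        alpha = 1.0 + tail.size / np.sum(np.log(tail / x_min))
        d = ks_distance(tail, x_min, alpha)
        if d < best_d:
            best_d, best_xmin, best_alpha = d, x_min, alpha
    return best_xmin, best_alpha
```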
Thus, in many cases the decision as to whether to use a power law or an alternative model does not only depend on how well the models fit the data but also on the theoretical framework and the scientific aims of the study (Clauset, Shalizi, and Newman 2009;Hughes and Hase 2010;Berendsen 2011). Based on these considerations, maximum likelihood estimators were calculated for the power law fits to the real-data distributions. The goodness-of-fit was also calculated to estimate the lower cut-off (x_min) for the scaling region and the KS statistic (which computes a P-value for the estimated power law fit to the data). The uncertainty/error in the estimated parameters for the power law fit was also evaluated. However, to compute the log-likelihood ratios for two competing models (fits), freely available R routines were used (http://tuvalu.santafe.edu/~aaronc/powerlaws/). The results are summarised in Table 2. [Table 2 notes: Number of street segments for each city (n); scaling exponent based on MLE (α) and the standard error of α; the number of observations in the power law region (range) (n_tail) and standard error of n_tail; lower bound of the power law (x_min), at which the power law no longer applies, and standard error of x_min; power law fits and the corresponding P-values; a P-value for the fit to the power law model; and likelihood ratios for the alternative models (fits). Positive values of the log-likelihood ratios indicate that the power law model is favoured over the alternative models if the P-value is < 0.1. However, if the P-value is larger than 0.1, the sign is not a reliable indicator of which model is the better fit to the data.] The P-values for the power laws indicate that the Dundee data set fits very well with a power law. However, the likelihood-ratio tests have P-values so large (0.29, 0.99, 0.36) that they cannot be used to decide which of the various alternative models best fits the data. In contrast, the data set of Khorramabad has a P-value so small (effectively 0.0) that the power law model can be ruled out. In the likelihood-ratio test for Khorramabad, the P-value is small enough for the signs to be reliable; the results show that any of the other models are plausible. Even if the alternative distributions (log-normal, exponential and stretched exponential) may statistically fit some of the street network data sets better than a power law, power law fits may still be useful. For example, as shown in the present analysis, they provide a convenient basis for distinguishing between street subpopulations that have different functions. This kind of analysis is developed further with reference to the entropy concepts in the following section. Geometric evolutions of cities and entropy analysis We will now explore how the street patterns, as regards their lengths, can be interpreted with reference to concepts drawn from statistical mechanics/information theory, primarily based on entropy measures. The focus is on the evolution of Dundee in the time periods from the seventeenth century to the year 2007 and the evolution of Khorramabad in the time periods from 1955 to 2006. Lengths of street segments were analysed for each time period (Figures 7a and 8a). Plots of the cumulative distributions of street lengths (Figures 7b and 8b) for different time periods provide different curves on the log-log plots (Figures 7c and 8c).
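The likelihood-ratio comparison reported in Tables 2 and 5 was run with the freely available R routines cited above; purely to illustrate the sign convention of R, here is a stripped-down Python version comparing a power-law and an exponential fit to the same tail. Both densities are normalised on x >= x_min, a positive R favours the power law, and the significance calculation for R is omitted, so this is not a substitute for the published routines.

```python
import numpy as np

def loglik_ratio_powerlaw_vs_exponential(data, x_min):
    """Log-likelihood ratio R between a power-law and an exponential fit to the
    tail x >= x_min; R > 0 favours the power law, R < 0 the exponential."""
    x = np.asarray(data, dtype=float)
    x = x[x >= x_min]
    n = x.size

    # Power law normalised on [x_min, inf): p(x) = ((alpha - 1) / x_min) * (x / x_min) ** (-alpha)
    alpha = 1.0 + n / np.sum(np.log(x / x_min))
    ll_power = n * np.log((alpha - 1.0) / x_min) - alpha * np.sum(np.log(x / x_min))

    # Exponential shifted to x_min: p(x) = lam * exp(-lam * (x - x_min))
    lam = 1.0 / np.mean(x - x_min)
    ll_exp = n * np.log(lam) - lam * np.sum(x - x_min)

    return ll_power - ll_exp
```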
In particular, in Dundee there are noticeable changes in the approximate straight-line slopes at about the same street lengths as in Khorramabad, that is, at 120 ± 20 m (Tables 3 and 4). This indicates different street populations (using the same 'regression-line' fits as in Figures 4 and 6, but not shown in Figures 7c and 8c). All the street populations for Dundee are shown in Table 3, and those for Khorramabad in Table 4, where 'breaks' in slope, marking the change from one population to another, occur at lengths from 100 m to 140 m. Thus, the short-street populations range in length from 3 m to 120 ± 20 m for both cities, whereas the long-street segments range from 120 ± 20 m to 2248 m for Dundee and to 1192 m for Khorramabad. Using these results, the scaling exponents and the length ranges of the street populations from Dundee and Khorramabad can be compared with their entropies. The length distribution of streets is a measure of the associated entropy. This follows because entropy in the probabilistic sense is an indication of the spread of any kind of frequency distribution (Dill and Bromberg 2003;Volkenstein 2009). Equation (3) can be used to calculate the entropies associated with the various street populations, as well as the scaling exponents of the power laws, D, as in Equation (1). [Figure 9 caption, partial: ...short and long (tail) populations from Table 3; (c and d) correlations for the short populations (marked by 'A' in Table 4) and long (tail) populations (marked by 'B' in Table 4). Only the subpopulations shown in Tables 3 and 4 are plotted, since the whole populations do not fit with straight-line plots.] Entropies, scaling exponents, length ranges and average lengths of street populations in Dundee and Khorramabad are compared in Tables 3 and 4. Considering first the relations among the populations (A, B) from Tables 3 and 4, the results are plotted in Figure 9. There is clearly a strong positive correlation between the entropies and (a) the length ranges and (b) the average lengths of the streets in these populations as a function of time, that is, during the evolution of the city. It is clear that short streets are of less importance in these relations than the long streets because the maximum (and average) lengths of the short streets do not change much with the expansion of the city. Thus, the focus is on the long-street (steep-slope or tail-part) populations, since these are likely to change with the growth of the street network. Clearly, all three parameters (entropy, length range and average length) increase as the street network expands during the growth of these cities. This implies that as the city grows, the tail populations increase their average and maximum lengths and thus become more dispersed or spread, thereby increasing their entropies. The maximum likelihood method is again used to test whether the data for each of the city-evolution periods are consistent with power laws, using Dundee as an example. Results (see the P-values in Table 5) indicate that most of the Dundee-evolution data sets are indeed consistent with power law models, the exception being the data for 1776-17. However, P-values for the alternative models are so large that we cannot decide which, if any, of the alternative models are statistically better.
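The Figure 9 correlations can be reproduced in outline by computing, for each mapped time period, the entropy, length range and mean length of the tail (long-street) population and then correlating these across periods. In the sketch below the per-period length samples are synthetic stand-ins for the digitised historical street networks, and the 120 m cut-off and 50 m bin width are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def tail_summary(lengths, cutoff=120.0, bin_width=50.0):
    """Entropy (Equation (3)), length range and mean length of the long-street tail."""
    tail = np.asarray(lengths, dtype=float)
    tail = tail[tail >= cutoff]
    edges = np.arange(cutoff, tail.max() + bin_width, bin_width)
    counts, _ = np.histogram(tail, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p)), tail.max() - tail.min(), tail.mean()

# Toy snapshots of a growing network: later periods contain more, and longer, streets.
snapshots = [120 * (1 + rng.pareto(a, n)) for a, n in [(3.0, 800), (2.6, 1500), (2.2, 3000)]]

entropy, length_range, mean_len = np.array([tail_summary(s) for s in snapshots]).T
print("corr(entropy, length range):", round(np.corrcoef(entropy, length_range)[0, 1], 2))
print("corr(entropy, mean length): ", round(np.corrcoef(entropy, mean_len)[0, 1], 2))
```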
This also applies to the other time periods, with the exceptions of 1821 and 1912, where the P-values are small enough for the signs to be reliable, such that log-normal, exponential and stretched exponential models also provide plausible fits, as shown in Table 5. Discussion and conclusion Before the industrial revolution (which began in Scotland in the early nineteenth century), the centre of Dundee preserved most of its character from medieval times. The street patterns, particularly in the southern and eastern parts, still follow an irregular layout, similar to the old medieval town. Thus, the city grew until that time in a sort of natural way through 'bottom-up', individualistic rather than 'top-down' institutionalised collective processes. However, many of the medieval local streets were demolished in the late nineteenth century for extension of the Victorian streetscape (Ferguson 2005;Watson 2006). By contrast, the city of Khorramabad is an example of a rapidly expanding city, with most of its streets being planned by the central government in a 'top-down' manner (Ministry of Housing and Urban Development 2005). [Table 5 notes: Number of street segments for each time period (n); scaling exponent based on MLE (α) and standard error of α; the number of observations in the power law region (range) (n_tail) and standard error of n_tail; lower bound of the power law (x_min), at which the power law no longer applies, and standard error of x_min; power law models (fits) and the corresponding P-values; a P-value for the fit to the power law model; and likelihood ratios for the alternative models.] As the growth of cities is a complex process, affected by many parameters, it is clear that Dundee and Khorramabad have evolved through different processes. However, even if the growth processes have been different, with initial street patterns quite different, the contemporary street patterns show many similarities (Levinson and Huang 2012). The results presented here show, first, that landscape constraints can have large effects on the general city boundary shape and, second, that boundary shape, in turn, affects the street patterns. In the case of Dundee, street lengths are to a large degree controlled by their orientation in relation to the estuary shoreline that bounds the city to the south (Figures 3 and 5). Coast-parallel streets are longer, and most streets are either perpendicular or parallel to the shore. In Khorramabad, the mountain slopes continue to constrain city shape as well as the trend and length evolution of streets (Figure 5). These constraints are reflected not only in the trends of the streets but also in the minimum street spacing being in the narrowest part of the city (Table 1). Most of the street populations show power law length distributions, with the obvious point that there are many more short streets than long ones. The power law size distributions of street networks, or city size, indicate that a city system self-organises as a hierarchical structure at different scales, from the smallest to the largest (Zipf 1949;Simon 1955;Salingaros 2005;Batty 2006, 2008;Pumain 2006;Jiang 2007;Levinson and Huang 2012). On log-log plots, the street populations of both cities show breaks in the straight-line slopes (yielding different scaling exponents D) at roughly the same street lengths, namely at 120 ± 20 m. The shorter streets are mainly local streets whereas the longer ones are collectors and arterials (Figures 7 and 8).
A long power law tail (subpopulations B in Tables 3 and 4) normally implies a more dispersed (spread) distribution and thus a higher entropy (in comparison with subpopulations A). To test this implication, the entropies calculated for the Dundee and Khorramabad street subpopulations were plotted against their length ranges and average lengths, considering the sampled time periods (Figure 9). The results show a linear correlation between the calculated entropies and the length ranges and average lengths of these subpopulations, implying that the entropy of a street network increases over time with increases in its maximum and average street lengths. The results in Figure 10 suggest that the entropy of neither city changes significantly over the six sampled time periods. However, the entropies of the long (tail) populations have a high correlation and clearly increase with time. In conclusion, the two case studies, the cities of Dundee and Khorramabad, clearly show how their shapes and associated street patterns are constrained by the surrounding landscape. The entropy analysis method presented here is used to quantify the variation in street trends and lengths in relation to landscape constraints within cities and between cities, as well as during city evolution. We believe that this method, as demonstrated by the results presented here, offers great possibilities for quantifying not only other properties of street patterns, such as width, density and connectivity, but also other linear spatial structures in cities: other kinds of networks that can be coupled to street patterns, as well as clusters of related locations that are linked through more abstract relations.
v3-fos-license
2020-12-10T14:07:18.210Z
2020-10-28T00:00:00.000
228078991
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1186/s40337-020-00330-3", "pdf_hash": "ae7685b2cb09af166b77d5f6842dd752b3ac1053", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43704", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "b50ca8cc5541d694e388632448c72764eac1c5ba", "year": 2020 }
pes2o/s2orc
Body mass index and self-reported body image in German adolescents Despite knowledge about eating disorder symptoms in children and adolescents in the general population, relatively little is known about self-reported and sex-specific eating-disorder-related psychopathology, as well as its specific correlates. 880 German school-attending adolescents (15.4 ± 2.2 years) and 30 female patients with AN (16.2 ± 1.6 years) were studied. All participants completed the Eating Disorder Inventory-2 and a Body Image Questionnaire. There were more overweight males than females (15.2% vs 10.1%, p < 0.001), but more females with underweight than males (6.2% vs. 2.5%, p < .001). Negative body evaluations (p < .001) and dissatisfaction (p < .001) were significantly more frequent in females. Compared to underweight female patients with AN, underweight school-attending females had less negative body evaluations (p < .001) and lower scores on 5 of the 11 EDI-2 subscales (p < .001; p < .05). Males were more overweight than females, females more underweight. Body image was more important to female than to male youth, yet without reaching pathological values when compared to female patients with AN. Complex emotional and cognitive challenges seem to be a representative factor for eating pathology rather than simply being underweight. These aspects may be relevant for the shift from a thinness-related focus in girls in the general population to the development of an eating disorder. Plain English summary Still too little is known about eating disorder-related psychopathology and its correlates in non-clinical samples, especially with regard to self-report and sex-related differences. Therefore, 880 German school-attending adolescents and 30 female patients with anorexia nervosa (AN) were observed. Males were more overweight than females, females more underweight. Body image was more important to female than to male youth, yet without reaching pathological values. Personality characteristics seem to be maintenance factors in eating disorder pathology, rather than solely being underweight. These aspects may be relevant for the shift from a thinness-related focus in girls in the general population to the development of an eating disorder. Background Disturbed eating behaviours have become a serious concern among adolescents [1]. Severe weight concerns, disordered eating symptoms, and body shape perception disturbances have been reported across cultures [2,3]. During the last decades a drive for thinness [4] as well as an increasing prevalence of obesity [5] and metabolic syndrome [6] have been observed. Clinically relevant eating disorders are among the most frequent chronic illnesses in adolescents [7]. For the German population, prevalence estimates differ for any threshold eating disorder between 2.9% among females and 0.1% among males, for any subthreshold eating disorder between 2.2% for females and 0.7% for males, and for eating disorder symptoms between 11.5% among females and 1.8% among males [8]. These figures are consistent with those reported in other Western countries [9]. Previous research has demonstrated a high occurrence of disordered eating behaviours in adolescents and suggested the importance of examining preclinical symptoms [10]. Important indicators for eating-disorder-related disturbances are body mass index (BMI; body weight/ body height 2 ), body image, eating disorder and related psychopathology, such as clinically relevant perfectionism. 
In this context, body image refers to the perception of oneself and combines perceptual and cognitive-affective components [11]. A distorted body image is part of the diagnostic criteria of AN [12,13] and body image dissatisfaction and distortion as well as excessive weight concerns are causally or consequently associated with eating disorders. However, whether body dissatisfaction plays a causal role may vary depending on age and sex [14,15]. Current findings suggest that body image dissatisfaction is becoming more normative and that sex differences, which implicated females as being more underweight and concerned about body shape and fatness, may be decreasing [2]. Importantly, disordered eating behaviours are associated with an increased risk of further health-compromising behaviours, such as suicide and substance use [16][17][18]. Thus, the evaluation of non-clinical samples provides an opportunity to observe trends in prevalence and severity of unhealthy BMI status, eating disorders as well as weight and shape concerns, which is important for prevention and treatment programming. Thus, we aimed to examine BMI, body image, eatingdisorder-related psychopathology in a school-attending sample of adolescents and to further compare a subset of the school-attending females who were underweight with a clinical sample of patients with AN. Furthermore, we also aimed to replicate previous results with regard to underweight and overweight among adolescents in Germany. At the same time, we aimed to extend prior findings by applying a multidimensional, including self-reported, assessment battery to males and females, including measurements of body image, eating-disorder-related psychopathology. According to prior findings, we hypothesized a higher occurrence of underweight in females. Furthermore, we assumed less eatingdisorder-related psychopathology in the underweight schoolattending females compared to female patients with AN. Study population The study population (Table 1) consisted of 880 schoolattending adolescents, of whom 30 school-attending females and 10 school-attending males were underweight (< 10 th BMI percentile; see 3.4 below). Therefore, a small and comparable size of sex-matched female patients with AN (n = 30) was used for further comparisons with the underweight school-attending females. Adolescents were recruited through German schools. After having received permission to conduct the study, students were asked to complete the questionnaires during class. The order of assessment administration was varied to avoid sequence effects. Inclusion criteria for the female patients were meeting the diagnostic criteria of an AN (307.1) according to the Diagnostic and Statistical Manual-V (DSM-V [12]). Exclusion criteria were the presence of another eating disorder according to the DSM-V [12]. The AN diagnosis was confirmed by a structured interview based on DSM (SIAB-EX [19];). Female patients were recruited from the Department for Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Charité University Medicine, Berlin. Measures Body height (in meters) and body weight (in kg) were measured with participants wearing lightweight clothing and no shoes by a digital balance scale (manufacturer 'Korona', max. 150 kg), and a conventional stadiometer. Subsequently, the BMI (body weight/ body height 2 ) and BMI percentiles were calculated as described by Kromeyer-Hauschild et al. [20]. 
We defined extreme underweight as being ≤ 3 rd BMI percentile, underweight as between the > 3 rd and ≤ 10 th BMI percentile (≤ 17.5 kg/m 2 for adolescents ≥ 18 years), overweight > 90 th to ≤ 97 th BMI percentile, and obesity > 97 th BMI percentile (more than ≥ 25 to 29.9 kg/m 2 and ≥ 30 kg/m 2 for adolescents ≥ 18 years, respectively). The Body Image Questionnaire [21] is a self-report device assessing clinically relevant body image distortions as well as non-clinical impairments of body image with a two-factor structure [21]. One scale contains items on the negative evaluation of one's body, the other scale items on the positive perception of one's body. For this study and the associated research questions, we only included the sum and the mean score of the scale "Negative Evaluations of the Body" as a measure of body image. The internal consistency of the FKB-20 is good (Cronbach's α = .84) indicating its reliability [21]. Statistical analyses BMI was calculated for all subjects and categorized into under-, normal-and overweight according to BMI percentiles outlined by Kromeyer-Hauschild et al. [20]. General characteristics of the sample were compared by unpaired t-tests. For the FKB-20, we considered sum scores for the analyses, with regard to EDI-2 age-and sex-adjusted percentiles. FKB-20 and EDI-2 values were divided into quartiles and the groups within these quartiles were compared. Furthermore, for the comparison of underweight girls and patients with AN the subdivision into percentiles was performed. Comparisons of FKB-20 and EDI-2 scores were conducted using Mann-Whitney-U-tests. Additionally, differences between FKB-20 and EDI-2 scores in sexspecific body weight groups were analysed using Kruskal-Wallis-Test. The level of significance was α = 0.05 for all statistical tests that were conducted in SPSS 25. In order to avoid the Type I error, post-hoc Bonferroni corrections (adjusted significance level α = .004) were applied. Sex-specific findings on body image and eating-disorderrelated psychopathology Based on Mann-Whitney-U tests, we found significantly higher values among school-attending females for the FKB-20 and on the EDI-2 scales DT, BD and I than in school-attending males. Conversely, school-attending males scored significantly higher on the EDI subscale P ( Table 2). Sex-specific findings on body image and eating-disorderrelated psychopathology within individual weight percentile groups Dividing each group according to weight percentiles (extreme underweight/underweight, normal weight and overweight/obese), Kruskal-Wallis-analyses revealed differences between weight percentile groups in schoolattending males and in school-attending females for the FKB-20 as well as for the EDI scales, and BD ( Table 3). Comparison of underweight school-attending males and females and school-attending females with female patients with AN Based on the weight percentile groups, we compared the school-attending males (n = 10) and females (n = 30) who were underweight (≤ 10 th percentile) as well as female patients with AN (n = 30). Pairwise comparisons for males vs. females who were underweight (Mann-Whitney-U) revealed no significant group differences. 
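To make the grouping and testing pipeline concrete, here is a hedged Python sketch (the study itself used SPSS 25): weight-status groups are assigned from precomputed BMI percentiles using the cut-offs defined above, and scores are compared with Mann-Whitney U and Kruskal-Wallis tests against the Bonferroni-adjusted level of .004. The scores and group sizes are invented for illustration and do not correspond to the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

ALPHA_BONFERRONI = 0.004   # adjusted significance level used in the study

def weight_group(bmi_percentile):
    """Map a BMI percentile to the study's weight-status categories."""
    if bmi_percentile <= 3:
        return "extreme underweight"
    if bmi_percentile <= 10:
        return "underweight"
    if bmi_percentile <= 90:
        return "normal weight"
    if bmi_percentile <= 97:
        return "overweight"
    return "obese"

rng = np.random.default_rng(0)
# Invented FKB-20 'negative body evaluation' scores, purely for illustration.
females = rng.normal(32, 8, 450)
males = rng.normal(28, 8, 430)

u, p = mannwhitneyu(females, males, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.4f}, significant: {p < ALPHA_BONFERRONI}")

# Kruskal-Wallis across three toy weight-percentile groups.
groups = [rng.normal(m, 8, 120) for m in (36, 30, 33)]
h, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}, significant: {p < ALPHA_BONFERRONI}")
```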
Comparing female patients with AN to school-attending females with underweight (Table 4), we found significantly higher scores for the AN group regarding the BMI, body image and eating-disorder-related psychopathology Our findings revealed that 15.2% of male and 10.1% of female adolescents in our school-attending study population were overweight or obese. These data are comparable with data by a recent German study as well as European and international studies [5,[23][24][25]. Although there is some evidence that the rise in the prevalence of overweight and obesity is plateauing [5,26,27], prevalence rates are still high and general shifts in the BMI-distribution were found during the last decades [27]. The National Health and Nutrition Examination Survey reported a significantly increasing linear trend in obesity between 1999 and 2000 and 2015-2016, in both adults and youth in the US [28]. Underweight status was more prevalent among schoolattending females (6.2%) than school-attending males (2.5%) and occurred less often than overweight/obesity in our study. Accordingly, Schienkiewitz et al. [29] observed comparable rates of underweight in children and adolescents, although no sex differences were found. Nevertheless, other studies have shown sex differences with more underweight in females than males (for example Grajda et al. [30]). Sex differences regarding over-and underweight may result from underlying sociocultural and psychological differences. For instance, males and females differ in calorie consumption, eating styles [31] and body fat distribution [32]. Additionally, females experience more body weight and thinness-oriented body dissatisfaction than males [2,33]. Our finding regarding more prevalent negative body evaluations and body dissatisfaction in school-attending females than school-attending males contrasts with other studies that reported an equal distribution of body dissatisfaction among both boys and girls [2,34,35]. However, these studies also report a significantly more thinnessoriented dissatisfaction among females, but also a more muscle-oriented dissatisfaction among males [2,34,35]. Regarding eating-disorder-related psychopathology we found more ineffectiveness in school-attending females compared with school-attending males which is in line with other findings [36] and could be related to negative body evaluations and body dissatisfaction in the schoolattending females in the present study. Surprisingly there was more perfectionism in the school-attending males. It is possible that this finding may also be associated with body-related dissatisfaction in males, but as reported, this finding seems to be more related to a (male-specific) kind of body ideal and dissatisfaction in terms of drive for muscularity than thinness [2,34,35]. However, these specific aspects cannot be operationalised with the commonly utilized measures that were used in this study. Results regarding body image and eating-disorder-related psychopathology within the groups of males and females and individual weight percentile groups demonstrated that significant differences were found only on body imagerelated scales. Both sexes showed more body image dissatisfaction in the presence of underweight than in the presence of normal or overweight. This finding is consistent with other studies [37] and underlines that body image dissatisfaction and eating disorder psychopathology are strongly linked, even in school-attending samples. 
A comparison of the underweight school-attending females and female patients with AN showed greater body image dissatisfaction for the latter. In addition, female patients with AN showed significantly higher scores on the EDI-2 scales of interpersonal distrust, interoceptive awareness and asceticism, pointing to complex emotional and cognitive challenges. The increased values on the eating-disorder-related scales could indicate that these aspects are specific to the eating disorder psychopathology, rather than the underweight status itself. In order to assess eating-disorder-related psychopathology among young people in the general population, the underlying factors mentioned above should be focused on in addition to exclusively screening for drive for thinness [38,39]. Strengths and limitations When interpreting the results of this study, certain limitations have to be taken into account. First, we used self-report questionnaires. Methodologically, the additional use of a structured interview in our school-attending sample would have been advantageous. Nevertheless, the fact of testing during school days and the related lack of time did not allow for time-consuming individual interviews in such a large sample. For the same reason, we were not able to diagnose any serious eating disorders according to the DSM-V [12] in our school-attending sample. Therefore, we cannot eliminate the possibility of clinically relevant eating disorders in the underweight subgroup, and there could be false negatives in the school-attending sample. In addition, underweight school-attending females and males in this study may have eating disorder psychopathology with a different severity or duration than that of patients with AN. Due to different and partly small age subgroup sizes in the school-attending survey sample, we have not carried out analyses within these groups, so the results regarding under- and overweight as well as obesity only apply to the entire sample. As we only screened for underweight and overweight/obesity and the only disease control group was represented by female patients with AN, we cannot generalize our data to other forms of disordered eating behaviour, such as bulimic or binge eating symptoms. Furthermore, we did not collect information about the subtype of AN (restrictive/binge-purge). Moreover, as we did not collect sociodemographic information about all attendees of the schools from which our school-attending sample was drawn, the degree of representativeness of the sample is unclear. Nevertheless, the sex distribution and body measures in our school-attending sample did match well with the previously reported epidemiological data in Germany [40,41]. In addition, the small sample sizes of the patients with AN as well as of the underweight school-attending females and males limit the representativeness of these samples and the generalization of the related findings. Moreover, no control group with AN was available for the underweight school-attending males of our sample. Due to the small sample sizes compared to studies with an epidemiological approach, findings of the present study regarding over- and underweight as well as obesity have to be interpreted very carefully.
In addition, our data must be treated with caution, as the BMI percentile calculations are based on a reference group from data sets from between 1985 and 1999 [20]. A shift in these percentiles seems possible. Besides, because of the cross-sectional nature of our study we are not able to evaluate if underweight, body image dissatisfaction or other factors could be predictors of eating disorder symptoms. Despite these limitations, strengths of this study include its relatively large sample size of the schoolattending cohort, the multidimensional assessment of body image, eating-disorder-related psychopathology, body height and body weight, and the control group of female patients with AN. Conclusion We observed sex differences in the prevalence of (extreme) underweight and overweight/obese in a German school-attending sample. Body image concerns were more prevalent among school-attending females than males. Underweight by itself does not seem to be a representative factor for eating pathology, as female patients with AN differed significantly from school-attending females with underweight in psychopathological factors. These findings underline the importance of a multidimensional assessment of body image and eatingdisorder-related psychopathology including self-reports when characterizing an underweight as well as potentially eating disordered sample. Therefore, the evaluation of these aspects in non-clinical samples is important to detect current prevalence rates, and trends in behaviours and attitudes. Further studies should focus on these issues instead of exclusively screening for BMI in nonclinical samples as well as patients with AN. Preventive and treatment programmes should be based on knowledge of underweight and dissatisfaction with body image, but should also focus on emotional, cognitive and personal temperament factors who may be involved in the development of eating disorders. Psychotherapeutic approaches in the treatment of AN and measurement of psychotherapy success should urgently focus these factors in addition to focusing in the key outcome of weight gain and weight restoration. For all, preventive and treatment programmes as well as psychotherapeutic approaches it could be helpful to take personality functions into account as they are described by the Operationalized Psychodynamic Diagnostic System [42] and associated with a lot of psychological disorders in childhood and adolescence including eating disorders [43,44]. Regarding body dissatisfaction there are indications for a significantly more muscularity-related body ideal in boys [2,34,35], which should be deepening investigated in future studies. Existing test instruments should be adapted to these findings. Availability of data and materials Due to the nature of this research, participants of this study did not agree for their data to be shared publicly, so supporting data is not available. Ethics approval and consent to participate All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee (city's senate for education, youth and sport as well as the research ethic board at the Charité University Medicine) and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Written informed consent was obtained from the subjects and/or their guardians when they were minors. Consent for publication Not applicable.
Porphyromonas gingivalis ATCC 33277 promotes intercellular adhesion molecule-1 expression in endothelial cells and monocyte-endothelial cell adhesion through macrophage migration inhibitory factor Porphyromonas gingivalis (P. gingivalis), one of the main pathogenic bacteria involved in periodontitis, induces the expression of intercellular adhesion molecule − 1 (ICAM-1) and monocyte-endothelial cell adhesion. This effect plays a pivotal role in atherosclerosis development. Macrophage migration inhibitory factor (MIF) is a multifunctional cytokine and critically affects atherosclerosis pathogenesis. In this study, we tested the involvement of MIF in the P. gingivalis ATCC 33277-enhanced adhesive properties of endothelial cells. Endothelial MIF expression was enhanced by P. gingivalis ATCC 33277 infection. The MIF inhibitor ISO-1 inhibited ICAM-1 production in endothelial cells, and monocyte-endothelial cell adhesion was induced by P. gingivalis ATCC 33277 infection. However, the addition of exogenous human recombinant MIF to P. gingivalis ATCC 33277-infected endothelial cells facilitated monocyte recruitment by promoting ICAM-1 expression in endothelial cells. These experiments revealed that MIF in endothelial cells participates in the pro-atherosclerotic lesion formation caused by P. gingivalis ATCC 33277 infection. Our novel findings identify a more detailed pathological role of P. gingivalis ATCC 33277 in atherosclerosis. Background Many epidemiological studies have associated severe forms of periodontitis with atherosclerosis [1]. Porphyromonas gingivalis (P. gingivalis), a Gram-negative oral anaerobe, has been identified as one of the main pathogenic bacteria in periodontitis [2]. The DNA of P. gingivalis has been found in coronary stenotic artery plaques of myocardial infarction patients [3,4]. Furthermore, animal experiments have shown that P. gingivalis infection directly induces and accelerates atherosclerotic lesion development in pigs and mice [5,6]. In vivo studies have suggested that P. gingivalis enters the systemic circulation through inflammation-injured epithelial structures; then, this bacterium adheres to and invades vascular endothelial cells, proliferates in host cells, promotes the release of a variety of proinflammatory cytokines and induces atherosclerosis formation [7][8][9][10][11]. Macrophage migration inhibitory factor (MIF) has been recognized as a key factor in the vascular processes leading to atherosclerosis [12][13][14]. MIF expression in endothelial cells is dysregulated in response to proatherogenic stimuli during the development of atherosclerotic lesions in humans, rabbits, and mice [15,16]. Recent research showed that MIF increased monocyte recruitment during the process of atherosclerosis development [17]. One of the mechanisms of this effect is the MIF-mediated upregulation of adhesion molecule expression in vascular endothelial cells, which causes the monocytes flowing rapidly in blood circulation to decelerate, roll on the vessel wall, aggregate and adhere to the vessel wall [18]. Studies have shown that increased intercellular adhesion molecule − 1 (ICAM-1) expression is one of the molecular mechanisms of the pathological changes during the early stage of atherosclerosis. By mediating leukocyte adhesion, ICAM-1 increased plaque instability and accelerated plaque rupture and thrombosis, resulting in cardiovascular disease (CVD) events [19]. Our previous studies have found that P. 
gingivalis infection increases ICAM-1 expression in endothelial cells and monocyte-endothelial cell adhesion [20]. These findings suggested that P. gingivalis induces the inflammatory process of atherosclerosis. However, the exact role that P. gingivalis plays in the development of atherosclerosis is still unclear. We hypothesized that P. gingivalis infection promotes the formation of atherosclerosis through MIF. In the present study, we examined the MIF production induced by P. gingivalis ATCC 33277 in endothelial cells. We also investigated the impact of MIF on the adhesive properties of endothelial cells pretreated with the antagonist ISO-1 or human recombinant MIF (rMIF) plus ISO-1. Our novel findings have identified a more detailed pathological role of P. gingivalis in atherosclerosis. Cell lines The human umbilical vein endothelial cell line EA.hy926 and the THP-1 monocyte model (a monocytic leukaemia cell line) were purchased from Keygen Biotech company (Nanjing, China). EA.hy926 cells were cultured in DMEM containing 15% fetal bovine serum, and the THP-1 cells were cultured in DMEM containing 10% fetal bovine serum at 37°C in 5% CO 2 . EA.hy926 cells (10 5 cells mL − 1 ) were seeded in the tissue plate wells and were cultured until a confluent monolayer formed for subsequent study. Cell viability, which was > 90% for all the infection assays, was determined by trypan blue exclusion assay. THP-1 cells were labeled with the fluorescent dye calcein AM (0.1 mg/mL; BioVision, CA, USA) for 30 min before being co-cultured with EA.hy926 cells. Enzyme linked immunosorbent assay (ELISA) Bacterial suspensions were added to the EA.hy926 cells at a multiplicity of infection (MOI) of 100 for 4, 10 or 24 h, while Escherichia coli (E. coli) lipopolysaccharide (LPS) (1 μg/mL; Cayman Chemical, Ann Arbor, MI, USA) was used as a positive control [21]. The MIF level was determined using ELISA kits (BD Biosciences, Mountain View, CA, USA). The optical density was measured at 450 nm, and the MIF concentration was extrapolated from the standard curve according to the manufacturer's instructions. Western blot The EA.hy926 cells were pretreated with the MIF antagonist ISO-1 (25 μM; Cayman Chemical) or human rMIF (0.5 μg/mL; Cayman Chemica) plus ISO-1 for 1 h; then, the cells were infected with P. gingivalis ATCC 33277 at an MOI of 100 for 24 h. The whole cell protein of EA.hy926 cells was extracted, and Western blotting was performed. The EA.hy926 cells were lysed, and the protein concentration was determined by a BCA assay. Equal amounts of whole cell lysate were separated with 8% SDS-polyacrylamide gel electrophoresis and were transferred to a nitrocellulose filter membrane. After blocking, the protein was blotted with rabbit monoclonal anti-ICAM-1 antibody (1:500; Wanlei, Shenyang, China) and goat anti-rabbit Dylight 800-conjugated fluorescent antibody (1:1000; Abbkine Inc., Redlands, CA, USA). Western blot analysis was performed with Odyssey CLX (LI-COR, Lincoln, NE, USA). Quantitative real-time polymerase chain reaction (qRT-PCR) EA.hy926 cells were treated as mentioned above (in Western blot analysis). Then, the total RNA of EA.hy926 cells was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). To remove the genomic DNA, total RNA was treated with DNase I for 2 min at 42°C following the manufacturer's protocol. The RNA integrity was checked via electrophoresis on 1.0% agarose gels. 
The RNA purity was identified by the 260/280 nm optical density ratio, and RNA samples with an 260/ 280 nm optical density ratio greater than 1.9 were selected for later analysis. Next, cDNA was synthesized using a reverse transcription system (Vazyme, Beijing, China) [22]. THP-1 adhesion to EA.hy926 cells EA.hy926 (10 5 cells mL − 1 ) were seeded on 6-well plates at 2 × 10 5 cells per well and were cultured to form a confluent monolayer. Then, the cells were pretreated with the MIF antagonist ISO-1 (25 μM) or rMIF (0.5 μg/mL) plus ISO-1 for 1 h. Next, the cells were infected with P. gingivalis ATCC 33277 at an MOI of 100 for 23 h. Next, 1 × 10 6 THP-1 cells were labelled with 5 μM calcein-AM and were co-cultured with the EA.hy926 cells for another 1 h. Non-adherent THP-1 cells were gently washed away with PBS twice. The adherent THP-1 cells remaining on the monolayer of endothelial cells were visualized using a fluorescence microscope (Nikon 80i, Tokyo, Japan); 3 fields under the microscope (× 100) were randomly selected; and the fluorescence-labeled THP-1 cells were assessed by cell counting assays [23]. All the experiments were performed in triplicate wells for each condition and repeated at least three times. Statistical analysis All data are presented as the means ± SD of three independent experiments. Statistical analysis was performed using one-way ANOVA, and the Student-Newman-Keul test was applied to compare differences from each other group (SPSS 17.0 software, IBM). P-values < 0.05 were considered statistically significant. P. gingivalis ATCC 33277 infection enhances MIF secretion in EA.hy926 cells We evaluated the effect of P. gingivalis ATCC 33277 on MIF expression. E. coli-LPS was used as a positive control, since MIF release is induced by proinflammatory factors such as LPS [21,24]. The ELISA results revealed that P. gingivalis ATCC 33277 infection significantly increased MIF secretion in EA.hy926 cells. Compared with the control level, MIF expression was increased 2.25-fold (MOI = 100) by P. gingivalis ATCC 33277 infection for 24 h (P < 0.01). P. gingivalis ATCC 33277 did not significantly affect MIF expression at the early time point, including 4 and 10 h (Fig. 1). P. gingivalis infection at an MOI of 100 for 24 h was chosen to evaluate the impact of MIF on the increased adhesive properties of endothelial cells in the following studies. To determine the impact of MIF on ICAM-1 expression, the MIF antagonist ISO-1 and rMIF were used. The results revealed that P. gingivalis ATCC 33277 infection (MOI = 100:1, 24 h) induced a significant increase in ICAM-1 expression. We discovered that this inductive effect of P. gingivalis ATCC 33277 was blocked by the MIF antagonist ISO-1. P. gingivalis-induced ICAM-1 expression was significantly reduced (by 49.78%) by ISO-1. Moreover, the inhibitory effect of ISO-1 was neutralized by exogenous rMIF. Sufficient exogenous rMIF supplementation rescued ICAM-1 expression. ICAM-1 expression was increased 1.95-fold in the rMIF group compared to the ISO group ( Fig. 2a and b). These findings were further confirmed by the qRT-PCR results, which detected the ICAM-1 gene transcription level under the same conditions as described above. The ICAM-1 mRNA level was also significantly reduced in ISO-1-treated cells, with an 81.97% reduction compared with that in P. gingivalis-infected cells. Similarly, exogenous rMIF increased the ICAM-1 mRNA level, which was 3.51-fold higher in the rMIF group compared with the ISO group (Fig. 2c). 
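As an illustration of the statistical workflow described above (means ± SD of three independent experiments, one-way ANOVA followed by a pairwise post-hoc comparison), a minimal sketch in Python follows. The numbers are invented placeholders rather than the study's measurements, and Tukey's HSD is used as a stand-in for the Student-Newman-Keuls test, which is not provided by scipy or statsmodels.

```python
# Hypothetical sketch of the group comparison described above: one-way ANOVA
# followed by a pairwise post-hoc test. The values below are invented
# placeholders, not data from the study, and Tukey's HSD replaces the
# Student-Newman-Keuls test used in the paper.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Relative ICAM-1 expression, three independent experiments per group (placeholders)
groups = {
    "control": rng.normal(1.00, 0.05, 3),
    "P.g. infected": rng.normal(2.10, 0.10, 3),
    "P.g. + ISO-1": rng.normal(1.05, 0.08, 3),
    "P.g. + ISO-1 + rMIF": rng.normal(2.00, 0.12, 3),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```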
MIF regulates the increased monocyte adhesion to endothelial cells infected with P. gingivalis ATCC 33277 To investigate the role of MIF in P. gingivalis-induced monocyte-endothelial cell adhesion, we used the fluorescent dye calcein-AM to highlight the adherent THP-1 cells. THP-1 cell adhesion to EA.hy926 cells was visualized using fluorescence microscopy. The adhesion experiment results were consistent with ICAM-1 expression. Compared with uninfected cells, P. gingivalis ATCC 33277 infection (MOI = 100, 24 h) markedly increased THP-1 cell adhesion to endothelial cells (P < 0.01). In contrast, cell adhesion was decreased in ISO-1-treated cells compared with those infected with P. gingivalis ATCC 33277 (P < 0.01). In addition, THP-1 cell adhesion to EA.hy926 cells was recovered by exogenous rMIF addition, as shown in Fig. 3. These results, in combination with those in Fig. 2, suggested that the expression of ICAM-1 in endothelial cells and the monocyte-endothelial cell adhesion caused by P. gingivalis ATCC 33277 infection could be regulated by MIF. Discussion Numerous cross-sectional, case-control and cohort epidemiological studies suggest that periodontal infection is associated with atherosclerotic CVD, independent of confounding factors such as smoking and obesity [25][26][27]; systemic inflammation, which ultimately enhances the adherence of circulating monocytes to vascular endothelial cells, has been proposed as a possible mediator [24]. Our prior study found that P. gingivalis infection induced ICAM-1 expression and monocyte recruitment, which are crucial events leading to atherosclerosis pathogenesis [20]. This result is consistent with the findings of Velsko [10]. P. gingivalis is believed to play a pivotal role in the development of atherosclerosis. MIF is a proinflammatory cytokine that plays a critical role in the initiation and progression of chronic inflammatory and immune-mediated diseases such as atherosclerosis [28]. Under normal circumstances, the MIF protein level is very low. However, in atherosclerotic lesions, MIF is secreted in large quantities by vascular endothelial cells, and a relatively small amount of MIF is released by vascular smooth muscle cells [29]. Uniquely, MIF is rapidly released from preformed intracellular pools in response to LPS stimuli [30]. Consistently, in the current study, MIF secretion began to increase at 4 h after LPS stimulation, and this increase was sustained for 24 h. Li's research also confirmed that there was a significantly higher level of MIF protein after stimulation with E. coli LPS for 24 h [21]. Interestingly, Li et al. also found that the MIF protein level remained unchanged in P. gingivalis LPS-treated reconstituted human gingival epithelia [21]. In contrast, our study showed that MIF secretion induced by live P. gingivalis ATCC 33277 was weaker and occurred much later than that induced by LPS. These results indicate that P. gingivalis ATCC 33277 induces MIF expression through a mechanism different from that of LPS. It has been reported that P. gingivalis can invade endothelial cells and remain viable for extended periods [31]. It is speculated that the invasion of P. gingivalis has significant repercussions for the physiological status of the cell. Our findings provide a clue to the role of MIF in P. gingivalis-promoted atherosclerosis.
Chuang's research showed that MIF induced by Dengue virus infection activates endothelial cell tight junction opening, which may cause plasma leakage and leukocyte migration (extravasation), resulting in increased vascular permeability [32]. Bernhagen et al. proved that MIF concentrations increase substantially in the presence of stress, inflammation, and infection [33]. We also noticed that the MIF concentration was increased in an unhealthy periodontal environment. MIF expression was higher in the periodontal tissue of chronic periodontitis patients than in that of healthy patients [21]. In experimental gingivitis patients, MIF protein expression in the gingival crevicular fluid started increasing 1 week after the occurrence of inflammation in the 46-77-year-old age group. The trend in prostaglandin E2 expression is similar to that of MIF expression. According to statistical analyses, the MIF and PGE2 concentrations are correlated, which suggests that MIF and PGE2 interact synergistically in inflammatory conditions [34]. The role of MIF in P. gingivalis infection was further investigated. MIF is a multifunctional cytokine with enzymatic tautomerase activity, and its inhibitor ISO-1 can block the activity of MIF [35]. We evaluated ICAM-1 expression by performing Western blot and qRT-PCR in endothelial cells infected with P. gingivalis for 24 h at an MOI of 100. Our prior work found that ICAM-1 expression and monocyte-endothelial cell adhesion were increased when endothelial cells were infected with P. gingivalis, which is consistent with the results of others [36,37]. In the presence of ISO-1, both the ICAM-1 protein and mRNA levels induced by P. gingivalis infection were significantly decreased. However, ICAM-1 protein and mRNA expression levels were rescued by sufficient exogenous rMIF supplementation. We confirmed our results by cell adhesion assays. The endothelial cells were treated with ISO-1 or exogenous rMIF for 1 h before they were infected with P. gingivalis and then were co-cultured with monocytes. We found that monocyte adhesion to P. gingivalis ATCC 33277-infected endothelial cells was significantly inhibited by ISO-1. In contrast, sufficient rMIF supplementation restored the monocyte-endothelial cell adhesion. Recent evidence has suggested a role for endogenous MIF in the promotion of endothelial adhesion molecule expression [25]. Both Lin SG et al. [38] and Amin MA et al. [28] found that MIF up-regulated ICAM-1 expression in endothelial cells. Moreover, in MIF-deficient human umbilical vein endothelial cells, the initial steps of atherosclerosis, such as the binding of adhesion molecules on endothelial cells to their specific ligands on mononuclear cells, or monocytes in circulation rolling and attaching to the vascular wall, could not be accomplished due to a lack of extracellular MIF [15][16][17]. Our findings provide direct evidence for the role of MIF in upregulating ICAM-1 expression in P. gingivalis ATCC 33277-infected endothelial cells. In summary, our study revealed that the MIF induced by P. gingivalis ATCC 33277 infection not only promoted ICAM-1 expression in endothelial cells but also activated monocyte-endothelial cell adhesion. We have shown that MIF is a very potent pathogenic factor in P.
gingivalis ATCC 33277-induced atherosclerosis promotion. Suppressing MIF expression with an inhibitor or neutralizing antibody in individuals with manifest atherosclerosis may be a potential therapeutic intervention for treating this condition. However, the mechanisms whereby MIF facilitates endothelial adhesion molecule expression are unknown. Therefore, our future work will study the MIF receptor in P. gingivalis-infected endothelial cells. Conclusions The experiments revealed that endothelial cell-expressed MIF participates in pro-atherosclerotic lesion formation caused by P. gingivalis ATCC 33277 infection. Our novel findings elucidate a more detailed pathological role of P. gingivalis ATCC 33277 in atherosclerosis.
Improved Approach to Robust Control for Type-2 TS Fuzzy Systems This paper is concernedwith the robust stability conditions to stabilize the type 2 Takagi-Sugeno (T-S) fuzzy systems.The conditions effectively handle parameter uncertainties using lower and upper membership functions. To improve the solvability of the stability conditions, we establish a multigain controller with comprehensive information of the lower and upper membership grades. In addition, a well-organized relaxation technique is proposed to fully exploit relationship among fuzzy weighting functions and their lower and upper membership grades, which enlarges a set of feasible solutions. Therefore, we derive a less conservative stabilization condition in terms of linear matrix inequalities (LMIs) than those in the literature. Two simulation examples illustrate the effectiveness and robustness of the derived stabilization conditions. Introduction Over the past few decades, the type 1 Takagi-Sugeno (T-S) fuzzy model has attracted much attention because it can systematically represent nonlinear systems via an interpolation method that smoothly connects some local linear systems based on fuzzy weighting functions [1][2][3].The main advantage of type 1 T-S fuzzy systems is that they allow us to apply the well-established linear system theory for the analysis and synthesis of nonlinear systems.For this reason, the type 1 T-S fuzzy model has been a popular choice not only in consumer products but also in industrial processes, such as power converters [4], motors [5], and solar power generator systems [6]. For the stability analysis and synthesis of type 1 fuzzy control systems, Lyapunov stability theory is widely used [7][8][9][10].Fundamental stability conditions in terms of linear matrix inequalities (LMIs) are derived from Lyapunov stability condition.The conditions guarantee the stability of the fuzzy control systems if there exists a solution to a set of LMIs.Many researchers introduced the stability conditions and relaxed stability conditions using parallel distributed compensation (PDC) concept [8].Using the information of type 1 fuzzy membership functions, the stability conditions can be further relaxed [11][12][13][14]. Although the type 1 fuzzy control system can effectively handle the nonlinear systems, it cannot guarantee the stability of the nonlinear systems with parameter uncertainties.Recently, type 2 fuzzy systems have attracted a lot of research attention [15] because they are better at handling uncertainties than the conventional type 1 fuzzy systems [16,17].Hence, for the stability analysis and controller synthesis of nonlinear systems with parameter uncertainties, it is essential to use type 2 fuzzy systems.Several researchers have researched such type 2 fuzzy systems [18][19][20].However, all the aforementioned papers have seldom studied stability analysis and controller synthesis for type 2 T-S fuzzy systems.This motivates the study of the stability analysis and controller synthesis of type 2 T-S fuzzy systems. Recently, some researchers have studied stability analysis and controller synthesis for type 2 T-S fuzzy systems [21][22][23][24][25].In [21], an interval type 2 T-S fuzzy controller was proposed using a common controller gain that collectively depends on the sum of lower upper membership grades.In [22], the controller design for the interval type 2 T-S fuzzy system 2 Mathematical Problems in Engineering was introduced using a membership function different from a membership function of the system. 
In the above studies based on the type 2 T-S fuzzy systems, the stability conditions for the design of the type 2 fuzzy controller have some tuning parameters, which can result in increasing the implementation effort.It motivates the study of the controller synthesis for type 2 T-S fuzzy system.This paper studies the robust stability conditions to stabilize type 2 T-S fuzzy systems.The conditions effectively handle parameter uncertainties using lower and upper membership functions.To improve the solvability of the stability conditions, we establish a multigain controller with comprehensive information of the lower and upper membership grades.In addition, we propose a well-organized relaxation technique that fully exploits relationship among fuzzy weighting functions and their lower and upper membership grades, which enlarges a set of feasible solutions.Therefore, we derive a less conservative stabilization condition in terms of LMIs than those in the literature.The proposed condition has a simple structure without tuning parameters.Finally, two simulation examples are given to illustrate the effectiveness and robustness of the derived stabilization condition. Notation.The notations ≥ and > mean that − is positive semidefinite and positive definite, respectively.In symmetric block matrices, ( * ) is used as an ellipsis for terms that are induced by symmetry.Furthermore, Sym() = + stands for any matrix . System Description and Preliminaries Let us consider the following type 2 T-S fuzzy model [21] that represents a continuous-time nonlinear system: for ∈ R = {1, 2, . . ., }, where () ∈ R , () ∈ R denote the state and control input, respectively; F denotes a type 2 fuzzy set of rules corresponding to the function (()); and denotes the number of IF-THEN rules.The firing interval of the th rule is as follows: where where (()) = (()) (()) + (1 − (())) (()) denotes a fuzzy weighting function in which (()) ∈ [0, 1] is a nonlinear function and not necessary to be considered in this paper.Now, consider a multigain controller that is individually dependent on the lower and upper membership grades such as where and and are the controller gains associated with the lower and upper membership grades.By the above relations, (()), (()), and (()) satisfy the following conditions: where , , , , , and are real constant values.Henceforth, for a simple description, we use the following notations: (()) ≜ , (()) ≜ , and (()) ≜ .The resulting closed-loop system under (4) is represented as follows: Then, merging conditions ( 20)- (26) gives To derive LMI condition, (27) can be represented as the following form: where The S-procedure enables the condition in (19) subject to (28) to be expressed as which can be converted into where Remark 2. The proposed LMIs are not always feasible for all type 2 T-S fuzzy system.However, the proposed method can guarantee a larger feasible solution set than the previous studies because we use the relationship among fuzzy weighting functions and their lower and upper membership grades and the proposed controller is individually dependent on the lower and upper membership grades. Example 2. 
Let us consider an inverted pendulum model subject to parameter uncertainties, which is adapted from [26]: where 1 () is the angular displacement of the pendulum; = 9.8 m/s 2 ; and are uncertain in min = 2 ≤ ≤ 5 = max and min = 8 ≤ ≤ 18 = max , respectively; = 1/( + ); 2 = 1 m; and () is the force applied to the cart.From [21], we can obtain a plant rule to describe the inverted pendulum subject to parameter uncertainties in the following format: Plant Rule : where The type 2 T-S fuzzy model is represented as follows: where, for all , and the lower and upper grades of membership for each rule are defined as follows: (i) 2 () = 2max , = max , and = min : (ii) 2 () = 0, = max , and = min : (iii) = min and = min : For Theorem 1, = 0, = 1, = 0, = 0.5, 1 = 0, = 0.5.Let us demonstrate the validity of the proposed conditions for the type 2 T-S fuzzy model.Figure 2 shows the trajectories of the states with = min and = min under various initial conditions.Figure 3 shows the trajectories of the states with = max and = max .From Figures 2 and 3, we can clearly see that the proposed controller can stabilize the inverted pendulum with different parameter values and is robust to parameter variations in the plant model.In addition, the proposed conditions lead to less conservative results because we use the larger mass ranges than those of [21]. Conclusion In this paper, we proposed robust stability conditions to stabilize type 2 T-S fuzzy systems.The conditions effectively handled parameter uncertainties using lower and upper membership functions.Furthermore, by applying a multigain controller and a well-organized relaxation technique, we derived a less conservative stabilization condition in terms of LMIs than those in the literature.Our simulation results showed the effectiveness and robustness of the derived stabilization conditions.
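The stabilization conditions above are posed as LMI feasibility problems. As a generic illustration only (not the paper's type-2 fuzzy conditions, which additionally involve the lower and upper membership grades and the relaxation terms), the sketch below shows how an LMI feasibility check of this kind can be run numerically with cvxpy, using the simplest Lyapunov inequality as a stand-in; the test matrix and solver choice are assumptions.

```python
# Generic sketch of checking an LMI feasibility condition numerically.
# This is NOT the paper's specific type-2 fuzzy LMI set; it only illustrates
# the mechanics with the simplest Lyapunov LMI  A^T P + P A < 0,  P > 0.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # a stable test matrix (assumed example)
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)

print("status:", problem.status)   # 'optimal' means the LMIs are feasible
print("P =\n", P.value)
```

A feasible solution P certifies stability for this single linear system; the fuzzy stabilization conditions in the paper combine several such matrix inequalities with the membership-grade bounds into one feasibility problem of the same general form.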
The search for valved conduit tissue grafts for adults (>22 mm): an ultrasonographic study of jugular vein diameters of horses and cattle Background Natural heterologous valved conduits with a diameter greater than 22 mm that can be used for right ventricular outflow tract reconstruction in adults are not commercially available. The purpose of this study was to measure by ultrasonography the maximum diameter of the distended jugular veins of horses and cattle, respectively, to identify a population of animals that would be suitable for post-mortem collection of jugular veins at sizes greater than 22 mm. Methods The study population included 60 Warmblood horses, 25 Freiberger horses, 20 Brown Swiss cows, and 20 Holstein cows (including 10 Holstein and 10 Red Holstein). The maximum cross-sectional diameter of the distended jugular veins was measured at a location half-way between the mandibular angle and the thoracic inlet. The thoracic circumference (heart girth length) was used as a surrogate of body size. The jugular vein diameters of the different populations were compared by analysis of variance and the association between heart girth length and jugular vein diameter was determined in each of the four study populations by linear regression analysis. Results There was considerable individual variation of jugular vein diameters within each of the four study populations. There was no statistically significant relationship between thoracic circumference and jugular vein diameter in any of the populations. The jugular vein diameters of Brown Swiss cows were significantly larger than those of any of the other populations. Warmblood horses had significantly larger jugular vein diameters compared to Freiberger horses. Conclusion The results of this study suggest that the production of bovine or equine xenografts with diameters of greater than 22 mm would be feasible. Differences between species and breeds need to be considered. However, prediction of the jugular vein diameter based on breed and heart girth length in an individual animal is inaccurate. Background Valved conduit tissue grafts are commonly used for right ventricular outflow tract (RVOT) reconstruction in the repair of complex congenital heart defects and for pulmo-nary valve replacement during the Ross procedure. However, despite intensive experimental and clinical research, the ideal valved conduit has yet to be developed. The availability of suitable pulmonary homografts is limited, especially for urgent procedures. Commonly used xenografts, including porcine aortic valves and valves constructed from bovine pericardium [1], require structural manipulation and lack durability. [2,3] Natural heterologous valved conduits are commonly used as an alternative to homografts and other xenografts, but mid-term outcomes following RVOT reconstruction may be complicated by supravalvular stenosis, excessive intimal peel formation, and severe perigraft scarring. [4][5][6][7] Furthermore, current heterologous valved conduits are only available at sizes up to 22 mm diameter, limiting their use to children and young adolescents. Natural heterologous valved conduits with a diameter of greater than 22 mm that could be used for adults are not commercially available to date. To our knowledge, jugular vein diameters in horses and cattle have not been reported so far. 
The goal of this study was to measure the maximum diameter of the distended jugular veins by means of ultrasonography in horses and cattle, respectively, and to relate the jugular vein diameters to animal size and breed. The data collected in this study would then allow choosing the animal population that would be most suitable for post-mortem collection of jugular veins at sizes greater than 22 mm. Echocardiographic examinations All examinations were performed in standing, nonsedated animals gently restrained by an experienced animal handler. A digital ultrasound system (SonoSite Micro-Maxx, Siemens Schweiz AG, Zurich, Switzerland) with a multi-frequency linear transducer working at 10 MHz was used. The skin and the hair were wiped off with ketonized ethanol (80%) and coupling gel was applied to ensure adequate contact with the transducer. The maximum cross-sectional diameter of the external jugular vein was measured after 10 to 15 seconds of manual occlusion of the vein at the thoracic inlet. The left and the right jugular vein, respectively, were scanned at a location half-way between the mandibular angle and the thoracic inlet. The measurements were performed parallel to the ultrasound beam at the widest diameter, bisecting the vein into two equal parts. Because the examinations were performed under field conditions and scales were not available, the thoracic circumference (heart girth length, in cm) was used as a surrogate of body size. [8,9] It was measured using a tape measure that was placed around the thorax, immediately behind the olecranon and behind the withers. The measurements were performed at end-expiration, with the animals standing square. Data analysis and statistics Graphical and statistical analyses were performed using commercial computer software (GraphPad Prism v5.01 for Windows, GraphPad Software, San Diego California USA, http://www.graphpad.com). For data analyses, measurements of the left and right jugular vein of each animal were averaged. Summary statistics were performed. Jugular vein diameters of the different populations were compared using a one-way analysis of variance with Tukey's post-hoc test; the 95% confidence intervals of the differences between populations were reported. Linear regression analysis was performed to determine the association between thoracic circumference and jugular vein diameter in each of the four study populations. The level of significance was p < 0.05. Results The summary statistics and the results of the linear regression analyses are listed in Table 1 and displayed in Figure 1. There was no statistically significant relationship between thoracic circumference and jugular vein diameter in any of the populations, although there was a trend of a positive relationship in Brown Swiss cows. Data analysis showed that there was considerable individual variation within each of the study populations (as indicated by the SD reported in Table 1 and the 95% prediction bands displayed in Figure 1). Table 2 summarizes the results of the comparisons of jugular vein diameters between the four study populations. The jugular vein diameters of Brown Swiss cows were significantly larger than those of any of the other populations (i.e., Holstein cows, Warmblood horses, and Freiberger horses). Warmblood horses had significantly larger jugular vein diameters compared to Freiberger horses. 
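A minimal sketch of the analysis pipeline described in the Methods is given below (per-animal averaging of left and right veins is omitted): linear regression of jugular vein diameter on heart girth within one population, and a one-way ANOVA with Tukey's post-hoc test across the four populations. All numbers are invented placeholders, not the study's measurements.

```python
# Sketch of the statistical workflow described above, using invented placeholder
# values (the study's raw measurements are not reproduced here).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Placeholder per-animal jugular vein diameters (cm)
brown_swiss = rng.normal(2.7, 0.25, 20)
holstein = rng.normal(2.3, 0.25, 20)
warmblood = rng.normal(2.2, 0.20, 60)
freiberger = rng.normal(2.0, 0.20, 25)

# Linear regression: vein diameter vs. heart girth (cm) within one population
girth_bs = rng.normal(195, 8, 20)                # placeholder girths
res = stats.linregress(girth_bs, brown_swiss)
print(f"slope = {res.slope:.4f} cm/cm, r = {res.rvalue:.2f}, p = {res.pvalue:.3f}")

# One-way ANOVA and Tukey post-hoc across the four populations
f_stat, p_val = stats.f_oneway(brown_swiss, holstein, warmblood, freiberger)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")
diam = np.concatenate([brown_swiss, holstein, warmblood, freiberger])
pop = np.repeat(["Brown Swiss", "Holstein", "Warmblood", "Freiberger"],
                [20, 20, 60, 25])
print(pairwise_tukeyhsd(diam, pop, alpha=0.05))
```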
Discussion The results of this investigation provide information on the in vivo diameter of the distended jugular veins, determined by ultrasonography, in four different equine and bovine populations. The jugular vein diameters of Brown Swiss cows were in agreement with a previous investigation, in which the diameter of distended jugular veins, determined by ultrasonography, was found to be 2.4 ± 0.23 cm. [10] To our knowledge, jugular vein diameters of other bovine breeds and of horses have not been reported to date. Based on the results of this study, venous diameters of up to 2.4 cm can be expected in a population of average-sized Warmblood and Freiberger horses, with slightly larger veins found in Warmblood horses. At the time of the investigation, very large horses (i.e., estimated body weight above 600 kg) were not available for examination. While direct extrapolation of these findings to a population of larger horses is not possible, diameters greater than 2.4 cm may well be found in draft breed horses. In this study, Brown Swiss cows had the largest jugular vein diameters, exceeding those of Holstein cows and horses, respectively. Jugular veins with a diameter greater than 3 cm can be readily found in Brown Swiss cows. The use of the heart girth length as a surrogate of body weight may be considered a limitation of this study. However, heart girth length has been shown to correlate fairly well with body weight in horses and cattle. [8,9] Furthermore, weight estimation using body measurements is a simple, practical approach that can be easily used under field conditions, when scales are not readily available. Conclusion In conclusion, the range of jugular vein diameters found in this study suggests that the production of bovine or equine xenografts with diameters of greater than 22 mm would be feasible. Differences between species and breeds need to be considered. However, within each population (i.e., species and breed), there was no significant relationship between jugular vein diameter and body size estimated by girth length, and the range of jugular vein diameters varied considerably. Therefore, prediction of the jugular vein diameter in an individual animal based on breed and girth length is inaccurate.
Integral Characteristic of Complex Catalytic Reaction Accompanied by Deactivation: New theoretical relationships for a complex catalytic reaction accompanied by deactivation are obtained, using as an example the two-step catalytic mechanism (Temkin–Boudart mechanism) with irreversible reactions and irreversible deactivation. In the domain of small concentrations, A_lim = N_S k_1 C_A / k_d, where A_lim is the limit of the integral consumption of the gas substance, N_S is the number of active sites per unit of catalyst surface, and k_1 and k_d are kinetic coefficients which relate to two reactions which compete for the free active site Z; C_A is the gas concentration. One reaction belongs to the catalytic cycle. The other reaction, with kinetic coefficient k_d, is irreversible deactivation. The catalyst lifetime is τ_cat = (1/C′_Z)(1/k_d), where C′_Z is the dimensionless steady-state concentration of free active sites. The main conclusion was formulated as follows: the catalyst lifetime can be enhanced by decreasing the steady-state (quasi-steady-state) concentration of free active sites. In some domains of parameters, it can also be achieved by increasing the steady-state (quasi-steady-state) reaction rate of the fresh catalyst. We can express this conclusion as follows: under some conditions, an elevated fresh catalyst activity protects the catalyst from deactivation. These theoretical results are illustrated with the use of computer simulations. Introduction Catalyst deactivation is a complex, non-steady-state process governed by a variety of phenomena that are influenced by many physicochemical factors. In the literature, different types of kinetic models of catalytic reactions with deactivation have been proposed, i.e., phenomenological models, detailed kinetic models, and semi-phenomenological models. Phenomenological Models In phenomenological models of 'gas-solid' catalytic reactions, the main characteristic of the catalytic process, the reaction rate (r), is presented as a function of the concentrations of the reactants (C = C_1, C_2, ...), the temperature (T), and the catalyst activity (a), i.e., r = r(C, T, a). The catalyst activity a is considered a function of the reaction conditions, here C and T, and its change can be called 'catalyst deactivation'. The first kinetic phenomenological model was formulated by Szépe and Levenspiel [1]. In this model, R_0 is the reaction rate over the non-deactivated ('fresh') catalyst and d is an empirical parameter. These models are called separable because the reaction kinetics and deactivation kinetics are assumed to be separate; see papers by Corella et al. [2,3] as well. Froment and Bischoff [4,5] introduced the activity parameter (that is, the ratio of the reaction rate constant of the deactivated catalyst, k, to that of the 'fresh' catalyst, k_0) and considered it as a function of coke concentration. They proposed three functions of the coke concentration, among them the exponential form Φ_2 = exp(−γC_c) (6) and a hyperbolic form. Later, these relationships were used in many studies to describe the coking and deactivation of catalysts in processes such as dehydrogenation and cracking. In these processes, the activity changes rapidly; therefore, in this case the quasi-steady-state assumption does not make the task easier. Beeckman, Marin, and Froment [6,7] developed the probabilistic model of catalyst coking, which implies coke deposition on the active and coked surface.
In this model, the catalyst activity is the product of two probabilities: where S is the probability that an active site is not covered with coke; P is the probability that an active site is not locked as a result of pore blockage. In reactors with the moving and fluidized bed, the coke concentration measurement becomes as accessible as the reactant concentration measurement (i.e., conversion and activity calculation). Therefore, it becomes important to express activity via coke concentration. Such dependencies were derived by Ostrovskii based on the multilayer mechanism of coke formations. Two equations were obtained, corresponding to infinite coke formation and a finite number of layers: where C m is the monolayer coke concentration; ϕ is the ratio of rate constants of poly-and monolayer coking; N is the number of coke layers. Detailed Kinetic Models In detailed models (micro-kinetic or mechanistic models), the model is based on the mechanism, i.e., the set of steps that include reactants and products of the overall reactions as well as catalytic intermediates. A detailed kinetic model was presented in [8] with a description of two different periods of irreversible deactivation. However, even presently, 50 years later, the number of papers with detailed kinetic models of catalyst deactivation is limited, since the information on the evolution of the surface composition is in short supply. Semi-Phenomenological Models In the catalytic literature, many models have been presented combining phenomenology with some mechanistic considerations of deactivation, e.g., power-law kinetic dependencies and Langmuir-Hinshelwood-Hougen-Watson relationships based on the concept of adsorption equilibrium, see Butt [9] and Bartholomew [10]. In our paper [11], such models are called semi-phenomenological. In 1989, Ostrovskii and Yablonskii [12] proposed the semi-phenomenological model of single-route catalytic reactions assuming two types of catalyst deactivation, i.e., reversible and irreversible ('aging'). In deriving this model, the known principle of quasi-steady-state (QSS) concentrations was used to obtain the concentration of the catalytic intermediate, which deactivates during the process. This concentration was presented as a function of the QSS reaction rate and other kinetic parameters. Later, Ostrovskii developed this approach further in the monograph [13] and paper [14]. In our paper [11], the same approach was presented for the rigorous derivation of the kinetic equation of the n-step single-route catalytic reaction accompanied by two processes of catalytic deactivation, one reversible and the other irreversible. Here the term "reversible deactivation" refers to the fact that the deactivation step includes two reactions, a forward, and a reverse one. This catalytic process was described by the three-building-block scheme ( Figure 1). There we considered a linear mechanism for the catalytic cycle, i.e., only one 'molecule' of the catalytic intermediate, including the active center, participates in each reaction. Figure 1. The three-building-block scheme approach to phenomenological modeling for a linear catalytic reaction accompanied by linear catalyst deactivation. Block one is a n-step linear catalytic reaction. Block two is a linear reversible catalyst deactivation. Block three is aging, i.e., linear irreversible catalyst deactivation. 
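Returning briefly to the separable phenomenological models above, the sketch below evaluates activity-versus-coke relations of the kind proposed by Froment and Bischoff. Only the exponential form survived in the text, so the linear and hyperbolic expressions written here are the commonly used textbook forms and should be read as assumptions, not as quotations of the paper's numbered equations.

```python
# Small sketch of separable activity-vs-coke relations. The linear and
# hyperbolic forms below are assumed textbook expressions; only the
# exponential form is quoted from the text above.
import numpy as np

gamma = 0.05                          # empirical deactivation constant (assumed)
C_c = np.linspace(0.0, 40.0, 9)       # coke content, e.g. mg coke per g catalyst

phi_linear = np.clip(1.0 - gamma * C_c, 0.0, None)
phi_exponential = np.exp(-gamma * C_c)
phi_hyperbolic = 1.0 / (1.0 + gamma * C_c)

for c, a1, a2, a3 in zip(C_c, phi_linear, phi_exponential, phi_hyperbolic):
    print(f"C_c = {c:5.1f}  linear = {a1:.3f}  exp = {a2:.3f}  hyperbolic = {a3:.3f}")
```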
Applying the new, more convenient form of the rate equation for the single-route catalytic reaction with the linear mechanism [15], resulted in the three-factor kinetic equation of deactivation [11] for this complex process. where N 0 S is the initial number of active sites per unit of catalyst surface; R fresh is the 'fresh' rate of the main catalyst cycle; αR fresh = C Z,fresh is the 'fresh' concentration of free catalyst active site Z and where α is a special parameter described in [11]; K d is the equilibrium constant of the reversible deactivation reaction; and k i is the rate constant of the irreversible deactivation reaction. This deactivation equation with some simplifications will be used in this paper for different purposes. As mentioned, the quasi-steady-state hypothesis (QSSH) was applied as a tool for the derivation of this deactivation equation. The status of the QSSH in our problems should be discussed in more detail. This model is an advantageous simplification of the mechanistic model. It is based on the idea of a cyclic catalytic mechanism, and its constituents are the concept of active sites and assumptions on fast and slow parameters considering the quasi-steady-state principle. In comparison with the recent all-component-model, e.g., Cordero-Lanzac et al. [16], and detailed multi-step and multi-route models [17,18] see also Cordero-Lanzac's dissertation, the semi-phenomenological model used in this paper has obvious advantages: 1. the number of parameters is much smaller; 2. this model allows the derivation of interesting analytical results, as will be demonstrated in this paper; 3. potentially, this simpler model can be useful for the design of catalytic reactors with deactivation and optimization of industrial regimes. One of the main ideas of chemical kinetics is a hierarchy, i.e., the time scale separation, based on the large difference in magnitude of the parameters of the kinetic model. This hierarchy determines a variety of different cases and regimes, e.g., quasi-equilibrium (QE), quasi-steady-state (QSS), limiting step (LS), assumptions on most abundant reactive intermediates (mari) or surface intermediates (masi), and, finally, lumping. The quasi-steady-state (QSS) approximation is the central one among all these simplifications. The QSS-principle regarding kinetic intermediates of a complex chemical reaction is typically attributed to Bodenstein [19] and sometimes to Chapman [20,21] as well, see the historical information [22,23]. It was based on the idea of fast intermediates, i.e., the kinetic parameters related to some intermediates are much larger than the kinetic parameters related to stable molecules. In the pioneering paper by Michaelis-Menten [24], two hierarchies were considered: 1. a large difference in kinetic parameters 2. a large difference between the total amounts of main reactants and the total amount of intermediates. For the 'gas-solid' catalytic reaction, the latter corresponds to the case when the total number of active catalytic centers is much smaller than the total number of reactant and product molecules, see [22] (Chapter 3). Gorban and Shahzad [25] theoretically revisited and generalized the Michaelis-Menten approach. It was shown that, rigorously speaking, the Michaelis-Menten kinetics, as we refer to it presently, should be attributed to Briggs and Haldane [26]. 
In accordance with the QSS method, the derivatives of the chemical intermediates are replaced by 'zeros', and the corresponding differential equations transform to algebraic ones. This 'trick' became an extremely popular tool in the theoretical study of complex chemical reactions, both homogeneous and heterogeneous. However, for a period of 50-plus years after the time of Bodenstein and Michaelis-Menten, the mathematical status of the QSS method was very unclear, there was no understanding of why the derivatives of 'fast' intermediates are replaced by zero. Only starting from the 1950s, a rigorous mathematical concept for QSS was created based on the theory of singularly perturbed ordinary differential equations (ODEs). This theory was developed by Tikhonov and his colleagues [27][28][29][30], and the central point of this theory was the concept of a so-called 'small parameter'. In 1955 Sayasov and Vasil'eva published the first pioneering paper [28] on the mathematical status of the QSS using a radical gas chain reaction with fast intermediates as an example. The small parameter was chosen as the ratio of kinetic parameters. A similar point of view was expressed in 1963 by Bowen, Acrivos, and Oppenheim [31]. In 1963, Heineken, Tsuchiya, and Aris (HTA) published a paper [32] on the mathematical status of the QSS for the Michaelis-Menten two-step mechanism. The small parameter used by HTA was the ratio of two numbers, the number of enzyme active sites and the number of substrate molecules. A similar small parameter, i.e., the ratio of the total amount of surface intermediates, mole, to the total amount of reacting components, mole, was used in monographs [22,23] for obtaining general results in catalytic kinetics (see also the early monograph [33]). Goal of the Paper: General Problems and Specific Problems of This Paper The goal of this paper is to present the integral characteristic A (mol cm −2 cat ) of complex 'gas-solid' catalytic reactions accompanied by deactivation, i.e., where R(t) is the consumption rate of the gas reactant, or the rate of release of the gas product. It will be presented for some regimes as an analytical expression and will be illustrated using computational results. Our goal will also be to estimate the catalyst lifetime by relating the value of A to the value of the quasi-steady-state (QSS) rate of the 'fresh' (non-deactivated) catalyst. Our model of a complex catalytic reaction accompanied by deactivation includes two small parameters 1. the small parameter which is caused by the difference between the number of catalyst active sites and the number of gaseous molecules ("the first small parameter") 2. the small parameter caused by the difference between the deactivation parameters and kinetic coefficients of the main catalytic cycle ("the second small parameter") In this paper, we are going to start the systematic application of our three-factor kinetic equation proposed in [11] to different problems of catalytic kinetics. The program of our studies will include different cases and scenarios. It is reasonable to expect that the results of our studies will depend on the type of the kinetic device and its kinetic model, on the analyzed chemical mechanisms and corresponding models, and the conditions of the process reversibility, i.e., whether deactivation is reversible or irreversible, whether the chemical catalytic cycle is reversible or irreversible etc. Generally, we are planning to analyze the following cases: 1. 
Kinetic models of the batch reactor (BR) and continuously stirred tank reactor (CSTR). 2. Kinetic models of typical heterogeneous catalytic mechanisms: The n-step single-route complex catalytic reaction with a linear mechanism. 3. Models with reversible and irreversible steps in the catalytic cycle. 4. Models with reversible and irreversible deactivation process. In this paper, we will study the kinetic behavior of a batch reactor in which catalytic reactions are accompanied by deactivation. As an example the simplest two-step catalytic mechanism (Temkin-Boudart mechanism) is chosen with irreversible deactivation. It is the simplest mechanism of the ones mentioned above. Different scenarios of the transient interplay between the main cycle relaxation and deactivation dynamics will be described, and different temporal and parametric domains will be distinguished: The initial non-steady-state kinetic regime caused by the intrinsic catalytic cycle. 2. The quasi-steady-state regime regarding the catalytic intermediates with insignificant deactivation ('no deactivation' regime). This regime is caused by the difference between the number of catalyst active sites and the number of gaseous molecules ("the first small parameter"). 3. The quasi-steady-state regime regarding the intermediates in which the deactivation process is significant. Within this domain, the total number of active sites is decreased, and the quasi-steady-state regime becomes more pronounced. Under concrete values of parameters, some domains can be negligible. Different questions will be answered: 1. In which domain will the catalyst composition be nearly constant, i.e., despite the change in the number of active sites the relative concentrations of catalytic intermediates are remaining approximately the same? 2. How to analyze the long-term behavior of the catalytic system with deactivation based on its integral characteristic? 3. What is the best strategy for the increase of catalytic efficiency based on the kinetic description? Theoretical Analysis Our strategy is to first analyze the full model, i.e., the two-step irreversible catalytic cycle with irreversible deactivation, and then to analyze the three reduced models, two of which have "no" deactivation and are known in the literature, i.e., • Only the main catalytic cycle model; the non-steady-state case. • Only the main catalytic cycle model; the quasi-steady-state case. In our opinion, it is necessary to study these models to build a strong foundation and systematic framework on which to continue the analysis of more complex cases related to deactivation. We expect these simple models can be used as asymptotics for the more complex ones. This content will also help us to understand the final results for both the mathematical and chemical engineering communities. The Full Model of the Two-Step Irreversible Catalytic Cycle with Irreversible Deactivation We use the two-step catalytic mechanism of an isomerization reaction with irreversible steps, The rate equations corresponding to these reactions are, and where r 1 , r 2 and r d are rate equations in mol cm −2 cat s −1 ; k 1 is a reaction rate constant in cm 3 gas mol −1 s −1 ; k 2 and k d are reaction rate constants in s −1 ; C A is the concentration of gas reactant A in mol cm −3 gas ; and C Z and C AZ are concentrations of active catalyst sites, also referred to as catalyst intermediates, in mol cm −2 cat . 
The kinetic equations for the gas reactant and product are, where C B is the concentration of the gas product and S cat V gas is a factor consisting of ratio of the catalyst surface area S cat (cm 2 cat ) to the gas volume (cm 3 gas ). Equation (19) is derived from the law of mass conservation for element A where N V is the total concentration of gas (mol cm −3 gas ). The kinetic equations for the catalytic intermediates are, where N S is the total concentration of active catalyst sites in mol cm −2 cat . Equation (22) is derived from the law of mass conservation for the catalyst sites. In our analysis, we wish to differentiate between slow and fast behavior. To do this, we must first identify a small parameter. In some cases, one small parameter is enough. However, we will highlight two different small parameters for this specific work. The first small parameter, ε 1 , is defined as the ratio of the number of catalyst sites to the number of gas molecules. At time zero, the number of catalyst sites is equal to the number of active catalyst sites. If there is deactivation, this equality will not hold anymore. This is why we use the initial ('fresh') number of active sites in our definition. Small Parameter 1. The ratio of the total number of 'fresh' active sites to the total number of gas molecules, The parameters S cat , V gas , N 0 S and N V are the catalyst surface (cm 2 cat ), gas volume (cm 3 gas ), the 'fresh' concentration of active sites, i.e., the concentration of active sites (mol cm −2 cat ) at time zero, N 0 S = N S (0) and the total concentration of gas (mol cm −3 gas ), respectively. The second small parameter, ε 2 , is defined as the ratio of the deactivation parameter to a kinetic rate coefficient of the main catalytic cycle. The deactivation rate constant k d is assumed to be much smaller than the rate constants of the catalytic cycle k 1 N V and k 2 , i.e., k d k 1 N V , k 2 . Our second small parameter can be interpreted as a dimensionless deactivation rate constant. We achieve this by either scaling with respect to k 1 N V or k 2 . We chose k 1 N V here because the term reappears in several of the system equations. Small Parameter 2. The dimensionless deactivation rate constant, The parameters k d , k 1 and N V are the deactivation rate constant (s −1 ), the reaction rate constant of reaction 1 (cm 3 gas mol −1 s −1 ) and the total concentration of gas (mol cm −3 gas ), respectively. Implementing the small parameter into the kinetic equations for the reactant, product, and catalytic intermediates, we rewrite the equations for the full system (18)- (22). It now includes the two small parameters, ε 1 and ε 2 , Mathematically, there is a benefit to working with dimensionless variables and parameters, as this can simplify the equations significantly. We therefore introduce the Physically, there is a benefit to working with real dimensional variables and parameters, as this helps in physical interpretation. In this case, we use dimensional time. The set of equations for the system of dimensionless concentrations is given by, We will continue to study this set of Equations (28)- (32). This mathematical model of the whole process is a non-linear system containing three differential equations for the dimensionless concentrations, of the gas reactant A (28) The Initial Non-Steady-State Domain; Gas Concentration Is Abundant, and Deactivation Is Negligible Above we have introduced the full system of equations that will be studied further. 
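Before moving to the asymptotic domains, the sketch below integrates the full dimensional model as laid out in Equations (18)-(22), assuming the deactivation step consumes the free site Z (r_d = k_d C_Z, the reaction that competes with step 1 for Z). All parameter values are illustrative only; they are chosen so that k_d << k_1 C_A, k_2 and so that the number of active sites is small compared to the number of gas molecules, which makes the fast intermediate relaxation and the slow deactivation both visible.

```python
# Sketch of the full two-step cycle with irreversible deactivation of the free
# site Z, following Eqs. (18)-(22). Parameter values are illustrative only,
# and the deactivation rate is taken as r_d = k_d * C_Z (the step that competes
# with reaction 1 for the free active site).
import numpy as np
from scipy.integrate import solve_ivp

k1 = 1.0e3       # cm^3 mol^-1 s^-1
k2 = 10.0        # s^-1
kd = 1.0e-2      # s^-1  (slow deactivation: kd << k1*C_A, k2)
S_over_V = 1.0   # cm^2 catalyst per cm^3 gas
CA0 = 1.0e-2     # mol cm^-3 (gas reactant)
NS0 = 1.0e-6     # mol cm^-2 ('fresh' sites; eps1 = S/V * NS0 / CA0 ~ 1e-4)

def rhs(t, y):
    CA, CZ, CAZ, CB = y
    r1 = k1 * CA * CZ
    r2 = k2 * CAZ
    rd = kd * CZ
    return [-S_over_V * r1,      # gas reactant A
            -r1 + r2 - rd,       # free sites Z
             r1 - r2,            # adsorbed intermediate AZ
             S_over_V * r2]      # gas product B

sol = solve_ivp(rhs, (0.0, 2000.0), [CA0, NS0, 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-14, dense_output=True)

for t in (0.1, 1.0, 10.0, 100.0, 1000.0):
    CA, CZ, CAZ, CB = sol.sol(t)
    print(f"t = {t:8.1f} s   N_S = {CZ + CAZ:.3e}"
          f"   C_AZ/(C_Z + C_AZ) = {CAZ / (CZ + CAZ):.3f}   C_B = {CB:.3e}")
```

The printout shows the behaviour described in the text: the ratio of intermediate concentrations settles within a fraction of a second (the non-steady-state domain), while the total number of active sites decays on the much longer deactivation time scale.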
We will first start with the implication of the two small parameters. If the small parameters are close to zero, i.e., ε 1 , ε 2 ≈ 0 (or in the case of no deactivation ε 1 ≈ 0 and ε 2 = 0), then for small t we find that Equations (28) and (31) are approximately zero. If the change in active sites is approximately zero, this means there is almost no deactivation. On a certain domain where t is small we may claim there to be no deactivation. For the change in gas reactant A to be small, it has to be small compared to its absolute value. The change in A is insignificant because it is in abundance. In this domain, for the full model, the changes in both substances (reactant A and active sites N) happen so slowly they cannot be observed, and thus are deemed insignificant. The fast behavior is dominating the slow one. Resulting in the following exact solution for the approximate model in the fast domain, The fast domain, also referred to as the non-steady-state (NSS) domain, is the time frame in which the above solutions are good estimates for the exact solutions. It is governed by the main catalytic cycle with "no" deactivation and under the assumption that the gas concentration is abundant. The Quasi-Steady-State Domain; Deactivation Is Absent When we go outside the fast or non-steady-state (NSS) domain, the same assumptions don't hold as in the previous section. Here we need to differentiate whether deactivation is truly absent or not. We first look at the case where deactivation is absent, i.e., ε 2 = 0. In this slow domain, we introduce the new time τ = ε 1 t. The main result of introducing the time τ with small parameter ε 1 is the following equations for the catalytic intermediates, Since ε 1 ≈ 0, we can replace Equation (44) with the algebraic equation, This is our quasi-steady-state (QSS) assumption, and we identify this domain as the QSS domain. The QSS intermediate concentrations are calculated as, If we substitute the values of C A and N S with their respective 'fresh' values C A, f resh = C A (0) and N S, f resh = N S (0) these solutions are equivalent to the steady-state (SS) values of Equation (41) and Equation (42) respectively. At the beginning of the QSS domain, the concentration C A is changing insignificantly and the relative ratio composition of the active catalyst sites, appears constant. However, as time increases the change in C A will increase, and thus the ratio will not remain constant. As we will see in later sections this is not the case when deactivation is present. Introduction to the Lambert W function To recap, in the absence of deactivation, we have the following set of equations for the dimensionless concentrations, There are two nonlinear differential equations, (50) and (52), and two linear algebraic equations, (51) and (53). Making use of the small parameter and the quasi-steady-state (QSS) assumption, see Equations (46)-(48), this set of equations reduce to one nonlinear differential equation, (56), and three algebraic equations, (54), (55) and (57). This specific set of equations is well-known in the literature. However, to our knowledge, an exact analytical expression for the concentration of A has yet to be presented. To this end, we would like to introduce the Lambert W function [40], which calculates the converse (inverse) relation of the function We now have a way to express the solution of ordinary differential Equation (56), Remember because there is no deactivation N S will be constant and N S = 1 for all time. 
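As a numerical companion to Equation (58): assuming the quasi-steady-state rate law derived above, the reactant concentration admits the familiar Lambert-W closed form used for Michaelis-Menten-type kinetics, which can be evaluated with scipy.special.lambertw. The sketch below writes it in dimensional time with illustrative parameter values; the exact dimensionless normalisation used in Equation (58) may differ.

# Hedged sketch: Lambert-W evaluation of the QSS solution (no deactivation, N_S constant).
import numpy as np
from scipy.special import lambertw

k1, k2 = 1.0e4, 1.0                 # cm^3 mol^-1 s^-1, s^-1 (illustrative)
S_over_V, NS, CA0 = 1.0, 1.0e-6, 1.0e-4

def CA_qss(t):
    """C_A(t) from k1*(CA0 - CA) + k2*ln(CA0/CA) = (S/V)*k1*k2*NS*t, inverted via W."""
    u0 = k1 * CA0 / k2
    arg = u0 * np.exp(u0 - S_over_V * k1 * NS * t)
    return (k2 / k1) * lambertw(arg).real

print([float(CA_qss(t)) for t in (0.0, 500.0, 1000.0, 2000.0)])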
By extension of the above equation we now also have analytical expressions for the remaining concentrations, Using Equation (58), one can obtain the expression for half decay time, τ 1/2 , which is traditional in chemical kinetics. It is the time during which half of the reactant is transformed into the product. As known, for the first order reaction A k B, where k is the kinetic rate coefficient of the reaction. For our two-step mechanism (Temkin-Boudart mechanism), based on Equation (58), Clearly, the half decay time is decreased with the rise of both kinetic coefficients k 1 N V and k2. If k 2 is much bigger then k 1 N V , it is identical to the expression for the first-order reaction. In contrast, if k 1 N V is much bigger than k 2 then, This expression can be used as a rough estimate of the parameter. Based on Equation (64) it is possible to recognize the deviation of the Temkin-Boudart non-steady-state kinetic dependence from the first-order (linear) one. The Quasi-Steady-State Domain of the Cyclic Reaction Accompanied by Deactivation In this domain, applying the quasi-steady-state assumption, the kinetic model is given by: This system of equations consists out of two nonlinear differential equations, (70) and (71), and three algebraic equations, (68), (69) and (72). We can rewrite the two differential equations above into one nonlinear differential equation, The analytical solution to Equation (73) is found to be This equation, while correct, does not give much information about the progression of the dimensionless concentration of active sites N S . The three-factor kinetic equation, (11) as it is presented in [11], can be adapted for models without reversible deactivation by setting ϕ r,d = 1 and K d = 1 (Appendix A describes the conditions under which we may use the three-factor rate equation). This results in the following rate equation for this domain, First, we simplify the terms inside the exponential as follows, Now we shift our time out of the "slow" time τ domain to the "fast" time t one, which results in the final expression for the rate equation, Obviously, k d C Z,fresh is the rate of deactivation for the fresh catalyst. Therefore, the phenomenological equation can be written as, where R d = k d C Z,fresh is the rate of deactivation of the fresh catalyst. Integral Consumption The integral consumption of the reactant which we are going to obtain and analyze is expressed as, Experimentally, in the quasi-steady-state domain the dependence of A(t) can be measured by the change of concentrations in time of reactant A (∆C A ) or product B (∆C B ) multiplied by the factor ( V gas S cat ). In our case, where the change in the reactant concentration is insignificant, the product concentrations are more convenient for calculating the values of A(t). There are two extreme cases for the integral consumption equation: 1. The limit of the integral consumption as time goes to infinity, t → ∞, We find the limit of integral consumption is equal to the product of the number of active sites for the fresh catalyst multiplied by the ratio of the kinetic coefficients of the two reactions competing for the free active site Z. One reaction belongs to the catalytic cycle, The other reaction belongs to the irreversible deactivation step, Z k d X. 2. The Taylor approximation for A(t) at small values of k d t. 
If the term k d t is very small, i.e., k d t 1, the Taylor approximation of A(t) will be Combining the Integral Consumption and the Quasi-Steady-State Equation Rate Comparing the equations for the limit of the integral consumption (83) and the equation for the quasi-steady-state rate, we obtain This ratio A lim R fresh can be interpreted as the catalyst lifetime for the catalytic reaction with deactivation, If Figure 2 shows the evolution of the concentration of free active sites which first dramatically decreases, then exhibits a plateau, and finally decreases gradually to zero. Interestingly, this whole model is characterized by a temporal turning point. Left of this turning point, the whole model can easily be approximated by the NSS no-deactivation model. Regarding this point, the QSS model with deactivation (not without) is an excellent approximation of the whole model. This turning point corresponds to the 'fresh' catalyst, which is characterized by the QSS surface composition. Computations Under the values of our parameters, the concentration of reactants changes insignificantly in all temporal intervals (the case of small conversion). The concentration of product B is presented in Figure 3. As for the QSS domain, the main catalytic cycle is accompanied by deactivation from the very beginning of this domain. Figures 4 and 5 illustrate the main theoretical result of our paper: that if we increase the rate coefficient k 1 , this will result in a decrease of the QSS concentration of free active sites C Z . Consequently, the integral consumption of reactant A and its limit A lim will increase, where A lim 's increase will be proportional to the increase in the rate coefficient k 1 . Hence, the lifetime of the catalyst τ cat will increase as well. Interpretation and Discussion The concept of the integral reactant consumption for the catalytic cyclic reaction accompanied by deactivation became the basis for deriving the corresponding analytical expressions. These expressions, i.e., the integral reactant assumption and catalyst lifetime as a function of reaction parameters, are important for formulating a new general strategy in the optimization of these reactions or processes. Equations (83) and (86) propose the following recipe for intensifying the catalytic process and extending the catalyst life. The concentration of free active sites (so to say, the deactivated catalytic intermediate) must be kept as low as possible. It can be achieved by increasing the reactant concentration C A which reacts with the intermediate Z (free active site). In our case, this is reactant A. It has arisen a reasonable question: "What about the justification of this recommendation?" In the monograph "Homogeneous Catalysis with Metal Complexes: Kinetic Aspects and Mechanisms" by O.N. Temkin and P.P. Pozdeev [41], there is a special section devoted to this problem "5.3.5. Protecting active centers by catalytic process from destruction" pp. 492-493. The pioneering paper by Kagan et al. [42] was referred. The observed phenomenon is described as follows: in the reaction of hydroformylation of olefines catalyzed by Rh complexes, the high rate of transformation of the active complex, [HRh(CO 3 )], leads to the decrease in the steady-state concentration of this complex. As result, the deactivation of this complex by the cluster generation is hindered. 
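The recipe can also be checked numerically. The sketch below (illustrative parameter values, with C_A held constant because conversion is small) integrates the quasi-steady-state model with deactivation, accumulates the integral consumption A(t), and compares its plateau with the product of the fresh-site concentration and the ratio k_1 C_A / k_d of the two rate coefficients competing for Z; doubling k_1 roughly doubles the plateau, in line with the proportionality stated above.

# Hedged numerical check of the integral-consumption limit (illustrative values).
import numpy as np
from scipy.integrate import solve_ivp

k2, kd, CA = 1.0, 1.0e-3, 1.0e-4    # s^-1, s^-1, mol cm^-3; C_A treated as constant
NS0 = 1.0e-6                         # fresh active-site concentration, mol cm^-2

def integral_consumption(k1, t_end=1.0e5):
    def rhs(t, y):
        NS, A = y
        CZ = k2 * NS / (k1 * CA + k2)    # QSS free-site concentration
        R = k1 * CA * CZ                 # QSS rate of the main cycle, per unit area
        return [-kd * CZ, R]             # slow site loss (Z -> X) and accumulated consumption
    sol = solve_ivp(rhs, (0.0, t_end), [NS0, 0.0], method="LSODA", rtol=1e-10)
    return sol.y[1, -1]

for k1 in (1.0e4, 2.0e4):
    print(k1, integral_consumption(k1), NS0 * k1 * CA / kd)   # numeric plateau vs. estimate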
So, we can consider this publication as an experimental justification of the phenomenon, i.e., an enhancement of the catalyst lifetime by a decrease of the steady-state concentration of the free active sites. However, a theoretical analysis of this phenomenon had not yet been carried out, and the corresponding relationships were absent.

Conclusions and Perspectives
In this paper, for the basic catalytic two-step mechanism (Temkin-Boudart mechanism) with irreversible reactions and irreversible deactivation, two analytical results have been obtained, i.e., the expression for the integral consumption of the gas substance and the expression for the catalyst lifetime. These results became the basis for distinguishing a new phenomenon: the enhancement of the catalytic reaction with deactivation via a regime with a small steady-state concentration of free active sites. In some domain of parameters, this can be achieved by increasing the steady-state (quasi-steady-state) reaction rate of the fresh catalyst. We can express this conclusion as follows: under some conditions, elevated fresh-catalyst activity protects the catalyst from deactivation. These analytical results are illustrated by computer calculations. We consider these results as prototypes of analogous results for the similar models mentioned in the introduction of this paper, i.e., the two-step mechanism and the n-step single-route linear mechanism with reversible steps, models with reversible deactivation, and models of reactions in the CSTR. For some of these models we already have preliminary results and are going to develop them further, applying this approach to the description of experimental kinetic data with catalyst deactivation. On the other side, the idea of "protecting active centers from deactivation by the catalytic process" can be used heuristically for the intensification of the catalytic process.

Acknowledgments: We express our acknowledgements to Oleg N. Temkin (Moscow) for extremely helpful information.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations: The following abbreviations are used in this manuscript: QSS, quasi-steady state; NSS, non-steady state; SS, steady state.

Appendix A. Epsilon Analysis
In this section we give the mathematical implications of having small parameters ε. For this analysis we stick to the quasi-steady-state domain of the system. As derived in Section 3.4, the set of equations that hold in this domain is as follows. Note that if there is no deactivation, Equation (A3) will be equal to zero, and we recover the equations presented in Section 3.3.1. We have separated our analysis of the small parameters into two cases: 1. Only one small parameter is present, i.e., ε1 > 0 and ε2 = 0. As a result there is no deactivation and N_S(t) = N_S^0, or equivalently N_S(t) = 1. 2. Two small parameters are present, i.e., ε1, ε2 > 0; this case splits into Cases 2a-2c below according to the relative size of ε1 and ε2.

Appendix A.1. Case 2a: Two Small Parameters and ε1 ≫ ε2
In the following case, the hierarchy of small parameters is ε1 ≫ ε2. As such, the factor ε2/ε1 in Equation (A3) is approximately zero. Replacing Equation (A3) with (A6) will result in a set of equations equivalent to those presented in Section 3.3.1. We see that the solutions presented in Section 3.3.1 are a good approximation for this case, but only for a limited time. As C_A becomes zero, the effect of deactivation becomes prevalent again. The two reactions A + Z -> AZ and Z -> X will no longer be in competition when C_A ≈ 0.

Appendix A.2. Case 2b: There Are Two Small Parameters ε1 ≪ ε2
In the following case, the hierarchy of small parameters is ε1 ≪ ε2.
As such, the factor ε2/ε1 in Equation (A3) goes to infinity. To handle this, we introduce the following time scaling, where τ_ε = (ε2/ε1) τ = ε2 t. Now ε1/ε2 is approximately zero, and by extension dC_A/dτ_ε ≈ 0. We solve the system analytically and find that the resulting rate equation coincides with the three-factor rate Equation (11) presented in [11], specifically for the case where there is no reversible deactivation.

Appendix A.3. Case 2c: There Are Two Small Parameters ε1 ≈ ε2
In this case the system of equations retains both small parameters. Because there is no clear hierarchy in the small parameters, we are not able to set one of the derivatives to zero. Instead we introduce a new differential equation; its analytical solution can be found, but to obtain the time dependency for this case we must refer back to numerical techniques.
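One way to carry out that numerical step is sketched here in dimensional variables under the quasi-steady-state assumption for the intermediates (illustrative parameter values, chosen so that ε1 ≈ ε2; the variable names are ours): when reactant depletion and site loss proceed on comparable time scales, the coupled pair for C_A and N_S is integrated directly.

# Hedged sketch for the case with no clear hierarchy of small parameters:
# integrate the reduced (C_A, N_S) pair directly (illustrative values, eps1 ~ eps2 ~ 0.01).
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, kd = 1.0e4, 1.0, 1.0e-2
S_over_V, NV, NS0 = 1.0, 1.0e-4, 1.0e-6

def rhs(t, y):
    CA, NS = y
    CZ = k2 * NS / (k1 * CA + k2)        # QSS free-site concentration
    R = k1 * CA * CZ                     # QSS cycle rate, per unit area
    return [-S_over_V * R, -kd * CZ]     # reactant depletion and site deactivation

sol = solve_ivp(rhs, (0.0, 2.0e3), [NV, NS0], method="LSODA", rtol=1e-10)
CA_end, NS_end = sol.y[:, -1]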
Primary care physician referral patterns in Ontario, Canada: a descriptive analysis of self-reported referral data Background In many countries, the referral-consultation process faces a number of challenges from inefficiencies and rising demand, resulting in excessive wait times for many specialties. We collected referral data from a sample of family doctors across the province of Ontario, Canada as part of a larger program of research. The purpose of this study is to describe referral patterns from primary care to specialist and allied health services from the primary care perspective. Methods We conducted a prospective study of patient referral data submitted by primary care providers (PCP) from 20 clinics across Ontario between June 2014 and January 2016. Monthly referral volumes expressed as a total number of referrals to all medical and allied health professionals per month. For each referral, we also collected data on the specialty type, reason for referral, and whether the referral was for a procedure. Results PCPs submitted a median of 26 referrals per month (interquartile range 11.5 to 31.8). Of 9509 referrals eligible for analysis, 97.8% were directed to medical professionals and 2.2% to allied health professionals. 55% of medical referrals were directed to non-surgical specialties and 44.8% to surgical specialties. Medical referrals were for procedures in 30.8% of cases and non-procedural in 40.9%. Gastroenterology received the largest share (11.2%) of medical referrals, of which 62.3% were for colonoscopies. Psychology received the largest share (28.3%) of referrals to allied health professionals. Conclusion We described patterns of patient referral from primary care to specialist and allied health services for 30 PCPs in 20 clinics across Ontario. Gastroenterology received the largest share of referrals, nearly two-thirds of which were for colonoscopies. Future studies should explore the use of virtual care to help manage non-procedural referrals and examine the impact that procedural referrals have on wait times for gastroenterology. Electronic supplementary material The online version of this article (doi:10.1186/s12875-017-0654-9) contains supplementary material, which is available to authorized users. Background In many countries, referrals from primary care providers (PCPs) to specialists are a necessary step for patients to access health resources. However, the referral process faces a number of challenges from inefficiencies and rising demand, resulting in excessive wait times for many specialties [1,2]. Referral patterns from primary care to specialty care have been previously studied in several countries-including Canada, the United States, and the United Kingdom-using a range of data sources, including chart audits, surveys, health administrative databases, and electronic health records [3][4][5][6][7][8]. However, differences in local contexts, study methods, and measures of referral patterns make it difficult to compare results between studies. Variations in health system structure make comparisons a particular challenge, as patients in the United States can access secondary care directly while patients in countries with universal healthcare must often access such services through referral by their PCP [4,7,8]. While a few studies have examined referral patterns in Canada, their findings are several years old and drawn from health administrative databases, which cannot paint a complete picture of referral activities at a given clinic [3-5, 9, 10]. 
Therefore, as part of a larger program of research examining referral issues, wait times, and the use of electronic consultation (eConsult) to improve access to specialist advice, we collected referral data from a sample of family doctors across the province of Ontario [11]. This study describes referral patterns to specialty services using PCP self-reported patient referral data. To our knowledge, this is the first study to explore referral patterns using this type of raw, practice-derived data, which allows for a unique study of referrals made not only to medical specialists but also to allied health professionals. PCP referral patterns may be of interest to healthcare providers, health system administrators, and policy makers, as they reflect the everchanging supply and demand for various services and are significant drivers of healthcare costs. Knowledge of these patterns can help inform health care funding decisions and resource allocation. Design We conducted a prospective study of referral patterns from PCPs to specialist and allied health services using self-reported de-identified patient referral data from participating PCPs across Ontario collected over a 20-month period (June 2014-January 2016). Population PCPs were recruited as part of a larger cluster randomized controlled trial evaluating the impact of the Champlain BASE™ (Building Access to Specialists through eConsultation) service-a novel electronic referral-consultation process-on overall specialist referral rates. All PCPs practicing in Ontario who were not already enrolled with eConsult were eligible to participate in the study. Details of the recruitment process have been published elsewhere [11]. Participating PCPs were invited to submit monthly patient referral data on a voluntary basis as part of the trial. Setting All PCPs came from Ontario. The province has a population of 13 million people with health outcomes and demographic characteristics comparable to the rest of Canada [12]. Data collection Data was prospectively collected using a standardized referral tracking form (Additional file 1) adapted from a similar tool obtained from the American Academy of Family Physicians [13]. The form included month of referral request, type of specialty, reason for referral, and whether the referral was for a procedure. The referral tracking data were faxed or emailed to the research team on a monthly basis and entered into a database by a research assistant. Information on PCP demographics (gender, year of graduation, and medical education location) was obtained from the College of Physicians and Surgeons of Ontario (CPSO) website. Clinics completed a survey adapted from two validated Pan-Canadian Primary Health Care Provider and Practice Surveys from the Canadian Institutes for Health Information, [14] which inquired about demographic characteristics (postal code, primary setting, years in operation, number of PCPs and presence of on-site specialist services), Electronic Medical Record (EMR) use, referral method, and presence of a designated staff for scheduling/tracking referrals or liaising with specialist offices. The Rurality Index for Ontario (RIO) 2008 score was calculated using the Ontario Medical Association (OMA) RIO postal code look-up and used to categorize clinics into rural (score = 0-10), semi-urban (score = 10-40), and urban (score = 40-100) settings. Data analysis All PCPs who submitted at least 6 months of referral data were included in the analysis. Referrals that did not occur face-to-face (e.g. 
eConsults) or did not indicate a target specialty were excluded. Descriptive statistics were generated to identify the most frequently accessed services, the reason for referral, and whether referrals were procedural. As referral volumes per month did not follow a normal distribution, the number of referrals per PCP per month was reported using medians and interquartile ranges. Results A total of 9509 referrals submitted by 30 PCPs from 20 clinics were eligible for analysis ( Fig. 1). Table 1 provides descriptive characteristics of PCPs and clinics. Most PCPs were female (63%) and trained in Canada (90%). Most clinics were established urban (70%) group practices (76%) without access to on-site specialist services (71%). Of the five clinics with access to on-site specialist services, three clinics had only one medical specialist while the other two clinics had two and eight medical specialists, respectively. Specialty type was not specified. All clinics reported using EMRs to order tests and prescribe medication, and 77% indicated they used them to make referrals. However, 40% of clinics reported completing referrals using a combination of paper-based and electronic methods, and 17% referred by paper alone. Referral patterns PCPs completed a median of 26 (interquartile range 11.5 to 31.8) referrals per month. Ninety-eight percent of included referrals (n = 9297) were directed to medical professionals while only 2% (n = 212) were directed to allied health professionals. Distribution of all medical specialty referrals is shown in Fig. 2. Pediatric specialty referrals made up 2.8% (n = 261) of medical specialty referrals. Among referrals to medical specialists, 30.8% were identified as procedural, 40.9% as non-procedural, and 28.3% were unspecified (Table 2). More than half of all referrals to gastroenterology, obstetrics and gynecology, general surgery, and plastic surgery were identified as procedural. Colonoscopy made up 62.3% of all gastroenterology referrals and 24.1% of general surgery referrals. Distribution of allied health referrals is shown in Fig. 3. The top five were identified as psychology, diabetes education, physiotherapy, chiropody/podiatry, and optometry. Discussion The most frequently referred-to specialty was gastroenterology, followed closely by obstetrics and gynecology, dermatology, and general surgery. Of the specialties that received mostly procedural referrals, gastroenterology was again identified as the top specialty, with colonoscopy accounting for nearly two-thirds of gastroenterology referrals and a quarter of general surgery referrals. Not surprisingly, gastroenterology and dermatology are also among the specialties with the longest wait times in Ontario [9,10]. This information should be used to inform and plan solutions to improve wait times. To our knowledge, this is the first descriptive study of referral patterns in Canada using PCP-derived, selfreported patient referral data. Other studies of referral patterns have focused mostly on examining referral rates and on trying to understand the key factors affecting them. Their findings demonstrate substantial variability in referral rates related to physician, patient, practice, community, and healthcare system characteristics, and present a general lack of consensus regarding which type of factors account for the most of the observed variability in referral rates [3][4][5][15][16][17][18][19]. 
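For reference, the descriptive summary used in the Methods (median and interquartile range of monthly referral volumes per PCP, and RIO-based clinic strata) can be sketched as follows; the data frame, column names, and values here are hypothetical, since the study data are not public.

# Minimal sketch of the descriptive analysis (hypothetical columns and values).
import pandas as pd

referrals = pd.DataFrame({
    "pcp_id":     [1, 1, 1, 2, 2],
    "month":      ["2014-06", "2014-06", "2014-07", "2014-06", "2014-07"],
    "specialty":  ["gastroenterology", "dermatology", "general surgery",
                   "gastroenterology", "psychology"],
    "procedural": [True, False, True, True, False],
})

# Monthly referral volume per PCP, summarised as median and interquartile range
monthly = referrals.groupby(["pcp_id", "month"]).size()
median, q1, q3 = monthly.median(), monthly.quantile(0.25), monthly.quantile(0.75)

# RIO 2008 strata used to classify clinics (boundary handling assumed)
def rio_stratum(score):
    if score < 10:
        return "rural"
    return "semi-urban" if score < 40 else "urban"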
Very few of these studies reported on the distribution of referrals coming from primary care, though many aligned with our findings in terms of which specialties received a preponderance of referrals [3,9,18,19]. One study also reported gastroenterology as the most frequently referred-to specialty [19], while others cited dermatology [9,18] and general surgery [3]. These findings suggest recurring patterns, though caution must be taken when comparing studies due to variations in setting and methodology. Studies based in other countries have detected similar referral patterns [20][21][22]. An Australian study of general practices examined which specialty groups received the most referrals. Their findings mirrored ours in many respects, with several of their top ten specialties (notably orthopedic surgery, general surgery, gastroenterology, and dermatology) appearing among ours as well, albeit in a different order [20]. Their reported patterns of allied health referral were likewise similar, citing physiotherapy, psychology, diabetes education, chiropody/podiatry, and optometry [20]. When exploring reasons for referral to gastroenterology, the most commonly cited were rectal bleeding and digestive neoplasm [20]. This suggests that a high proportion of referrals to gastroenterology in Australia may also be for colonoscopy, as these presentations lend themselves to further investigation. Another general practice study out of England discussed the impact of prevention-based programs on overall wait times, suggesting that "new published guidelines on suspected cancer recognition and referral lowered referral thresholds requiring general practitioners to refer many more people with non-specific or early signs of possible cancer" [22]. The fact that procedural referrals such as colonoscopies represented such a high proportion of referrals in our study, particularly to gastroenterology, raises concerns about the impact of prevention-based programs on overall wait times. While well-intentioned, these programs may be generating an overly large volume of referrals for such procedures and thus may require specific strategies to enable timely access to colonoscopy for patients with a time-sensitive diagnosis, such as colorectal cancer. A nationwide practice audit of wait times for gastroenterology care revealed a median wait time (from referral to procedure) of 91/203 days (median/75th percentile) for Canada and 72/118 days for Ontario [3]. Furthermore, median wait times were 99/208 days for physicians who offered screening colonoscopy for average-risk patients versus 66/180 days for physicians who did not [3]. These wait times greatly exceed the 2006 benchmarks set by the Canadian Association of Gastroenterology [23] and may come with substantial costs, as digestive diseases account for 15% of health care spending in Ontario, exceeding all other disease categories [24]. While referrals to general surgery have minimized some of the procedural burden, the number of colonoscopy referrals to gastroenterology remains high and requires a targeted and more efficient management strategy. Non-procedural referrals may lend themselves to virtual care in some settings. Forty percent of referrals in our study were non-procedural and thus may have been eligible to be handled via telemedicine or eConsult services.
These services have the potential to address excessive wait times for specialist care, which are a serious issue in Canada; a recent survey by the Commonwealth Fund placed Canada last in timeliness of care among the 11 countries surveyed [25]. Prolonged waiting for specialist care can cause patients anxiety, delay important diagnoses and treatments, and lead to poorer health outcomes [2,26]. eConsult services have demonstrated effectiveness at improving access, increasing patient and provider satisfaction, and lowering costs [27,28]. However, such services are not self-implementing and require deliberate uptake by clinics and providers. Potential challenges in this regard are reflected in participating clinics' incomplete adoption of EMR-based referral systems with over one-third reporting the use of paper and electronic means to refer. The hesitance to switch to exclusively electronic referral methods stems from many factors, including provider preferences and the fact that EMRs from different vendors are unable to communicate with each other [29]. We also found that a small but sizeable number of referrals were made to allied health services, of which psychology was the most frequent. Unlike with medical specialty services, patients can access allied health services without first being referred by a PCP. At present, only one-third of patients in Ontario have access to publically-funded allied health services [30]. Patients outside of this group must rely on private insurance to cover costs or else pay for services out of pocket, putting lower income patients at risk of experiencing poorer access to care. Further work is needed to explore potential inequities in access to allied health services and whether or not they have an impact on patient health outcomes. Our study has several limitations. Our data collection strategy did not allow us to report referral rates or examine patient, provider, and clinic factors related to the observed referral patterns. Participation was voluntary and consisted of a convenience sample of PCPs interested in gaining access to the eConsult service, hence introducing a selection and possibly a response bias. Most participating clinics were in the central, eastern and western regions of the province and all had access to an EMR. This in turns limits generalizability of the results, specifically for more rural practices in northern Ontario. There was also no mechanism to verify whether the participating PCPs reported all referrals, especially to the allied health providers. As such the number of allied health referrals may actually be an underestimate. Conclusion We examined patterns of patient referral from primary care to specialist and allied health services for 30 PCPs in 20 clinics across Ontario. Future studies should explore the use of eConsults and other forms of virtual care to help manage non-procedural referrals and examine the impact that procedural referrals have on wait times for gastroenterology. A better understanding of when and why PCPs referral to allied health professionals-particularly psychologists-is needed to ensure that patients receive access to essential care regardless of their level of income. Additional file Additional file 1: Standardized tracking form used to collect referral data. We have provided a copy of the standardized tracking form that clinics used to collect and send data regarding patient referrals. 
(DOCX 45 kb) Abbreviations BASE: Building access to specialists through eConsultation; PCP: Primary care provider Acknowledgements Our thanks to the PCPs and clinics who participated in this project, and to Justin Joschko for his assistance with editing and preparing the manuscript for publication. Funding Funding for this project was provided by the Ontario Ministry of Health and Long-Term Care, The Ottawa Hospital Academic Medical Organization Innovation Fund, e-Health Ontario, and the Champlain Local Health Integration Network. The funders were not involved in the study design, data collection, data analysis, or manuscript preparation, or in the decision to publish the results. The views expressed do not necessarily reflect those of the Province of Ontario. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Authors' contributions CL and EK conceived of and designed the study, and contributed to the data analysis and drafting of the publication. SA-T and IM contributed to the data analysis and drafting of the publication. All authors have read and approved of the final submitted version of the manuscript. Ethics approval and consent to participate All participating PCPs and clinic staff provided written consent for participation in the study, including the collection of survey data. This project was approved by the Bruyère Continuing Care Research Ethics Board (Protocol #: M16-13-058) and the Ottawa Health Sciences Network Research Ethics Board (Protocol #: 20,130,674-01H). Consent for publication Not applicable.
Start-Up and Sustaining 20 Years of STEM Outreach Research and Programming: The Food, Mathematics, and Science Teaching Enhancement Resource (FoodMASTER) Initiative
Science and mathematics literacy are fundamental to the basic understanding of food and health and/or the pursuit of science-based careers. In 1999, the FoodMASTER Initiative (FMI) was created to provide an opportunity for youth to experience authentic, real-world health science activities in K-12 learning environments. FMI administrative locations have included Ohio University 1999-2005, East Carolina University 2006-2018 and Northern Illinois University 2018-current. The key programmatic elements for the FMI include: 1) curricular hands-on activities developed with teacher input, 2) free online access, 3) rigorous evaluation of program materials, and 4) robust partnerships with organizations that promote mathematics and science education. The purpose of this manuscript will be to 1) provide a rationale for the FMI programming, 2) share the curriculum and the process for developing curriculum and summarize the quantitative and qualitative findings of the 19 peer-reviewed articles, 3) discuss funding that was secured, 4) discuss strategies that lead to program sustainability, 5) discuss the mission and vision, and 6) summarize programmatic component sustainability.
Table 1. Funding Sources.
Years       FoodMASTER Projects          Funding Source
1999-2003   Concept development
2003-2004   Elementary Grades            Ohio University 1804 funds
2005-2008   Grade 3-5 Phase I            NIH Science Education Partnership Award R25RR020447
2008-2010   Grade 3-5 Phase II           NIH Science Education Partnership Award R25RR020447-04
2008-2010   FM Higher Education          USDA Higher Education Challenge Award NCE2008-38411-19041
2011-2016   Grade 6-8                    NIH Science Education Partnership Award R25RR032144-01
2017-2022   Informal Science Learning    NIH Science Education Partnership Award R25GM129216

Table 2. FoodMASTER Workbook General Topic Content (topics covered: Weights & Measures; Food Safety; Vegetables; Fruit; Milk and Cheese; Meat, Fish, Poultry; Eggs; Fats and Oil; Grains; Meal Management; Energy Balance; Food Composition; Super Tasters; Sugar; Quick Breads; Yeast Breads; mapped to chapters or activities in the Grade 3-5 Science, Grade 3-5 Math, Grade 6-8 Science, Grade 6-8 Math, and Higher Education workbooks).
Figure 1. FoodMASTER Workbooks: Grades 3-5 Science and Mathematics, Grades 6-8 Science and Mathematics, and Higher Education. Graphic Art by Cara Cairns Design.

Years       Projects              Geographic Location   Intervention Classrooms
2003-2004   Elementary Grades     Southeast Ohio        1
2005-2008   Grade 3-5 Phase I     Southeast Ohio        1
(additional rows not recoverable)

INTRODUCTION
The Committee on STEM Education of the National Science and Technology Council (NSTC) clearly articulates a need for Americans to have access to lifelong science, mathematics, technology, and engineering (STEM) learning opportunities. Unfortunately, learners often become frustrated with science knowledge acquisition because information is taught without context, making it difficult to grasp the importance of the concept (Pajares, 1992). Guiding the development of STEM learning activities with Next Generation Science Standards (NGSS) and selecting subject matter content on learner's preexisting knowledge (Saunders, 1992) in combination with strategic pathways representing cross-cutting approaches (NSTC, 2018) can serve to empower all learners towards effective, productive citizenship.
While subject matter selection, meeting content standards, and creating relevance are important factors, teaching methods and approach should also be carefully considered if educators are to impact a learner's attitude and confidence towards science learning (Pajares et al., 2014). To date, traditional lecture-style and read-write learning style approaches continue to be the default teaching style for many educators. It is easy, efficient, and works for many learners; however, it often fails to appeal to a wide variety of learning styles, inspire curiosity, and create a love of learning. Educators must be willing and prepared to provide more engaging experiences for students that involve inquiry, hands-on learning, group discussion, questioning, demonstrations, problems, drawing, projects, and other teaching methods if they are going to achieve quality outcomes (Kelly, 2000). In 1999, a partnership was created between a rural elementary school teacher and a practicing rural dietitian and university faculty in Southeast Ohio to use food as a tool to teach food, science, and mathematics. Knowing that science and mathematics literacy are fundamental to individuals' basic understanding of food and health and/or the pursuit of science-based careers, the two professionals created a STEM program called the FoodMASTER Initiative. The purpose of the FoodMASTER Initiative (FMI) was to create a science education program rooted in constructivist learning theory (Phillips et al., 2004). Food was selected as the teaching tool because 1) students have preexisting experiences with foods, 2) it is conducive to hands-on activities, and 3) concepts can be linked to course content in biology, chemistry, environmental sciences, mathematics, nutrition, and health. Additional consideration for the use of food included: 1) food is already relevant for the learner, 2) food is an easily accessible material, 3) positive science and mathematics learning experiences can be inspired with various foods, and 4) food experiences can span across the P-20 learning environment in both formal and informal learning environments. Given that the foundation of food and nutrition is rooted in mathematics and science subject matter, along with the potential to engage and inspire an individual's desire for lifelong science learning, the FMI team was compelled to examine food and nutrition education through various teaching and learning methods . The multidisciplinary team of science, nutrition, and health educators began to discuss how to create a program for mathematics and science teachers that would allow for more time in the classroom devoted to issues related to food and nutrition while maintaining the required learning objectives of the Mathematics Standards Common Core State Initiative and Next Generation Science Standards (NGSS). Instead of using a traditional health education approach, the team focused on foundational mathematics and scientific knowledge to apply basic good nutritional science principles in the classroom. The advantages of this approach would be to 1) expand the number of teachers in the school system discussing food and health sciences, 2) support the acquisition of foundation knowledge that would further enhance knowledge in science and mathematics, and 3) increase the number of content hours in nutrition science without taking time away from the core subject matter that is required, taught, and assessed with standardized testing. 
PROGRAM MISSION AND VISION The core mission of the FMI was established to address a long-standing lack of realistic sciences education opportunities for youth. The program vision is to provide teachers with authentic learning activities for classroom implementation to improve access, knowledge, and attitudes of youth towards science education (Diaz et al., 2018;Duffrin et al., 2010;Hovland et al., 2013;Roseno et al., 2014;Stage et al., 2017). FUNDING SECURED FoodMASTER began as a self-funded project partnering Ohio University with rural elementary students to learn mathematics and science using food. The project was called Kitchen Wizards and quickly demonstrated robust learning outcomes for both elementary and university students McLeod et al., 2012). The project also captured the attention of Ohio University 1804 funds committee which provided additional funding to expand the proof-ofconcept to six classrooms in Southeast Ohio. In 2003, the Kitchen Wizard concept was modified and renamed the FoodMASTER Initiative. In 2005, team FMI received a National Institutes of Health (NIH) Science Education Partnership Award (SEPA). In 2008, the FMI received project funding from the United States Department of Agriculture. NIH SEPA remains the primary funder of the FMI. KEY PROGRAMMATIC ELEMENTS The FMI encourages teacher adoption of the FMI approach by alleviating what teachers and administrators might perceive as barriers to implementation. The developers created the curricular materials to 1) address science learning standards, 2) provide proficiency type questions, 3) use affordable food, supplies, and equipment, and 4) ensure that all materials are teacher tested and revised with relevant feedback. The curriculum includes the features of reinforcing healthy food selection, encouraging reading comprehension, and providing take-home activities for children and families to implement in the home learning environment. Curriculum Development. FoodMASTER grades 3-5 was the first curricular manual developed, with twenty-four activities within 10 topic areas (see Table 2). Workbook activities introduce subject matter and promote reading comprehension through clever "Doodle bugs." "Doodle bugs" encourages reading comprehension of the pertinent content for each chapter by asking students to underline or circle key concepts. Next, students and teachers must read and implement a hands-on activity. These activities generally use food as a hands-on tool to convey a scientific and/or mathematics concept. Workbook developers created activities that require basic household cooking equipment, can be conducted in a regular classroom environment, and use food ingredients that are easily accessible. Other considerations are to provide as many edible end products as possible, especially using healthy food choices. While science and mathematics are the primary learning objectives, food and/or nutrition concepts are also integrated into each activity. The workbook encourages students to demonstrate learning through laboratory reports and discussion; however, proficiency questions are available at the end of each chapter. These questions can assist the learner in checking basic knowledge and provide an opportunity for responding to standardized questioning. The workbooks provide addition-al lessons to give students the opportunity to demonstrate thinking and learning skills in a "Try this at home" learning environment. 
The FoodMASTER grades 3-5 mathematics supplement, FoodMASTER grades 6-8 Science, FoodMAS-TER grades 6-8 Mathematics, and FoodMASTER Higher Education follow similar strategies, adjusted for grade level. While FoodMASTER resource materials apply some variation in learning objectives and approaches, the process generally includes the following: 1) initial development of the curricular material based on educators' needs and input, 2) pilot testing and implementation with formative evaluations 3) advisory panel review and revisions, 4) final implementation, and 5) data analysis after final revisions. Workbooks for each learning level are guided by food subject matter and integration of science and mathematics concepts. All workbook activities are guided by and aligned with the Next Generation Science Standards (NGSS) and Common Core Mathematics Standards. NGSS and Common Core activity alignments can be found in the corresponding educator resources. Educators can choose to use the entire workbook or select individual activities based on their students' learning needs and interest. Educator manuals, website videos, and teacher professional development activities assist educators in identifying the meaningful science and mathematics concepts within each chapter. FoodMASTER workbook materials are evaluated by educators throughout the process of development. Curriculum developers utilize appropriate grade level teachers as consultants during the first stages of development. Workbook materials are then tested in a classroom or multiple classrooms and implementation teachers provide extensive feed- back to revise materials in working towards a final product. Curriculum Design. Workbook graphic design takes place prior to final implementation. The FoodMASTER activity material are produced by a professional graphic designer. This artistic enhancement improves visual appeal of the written information and applies high quality pictures and graphics. Educators who participate in final implementation receive printed hard copies and provide additional feedback post implementation. After the final implementation feedback is assessed, minor revisions are made to the final product. Finalized materials are then posted online, maintained free of charge, and made publicly available at www.foodmaster.org. Curriculum and Material Access. FMI team members decided that all curricular materials would be available to educators free of charge. At the completion of each 5-year project, efforts have been made to ensure that curriculum and other research products are shared publicly. This effort effectively removes the barrier of cost for educators wanting to access materials. Any potential user can go to the FMI website -www. foodmaster.org and download materials for free. Currently, the website contains grades 3-5, 6-8, and higher education resource manuals (see Figure 1). The FMI also partnered with another free website at the USDA National Agricultural in the Classroom, to offer components of both the FMI grades 3-5 and 6-8 materials. The FMI team also prints and distributes free resource materials to educators through project participation and professional development opportunities. FMI book printing cost ranges from $12 to $50, depending on the size of the manual or workbook. Evaluation of Materials and Programs. Recognizing the importance of data to support the development and utilization of FMI workbook materials, educational research and evaluation is a component of all projects. 
The grades 3-5 and 6-8 science workbook projects have applied the most funding thus far to provide extensive research and evaluation. Both projects were implemented and evaluated in multiple classrooms with participating educators in Ohio and North Carolina classrooms. FMI institutional locations have determined where classrooms would be located for implementation and evaluation. Schools in desired geographic locations were approached directly with information about projects; teacher participation has been voluntary. School administration support was obtained before contacting teachers; this support was essential for subsequent voluntary teacher participation. In cases where well-matched comparison classrooms were utilized, teacher participation was also voluntary. All research projects were approved by university Institutional Review Boards and participant consents were obtained. When students were involved, parental consent was also obtained. Students in implementation classrooms were not required to participate in research to be part of program implementation. In the cases of well-matched comparison classrooms, teachers in those classrooms received all the same program support materials as the implementation teachers after the data collection period. Grades 3-5 Food on the Farm, Grades 6-8 Food and You, and FoodMASTER Higher Education have not undergone as extensive evaluation as have the grades 3-5 and 6-8 science workbook activities; however, data from the grades 3-5 and 6-8 science workbook research informed much of the development of these materials. Efforts to implement and evaluate these products are on-going. In general, FMI research projects collect both formative and summative data to evaluate outcomes and program effectiveness. Evaluation methods include process evaluation of educator efficacy (Stage et al., 2016), student knowledge (Hodges et al., 2017; Hovland et al., 2013; Roseno et al., 2014; Stage et al., 2015), student attitudes, and dietary intake assessment. Data collection techniques include observations, pre- and post-knowledge assessments and surveys, and educator interviews. Table 4 summarizes data collected during pilot studies, and Tables 5 and 6 summarize the grades 3-5 and 6-8 science workbook research. Evidence supporting the use of the FMI approach and materials is drawn from these studies.
The pilot studies (Table 4) demonstrated successful student engagement and learning. The elementary students displayed enthusiasm for the curricular content and were engaged when utilizing food as a tool to learn science. Involving college students in the program enhanced their learning as well, especially through a mentorship process. Pilot study results warranted further exploration of the use of food as a tool to engage elementary grade students in learning science. Subsequent to the first pilot studies, Kitchen Wizards was renamed FoodMASTER and further changes were made to the program content.
Grades 3-5 Studies. Of the eight studies completed with grades 3-5 (Table 5), two focused on attitudes, one focused on implementation of a mathematics lesson, four focused on knowledge gains, and one focused on nutrition teaching efficacy. Six of the eight studies utilized a pre- and post-testing data set from a year-long implementation of the finalized grades 3-5 workbook. Educators utilizing the entire grades 3-5 workbook expose students to an average of 18 hours of food-based education over the academic year. In general, students in all grades 3-5 activities were observed as engaged participants. Overall, student participants' attitudes towards science increased at the completion of activities. The knowledge domains of nutrition, science, and mathematics (tested with researcher-developed exams) indicated a collective increase in scores for all domains and a significant difference in scores between the intervention and well-matched comparison groups. Educator participants displayed gains in self-efficacy toward teaching nutrition that were significantly greater than changes observed in the educator well-matched comparison group. Grades 3-5 project dissemination generated interest in expanding project concepts at higher grade levels. FoodMASTER expanded its resources to include middle grades 6-8 with the creation of a new science workbook. This workbook followed a similar subject matter, development, and research approach as the grades 3-5 workbook.
Grades 6-8 Studies. Of the two completed studies with grades 6-8 (Table 6), one focused on nutrition knowledge exam development and one focused on process evaluation. The studies evaluated educators and students in sixteen classrooms in Eastern North Carolina over the course of a year-long implementation of the grades 6-8 workbook lessons. The nutrition knowledge exam included appropriate levels of item difficulty. Further analysis of student participant nutrition knowledge and attitudes remains in progress. For process evaluation, educators felt the program was a valuable experience for middle school students and were willing to repeat over half the chapters. Educators reported that the motivating factors for repeating activities included student enjoyment, standard alignment, ease of instructions, professional development training experience, and the provision of additional resources.
Overall, the pilot test, grades 3-5, and grades 6-8 projects have produced quality products and data to support the continued use and expansion of the FoodMASTER Initiative. Additional data and details can be found in the 19 peer-reviewed publications that are listed in the reference section of this manuscript. The reference section includes publications on preschool (Geist et al., 2011; Roseno et al., 2015) and higher education (Duffrin, 2003; Rivera et al., 2009; Willard and Duffrin, 2003) projects that were outside the focused scope of the grades 3-5 and 6-8 projects.
Outreach and Partnerships. Robust partnerships are a key component of FMI research and development projects. Teacher partnerships had been the primary focus for FMI in earlier projects, and community educator partnerships have been the focus of later projects. Both formal science learning environment partnerships with K-12 teachers and informal science learning environments with community educators are the key to programming success. Formal Science Learning Environments. Initially, partnerships with K-12 teachers and schools were the foundation for the FMI. A strong partnership with teachers was necessary to enhance the quality of the materials and to move program expansion forward. It was the proof-of-concept data and endorsement of previous FMI teacher(s) that provided FMI researchers the ability to establish new teacher partnerships. When approaching new teacher partnerships, it was important to create interest and establish trust.
The process of establishing partnerships required several one-on-one teacher meetings. To begin teacher partnerships, first teacher contacts aim to establish interest in the FMI and to provide information about participation in current projects. For teachers expressing interest, FMI researcher(s) visit the teacher at their school and spend two hours with the teacher explaining project participation in further detail. Once a teacher partnership is established, FMI researchers work with the teacher to gain school administrative approvals and set up project participation expectations. Over the course of a project, teachers receive multiple site visits from the project coordinator to support implementation and partnership building. At the conclusion of the grades 3-5 and 6-8 projects, the FMI had partnered with a total of 31 Ohio and 46 North Carolina classroom teachers. Informal Science Learning Environments. As a component of the grades 6-8 project, the FMI proposed the development of a summer science experience for middle grade youth. This summer science experience forged a new partnership opportunity. Networking within the science education community introduced the FMI team to community organizations expressing interest in utilizing FMI workbook activities for science education programming. As a result of the summer science camp experience, a robust partnership formed with a local North Carolina organization called the Love A Sea Turtle Foundation (LAST). LAST promotes environmental advocacy through youth community service and science education. A large component of LAST programming involves food and nutrition, as well as protecting food systems. LAST supports youth leadership development and provides services to underrepresented youth. The FMI and LAST partnership has served to reach thousands of youth in Eastern North Carolina with access to FMI programming, leadership opportunities, and an experience in informal science education. Initial summer day camp programming in partnership with the Love A Sea Turtle Foundation in Eastern North Carolina included 258 Boys and Girls Club members. LAST has continued to assist the FMI by reaching youth through additional partnerships with organizations that provide opportunities for underserved youth, such as the Boys and Girls Clubs, Police Athletic League, and the SPOT. Professional Development. As another component of outreach and partnerships, the FMI provides teacher and other professional development opportunities. The FMI team members have provided teacher professional development workshops to 270 elementary and middle school teachers through partnerships with the National Science Teachers Association regional meetings, Annual Environmental Health Sciences Summer Institute for K-12 Texas Educators, North Carolina Association for Biomedical Research, and the Northern Illinois University P-20 Center. The FMI team has also partnered with the School Nutrition Services Dietetics Practice Group of the Academy of Nutrition and Dietetics, introducing the FMI concept in a workshop to 58 registered dietitians attending the 2011 Food and Nutrition Conference Expo. SUSTAINABILITY The FMI team works diligently to address its biggest challenge, sustainability. One component is current project partnerships with six universities and one nonprofit 501(c)(3) organization. The goal is developing and delivering FMI concepts in science education programming.
While still a work in progress, the university partnerships aim to support mentorship-based project partnerships. These mentorships can continue to expand and establish successful programs in other communities. By the 20-year anniversary of the initiative, the team had created a significant amount of curricular materials and data to support the use of FMI materials. The FMI team has found that partnering and mentorship with formal and informal science education organizations appear to be the keys to expansion and sustainability success. Current efforts are underway to permanently integrate and establish the FMI within the structure of the Northern Illinois University P-20 Center. The FMI team believes new opportunities are rooted in educator professional development, along with continued strategic mentor-based program expansion. ASSOCIATED CONTENT More detailed acknowledgements and free materials can be found at www.foodmaster.org. The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript. ACKNOWLEDGMENTS FMI team members thank Sharon Phillips, the first partner teacher, and all the partner educators for connecting students with the FMI. Dr. Christopher Duffrin and Paula Wilson provided behind-the-scenes ideas and editing support. Program and grants management staff from the National Institute of General Medical Sciences (NIGMS) at the NIH provided valuable support and guidance. The team thanks co-authors, program partners and researchers, administrators, students, advisory panels, and program officers who have supported the program and its mission.
v3-fos-license
2021-08-08T13:31:38.279Z
2021-08-07T00:00:00.000
236948626
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcplantbiol.biomedcentral.com/track/pdf/10.1186/s12870-021-03021-6", "pdf_hash": "f84507c414447a487011d83d4f57b5bcc145026b", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43716", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "e314b2891235e8ef63945bf841a28070ac2a1bc1", "year": 2021 }
pes2o/s2orc
Metabolic activities and molecular investigations of the ameliorative impact of some growth biostimulators on chilling-stressed coriander (Coriandrum sativum L.) plant Background Priming of seed prior chilling is regarded as one of the methods to promote seeds germination, whole plant growth, and yield components. The application of biostimulants was reported as beneficial for protecting many plants from biotic or abiotic stresses. Their value was as important to be involved in improving the growth parameters of plants. Also, they were practiced in the regulation of various metabolic pathways to enhance acclimation and tolerance in coriander against chilling stress. To our knowledge, little is deciphered about the molecular mechanisms underpinning the ameliorative impact of biostimulants in the context of understanding the link and overlap between improved morphological characters, induced metabolic processes, and upregulated gene expression. In this study, the ameliorative effect(s) of potassium silicate, HA, and gamma radiation on acclimation of coriander to tolerate chilling stress was evaluated by integrating the data of growth, yield, physiological and molecular aspects. Results Plant growth, yield components, and metabolic activities were generally diminished in chilling-stressed coriander plants. On the other hand, levels of ABA and soluble sugars were increased. Alleviation treatment by humic acid, followed by silicate and gamma irradiation, has notably promoted plant growth parameters and yield components in chilling-stressed coriander plants. This improvement was concomitant with a significant increase in phytohormones, photosynthetic pigments, carbohydrate contents, antioxidants defense system, and induction of large subunit of RuBisCO enzyme production. The assembly of Toc complex subunits was maintained, and even their expression was stimulated (especially Toc75 and Toc 34) upon alleviation of the chilling stress by applied biostimulators. Collectively, humic acid was the best the element to alleviate the adverse effects of chilling stress on growth and productivity of coriander. Conclusions It could be suggested that the inducing effect of the pretreatments on hormonal balance triggered an increase in IAA + GA3/ABA hormonal ratio. This ratio could be linked and engaged with the protection of cellular metabolic activities from chilling injury against the whole plant life cycle. Therefore, it was speculated that seed priming in humic acid is a powerful technique that can benefit the chilled along with non-chilled plants and sustain the economic importance of coriander plant productivity. Supplementary Information The online version contains supplementary material available at 10.1186/s12870-021-03021-6. Background Generally, chilling has been defined as that under low atmospheric temperatures no ice formed inside plant tissues. It has been previously reported that plant species subjected to low temperature emerged as one of the serious problems. This problem was reported previously by Wang et al. [99] in tropical and subtropical plants due to a sudden change in temperature. Chilling has a serious impact on the growth and production of commercial crop plants marked as sensitive to chilling like tomato, maize, cotton, pepper, soybean, rice, and affects tropical and subtropical fruits like bananas, papayas, mangoes, grapes, and oranges [86]. Furthermore, low-temperature results in a physiological disturbance known as chilling injury. 
Various plant developmental and physiological processes (such as crop growth, cell division, photosynthesis, water transport, lipids, metabolites, and yield) are negatively affected by this injury [32,59]. Coriander (Coriandrum sativum L.) is a famous Mediterranean herb that belongs to the family Apiaceae (Umbelliferae) and is characterized by its essential oils used in the food industries. Coriander is also considered an essential ingredient in curry powder, the pharmaceutical and medicinal industry, and cosmetics. Coriander is well known for its antioxidant, anti-diabetic, antimutagenic, anti-anxiety, and antimicrobial activities, along with analgesic and hormone-balancing effects. Furthermore, coriander is famous for containing many active essential-oil compounds, primarily monoterpenes such as pinene, limonene, γ-terpinene, p-cymene, borneol, citronellol, camphor, geraniol, coriandrin, dihydrocoriandrin, coriandrons A-E, and flavonoids. These components help in removing toxic mineral residues such as mercury and lead [54]. Coriander seeds, leaves, and roots are edible, possessing a light, fresh, distinct flavor. Fresh leaves and ripe fruits are mainly used for culinary purposes. The plant leaves are a rich source of vitamins, while seeds are rich in polyphenols and essential oils [79]. The fruit contains 50% linalool, a composition used in pharmaceuticals (as a good source of α-tocopherol and vitamin A), in the cosmetic and hygienic industries, and in the food and drug industries [79]. The previously mentioned benefits prompted us to focus our study on this valuable herb, particularly the influence of the low-temperature (chilling) environmental factor on coriander productivity. Recently, regulation of metabolic pathways through the application of biostimulants, such as 2,4-dichlorophenoxyacetic acid exploited as a stimulant in mango fruits, has been practiced to enhance acclimation and tolerance in coriander subjected to chilling stress [98]. Silicon has been reported as beneficial for protecting many plants from biotic or abiotic stresses [60]. Many investigations of plants primed with silicon have recorded a greater membrane stability index under stress [56]. Potassium silicate is particularly valuable as a nutritional supplement of both silicon and potassium, which are involved in improving the morphological characters of plants [25]. Humic acid (HA) is an acid derived from soil organic matter, originating from plants, microbes, carbohydrates, proteins, and lignin. HA is the major component of humic substances and is extractable in alkaline soil media [96]. In addition, HA has a powerful impact on improving soil fertility and facilitating root uptake by regulating root function and structure under normal or abiotic stress conditions [16,96]. The chemical structure of HA enhances chelation of soil minerals and increases the acquisition of nutrients by plants [73]. Previous studies have demonstrated that HA derivatives become firmly attached to the root, aggregate on the cell wall, and solubilize quickly in the cell cytoplasm within a few hours of treatment before moving upwards to the shoot [16]. Gamma radiation is an ionizing radiation that reacts with atoms and molecules inside cells to produce free radicals. The production of free radicals depends on the irradiation dose and can cause damage or modification of components in plants, ultimately affecting the morphology, physiology, anatomy, and biochemistry of plants [7].
As a result, gamma alters photosynthesis, expansion of thylakoid membrane, accumulation of phenolic compounds, and variation of the antioxidative system [7]. It was reported that previously fertilized rice with silicon has grown better after exposure to gamma rays [61,62]. Moreover, medicinal plants subjected to 50-Gy gamma irradiation had the maximal beneficial effects on stress acclimation, improvements in germination and growth/ yield parameters, and active ingredients enhancement [6,22,89]. In addition, gamma irradiation was used for decontamination in medicinal plants [28,35]. In the same context, application of low doses of gamma radiation Gy) on chilled-primed Apium graveolence (L.) seeds, either at room temperature or at 5°C, were effective in alleviating chilling stress by stimulating celery growth and proliferation [26]. Hereby, the aim of this study was to evaluate the ameliorative effect(s) of potassium silicate, HA, and gamma radiation on acclimation of coriander to tolerate chilling stress by recording the data of growth, yield, physiological and molecular aspects. Results For sake of clarity and concise focus throughout showing the obtained results, the percentage of increase/decrease was calculated from the statistically analyzed represented data in the shown tables. The percentage of increase/decrease was calculated as an increase/decrease percentage value in accordance with the control value. This percentage value was calculated by subtracting the value of control reading from the reading value of any physiological treatment, then the result was divided by the reading of control value, and finally, the result is multiplied by 100. The experimental protocol is presented and listed in Table 1. Growth parameters When compared with non-treated coriander plants, chilling stress caused a significant inhibition in all growth parameters (shoot and root lengths, fresh and dry weights of shoot and root, number of leaves/plant, number of branches/plant, leaves area/plant, and no. of inflorescences/plant) throughout experimental duration (Table 2). Generally, all growth parameters were stimulated by soaking seeds in potassium silicate, HA or exposed to γ-rays as compared with control and chillingstressed coriander plants at the vegetative stage ( Table 2). The most effective treatment was HA alone in both control and stress alleviated samples. At the flowering stage ( Fig. S1 and Table 3), chilling stressed and alleviated coriander samples by HA treatment have recorded a significant increase in growth parameters (full length figure is attached as Fig. S1) evaluated by 29.03, 8.94, 91.6, 208.5, 216.1, 178.8, 132.3, 100, 402.9, and 80% respectively, more than chilling stressed samples ( Table 3). The same parameters were increased over their corresponding control plants by 12.5, 15.6%, 56.5, 103.9, 218.4, 68.3%, 34.4, 36.4, 82.7, 28.6% for shoot length, root length, fresh and dry weights of shoot and root, number of leaves per plant, number of branches per plant, leaves area per plant, and no. of inflorescences per plant, respectively. Yield components In comparison with non-treated coriander plants, chilling stress (6°C ± 0.5) induced significant decrease in yield components (c.a. number of fruits/plant, number of seeds/plant, weight of seeds/plant, and weight of 1000 seeds) as shown in Table 4. Among the different treatments, it has been found that, the number of fruits and seeds, seeds weight/plant, and weight of 1000 seeds were all increased. 
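The percentage increases and decreases quoted throughout these Results are all computed relative to the corresponding control reading, exactly as described above (treatment minus control, divided by control, multiplied by 100). A minimal sketch of that bookkeeping is given below; the function name and the example readings are illustrative only and are not taken from the paper's data tables.

```python
def percent_change(treatment: float, control: float) -> float:
    """Percent increase (+) or decrease (-) of a treatment reading relative
    to the control reading: (treatment - control) / control * 100."""
    return (treatment - control) / control * 100.0

# Hypothetical readings, for illustration only (not values from Tables 2-8):
control_shoot_length = 20.0      # e.g. shoot length of untreated plants (cm)
ha_primed_shoot_length = 25.8    # e.g. shoot length after HA priming (cm)

print(f"{percent_change(ha_primed_shoot_length, control_shoot_length):+.1f}%")
# prints "+29.0%"
```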
The superior treatment, in enhancing and improving fruits and seeds development within chilling and non-chilling conditions, was HA followed by silicate and γ-radiation. Hereby, HA most likely has triggered the highest ameliorative effect on fruits and seeds number/plant (Table 4). Also, pre-soaking treated coriander seeds in silicate, HA, or γ-radiation have caused improvement of the seed index as compared with control and stressed coriander plants. The best treatment that caused the highest quality and improved seeds yield was HA alone or in combination with chilling. Moreover, seeds quality was improved by 73.3 and 92.92% over those of the control and chilling-stressed plants, respectively (Table 4). Therefore, HA application was the best to alleviate the impact of chilling stress. Endogenous phytohormones Chilling stress has induced a significant decrease in the growth promoting substances (IAA and GA 3 ) levels by (Table 5). All applied treatments either separately or in combination with chilling stress have induced marked increases in both IAA and GA 3 contents. The maximum increases in IAA (104.52%) and GA 3 were obtained in chilling-stressed samples alleviated by HA as compared by other chilling-treatments (Table 5). On the other hand, ABA content was increased upon chilling stress and decreased particularly after HA subsequent treatment. Treatment by γ-radiation has led to ABA increase in control coriander. Furthermore, sole treatment by γ-radiation has led to ABA increase in coriander leaves. In addition, chilling stress has caused a marked decrease in IAA+ GA 3 /ABA ratio, while soaking coriander seeds in pot. Silicate, HA, or irradiation with γ-rays has induced a reverse pattern in this ratio as compared with chilling-stressed samples. It was found that the maximum peak of such response was obtained by alleviation of the chilling stress by HA application (Table 5). Changes in photosynthetic pigments and carbohydrates content Chilling stress caused a pronounced decrease in chl a, chl b, and consequently the total chlorophylls below those detected in control coriander leaves. All applied treatments have induced a marked increase in chl a, chl b, and total chlorophylls in stressed samples ( Table 6). The maximum alleviated impact was achieved by individual treatment of HA or HA combined with chilling when compared with chilling-stressed coriander samples. Chilling stress has induced an increase in (chl a/ chl b) ratio more than control plants. Furthermore, all applied treatments have triggered a marked increase in (chl a/chl b) ratio in relation to control. Chilling stress combined with different stimulator elements (Pot. silicate, HA, and γ-irradiation) have recorded an increase in (chl a/chl b) ratio in control and chilling stress leaves. The maximum values 2.14 were achieved by chilling plus Pot. silicate and chilling plus HA which increased by 20.90 and 12.04% more than control and chilling-stressed leaves, respectively. On the other hand, the soluble sugars were increased significantly in chilling stressed plants, particularly under the effect of HA treatment compared with control values. All applied treatments-Pot. silicate, HA, or gamma radiation-either separately or in combination with chilling stress have increased the soluble sugars content of coriander leaves as compared with untreated control plant ( Table 6). The most pronounced effect was recorded in HA application. 
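The table legends above state that treatment means were compared with the least significant difference (LSD) at the 5% level following Snedecor and Cochran [92], with means sharing a letter considered not significantly different. The paper does not spell out the computational form, so the sketch below assumes the textbook case of a balanced design with r replicates per treatment, with the mean square error (MSE) and error degrees of freedom taken from the ANOVA; treat it as an illustration rather than the authors' exact procedure.

```python
from math import sqrt
from scipy import stats

def lsd_5_percent(mse: float, r: int, df_error: int) -> float:
    """LSD = t(0.975, df_error) * sqrt(2 * MSE / r) for a balanced design."""
    t_crit = stats.t.ppf(0.975, df_error)   # two-sided 5% critical value
    return t_crit * sqrt(2.0 * mse / r)

# Illustrative numbers only: 3 replicates per treatment, hypothetical ANOVA output.
threshold = lsd_5_percent(mse=1.2, r=3, df_error=16)
# Two treatment means whose difference is below this threshold would share a letter.
print(round(threshold, 3))
```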
The latter treatment was considered as the best enhancer for soluble sugars in chilling stressed coriander by 40% higher than control samples, followed by gamma radiation and Pot. silicate application. Polysaccharide contents were decreased under chilling stress and increased in treated coriander alleviated with silicate and HA in both stressed and control coriander. However, the increase of total carbohydrates level was taken place by HA pretreatment in the control and alleviated chilling stressed coriander with silicate and HA (Table 6). It was worthy to mention that although individual gamma radiation has increased carbohydrates values over chilling stress condition, it was not the best in terms of chilling stress alleviation through carbohydrates protection and restoration compared to HA and silicate treatments ( Table 6). Gamma rays' impact on carbohydrates might be described as intermediate between HA and pot. Silicate effects. Table 4 Effect of chilling stress on coriander (Coriandrum sativum L) seeds pre-soaked in 80 mM pot. Silicate, 50 mg l − 1 humic acid or soaked in water after exposure to γ-rays (50 Gy) and the interaction of the alleviation treatments and chilling stress on the yield components. The shown data was extracted by using 3 biological and 3 technical replicates. Each biological replicate is comprised of 10 plants (one pot). To perform the biochemical analysis, the combined tissue of these ten plants (one pot content) refers to one technical replicate. The readings of the 3 technical replicates were recorded. Sample extraction was done solely for each technical replicate. The mean of the values was used to calculate ±SE. Also, the least significant differences (LSD) at 5% level were calculated to compare the means of different treatments according to Snedecor and Cochran [92]. The values with the same letter are not significantly different (P<0.05). The raw data set of the technical replicates was attached as a supplementary file Table 5 Effect of chilling stress on coriander (Coriandrum sativum L.) seeds pre-soaked in 80 mM pot. Silicate, 50 mg l −1 humic acid or soaked in water after exposure to γ-rays (50 Gy) and the interaction of the alleviation treatments and chilling stress on endogenous phytohormones (μg/100 F. wt.) at flowering stage. The shown data was extracted by using 3 biological and 3 technical replicates. Each biological replicate comprised of 10 plants (one pot). To perform the biochemical analysis, the combined tissue of these ten plants refers to one technical replicate. The readings of the 3 technical replicates were recorded. Sample extraction was done solely for each technical replicate. The mean of the values was used to calculate ±SE. Also, the least significant differences (LSD) at 5% level were calculated to compare the means of different treatments according to Snedecor and Cochran [92]. Changes in antioxidant compounds The Changes in antioxidant compounds (ascorbic acid, carotenoids, flavonoids, total Phenols, and Proline) of coriander leaves in response to pre-sowing step with chilling stress (the case of pot. Silicate or HA) or in H 2 O (after exposed to γradiation) and their interaction were shown in Table 7. Firstly, referred to control value, chilling stress caused a significant decrease in ascorbic acid contents by 46.81% below control value. In turn, pot. 
Silicate and HA (separate or in combination with chilling stress) have caused a significant increase in ascorbic acid contents as compared with non-chilling & chilling-stressed plant. The maximum value obtained from chilled plant primed in HA was increased reached 31.91 and 148% over non-chilling and chilling control plant, respectively (Table 7). Conversely, exposure of seeds to γrays caused a decrease in ascorbic acid contents as compared with control plant, but their interaction with chilling stress have induced marked increase in ascorbic acid as compared with chilling-stressed plant. The most effective treatment in alleviating adverse effect of chilling was HA. Secondly, it was notably detected that chilling in coriander caused a significant decrease in carotenoids content by 45.14% below of control plants. Pot. silicate, HA, and γradiation and their interaction with chilling stress have induced marked increments in carotenoids content over chilling-stressed plants. The most effective treatment in alleviating adverse effect of chilling was HA followed by silicate individually or in combination with chilling stress (Table 7). Thirdly, as compared with the control coriander plants, chilling stress has caused a significant increase in flavonoids content by 19.96% as shown in Table 7. While, silicate, HA, and γradiation individually caused significant increments in flavonoids content in relation to non-chilling control. The highest content was obtained by seeds soaked in HA. However, the interaction between different treatments and chilling stress induced significant increments in flavonoid contents as compared with non-chilling or chilling stressed control plants except in γradiation, which decreased flavonoids content significantly when compared to chilled control plants. Therefore, the best treatment that alleviated the harmful effect of chilling was HA then silicate, as both had increased the flavonoids content by 15.41 and 3.33% over that of chilled coriander plants. Fourthly, compared with control plants, chilling stress induced increments in total phenolic content by 17.768% over control coriander plant. Silicate, HA, and γradiation which applied individually and their interactions with chilling stress induced significant increments in total phenol contents as compared with non-chilling control plants. On the other hand, the interaction treatments decreased total phenol except silicate that caused an increase by 3.42% as compared with chilled plants. Finally, incubation of coriander seed in water (6°C ± 0.5) for 16 h has increased proline contents by 92.78% in grown leaves above the control value. Generally, silicate, HA, and gamma irradiation treatments have induced increments in proline content comparing to non-chilling control plants. However, all applied treatments Table 6 Effect of chilling stress on coriander (Coriandrum sativum L.) seeds pre-soaked in 80 mM pot. Silicate, 50 mg l −1 humic acid or soaked in water after exposure to γ-rays (50 Gy) and the interaction of the alleviation treatments and chilling stress on photosynthetic pigments (μg/g D. wt. in coriander leaves) and carbohydrate contents (g/100 g D. wt.) at flowering stage. The shown data was extracted by using 3 biological and 3 technical replicates. Each biological replicate comprised of 10 plants (one pot). To perform the biochemical analysis, the combined tissue of these ten plants refers to one technical replicate. Sample extraction was done solely for each technical replicate. 
The readings of the 3 technical replicates were recorded. The mean of the values was used to calculate ±SE. Also, the least significant differences (LSD) at 5% level were calculated to compare the means of different treatments according to Snedecor and Cochran [92]. The values with the same letter are not significantly different (P<0.05). The raw data set of the technical replicates was attached as a supplementary file decreased the proline contents below those of chilling stressed plant except in case of silicate in combination with chilling stress, which increased their content by 1.81%. Antioxidant enzymes and lipid peroxidation The changes in antioxidants enzymes activities were investigated for primed non-chilled or primed chilled coriander plant using pot. Silicate, HA, and H 2 O after exposure to gamma rays and their interaction are represented in Table 8. All the applied treatments have decreased PPO activity below that of chilling stressed plant, except in plant exposed to γradiation, which has non-significant change. Also, alleviation the chilling stress by pot. Silicate, HA, and γ-radiation decreased POD activity by 33 Regarding monitoring lipid peroxidation, estimation of MDA is crucial since MDA was a marker for evaluating lipid peroxidation and damage to plasma lemma or organelle membranes which increases with different environmental stress factors. The result listed in Table 8 revealed that incubation of coriander plant seeds in 6°C ± 0.5 induced a marked increase in MDA contents by 84.62% with respect to control coriander plant. Whereas pre-soaked seeds in pot. Silicate or HA or soaking in water after irradiated by γrays have induced either significant increase in MDA values as compared with control coriander plants or decrease by 34.45, 27.70 and 37.16%, respectively when compared with chilling stress. Also, the interaction of priming and chilling caused a decrease in value of MDA in relation to chilling stressed plants. The magnitude of such response was more pronounced in gamma radiation followed by HA priming, which decreased by 35.59 and 20.95%, respectively. In general, pot. Silicate, HA, and γradiation could Table 7 Effect of chilling stress on coriander (Coriandrum sativum L.) seeds pre-soaked in 80 mM pot. Silicate, 50 mg l − 1 humic acid or soaked in water after exposure to γ-rays (50 Gy) and the interaction of the alleviation treatments and chilling stress on antioxidant compounds (ascorbic acid, carotenoids, flavonoids, total phenolics and proline) at flowering stage. The shown data was extracted by using 3 biological and 3 technical replicates. Each biological replicate comprised of 10 plants (one pot). To perform the biochemical analysis, the combined tissue of these ten plants refers to one technical replicate. Sample extraction was done solely for each technical replicate. The readings of the 3 technical replicates were recorded. The mean of the values was used to calculate ±SE. Also, the least significant differences (LSD) at 5% level were calculated to compare the means of different treatments according to Snedecor and Cochran [92]. The values with the same letter are not significantly different (P<0.05). The raw data set of the technical replicates was attached as a supplementary file. The percentage of increase (inc.) or decrease (dec.) caused by the chilling stress was investigated. ⬇ Refers to the percentage of decrease and ⬆ refers to the percentage of increase compared with the control values. 
By being the best alleviation element against the chilling stress (except for phenolic and proline), the percentage of increase in all measurements, triggered by HA application, was further investigated. This percentage was calculated by subtracting the value of control/chillied reading from the reading value of any physiological treatment, then the result was divided by the reading of control value, and finally, the result is multiplied by 100 alleviate the inhibitory effect of chilling stress by decreasing lipid peroxidation below that induced by chilling stress. Characterization of chilling stress impact on TCPs and expression of chloroplast marker proteins TCPs were extracted from control, chilling-stressed, and alleviated biostimulants treated and stressed leaves of 75-days-old coriander plant at vegetative stage. Protein banding profiles of 70-100 μg TCPs (equivalent to total protein content) were fractionated using 10% SDS-PAGE technique (Fig. S2a, b). To manifest the consistency and reproducibility of resulted protein profiles after stress performing and stress-alleviation application, TCPs were extracted from studied samples along with two successive seasons (season 1; Fig. S2a and season 2; Fig. S2b). It was found that the protein band, detected approximately at 53 kDa, was identified as RuBisCO LS in all samples of control ( Fig. S2a, b, Lane 1), stressed ( Fig. S2a, b, Lane 2), and chilling-stressed alleviated coriander plants (Fig. S2a, b, Lanes 3-5). Accumulation of RuBisCO LS was pronounced and negatively affected by applied chilling stress (Fig. 2a). Notably, alleviation of chilling stress by HA application (50 mgl − 1 ) has potentially enhanced and promoted the accumulation of the major pronounced RuBisCO LS protein band (Fig. S2a, b). The Expression of RuBisCO LS protein product was not retrieved, at least to control level, in chilling-stressed coriander plants alleviated by separate and individual application of silicate and gamma irradiation (Fig. S2a, b). Moreover, using of HA as a stress alleviation element has positively induced the expression of unique and characteristic polypeptides running approximately at 45, 48, 65, and 80 kDa more than their corresponding bands in control samples (Fig. S2a, b). In the same context, quantification of RuBisCO LS protein band, by loading ascending concentrations of protein standard BSA using SDS-PAGE technique, has manifested previous investigations ( Fig. 1a; Fig. S2a, b). Band scoring has revealed a percentage of polymorphism by 38.4 and 29.4% for season 1 and 2, respectively, with a mean of 33.9%. The generated binary matrix (based on band presence) was used to construct a cluster analysis. Latter analysis was used to find the most relevant samples Table 8 Effect of chilling stress on coriander (Coriandrum sativum L.) seeds pre-soaked in 80 mM pot. Silicate, 50 mg l −1 humic acid or soaked in water after exposure to γ-rays (50 Gy) and the interaction of the alleviation elements and chilling stress on antioxidant enzymes PPO, POD, CAT (unit/mg protein) and MDA(nmol/g F. wt.) at flowering stage. The shown data was extracted by using 3 biological and 3 technical replicates. Each biological replicate comprised of 10 plants (one pot). To perform the biochemical analysis, the combined tissue of these ten plants refers to one technical replicate. Sample extraction was done solely for each technical replicate. The readings of the 3 technical replicates were recorded. The mean of the values was used to calculate ±SE. 
Also, the least significant differences (LSD) at 5% level were calculated to compare the means of different treatments according to Snedecor and Cochran [92]. The values with the same letter are not significantly different (P<0.05). The raw data set of the technical replicates was attached as a supplementary file. The percentage of increase (inc.) caused by the chilling stress compared with the control values was investigated. By being the most alleviation element to restore quietly the enzymes steady state concentration (except for POD and MDA), the percentage of decrease compared with the chilling stress values in all measurements triggered by HA application was further investigated. This percentage was calculated by subtracting the value of control/chillied reading from the reading value of any physiological treatment, then the result was divided by the reading of control value, and finally, the result is multiplied by 100 based on their protein profiles. Notably, control coriander was clustered in one group with stressed samples alleviated by HA treatment (Fig. 1b). On the other hand, the expression of Toc34, Toc75, and eHSP70 were negatively affected by chilling stress; whereas HA treatment was able to maintain, even upregulate, their production ( Fig. 2; Fig. S3). The same findings were demonstrated concerning the expression of RubisCO and Toc complexes running approximately at 480 and 700 kDa, respectively ( Fig. 3; Fig. S4). HA was found to trigger the optimum alleviating impact keeping and promoting the production of both RubisCO and Toc complexes (Fig. 3). Improvement of the growth parameters and yield components in stressed-alleviated coriander plants Generally, all growth parameters were stimulated by soaking seed in potassium silicate, HA or exposed to γrays as compared with control and chilling (stressed) coriander plants ( Table 2). The most effective treatment was HA alone in both control and chilling-primed samples. In the present study, chilling stress has initiated adversely and inhibitory impact on investigated growth parameters in coriander plant. Reduction in shoot length, branches number/plant, leaves area/plant, root length, fresh and dry weights of shoot and root at The data were shown as mean ± s.e.m.; *, P < 0.05. ImageJ software (IJ 1.46r) was used for image processing and analysis of the electrophoretic running of ascending concentration series of BSA (as protein size standard) to quantify RuBisCO LS concentration in (ng) of three independent gel repeats. Full data sets showing the quantification counts were supplement separately. Also, Full-length gels are presented in Supplementary Fig. (2a and b). The data was normalized to the protein band running approximately at 180 kDa as shown in Suppl. Fig. 2a. b) Impact of alleviation treatments on TCPs profiles of chilling-stressed (6°C ± 0.5) coriander plants at the vegetative stage (75 days old). Cluster analysis resulted from SDS-PAGE fractionated TCPs as revealed by chilling stress and alleviation treatments to its impact. A dendrogram for the five examined coriander samples was constructed using scored data of fractionated TCPs after chilling stress application and subsequent biostimulants treatments using Unweighed Pair-group Method of Arithmetic mean (UPGMA) and similarity matrices was computed according to Dice coefficient vegetation and flowering stages of plant was observed. 
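The dendrogram of Fig. 1b is described above as a UPGMA clustering of the scored (presence/absence) SDS-PAGE band matrix, with sample-to-sample similarity computed from the Dice coefficient. A minimal reconstruction of that kind of analysis is sketched below; the 0/1 band matrix is a made-up placeholder, not the actual scored TCP profiles.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical band-scoring matrix: one row per sample, one column per band
# (1 = band present, 0 = absent). Replace with the real scored profiles.
samples = ["control", "chilled", "chilled+silicate", "chilled+HA", "chilled+gamma"]
bands = np.array([
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 1, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 1, 1, 0, 1],
    [1, 0, 1, 0, 1, 1, 0, 1],
], dtype=bool)

# Dice dissimilarity between samples, then UPGMA (= average linkage) clustering.
distances = pdist(bands, metric="dice")
tree = linkage(distances, method="average")

dendrogram(tree, labels=samples)
plt.tight_layout()
plt.show()
```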
As compared with control values, this reduction in growth parameters has been elsewhere reported [29,32] and could be attributed to decrease in water absorption, altered cell division and cell elongation rates which affect the leaf sizes and weight and reduced ability to close stomata in response to subsequent water deficit [17]. Supply of insufficient water provoked a rapid drop of water potential in leaves during the first hours of cooling. The declining rate of photosynthesis, due to the adverse effect in CO 2 assimilation, may weaken the growth through lowering of the rates of both cell division and elongation [4]. Improvement in the growth parameters by increasing of shoot length, fresh and dry weight of shoot and root, leaf area, and branches number/plant ( Table 2) were initiated and triggered by using silicate, HA, and gamma rays to alleviate chilling stress. The most effective alleviating element was HA in both control and chilling-stressed samples. The triggered stimulatory impact in growth parameters could be considered as a protective role of silicate, HA, and gamma rays. Silicon was suggested to alleviate chilling stress by deposition in cell wall, increasing its rigidity, and increasing internal storage water within the plant by reducing the water loss, conferring higher growth rates, and, lightening in turn harmful effects of abiotic stress [10]. Also, application of HA was suggested to induce plant growth by acting as a plant growth regulator [80] by the interaction of HA with the rhizosphere and evolving IAA increasing cell division. The latter promotional 4) or soaked in water after exposed to 50 Gy gamma irradiation (Lane 5). TCPs were extracted and fractionated by SDS-PAGE and immunodecorated against α-Toc34, α-Toc75, eHSP70, and actin primary antibody in a dilution of 1:10,000 as demonstrated in [51]. Cropping of the shown blots was performed properly for sake of clarity and focusing the information. Full-length blots are accompanied the manuscript as Supplementary Fig. 3. Protein extraction procedure for each physiological status (control, chilling stressed, etc.) was performed from the leaves of 3 biological replicates and 3 technical replicates. Each technical replicate represented one biological replicate. Each biological replicate comprises the collection of leaves of 10 plants. The protein extraction was carried out from each technical replicate independently. Finally extracted proteins from the 3 technical replicates were pooled together. Pooled sample were quantified, equally loaded into 10% SDS-PAGE, and blotted onto PVDF membrane as shown in methods section. Consequently, aliquots of pooled sample were kept as −80°C after the short snap for 30 s in Liquid Nitrogen results were reflected as an increase in cytoskeleton protein, growth of lateral roots, and root total area [19,73]. Detected IAA higher rate in coriander treated plants with HA supported the latter notion. HA might lead to higher rates of K+ ions uptake and therefore a corresponding increase in chlorophyll fluorescence [67]. Hereby, it might be suggested that HA has improved plant tolerance to abiotic stress and promoted growth by increasing auxins, gibberellins and decreasing ABA (the present data), enhancing nutrient uptakes, photosynthesis, and by reduction of water loss [21,84]. 
In addition, stimulation effect of low doses of gamma rays was evidenced by the promotion of various cellular processes, induction the biosynthesis of phytohormones or nucleic acid, accelerated cell proliferation and enzymatic activity, stress resistance, and crop yield [48,78]. The results obviously have shown that pre-sowing coriander seeds in HA was the most effective treatment in mitigation the adverse effect of chilling on seeds yield of coriander plant (Table 4). This result agreed with an earlier study [11]. Improvement of yield and yield components by HA may be attributed to increasing of nutrients uptake, especially nitrogen content, phosphorus and hormone-like effect of HA, or by maintained photosynthetic tissues and leaf chlorophyll increase [74]. Also, the stimulatory effect of endogenous hormones on the cell division and/or enlargement by applied HA was reported by maintaining IAA level, decreasing IAA oxidase The data were shown as mean ± s.e.m.; *, P < 0.05, **, P < 0.005, ***, and P < 0.0005. ImageJ software (IJ 1.46r) was used for image processing and analysis of the electrophoretic running of ascending concentration series of BSA (as protein size standard) to quantify the concentration in (ng) of three independent gel repeats. Full data sets showing the quantification counts were supplement separately. Also, Full-length gels are presented in Supplementary Fig. (3a, b). The data was normalized to the protein band running approximately at 300 kDa as shown in Suppl. Fig. 3a activity, and promoting metabolic activities which accelerate crops growth and yield [42]. In addition, gamma irradiation has induced improvement of seed yield in the chilling of coriander plants. Similar results were obtained for sunflower [2], Ammi visnage L. [24], and soybean [72]. This could be ascribed to growth stimulation by changing the hormonal signaling network, or by increasing antioxidative capacity of the cell to easily overcome daily stress [47], or by promoting the enzymatic activation resulting in stimulation of cell division rate, which affects not only in germination but also vegetative growth and flowering. In the same context, previous studies have concluded that plant and grain nutritional quality were enhanced by irradiation due to its promoting effect on plant water status by controlling photosynthetic rate, transpiration, and stomatal conductance [90]. HA is the key player in promotion of endogenous phytohormones under chilling stress Chilling stress caused a decrease in both IAA and GA 3 contents in coriander leaves (Table 5). This may be due to the influence of chilling stress on hormonal balance that affects plant growth and development. Hereby, it could be speculated that the reduction in plant growth under stress conditions could be an outcome of an altered hormonal balance [70]. On the other hand, the amount of ABA detected in coriander leaves increased in response to chilling stress. Abscisic acid accumulated in response to different environmental stresses such as salinity, cold and drought [39]. ABA regulates important cellular processes such as stomatal closure by guard cells, mediated by solute efflux, and regulates expression of many genes that may function in tolerance against chilling stress [39]. On the other hand, pre-soaking coriander seeds in silicate, HA, and irradiation with gamma rays induced higher contents of growth promoting substances (IAA and GA 3 ) and lowered ABA level. 
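For clarity, the hormonal balance index referred to in Table 5 and in the discussion is taken here to mean the ratio of the two growth promoters combined to ABA, which is how the notation "IAA + GA3/ABA" is read in this sketch:

```latex
\[
  \text{hormonal ratio} = \frac{\mathrm{IAA} + \mathrm{GA_3}}{\mathrm{ABA}}
\]
```

Read this way, the increase reported for the HA-primed, chilling-stressed plants can arise from higher IAA and GA3, from lower ABA, or from both, which matches the pattern described above.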
The most effective treatment that increased (IAA and GA 3 ) to alleviate chilling stress was HA (Table 5). In this respect, latter findings agreed with Abdel-Mawgoud et al. [1] who has demonstrated that HA treatment was the causal agent of increased auxins, cytokinins, and GA 3 contents in tomato. In the same context, growth promoter (IAA) increased in wheat grown under newly reclaimed soil supplemented with HA [23]. HA might be considered as growth regulator that adjusts hormonal levels, stimulates plant growth, and induces stress tolerance [21]. To a lesser extent, low dose of γ-rays was found to increase Kinetin and GA 3 hormones of Eruca vesicaria L. through triggering changes in hormonal signal network followed by stimulation of growth [71]. It might be concluded that improvement of coriander tolerance to chilling stress was achieved to a higher extent in response to applied HA treatment, followed by silicate. This depended on their role in decreasing IAA oxidase activity, synthesizing adequate level of endogenous phytohormones, promoting metabolic activity, and consequently accelerating plant growth. Enhancement of photosynthetic pigments by HA in coriander plants with alleviated stress The deleterious effect of chilling stress on photosynthetic pigments of coriander leaves was shown through decreasing chl a, chl b, and subsequently the total chlorophylls (Table 6). This result was consistent with earlier experiments conducted on Phaseolus spp. grown at low temperature (10°C) [97]. The marked reduction in photosynthetic pigments in chilling-stressed coriander leaves might be ascribed to the mechanical forces generated by formation of extracellular ice crystals, cellular dehydration, and increase concentration of intracellular salts [55]. Latter mechanical forces not only resulted in membrane damage and membrane structure alteration but also affected photosynthetic electron transport, CO 2 fixation, RubisCO activity, and stomatal conductance [61,62]. Application of silicate, HA, and gamma radiation on chilling-stressed plants could alleviate the adverse effect of chilling by increasing Chl a, Chl b, and the total chlorophylls levels ( Table 5). These results were in harmony with those of Zhu et al. [103], Sivanesan et al. [91], and Habibi [29]. This may be attributed to silicon whose application increased the levels of chl a and chl b, which in turn indicates synthesis of new pigments and maintenance of previously existing chl a and chl b. However, HA was the most effective treatment in mitigating chilling stress by increasing Chl a, Chl b, and consequently total chlorophylls. This may be ascribed to the role of HA as an important biostimulant capable of promoting hormonal activity, producing antioxidants, and reducing free radicals in plants. It has improved root vitality, increased nutrient uptake, stimulated chlorophyll synthesis and/or delayed chlorophyll degradation [57]. Taken together from the presented results, HA treatment restored and maintained the hormonal balance in chilling-stressed coriander to the same level found in the control plants. This balance was triggered by declining ABA levels which mediated root growth enhancement, maintained photosynthetic pigments, and carbohydrates metabolism [63,70]. 
Enrichment of carbohydrate content by HA treatment In the present investigation, soluble sugars were increased in the leaves of the chilling-stressed coriander plant, while polysaccharides and total carbohydrate contents (Table 6) were decreased compared with the control plant, in agreement with the previous investigation of Azymi et al. [12]. The accumulation of total soluble sugars has been reported as a fundamental component of chilling tolerance in many plant species in response to chilling stress. Soluble sugars might act as compatible solutes under chilling stress [12]. It was suggested that soluble sugars play crucial roles in osmotic adjustment, protection of specific macromolecules, and stabilization of membrane structures [13]. Soluble sugars are thought to interact with the polar head groups of phospholipids in membranes and to prevent membrane fusion [13]. In addition, sucrose and other sugars play a central role as signaling molecules that regulate the physiology, metabolism and development of plants [8]. The reduction in polysaccharides and total carbohydrates in the leaves of chilling-stressed coriander plants was correlated with an arrested growth rate and a decrease in leaf photosynthetic pigments (Table S1; Fig. 4). Specifically, upon HA application, an ameliorative impact on growth, metabolism, and the expression of Toc and RuBisCO complexes was triggered (Fig. 4). It might be concluded that cold stress inhibits photosynthetic activity and/or increases the partial conversion of carbohydrates into soluble sugars and metabolic products [8]. On the other hand, pre-soaking the seeds of coriander plants in silicate or HA, or exposing them to gamma radiation, induced significant increases in soluble sugars, polysaccharides, and total carbohydrates (Table 6). These effects were much more pronounced for HA alone or in combination with the chilling treatment. Similar results concerning the effect of a low dose of gamma radiation (20 Gy) on increasing carbohydrate contents were reported for onion and potato [75] as well as lupine [46]. HA was found to cause the accumulation of soluble sugars concomitantly with the increase in polysaccharide content and total carbohydrates in wheat plants grown in newly reclaimed soil [14]. Also, silicon promoted photosynthetic pigments and hence total carbohydrates were increased. It could be concluded that silicate, HA, and gamma radiation, alone or in combination with chilling stress, played a prominent role in alleviating the dehydration status caused by chilling stress in the coriander plant, either via osmotic adjustment by increasing soluble sugars or by stabilizing the chloroplast membrane and enhancing the photosynthetic rate, resulting in increased carbohydrate biosynthesis. Antioxidant compounds De novo synthesis of compatible solutes such as osmoprotectants, sugars, amino acids, carotenoids, flavonoids, phenols, and polyphenols is regarded as an adaptive plant mechanism against osmotic and oxidative stress [9]. The present study found a significant decrease in ascorbic acid content, by 46.81% below the control value, caused by chilling stress (Table 7). Furthermore, all applied treatments, individually or in combination with chilling stress, induced a significant increase in ascorbic acid content compared with the chilling-stressed plant. The most effective treatment alleviating the impact of chilling stress was HA (Table 7). These results are in harmony with those of Pokluda et al.
[79], who reported a significant increase in ascorbic acid, total phenolic concentration, and total antioxidant activity in chilled coriander following biostimulant application. In the same context, the reduction in carotenoid content below the control was concomitant with a significant increase in the ABA level in stressed coriander leaves. This might be interpreted as an adaptive mechanism to stress. ABA biosynthesis from C40 carotenoids has been shown to enable plants to cope with unfavorable conditions [33,104]. [Fig. 4 caption: Correlation analysis linking the interaction between the application of biostimulants to chilling-stressed coriander (Coriandrum sativum L.) seeds, pre-soaked in 80 mM potassium silicate or 50 mg l−1 humic acid or soaked in water after exposure to γ-rays (50 Gy), and the improvement in photosynthetic pigments (μg/g D. wt. in coriander leaves) and carbohydrate contents (g/100 g D. wt.) at the vegetative and flowering stages. The data in this figure represent the relative values as a percentage of the control for the results in Tables 2 and 5 (value/control × 100).] Notably, the marked increase in carotenoid content following silicate and HA treatments was most likely attributable to their antioxidant efficacy in trapping free radicals and quenching singlet oxygen [81]. These results agree with those of Habibi [29], who reported that silicon increases the synthesis of protective pigments such as carotenoids and anthocyanin in chilling-stressed grapes. The increase of phenols and flavonoids observed in the present study, either upon chilling stress or after alleviation using silicate or HA treatments, was also reported by Rivero et al. [82] and later by Pokluda et al. [79]. The proline content in chilling-stressed coriander leaves was higher than the control values. This might be due to induced synthesis and accumulation of compatible solutes such as proline, or to the inhibition of protein synthesis followed by an increased level of free amino acids, especially proline [88]. In the present work, silicate, HA, and γ-rays and their interaction with chilling stress induced a pronounced increase in proline content (Table 7). These findings are supported by Ahmad and Haddad [5], who worked on wheat and demonstrated the promoting effect of silicate on proline production under abiotic saline conditions. Moreover, HA and gamma radiation applied to chilling-stressed coriander plants were shown to increase the proline content. These results were similarly demonstrated in irradiated coriander [53], Pisum sativum L. [77], and wheat [14] plants. Antioxidant enzymes and lipid peroxidation In this study, it was shown that chilling stress caused a significant increase in CAT, PPO, and POD activities (Table 8). These results were in line with previous studies regarding CAT enzyme activity in maize seedlings [27] and various other plant species [50]. Induction of antioxidant enzyme activity by chilling stress is most likely a plant-derived defense mechanism to protect cell membranes, proteins, and the metabolic machinery, which would preserve the subcellular structure from damage as a result of cell dehydration [85]. Alleviation of the chilling stress by γ-rays maintained and/or slightly increased the activity of the PPO enzyme. A significant increase in PPO enzyme activity was found using low doses of γ-radiation [43]. Furthermore, irradiation by γ-rays has increased the PPO and POX capacities in fresh fruits and vegetables [95].
Generally, the activities of scavenging enzymes such as POD, CAT, and SOD increase in various plant species in response to ionizing radiation [48,101], especially the activity of POD in removing toxic H2O2. In the same context, silicon alleviates abiotic stress by enhancing the production of antioxidant enzymes involved in detoxifying free radicals [105]. It also increases their activities, which in turn protects plants against ROS generation and lipid peroxidation [30]. The protective effect of the induced antioxidant enzyme activities against lipid peroxidation caused by chilling stress was evidenced by the reduction of MDA accumulation under all applied treatments. This agrees with previous studies [29,64,72,85,102], especially those using HA and silicate as stress-alleviating elements. Significant expression of RuBisCO LS and Toc complex subunits in chilled alleviated coriander plants Extracted TCPs were fractionated by the SDS-PAGE technique. By achieving high-quality protein profiles, it was important to study and analyze the ameliorative effect of silicate, HA, and gamma irradiation on the expressed TCPs in general and on the RuBisCO LS protein specifically. High variation in the RuBisCO LS expression level was revealed by chilling stress (Fig. 1a). Accumulation of the RuBisCO LS protein product, containing the active site, was demonstrated upon HA treatment [93]. Toc and RuBisCO enzyme complexes were detected at the same molecular weights demonstrated by Ladig et al. [51]. The complex activity was judged by the assembled RuBisCO complex in the cell. The biosynthesis/degradation rate of the two RuBisCO subunits, controlled by gene expression, is significantly affected by unfavorable abiotic conditions [49]. However, continuous significant accumulation of RuBisCO LS may have a negative impact on the efficiency and assembly of the RuBisCO complex. Induced changes in the protein profiles of chilling-stressed samples and samples alleviated by HA occurred within a narrow range (45-80 kDa) of polypeptides and were recorded in this study (Fig. S1a, b). In the same context, 25 protein spots were differentially expressed and up-regulated in response to low temperature (4°C) during imbibition in the known chilling-resistant soybean cultivar Z22 [20]. It has been found that the optimum temperature for photosynthesis is 20°C in barley [94]. Temperature stress has a deleterious effect on the photosynthetic apparatus [83]. In this context, the protein expression of the chloroplast coupling factor (CF1) was negatively affected by chilling stress [45]. By grouping the control and the HA-alleviated chilling-stressed plants together (Fig. 2), the cluster analysis reflected the ability of the HA treatment to alleviate the deleterious effect of chilling stress on coriander proteostasis, especially RuBisCO LS. It might be concluded that chilling stress affects the photosynthesis process through disruption of RuBisCO complex assembly inside the chloroplast via down-regulation of the production of the Toc machinery subunits (Toc34 and Toc75) and the HSP70 chaperone. This would limit and restrict RuBisCO import into, and assembly within, the chloroplast [41].
Taken together, it might be concluded that the growth stimulators applied in this study, especially HA followed by silicate, enhanced the antioxidative defense system that limits oxidative damage in coriander plants under chilling stress by scavenging excessive ROS through the induction of non-enzymatic antioxidant compounds (ascorbic acid, carotenoids, total phenolics, flavonoids, and proline) as well as antioxidant defense enzymes (CAT, POD, and PPO). In addition, molecular diagnosis of the ameliorative effect of the biostimulants against chilling stress at the level of TCPs was carried out and evidenced the restoration and maintenance of RuBisCO LS. Consequently, the achieved improvements in growth parameters and yield components reflected the foregoing observations. Hereby, the presented results may provide new insights and broaden our understanding of tolerance mechanism(s) against chilling stress, in order to produce winter-resistant crops of high economic importance such as the coriander plant. This study has demonstrated how combined physiological, biochemical, and molecular analyses can be used to evaluate and judge the effect of temperature stress fluctuations on the coriander crop in Egypt. Conclusions Acclimation to chilling stress was reinforced in the coriander plant by priming coriander seeds in potassium silicate (80 mM) or humic acid (50 mg l−1), or by priming in water after exposure to gamma rays (50 Gy), and by their combination with chilling stress. Alleviated chilling stress in coriander was characterized by improved plant growth and a decreased ABA level. Photosynthetic pigments and carbohydrate contents (namely soluble sugars) were positively promoted, concomitantly with polysaccharides and total carbohydrates, after alleviation of chilling stress using the applied growth stimulators. Moreover, the investigated antioxidant compounds and enzymes underwent either induction or a significant increase upon the priming and alleviation treatments. Besides that, induction of the accumulation of the large subunit of the RuBisCO enzyme was also reported as a sign of restoration and maintenance of cellular protein homeostasis. Therefore, it could be suggested that the effectiveness of the biostimulators used in this study (especially HA) and their stimulatory effect induced stress tolerance in cultivated coriander under low temperature. The biostimulators applied in the present study most likely represent a pronounced step toward enhancing the acclimation and tolerance of the coriander plant to chilling stress by safe methods, thus improving and stimulating bioactive hormones, pigments, and health-promoting components. Plant material and applied treatments Coriander (Coriandrum sativum L.) seeds used in this study were authenticated by the Agricultural Research Center (ARC), Ministry of Agriculture, Giza, Egypt, purchased from seed suppliers in the Egyptian local market (Abd Elhady Gayar Company, Cairo, Egypt), and designated the "Baladi" variety. The HA used in this study was produced by and purchased from Misr International Company for Agricultural and Industrial Development, Cairo, Egypt. This product is registered and accredited under the name "HUMO" with No. 7050, Egyptian Ministry of Agriculture, Cairo, Egypt. The aforementioned HA product was approved by the Agriculture Research Center (ARC), Giza, Egypt. Potassium silicate (99% purity) was purchased from Sigma-Aldrich (Cat. No. 792640).
Pilot experiments and basic aspects of the optimization process were carried out with a wide range of potassium silicate or humic acid concentrations (sub-optimum, optimum, and supra-optimum). To determine the optimum concentration of HA, various ascending concentrations were applied: 5, 10, 25, 50, 75, and 100 mg l−1. The best concentration was 50 mg l−1. In the case of potassium silicate, a series of concentrations (10, 20, 40, 80, and 160 mM) was used, and 80 mM was found to be the optimum concentration. The quality of the results at the pilot-experiment stage was judged on the basis of the highest records of growth parameters and yield components, and these experimental results provided a solid basis for selecting the optimum concentrations. The water source, referred to as "tap water" in this study, met the standard requirements of the WHO (World Health Organization, Geneva 2008). The details of the water analysis are provided as a supplementary data set. Seed priming was performed in tap water using solutions of potassium silicate (80 mM) or humic acid (50 mg l−1) prior to chilling (6.0 ± 0.5°C) or non-chilling (20.0 ± 2.0°C) of the seeds in water for 16 h. Similarly, dry coriander seeds were irradiated with gamma rays (50 Gy) prior to rinsing in non-chilled or chilled water for 16 h. The irradiation experiment for chilled and non-chilled seeds was carried out at the National Center for Radiation Research and Technology (NCRRT), Atomic Energy Authority, Cairo, Egypt, using Cesium-137 at a dose rate of 0.758 rad/s. The experiment was carried out during two successive seasons; a short description of the experimental protocol is presented in Table 1. Soaked seeds were washed thoroughly with distilled water and then sown in field plastic pots (L × W × D = 50 × 50 × 80 cm) containing 15 kg of clay:sandy soil (2:1 w/w), with ten seeds per pot and 10 pots for each treatment. The number of pots was chosen taking into consideration that sample collection was planned at different growth and developmental stages. Pots were irrigated with tap water to maintain 80% water-holding capacity. Plants at the vegetative stage were harvested at day 75 from the sowing date, while plants at the flowering stage were harvested after 105 days. Yield components were harvested 135 days after the sowing date. Throughout this study, three biological and/or three technical replicates were used to measure growth and yield parameters or to perform chemical and molecular analyses. Representative samples of ten plants (one pot, counted as one biological replicate) were taken from each treatment at the vegetative and flowering stages to measure the growth parameters: plant height, root length, number of branches per plant, number of leaves per plant, leaf area per plant, and fresh and dry weights of shoot and root per plant. Yield component parameters (number and weight of seeds per plant as well as the seed index) were recorded for each treatment. Chemical analyses were carried out on coriander leaves at the flowering stage. The experiments were repeated in the next season and the mean values of growth parameters and yield components were recorded. The experimental design was completely randomized. Extraction, separation and estimation of growth regulating substances The method of extraction was identical to that adopted by Shindy and Smith [87] and described by Hassanein et al. [34].
Determination and identification of the acidic hormones (IAA, GA3, and ABA) were performed as described by Kelen et al. [44]. The plant tissues (five grams of each sample from the three independently used technical replicates) were collected and ground in 80% methanol. The macerated tissues were transferred to a flask with fresh methanol and the volume was adjusted to 20 ml of methanol for each g fresh weight of sample. The tissues were extracted for 24 h at 0°C and then vacuum-filtered through Whatman filter paper (No. 42). The residues were returned to the flask with a fresh volume of methanol, stirred for 30 min with a magnetic stirrer, and then filtered again. The procedure was repeated once more and the combined extracts were evaporated to the aqueous phase in a rotary flash evaporator. The aqueous phase (10-30 ml) was adjusted to pH 8.6 with 1% (w/v) NaOH and partitioned three times with equal volumes of ethyl acetate. The combined ethyl acetate fraction was evaporated to dryness and held for further purification. The aqueous phase was adjusted to pH 2.8 with 1% HCl (v/v) and re-partitioned three times with equal volumes of ethyl acetate. The remaining aqueous phase was discarded, and the combined acidic ethyl acetate phase was reduced to 5 ml (fraction I) to be used for gas chromatography (GC) determination of acidic hormones such as IAA, ABA and GA3. To estimate the amounts of acidic hormones (fraction I) by an isocratic HPLC-UV analyzer, a reverse-phase C18 column (RO-C18 μBondapak, Waters Corporation, MA, USA) was used. The column contained octadecylsilane (ODS) Ultrasphere particles (5 μm); the mobile phase was acetonitrile-water (26:74 v/v), pH 4.00; flow rate: 0.8 ml/min; detection by UV at 208 nm; the standard solution of each individual acid was prepared in the mobile phase and chromatographed. The retention times of the peaks of authentic samples were used in the identification and characterization of the peaks of the samples under investigation. Peak identification was performed by comparing the relative retention time of each peak with those of the IAA, GA3, and ABA standards. Peak area was measured by triangulation and the relative proportions of the individual components were thereby obtained at the various retention times of the samples. Estimation of photosynthetic pigments The photosynthetic pigments chlorophyll a (chl a), chlorophyll b (chl b), and carotenoids were determined according to Metzner et al. [69]. Briefly, fresh leaves (0.5 g) were homogenized in 85% aqueous acetone for 5 min. The homogenate was centrifuged, and the supernatant was made up to volume with 85% aqueous acetone. The extinction was measured against a blank of pure 85% aqueous acetone at three wavelengths, 452.5, 644, and 663 nm, using a Spectropolarimeter DC Tiny 25III Model TUDC12B4. The photosynthetic pigments were determined in μg/ml using the following equations: chlorophyll a = 10.3 E663 − 0.918 E644; chlorophyll b = 19.7 E644 − 3.87 E663; carotenoids = 4.2 E452.5 − (0.026 chlorophyll a + 0.426 chlorophyll b). Finally, the pigment contents were expressed in μg g−1 of leaf dry weight. Estimation of carbohydrates For the determination of soluble sugars and polysaccharides, plant material (one gram of fresh tissue) was oven-dried at 80°C to constant weight and then ground to a fine powder using a domestic blender. For the extraction and estimation of soluble sugars, 25 mg of dried tissue was homogenized in 80% ethanol and then kept in a boiling water bath with continuous shaking for 15 min.
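As a minimal illustration of the Metzner pigment equations given above, the short sketch below converts a set of absorbance readings into pigment contents per gram of dry weight; the absorbance values, extract volume, and sample dry weight are hypothetical placeholders rather than measurements from this study.

# Minimal sketch of the Metzner et al. pigment calculation described above.
# The absorbance readings, extract volume, and sample dry weight below are
# hypothetical placeholders, not measured values from this study.

def pigments_ug_per_ml(e663, e644, e452_5):
    """Return (chl a, chl b, carotenoids) in ug/ml of extract."""
    chl_a = 10.3 * e663 - 0.918 * e644
    chl_b = 19.7 * e644 - 3.87 * e663
    carotenoids = 4.2 * e452_5 - (0.026 * chl_a + 0.426 * chl_b)
    return chl_a, chl_b, carotenoids

def to_ug_per_g_dw(conc_ug_ml, extract_volume_ml, dry_weight_g):
    """Convert ug/ml of extract to ug per g of leaf dry weight."""
    return conc_ug_ml * extract_volume_ml / dry_weight_g

if __name__ == "__main__":
    chl_a, chl_b, car = pigments_ug_per_ml(e663=0.52, e644=0.31, e452_5=0.45)
    total_chl = chl_a + chl_b
    # assume a 25 ml final extract prepared from 0.1 g of leaf dry weight
    for name, c in [("chl a", chl_a), ("chl b", chl_b),
                    ("total chl", total_chl), ("carotenoids", car)]:
        print(f"{name}: {to_ug_per_g_dw(c, 25.0, 0.1):.1f} ug/g DW")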
After cooling, the extract was filtered, and the filtrate was oven-dried at 60°C and then dissolved in 2 ml of water, ready for the determination of soluble sugars [40]. The anthrone-sulphuric acid method of Whistler et al. [100] was used for the determination of soluble sugars. Polysaccharides were extracted and estimated using the dry residue left after extraction of the soluble carbohydrates. A known weight of dried material (100 mg) was added to 10 ml of 1.5 N sulfuric acid in sugar tubes with air reflux at 100°C in a water bath for 6 h. Then, the hydrolysate was neutralized with 2.5 N NaOH using phenol red as an indicator. The neutralized solution was used for polysaccharide determination by the anthrone-sulphuric acid reagent method [37,100]. A calibration curve using pure glucose was prepared, from which the data were calculated as mg/g dry weight. Finally, the total carbohydrate content was expressed as the sum of the soluble sugars and polysaccharides in each sample. Extraction and estimation of antioxidant compounds In this study, the antioxidant defense compounds (ascorbic acid, total flavonoids, phenolic compounds, and proline content) were determined. Ascorbic acid was determined in mg/100 g fresh leaves by titration with 2,6-dichlorophenol indophenol according to Zvaigzne et al. [106]. Briefly, ten grams of leaves were accurately weighed and ground using a mortar and pestle with the addition of 20 ml of 3% metaphosphoric acid-acetic acid solution. The mixture was further ground and strained through muslin, and the extract was made up to 100 ml with the metaphosphoric-acetic acid mixture. Five ml of the metaphosphoric acid-acetic acid solution was pipetted into each of three 50 ml Erlenmeyer flasks, followed by 2 ml of the sample extract. The samples were titrated separately with the indophenol dye solution until a light rose pink persisted for 5 s. The amount of dye used in the titration was determined and used in the calculation of the vitamin C content. Total flavonoid contents were determined by the aluminum chloride colorimetric assay according to Marinova et al. [66]. Each ethanolic extract (1.0 ml) or standard solution of quercetin was added to a 10 ml volumetric flask containing 4.0 ml distilled water. To the flask, 0.3 ml of 5% NaNO2 was added. After 5 min, 0.3 ml of 10% AlCl3 was added, and after 6 min, 2.0 ml of 1 M NaOH was added and the total volume was made up to 10 ml with distilled H2O. The solution was mixed, and the absorbance was measured against the blank at 510 nm. Finally, total flavonoids were expressed as mg quercetin equivalent per 100 g of dry weight. Moreover, phenolic compounds were estimated according to Malik and Singh [65], in which phenols react with phosphomolybdic acid in Folin-Ciocalteu reagent in alkaline medium and produce a blue-colored complex (molybdenum blue). The absorbance was measured using a Milton Roy Spectronic 601 spectrophotometer at 650 nm. The concentration of phenolic compounds per 100 g of leaves (fresh weight) was calculated from a gallic acid standard curve. The values were then expressed as mg 100 g−1 dry weight. Free proline was determined according to the method of Bates et al. [15]. This method is based on the reaction between proline and acid ninhydrin reagent. The acid ninhydrin reagent was prepared by warming 1.25 g ninhydrin in 30 ml glacial acetic acid and 20 ml of 6 M phosphoric acid with agitation until dissolved; it was kept cool and stored at 4°C. The reagent remains stable for 24 h.
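The anthrone, flavonoid, and phenolic assays above all rely on a calibration curve against a standard (glucose, quercetin, or gallic acid, respectively). A minimal sketch of that calculation is given below; the standard concentrations, absorbances, extract volume, and tissue weight are hypothetical placeholders rather than data from this study.

# Minimal sketch of the calibration-curve quantification used for the
# anthrone (glucose), flavonoid (quercetin), and phenolic (gallic acid)
# assays described above. Standard concentrations and absorbances below
# are hypothetical placeholders, not data from this study.
import numpy as np

def fit_standard_curve(conc, absorbance):
    """Least-squares fit A = slope*C + intercept; returns (slope, intercept)."""
    slope, intercept = np.polyfit(conc, absorbance, 1)
    return slope, intercept

def concentration_from_absorbance(a_sample, slope, intercept):
    """Invert the calibration line to get concentration from absorbance."""
    return (a_sample - intercept) / slope

if __name__ == "__main__":
    # hypothetical glucose standards (mg/ml) and their absorbances
    std_conc = np.array([0.0, 0.02, 0.04, 0.06, 0.08, 0.10])
    std_abs = np.array([0.00, 0.11, 0.22, 0.34, 0.44, 0.56])
    slope, intercept = fit_standard_curve(std_conc, std_abs)
    c = concentration_from_absorbance(0.30, slope, intercept)   # sample extract
    # scale to mg per g dry weight for a 2 ml extract from 25 mg dried tissue
    mg_per_g_dw = c * 2.0 / 0.025
    print(f"soluble sugars ~ {mg_per_g_dw:.1f} mg/g DW")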
Approximately 0.1 g of macerated dried tissue was homogenized in 10 ml of 3% aqueous sulfosalicylic acid and then filtered through Whatman No. 2 filter paper. Two ml of the filtrate were mixed with 2 ml of glacial acetic acid and 2 ml of the acid ninhydrin reagent in a test tube and heated for 1 h at 100°C. The reaction mixture was extracted with 4 ml of toluene and mixed vigorously in a test tube for 15-20 s. The chromophore-containing toluene was aspirated from the aqueous phase and warmed to room temperature. The absorbance was read at 520 nm using toluene as a blank. The proline concentration was determined using a standard curve and calculated on a dry matter basis. Extraction and measurement of antioxidant enzymes The antioxidant enzymes (catalase (CAT), peroxidase (POD), and polyphenol oxidase (PPO)) were extracted from frozen ground leaves (0.5 g) using a cold mortar and pestle and homogenized with cold sodium phosphate buffer (100 mM, pH 7) containing 1% (w/v) polyvinylpyrrolidone (PVP) and 0.1 mM EDTA. The extraction ratio was 4 ml of extraction buffer per gram of plant tissue. The homogenate was centrifuged at 10,000 g at 4°C for 15 min. The supernatant was used to measure the CAT, POD, and PPO activities. The protein concentration in the crude extract was also quantified according to Lowry et al. [58] using bovine serum albumin as a standard. The activity of CAT (EC 1.11.1.6) was determined according to Aebi [3]. Enzyme extract (100 μl) was added to 2.9 ml of a reaction mixture containing 20 mM H2O2 and 50 mM sodium phosphate buffer (pH 7.0). The activity of CAT was measured by monitoring the reduction in the absorbance at 240 nm as a result of H2O2 consumption. The amount of consumed H2O2 was calculated using a molar extinction coefficient of 0.04 cm2 μmol−1. One unit of enzyme activity was defined as the decomposition of 1 μmol of H2O2 per min, and catalase activity was expressed as units min−1 mg−1 protein. POD (EC 1.11.1.7) activity was quantified by the method described by Hammerschmidt et al. [31]. The assay mixture (100 ml) contained 10 ml of 1% (v/v) guaiacol, 10 ml of 0.3% H2O2 and 80 ml of 50 mM phosphate buffer (pH 6.6). A volume of 100 μl of crude enzyme extract was added to 2.9 ml of the assay mixture to start the reaction. The absorbance was recorded every 30 s for 3 min at 470 nm using a spectrophotometer (UV-Vis spectrophotometer UV 9100 B, LabTech). The rate of change in absorbance per minute was calculated and one unit of enzyme was expressed as ΔOD = 0.01. The POD activity was expressed as units min−1 mg−1 protein. Moreover, PPO (EC 1.14.18.1) activity was measured according to Oktay et al. [76]. The reaction mixture contained 600 μl of catechol (0.1 M) and 100 μl of enzyme extract, completed to 3.0 ml with 0.1 M phosphate buffer (pH 7). The absorbance was recorded at 420 nm with a spectrophotometer (UV-visible-160A, Shimadzu). One unit of PPO activity was defined as the amount of enzyme that causes an increase in absorbance of 0.001 min−1 ml−1. The enzyme activity was expressed as units min−1 mg−1 protein. Estimation of lipid peroxidation Lipid peroxidation in fresh coriander leaves was determined by measuring the amount of malondialdehyde (MDA) produced by the thiobarbituric acid (TBA) reaction as described by Heath and Packer [36]. The leaves (0.5 g) were homogenized in 5 ml of 0.1% (m/v) TCA. The homogenate was centrifuged (Hettich Zentrifugen Universal 16 R Centrifuge, Hettich Rotor 1612 12X3g, Germany) at 10,000 g for 20 min.
To an aliquot (1 ml) of the supernatant, 4 ml of 0.5% TBA in 20% TCA was added. The mixture was heated in a 95°C water bath for 30 min and then quickly cooled in an ice bath. After centrifugation at 10,000 g for 15 min, the absorbance of the supernatant was recorded at 532 and 600 nm. The value for non-specific absorption at 600 nm was subtracted. The concentration of MDA was calculated by dividing the difference (A532 − A600) by its molar extinction coefficient (155 mM−1 cm−1), and the result was expressed as nmol g−1 fresh weight. Extraction of total cellular proteins (TCPs) and chloroplast protein complexes from coriander TCPs and chloroplast protein complexes were extracted from coriander leaves at the vegetative stage (75 days old) according to Mehta et al. [68] with minor modifications. Briefly, chilling stress-primed coriander leaves were ground in liquid nitrogen to a fine powder using a mortar and pestle. To ensure complete and homogeneous cellular disruption, aliquots (250 mg) were subjected to high-throughput TissueLyser II equipment (Qiagen, Cat. No. 85300) three times for 30 s each. Immediately, extraction buffer (100 μl) consisting of 100 mM Tris-HCl pH 8, 50 mM EDTA, 40% glycerol, 4% β-mercaptoethanol, 2% (w/v) SDS, 0.1 mM phenylmethylsulfonyl fluoride (PMSF), 1x protease inhibitor cocktail (Roche, Penzberg, Germany), and 0.001% bromophenol blue dye was added to the ground leaves and mixed with the mortar and pestle until a completely homogeneous lysate was obtained. The tissue lysate was vortexed for three minutes, incubated at 95°C for 5 min (Eppendorf™ Thermomixer™), and finally centrifuged (Hettich MICRO 22 centrifuge, Germany) at high speed (20,000 × g) for 30 min. Subsequently, the supernatant was removed and stored as 25 μl aliquots in a −80°C freezer for further analysis by the SDS-PAGE technique. Protein concentration was detected using protein assay dye reagent (BioRad, Cat. No. #5000006). The programmed Bradford method (Eppendorf Biophotometer, ver. 1.35, Model #6131) with its calibration memory for protein methods was used to quantify the protein concentration according to Bradford [18]. SDS−/HDN-PAGE and immunoblotting techniques The previously extracted TCPs were subjected to preparative and/or analytical one-dimensional 10% SDS-PAGE or gradient 4-12% HDN-PAGE procedures as previously described [51,52,68]. A color-coded prestained protein marker (High-Range SDS-PAGE Standards, GeneON, Ludwigshafen, Germany) was loaded, and electrophoretic fractionation was carried out using a Bio-Rad Mini-Protean II Cell Gel System at 70 V in 1X premade tank buffer (BioRad, #1610734). The protein extraction procedure for each physiological status (control, chilling-stressed, etc.) was performed on the leaves of 3 biological replicates and 3 technical replicates. Each technical replicate represented one biological replicate. Each biological replicate comprises the collected leaves of 10 plants. The protein extraction was carried out from each technical replicate independently. Finally, the extracted proteins from the 3 technical replicates were pooled together. The pooled sample was quantified and, after its concentration had been measured, equally loaded onto 10% SDS-PAGE gels. Aliquots of the pooled sample were kept at −80°C after snap-freezing for 30 s in liquid nitrogen.
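Returning to the enzyme and lipid-peroxidation assays described above, the sketch below illustrates the CAT activity and MDA calculations from the stated extinction coefficients; all absorbance readings, volumes, protein amounts, and fresh weights are hypothetical placeholders, not measured values from this study.

# Minimal sketch of the CAT activity and MDA calculations described above.
# Absorbance readings, protein content, volumes, and fresh weight below are
# hypothetical placeholders, not measured values from this study.

def catalase_units_per_mg(delta_a240_per_min, protein_mg_per_assay,
                          assay_volume_ml=3.0, path_length_cm=1.0,
                          epsilon_mM_cm=0.04):
    """CAT activity: 1 unit = 1 umol H2O2 decomposed per min.
    umol/min = (dA/min) / (epsilon [mM^-1 cm^-1] * path [cm]) * volume [ml]."""
    umol_per_min = delta_a240_per_min / (epsilon_mM_cm * path_length_cm) * assay_volume_ml
    return umol_per_min / protein_mg_per_assay

def mda_nmol_per_g_fw(a532, a600, extract_volume_ml, fresh_weight_g,
                      epsilon_mM_cm=155.0, path_length_cm=1.0):
    """MDA (nmol/g FW) from (A532 - A600) and the 155 mM^-1 cm^-1 coefficient."""
    mda_mM = (a532 - a600) / (epsilon_mM_cm * path_length_cm)    # mmol/l = umol/ml
    return mda_mM * extract_volume_ml / fresh_weight_g * 1000.0  # nmol per g FW

if __name__ == "__main__":
    print(f"CAT: {catalase_units_per_mg(0.12, 0.05):.1f} U/mg protein")
    print(f"MDA: {mda_nmol_per_g_fw(0.38, 0.05, 5.0, 0.5):.1f} nmol/g FW")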
After completion of protein migration, the gel was stained, de-stained, and finally placed between two sheets of cellophane membrane for the immunoblotting technique and/or for documentation purposes as demonstrated by Ladig et al. [51]. Gel images were captured using a Bio-Rad gel documentation system (Gel Doc™ EZ system with Image Lab™ software). Protein concentration was determined using a normalized bovine serum albumin (BSA) standard curve. The cluster analysis was constructed as follows: band scoring (0 for absence, 1 for presence) was performed on the SDS-PAGE-fractionated protein profiles; the resulting binary matrix was generated from the data revealed by SDS-PAGE and used to calculate the similarity coefficient matrix; and a distance tree was constructed using the unweighted pair group method with arithmetic mean (UPGMA) in PAST, ver. 4.02, as previously described by Hammer et al. [38]. Blotting onto a PVDF membrane (0.1 μm, Schleicher & Schuell, Germany) in Towbin buffer (192 mM glycine, 25 mM Tris/HCl, pH 8.3, 0.1% (w/v) SDS, and 15% (v/v) methanol) was carried out using a Bio-Rad Trans-Blot® Semi-Dry electrophoretic cell (Cat. No. 170-3940) according to the manufacturer's instructions. Phosphate-buffered saline (PBS) supplemented with Tween-20 was used for the membrane washing steps between the primary and secondary antibody incubations. Monoclonal primary antibodies against AtToc75 and AtToc34 (A. thaliana chloroplast outer membrane proteins of 75 and 34 kDa, respectively), eukaryotic HSP70 (eHSP70; intermembrane space chaperone for RuBisCO translocation into the chloroplast inner membrane), plant actin (as a housekeeping control), and an HRP-conjugated secondary anti-rabbit IgG were used. All primary and secondary antibodies were used at dilutions of 1:10,000 and 1:25,000, respectively, and were purchased from Agrisera (Vännäs, Sweden). Immunoblotting (western blotting, WB), detection of immobilized specific antigens conjugated to horseradish peroxidase (HRP), and visualization of HRP were carried out with a chemiluminescent (ECL) detection kit (Pierce™ ECL Western Blotting Substrate, ThermoFisher Scientific, Cat. No. 32106) according to the manufacturer's recommendations. Moreover, using ImageJ software (IJ 1.46r) for image processing and analysis of the electrophoretic runs of an ascending concentration series of bovine serum albumin (BSA), used as protein size standard, quantification of the RuBisCO large subunit (RuBisCO LS) and Toc (translocon at the outer membrane of the chloroplast) complex bands was performed. Statistical analysis The experimental procedure for each physiological status (control, stress, etc.) was performed on the leaves of 3 biological replicates and 3 technical replicates. Each technical replicate represented the mean of the members of one biological replicate. Each biological replicate comprises the collected plant tissue or leaves of 10 plants (one pot). The mean of the independent technical replicates was calculated, and the mean values were used to calculate ±SE. The data were statistically analyzed for variance, and the values of the least significant difference (LSD) at the 5% level were calculated to compare the means of the different treatments according to Snedecor and Cochran [92]. Different letters indicate significant variation according to Duncan's multiple range test (significance defined as P < 0.05).
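A minimal sketch of the band-scoring/UPGMA clustering described above is given below; the presence/absence matrix is hypothetical, and SciPy's average-linkage clustering (which is equivalent to UPGMA) is used here in place of the PAST software named in the text.

# Minimal sketch of the band-scoring / UPGMA clustering described above,
# assuming a hypothetical 0/1 band-presence matrix (rows = samples,
# columns = scored bands); this is not the data of the study, and SciPy's
# average-linkage method is used in place of the PAST software.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

samples = ["Control", "Chilling", "Chilling+Si", "Chilling+HA", "Chilling+Gamma"]
band_matrix = np.array([
    [1, 1, 0, 1, 1, 0, 1],   # hypothetical presence/absence scores
    [1, 0, 1, 0, 1, 1, 0],
    [1, 1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 1, 0],
])

# Jaccard distance between presence/absence profiles, then UPGMA
# (average linkage) to build the dendrogram.
distances = pdist(band_matrix, metric="jaccard")
tree = linkage(distances, method="average")
dendrogram(tree, labels=samples, no_plot=True)   # set no_plot=False to draw
print(tree)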
Additional file 1: Suppl. Fig. 1. Impact of alleviation treatments on growth parameters of chilling-stressed coriander plants at the flowering stage. Suppl. Fig. 2. Impact of alleviation treatments on the TCP profiles of chilling-stressed (6 ± 0.5°C) coriander plants at the vegetative stage (75 days old). The protein extraction procedure for each physiological status (control, stress, etc.) was performed on the leaves of 3 biological and 3 technical replicates. Each technical replicate represented one biological replicate. Each biological replicate is composed of the collected leaves of 10 plants; these collected leaves represented one technical replicate. The protein extraction was carried out from each technical replicate independently. Finally, the extracted proteins from the 3 technical replicates were pooled together. The pooled sample was quantified and, after its concentration had been measured, equally loaded onto 10% SDS-PAGE gels. Aliquots of the pooled sample were kept at −80°C after snap-freezing for 30 s in liquid nitrogen. Coriander control seeds (Lane 1) and chilling-stressed seeds (Lane 2) were subjected to pre-soaking in 80 mM potassium silicate (Lane 3) or 50 mg l−1 HA (Lane 4), or were soaked in water after exposure to 50 Gy gamma irradiation (Lane 5). TCPs were then extracted, fractionated on 10% SDS-PAGE for season 1 (Panel a) and season 2 (Panel b), and finally stained with CoBB stain. The numbers shown on the left-hand side of the figures indicate molecular weight standards in kDa (High-Range SDS-PAGE Standards, GeneON, Ludwigshafen, Germany). Red arrowheads refer to induced, up-regulated polypeptides detected in "chilling+HA" but not in "chilling" and/or the other chilling-plus-alleviation treatments. The asterisk refers to the approximate molecular weight of RuBisCO LS. ImageJ software (IJ 1.46r) was used for image processing and analysis of the electrophoretic runs of an ascending concentration series of BSA (as protein size standard) to quantify the RuBisCO LS concentration (in μg) given below each panel. Full-length gels are presented in Supplementary Fig. 2a and b. The black arrowhead refers to the protein band at an approximate molecular weight of 180 kDa used for normalization of the quantified RuBisCO LS protein expression. Suppl. Fig. 3 Full-length original and unprocessed immunoblot analysis of the expression of chloroplast marker proteins. TCPs were extracted, fractionated by SDS-PAGE, and immunodecorated against α-Toc34, α-Toc75, eHSP70, and actin primary antibodies at a dilution of 1:10,000 as demonstrated in [51]. The blots shown were cropped for the sake of clarity and to focus the information; full-length blots accompany the manuscript as Supplementary Fig. 3. The protein extraction procedure for each physiological status (control, chilling-stressed, etc.) was performed on the leaves of 3 biological replicates and 3 technical replicates. Each technical replicate represented one biological replicate. Each biological replicate comprises the collected leaves of 10 plants. The protein extraction was carried out from each technical replicate independently. Finally, the extracted proteins from the 3 technical replicates were pooled together. The pooled sample was quantified, equally loaded onto 10% SDS-PAGE, and blotted onto a PVDF membrane as described in the Methods section. Aliquots of the pooled sample were kept at −80°C after snap-freezing for 30 s in liquid nitrogen. Suppl. Fig.
4 Fractionation of chloroplast protein complexes by the HDN-PAGE technique for chilling-stressed (6 ± 0.5°C) coriander plants at the vegetative stage. Panel (a): Coriander control seeds (Lanes 2, 7) and chilling-stressed seeds (Lane 3) were subjected to pre-soaking in 80 mM potassium silicate (Lane 4) or 50 mg l−1 HA (Lane 5), or were soaked in water after exposure to 50 Gy gamma irradiation (Lane 6). The extracted protein complexes (especially the RuBisCO enzyme complex) were then fractionated on 4-12% gradient native HDN-PAGEs and finally stained with CoBB stain. Native molecular weight standards (HMW Native Marker kit, GE Healthcare) were loaded (Lane 1) and are denoted by the numbers on the left-hand side of the figure, indicating molecular weights in kDa. ImageJ software (IJ 1.46r) was used for image processing and analysis of the electrophoretic runs of an ascending concentration series of BSA (as protein size standard); the quantified RuBisCO LS concentration and Toc complex bands (in μg) are depicted below the figure. The HDN-PAGE shown was cropped for the sake of clarity and to focus the information; the full-length original HDN-PAGE gel is presented in Supplementary Fig. 4 (Repeat 1, 2). The black arrowhead refers to the protein band at an approximate molecular weight of 300 kDa used for normalization of the quantified Toc and RuBisCO protein complexes. The original full-length HDN-PAGEs, Repeat 1 (Panel a) and Repeat 2 (Panel b), are shown without cropping for the sake of clarity. Repeats 1 and 2 were performed to demonstrate the consistency and reproducibility of the given protein complexes. Panel (b) specifically also shows the loading of different concentrations of
v3-fos-license
2018-04-03T05:30:59.224Z
2016-10-13T00:00:00.000
4584903
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/srep35304.pdf", "pdf_hash": "7489f31c5863150f47e3caa6b4efb8d1a0b7ad0f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43719", "s2fieldsofstudy": [ "Chemistry", "Engineering" ], "sha1": "7489f31c5863150f47e3caa6b4efb8d1a0b7ad0f", "year": 2016 }
pes2o/s2orc
Hexagonal 2H-MoSe2 broad spectrum active photocatalyst for Cr(VI) reduction To make full use of the solar energy, exploring broad spectrum active photocatalysts has become one of the core issues for photocatalysis. Here we report a novel hexagonal 2H-MoSe2 photocatalyst with ultraviolet (UV)-visible-near infrared (NIR) light response for the first time. The results indicate that the MoSe2 displays excellent photo-absorption and photocatalytic activity in the reduction of Cr(VI) under UV, visible, and even NIR light irradiation. MoSe2 synthesized at a pH value of 2 achieves the highest Cr(VI) reduction rates of 99%, 91% and 100% under UV, visible and NIR light irradiation, respectively, which should be attributed to its comparatively higher light absorption, efficient charge separation and transfer as well as relatively large number of surface active sites. The excellent broad spectrum photocatalytic activity makes MoSe2 a promising photocatalyst for the effective utilization of solar energy. A growing number of contaminants such as heavy metal ions and organic chemical compounds in natural water have become a serious threat to the environment and human health. Hexavalent chromium (Cr(VI)) is one of the most common contaminants and is discharged from industries such as electroplating, leather tanning, metal finishing, textile manufacturing, steel fabricating, paints and pigments, fertilizers, and so on. It is highly toxic to most organisms when its concentration is above 0.05 mg l−1, and can cause lung cancer, chrome ulcers, perforation of the nasal septum, and kidney damage. Various techniques, such as adsorption, biosorption, electrocoagulation, ion exchange, and membrane filtration, have been reported to remove Cr(VI) from wastewater 1,2. However, these techniques have some disadvantages, such as membrane fouling, high power consumption, and the cost of operation and maintenance. The reduction of Cr(VI) to Cr(III) is considered an efficient route to remove Cr(VI), because Cr(III) is less toxic and can be readily precipitated from aqueous solution in the form of Cr(OH)3 3. Semiconductor photocatalysis, as a novel, economical and environmentally friendly technique for the reduction of Cr(VI), has attracted considerable attention in recent years [4][5][6][7]. One of the major limiting factors in photocatalysis is the limited light absorption of photocatalysts across the incident solar spectrum. Up to now, most photocatalysts, such as TiO2, CuO, CdS, SnS2, AgCl:Ag, WO3 and metal-free g-C3N4, are only active under ultraviolet (UV) or visible light irradiation [8][9][10][11][12][13][14][15][16][17][18][19][20][21][22]. UV and visible light make up only about 4% and 43% of the solar energy reaching the surface of the earth, respectively, while near-infrared (NIR) light constitutes more than 50% [23][24][25]. Nevertheless, up to now few efforts have been made to effectively utilize the NIR light [26][27][28]. Therefore, the exploration of novel broad spectrum (UV, visible and NIR) responsive semiconductors with efficient and stable photocatalytic activity remains a challenge. Transition metal dichalcogenides (TMDs) have attracted great interest due to their intriguing properties and potential applications in hydrogen evolution 29, lithium/sodium batteries 30, and photocatalysis [31][32][33][34].
Among these TMDs, MoSe2, with a narrow band gap of ~1.4 eV, can harvest solar energy over a very broad spectral region and has been employed as an efficient photocatalyst under UV, visible and NIR light irradiation 35. Theoretical studies indicated that single-layer MoSe2 is an ideal candidate for the photocatalytic splitting of water to generate hydrogen under solar light irradiation 36. So far, previous studies have focused mainly on the visible-light photocatalytic activity of MoSe2 in the degradation of dyes. Unfortunately, there have been no reports on the photocatalytic activity of MoSe2 under UV and NIR light irradiation, especially for the photocatalytic reduction of Cr(VI). In this work, novel broad-spectrum-responsive MoSe2 was synthesized via a facile solvothermal method for the first time. The MoSe2 exhibits excellent photo-absorption in the whole light region and shows good photocatalytic activity in the reduction of Cr(VI) under UV, visible and NIR light irradiation. The photocatalytic mechanism was also studied through a series of characterizations and controlled experiments using hole scavengers. Results and Discussion Characterizations of MoSe2. Figure 1 shows the field-emission scanning electron microscopy (FESEM) images of M-1, M-2, M-3 and M-4. It is clearly observed that all the MoSe2 samples display nanoparticle structures with a size range between 30 and 50 nm (insets of Fig. 1a-d), which indicates that the morphology of MoSe2 does not change when the pH value of the precursor solution increases from 1 to 4. However, the MoSe2 nanoparticles are aggregated when the pH value increases to 3 and 4. M-2 was also analyzed by energy dispersive X-ray spectroscopy (EDS) linked to FESEM, as shown in Fig. 1e. The atomic ratio of Mo to Se is about 1:2, further indicating the formation of MoSe2. Figure 1f shows the X-ray diffraction (XRD) patterns of M-1, M-2, M-3 and M-4. The diffraction patterns of the as-prepared MoSe2 samples show that all the peaks can be indexed to the (002), (102) and (110) crystal planes of the hexagonal 2H-MoSe2 phase with space group P63/mmc (JCPDS: 29-0914) 37. No other impurity peaks are observed for the as-prepared MoSe2 samples, which confirms that the as-prepared products are pure MoSe2. The XRD pattern of commercial MoSe2 (labeled as MoSe2 bulk) was also measured for comparison, as shown in Figure S1. It can be observed that the XRD patterns of the as-prepared MoSe2 samples show relatively broader diffraction peaks compared with the MoSe2 bulk, indicating that the as-prepared products are somewhat amorphous and have short-range structural order 38. This is in good agreement with the reported results 39. The crystallinity does not play a decisive role in the optical properties and photocatalytic activity of MoSe2. In addition, the diffraction peaks of the as-prepared MoSe2 samples shift toward smaller angles, suggesting a larger interlayer spacing than that of the MoSe2 bulk. Furthermore, it is clearly observed from Fig. 2b that the MoSe2 nanoparticles are formed by self-assembled nanosheets. Commonly, MoSe2 has a 2D sheet-like structure with abundant active defective edges, but the defects can act as recombination centers instead of providing an electron pathway and promote the recombination of electron-hole pairs 40. Compared with MoSe2 with a 2D sheet-like structure, the defects of MoSe2 nanoparticles formed by self-assembled nanosheets are relatively fewer, which is beneficial to the photocatalytic activity.
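The interlayer-spacing argument above follows directly from Bragg's law; the short sketch below illustrates it, assuming Cu Kα radiation and illustrative 2θ positions for the (002) reflection (neither the wavelength nor these angles are values reported in this study).

# Minimal sketch of the Bragg relation behind the interlayer-spacing argument
# above: a (002) peak at a smaller 2-theta corresponds to a larger d-spacing.
# The wavelength (Cu K-alpha) and the 2-theta values are assumptions for
# illustration, not values reported in this study.
import math

def d_spacing_angstrom(two_theta_deg, wavelength_angstrom=1.5406):
    """Bragg's law, n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))

if __name__ == "__main__":
    for label, two_theta in [("bulk-like (002)", 13.8), ("shifted (002)", 13.0)]:
        d_nm = d_spacing_angstrom(two_theta) / 10.0
        print(f"{label}: 2theta = {two_theta:.1f} deg -> d ~ {d_nm:.3f} nm")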
The interlayer spacing of 0.68 nm observed from the lattice fringes can be ascribed to the (002) direction of hexagonal MoSe2, which is slightly larger than that of the MoSe2 bulk (0.64 nm). This result is in agreement with the XRD result. The selected area electron diffraction (SAED) pattern in Fig. 3 shows clear diffraction rings and can be well indexed to a pure hexagonal MoSe2 phase, indicating a high crystallinity of MoSe2. In order to investigate the chemical composition of M-2, X-ray photoelectron spectroscopy (XPS) measurements were carried out. Figure 4a, b show the high-resolution XPS spectra of Mo 3d and Se 3d for M-2. Mo 3d5/2 and Mo 3d3/2 were found at 228.7 eV and 232.4 eV, respectively, revealing the +4 chemical oxidation state of Mo and the formation of MoSe2 41. The peak at 55 eV is attributed to Se 3d3/2. The mole ratio of Mo:Se in M-2 is about 1:2, further indicating the high purity of the products, which is in accordance with the EDS measurement. The XPS spectra of M-1, M-3, M-4 and MoSe2 bulk (Figure S1) are similar to that of M-2. Compared with the MoSe2 bulk, the binding energies of Mo 3d and Se 3d for the as-prepared MoSe2 show a blue shift, which may be due to atomic-undercoordination-induced local quantum entrapment and polarization 42. All of these results clearly confirm the formation of the MoSe2 photocatalyst. Ultrahigh photocatalytic activity. Photocatalytic reduction of Cr(VI) by M-1, M-2, M-3 and M-4 was performed under NIR light irradiation. Figure 5 shows the UV-vis absorption spectra of Cr(VI) as a function of irradiation time under NIR light irradiation using M-2. It is observed that the UV-vis absorption of Cr(VI), related to its concentration in the solution, becomes weaker with increasing irradiation time. The normalized temporal concentration changes (C/C0) of Cr(VI) during the photocatalytic process under NIR light irradiation are proportional to the normalized maximum absorbance (A/A0), which can be derived from the change in the Cr(VI) absorption profile during the photocatalysis process. It is observed that the concentration of Cr(VI) is hardly reduced under NIR light irradiation in the absence of the photocatalyst. The reduction rates of Cr(VI) for P25 and MoSe2 bulk are 1% and 1%, respectively. The photocatalytic activity of MoSe2 is dependent on the pH value of the precursor solution. The reduction rate of Cr(VI) for M-1 is 94% at 180 min. When the pH value of the precursor solution increases to 2, the reduction rate increases and reaches a maximum value of 100% for M-2 at 180 min. However, when the pH value of the precursor solution increases further, the reduction rate decreases to 93% and 80% for M-3 and M-4 at 180 min, respectively. The corresponding results under visible and UV light irradiation are shown in Fig. 7. Under visible light irradiation, the reduction rates of Cr(VI) for P25, MoSe2 bulk, and M-1 are 1%, 1% and 73%, respectively, and the maximum reduction rate reaches 91% for M-2 at 180 min. However, when the pH value of the precursor solution increases further, the reduction rate decreases to 75% and 66% for M-3 and M-4 at 180 min, respectively. Under UV light irradiation, the reduction rates are 71%, 9%, 97%, 99%, 97% and 97% for P25, MoSe2 bulk, M-1, M-2, M-3 and M-4, respectively. Therefore, MoSe2 can exhibit excellent photocatalytic activity not only under NIR light irradiation but also under UV and visible light irradiation. The photo-stability of photocatalysts is very important for practical application.
Therefore, the photo-stability of MoSe2 (M-2) was studied by investigating its photocatalytic activity under visible light irradiation over three cycling runs, as shown in Fig. 8. It is noteworthy that only an insignificant decrease in photocatalytic activity is found, which may be due to the loss of photocatalyst during the collection process. Moreover, the crystal structure of M-2 after the photocatalytic reaction was characterized by XRD and XPS (Figures S3 and S4). It can be observed from the XRD pattern that the crystal structure of M-2 does not show an obvious change before and after the photocatalytic reaction. The XPS spectrum of Mo 3d for M-2 after the photocatalytic reaction displays two peaks at 228.7 eV and 232.4 eV, assigned to Mo 3d5/2 and Mo 3d3/2, respectively. A characteristic peak located at 55 eV can be observed in Figure S2b, corresponding to Se 3d3/2. All of these results clearly confirm the good photo-stability of the MoSe2 photocatalyst under the studied conditions. Mechanism of Photocatalytic Activity. During photocatalysis, the adsorption of Cr(VI), light absorption, and the charge transportation and separation are crucial factors 43,44. The specific surface area, pore volume, and pore size of the samples are listed in Table S1. The results show that the specific surface area is almost the same at the low pH values of 1 and 2, while it decreases obviously when the pH value increases to 3 and 4, which may be due to the aggregation of the MoSe2 nanoparticles. A larger specific surface area and pore volume can allow more Cr(VI) to enter the MoSe2, which is beneficial to the photocatalytic activity 45. The pore size of MoSe2 increases as the pH value of the precursor solution increases, which can influence the fast transport of Cr(VI), but it does not play a decisive role in the photocatalytic activity of MoSe2. Figure 10a shows the UV-Vis-NIR diffuse absorption spectra of M-1, M-2, M-3 and M-4. It can be observed that all MoSe2 samples exhibit strong absorption over the entire visible and even NIR light region. Compared with M-1, the samples M-2, M-3 and M-4 have better absorption. Highly ordered mesoporous crystalline MoSe2 synthesized using mesoporous silica SBA-15 as a hard template via a nanocasting strategy was found to show a strong absorption band covering 400-800 nm 35, suggesting that the highly ordered mesoporous crystalline MoSe2 can only absorb visible light. Furthermore, the diffuse reflectance spectra were also used to estimate the band gap energy through the Kubelka-Munk function: F(R) = α = (1 − R)²/2R, where R is the percentage of reflected light and α is the absorption coefficient, as shown in Figure S5a. The charge transfer and recombination behavior of the as-prepared samples was studied by analyzing the electrochemical impedance spectroscopy (EIS) spectra in the dark. Figure 10b shows the typical Nyquist plots of M-1, M-2, M-3 and M-4. The semicircle in the EIS spectra is ascribed to the contribution from the charge transfer resistance (Rct) and the constant phase element (CPE) at the photocatalyst/electrolyte interface. The inclined line, resulting from the Warburg impedance ZW, corresponds to the ion-diffusion process in the electrolyte. The corresponding equivalent circuit is shown in the inset of Fig. 10b. The fitted Rs, Rct, CPE and ZW values for all MoSe2 samples are listed in Table S2. It is found that the Rct for M-2 is 143.3 Ω, much lower than those of the other MoSe2 samples, indicating that the recombination of photo-induced electrons and holes in M-2 is more effectively inhibited.
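A minimal sketch of the Kubelka-Munk treatment mentioned above, followed by a Tauc-type extrapolation to estimate the band gap, is given below; the reflectance spectrum, the direct-gap exponent, and the fitting window are illustrative assumptions rather than data from this study.

# Minimal sketch of the Kubelka-Munk treatment mentioned above,
# F(R) = (1 - R)^2 / (2R), followed by a Tauc-type extrapolation to estimate
# the band gap. The reflectance data, the direct-gap form, and the linear-fit
# window are illustrative assumptions, not values from this study.
import numpy as np

def kubelka_munk(reflectance):
    """F(R) from fractional reflectance R (0 < R <= 1)."""
    r = np.asarray(reflectance, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

def tauc_band_gap(energy_eV, reflectance, fit_window):
    """Fit (F(R)*h*nu)^2 vs h*nu inside fit_window and extrapolate to zero."""
    y = (kubelka_munk(reflectance) * energy_eV) ** 2      # direct-gap form
    mask = (energy_eV >= fit_window[0]) & (energy_eV <= fit_window[1])
    slope, intercept = np.polyfit(energy_eV[mask], y[mask], 1)
    return -intercept / slope                             # intercept with y = 0

if __name__ == "__main__":
    # hypothetical spectrum with an absorption edge near 1.4 eV
    e = np.linspace(1.0, 2.5, 200)
    r = 0.75 - 0.55 / (1.0 + np.exp(-(e - 1.45) / 0.05))
    print(f"estimated Eg ~ {tauc_band_gap(e, r, (1.5, 1.8)):.2f} eV")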
The results confirm that the pH value of the precursor solution can influence the charge transfer and recombination behavior. The charge separation and transfer behavior of the as-prepared samples was also investigated by photoluminescence (PL) and photoelectrochemical measurements. Figure S6a shows the PL spectra of M-1, M-2, M-3 and M-4 at an excitation wavelength of 320 nm. It can be clearly observed that the intensity of M-2 is much weaker than those of the other MoSe2 samples, which further confirms that the recombination of photo-induced electrons and holes in M-2 can be effectively inhibited. Figure S6b shows the time-resolved PL (TRPL) spectra of M-1, M-2, M-3 and M-4; among them, M-2 shows the longest lifetime, which can improve the charge separation and transfer efficiency and thus enhance the photocurrent 22. Figure 11 shows the transient photocurrent responses of M-1, M-2, M-3 and M-4 under UV, visible and NIR light irradiation in the photocatalytic reaction. It can be observed that the photocurrent quickly decreases to zero when the light is switched off, indicating the recombination of photo-induced electrons and holes. M-2 exhibits the highest photocurrent, revealing a more efficient charge transfer process and a longer lifetime of the photo-induced electron-hole pairs. All results are in agreement with the EIS results. It is known that during photocatalysis, the adsorption of pollutants, the light harvesting, as well as the charge transportation and separation are crucial factors 46. Compared with M-1, the samples M-2, M-3 and M-4 show a higher absorption, resulting in an increase in the number of photo-generated electrons and holes. Compared with M-3 and M-4, M-2 shows a larger specific surface area and pore volume, which is beneficial for adsorbing Cr(VI). In addition, M-2 exhibits the lowest resistance and highest photocurrent, indicating that the recombination of photo-induced electrons and holes in M-2 is most effectively inhibited. All of these factors contribute to the enhanced photocatalytic activity, as confirmed by the UV-Vis-NIR absorption, EIS and BET measurements. Therefore, among all samples, M-2 exhibits the best photocatalytic activity under UV, visible and NIR light irradiation. To confirm the role of the photo-generated electrons in the photocatalytic process, controlled experiments were carried out with the addition of a hole scavenger (ethanol). As shown in Fig. 12, the photocatalytic activity of M-2 is enhanced by the addition of the hole scavenger because ethanol, as a hole scavenger, captures the photo-generated holes in the photocatalytic process and thus suppresses the recombination of the photo-generated carriers. The results indicate that the photo-generated electrons govern this photocatalytic process, which is consistent with the report in the literature 47. Based on the above analysis, a possible photocatalytic mechanism of MoSe2 in the reduction of Cr(VI) is proposed. In this process, MoSe2 is excited under UV, visible or NIR light irradiation, and electron-hole pairs are thus generated. The photo-generated charge carriers may migrate to the surface of MoSe2 and participate in the reduction and oxidation reactions. The valence band (VB) position of all MoSe2 samples was measured by ultraviolet photoelectron spectroscopy (UPS), as shown in Fig. 13a. Since the conduction band (CB) level of MoSe2 is more negative than the Cr(VI)/Cr(III) potential (0.51 V vs. NHE) 48, the photo-generated electrons in MoSe2 can reduce the adsorbed Cr(VI) to produce Cr(III) in the photocatalytic process.
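The band-edge argument above can be summarized in a few lines: the conduction band potential follows from the measured valence band position and the optical band gap, and Cr(VI) reduction is thermodynamically allowed when the conduction band lies negative of the Cr(VI)/Cr(III) potential. In the sketch below, the valence band and band gap values are hypothetical placeholders, not the UPS and optical values of this study.

# Minimal sketch of the band-edge argument above: the conduction band (CB)
# potential is estimated from the valence band (VB) position and the band gap,
# CB = VB - Eg (on the NHE potential scale, 1 eV ~ 1 V), and compared with the
# Cr(VI)/Cr(III) potential of +0.51 V vs. NHE quoted in the text. The VB and
# Eg values below are hypothetical placeholders.

CR6_CR3_POTENTIAL_V = 0.51     # V vs. NHE, from the text

def conduction_band_edge(vb_potential_v, band_gap_ev):
    """CB edge (V vs. NHE) from the VB edge and the band gap."""
    return vb_potential_v - band_gap_ev

def can_reduce_cr6(cb_potential_v):
    """Reduction is thermodynamically allowed if the CB is more negative
    (i.e. smaller on the NHE scale) than the Cr(VI)/Cr(III) potential."""
    return cb_potential_v < CR6_CR3_POTENTIAL_V

if __name__ == "__main__":
    vb, eg = 1.10, 1.40                    # assumed example values (V, eV)
    cb = conduction_band_edge(vb, eg)
    print(f"CB ~ {cb:+.2f} V vs. NHE -> Cr(VI) reduction feasible: {can_reduce_cr6(cb)}")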
Many studies indicate that the Cr(III) species precipitate on the surface of the photocatalyst as Cr 2 O 3 or Cr(OH) 3 in the photocatalytic process, and they can be removed by simply washing with deionized water or NaOH 49. Meanwhile, the holes can oxidize water to form oxygen in the photocatalytic process 50. Figure S7 shows the O 2 evolution yield as a function of irradiation time under visible light irradiation using M-2. It is observed that the O 2 evolution yield increases with increasing irradiation time, indicating the production of O 2 in the photocatalytic process. The major reaction steps correspond to the photoexcitation, reduction, and oxidation processes outlined above.
Conclusions
MoSe 2 samples were successfully synthesized via a facile solvothermal method, and their photocatalytic activity in the reduction of Cr(VI) under UV, visible and NIR light irradiation was investigated. The results show that (i) the as-prepared MoSe 2 exhibits excellent photo-absorption over the whole light region; (ii) the MoSe 2 samples display good photocatalytic activity, with Cr(VI) reduction rates of 99%, 91% and 98% at 180 min under UV, visible and NIR light irradiation, respectively; (iii) the enhanced photocatalytic activity is ascribed to the comparatively higher light absorption, efficient charge separation and transfer, as well as the relatively large number of surface active sites; and (iv) the photo-generated electrons govern this photocatalytic process.
XPS measurements were performed with a monochromatic Al Kα X-ray source. The Brunauer-Emmett-Teller specific surface areas of the samples were evaluated on the basis of nitrogen adsorption isotherms measured at 77 K using a BELSORP-max nitrogen adsorption apparatus (Micromeritics, Norcross, GA). The diffuse absorption and reflection spectra of the samples were recorded using a PerkinElmer Lambda750S UV-vis-NIR spectrophotometer equipped with an integrating sphere attachment, using BaSO 4 as a reference. PL spectra at room temperature were examined with a fluorescence spectrophotometer (HORIBA Jobin Yvon Fluoromax-4). The TRPL spectra were obtained on an Edinburgh Lifespec II spectrofluorometer (Edinburgh, UK). Photoelectrochemical measurements were carried out on an electrochemical workstation (AUTOLAB PGSTAT302N) using a three-electrode configuration, with the as-prepared films as working electrodes, a Pt foil as counter electrode, and a standard calomel electrode as reference electrode. The electrolyte was an 80 mg l −1 Cr(VI) aqueous solution. The photocurrent measurement was performed at a constant potential of +0.6 V (vs. SCE). A 300 W Xe arc lamp (λ > 400 nm and > 800 nm) with cut-off filters and a 500 W mercury lamp with a maximum emission at 356 nm were utilized as the light sources. EIS spectra were recorded in the frequency range of 0.1 Hz-1 MHz under dark conditions, and the applied bias voltage and ac amplitude were set at the open-circuit voltage and 10 mV, respectively.
Methods
Photocatalytic experiments.
The photocatalytic activity of the as-prepared samples was evaluated through the photocatalytic reduction of Cr(VI) under visible and NIR light irradiation. The samples (1.2 g l −1) were dispersed in 80 ml of Cr(VI) aqueous solution (80 mg l −1) with a pH value of 7, prepared by dissolving K 2 Cr 2 O 7 in deionized water. The suspensions were magnetically stirred in the dark for 30 min to reach adsorption-desorption equilibrium.
Under ambient conditions and continuous stirring, the mixed suspensions were exposed to visible and NIR light irradiation produced by a 300 W Xe arc lamp (λ > 400 nm and > 800 nm) with cut-off filters. The corresponding light intensities were approximately 741 and 310 mW cm −2 for the visible and NIR irradiation, respectively, measured using a chromameter (CS-100A). A 500 W mercury lamp with a maximum emission at 356 nm and a light intensity of 200 mW cm −2 was used as the UV source for photocatalysis. At certain time intervals, 2 ml of the mixed suspension was extracted and centrifuged to remove the photocatalyst. The filtrates were analyzed by recording the absorption spectra of Cr(VI) using a PerkinElmer Lambda750S UV-vis-NIR spectrophotometer. The produced gas was analyzed with a gas chromatograph (GC-2014C, Shimadzu, Japan) equipped with a thermal conductivity detector, with N 2 as the carrier gas. Recycling tests of the photocatalytic activity were carried out according to the above-mentioned procedure: after each run of the photocatalytic reaction, fresh Cr(VI) aqueous solution was injected, and the separated photocatalyst was washed carefully with deionized water and used again. To investigate the photocatalytic mechanism, trapping experiments were carried out to determine the main reactive species in the photocatalytic process. The experimental procedure was similar to the photocatalytic activity measurement except that the hole scavenger was added into the reaction system.
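To make the quantification of activity from these absorbance measurements concrete, the sketch below (Python) computes the Cr(VI) reduction efficiency and an apparent pseudo-first-order rate constant. The absorbance values are hypothetical, and Beer-Lambert proportionality between the Cr(VI) absorbance and its concentration is assumed.

```python
import numpy as np

def reduction_efficiency(abs_t, abs_0):
    """Percent Cr(VI) reduced, assuming absorbance is proportional to concentration."""
    return (1.0 - abs_t / abs_0) * 100.0

# Hypothetical absorbance readings at the Cr(VI) peak over 180 min of irradiation.
times = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)          # minutes
absorbance = np.array([0.80, 0.62, 0.45, 0.31, 0.20, 0.12, 0.07])

eta = reduction_efficiency(absorbance, absorbance[0])                   # percent reduced vs. time

# Apparent pseudo-first-order rate constant from ln(C0/Ct) = k_app * t (a common analysis).
k_app = np.polyfit(times, np.log(absorbance[0] / absorbance), 1)[0]     # per minute
```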
A comparison of FDG PET/MR and PET/CT for staging, response assessment, and prognostic imaging biomarkers in lymphoma
The aim of the current study was to investigate the diagnostic performance of FDG PET/MR compared to PET/CT in a patient cohort including Hodgkin lymphoma, diffuse large B-cell lymphoma, and high-grade B-cell lymphoma at baseline and response assessment. Sixty-one patients were examined with FDG PET/CT directly followed by PET/MR. Images were read by two pairs of nuclear medicine physicians and radiologists. Concordance for lymphoma involvement between PET/MR and the reference standard PET/CT was assessed at baseline and response assessment. Correlation of the prognostic biomarkers Deauville score, criteria of response, SUVmax, SUVpeak, and MTV was performed between PET/MR and PET/CT. Baseline FDG PET/MR showed a sensitivity of 92.5% and a specificity of 97.9% compared to the reference standard PET/CT (κ 0.91) for nodal sites. For extranodal sites, a sensitivity of 80.4% and a specificity of 99.5% were found (κ 0.84). Concordance in Ann Arbor stage was found in 57 of 61 patients (κ 0.92). Discrepancies were due to misclassification of region and not lesion detection. In response assessment, a sensitivity of 100% and a specificity of 99.9% for all sites combined were found (κ 0.92). There was perfect agreement on Deauville scores 4 and 5 and on criteria of response between the two modalities. Intraclass correlation coefficients (ICC) for SUVmax, SUVpeak, and MTV values showed excellent reliability (ICC > 0.9). FDG PET/MR is a reliable alternative to PET/CT in this patient population, both in terms of lesion detection at baseline staging and response assessment, and for quantitative prognostic imaging biomarkers.
Introduction
The goal of treatment for Hodgkin lymphoma (HL), diffuse large B-cell lymphoma (DLBCL), and high-grade B-cell lymphoma is most often cure. Accurate staging at baseline and response assessment is therefore essential for optimal treatment strategies. Furthermore, risk stratification and treatment decisions require reliable prognostic imaging biomarkers in addition to pretreatment clinical risk assessment scores. It is well established that functional imaging with positron emission tomography (PET)-computed tomography (CT) with 18F-fluorodeoxyglucose (FDG) is the standard imaging modality for FDG-avid lymphomas. A response-adaptive, FDG PET/CT-guided treatment approach to evaluate chemosensitivity and to guide the decision on radiotherapy allows for individualized treatment strategies in HL [1,2]. Furthermore, end-of-treatment FDG PET/CT for detecting residual disease in DLBCL is an important prognostic factor and guides the indication for consolidation radiotherapy [3]. PET/magnetic resonance (MR) has the advantage of simultaneous PET and magnetic resonance imaging (MRI) data acquisition, combining metabolic activity from PET with excellent soft-tissue contrast and functional data from MRI with diffusion-weighted imaging (DWI). In addition, reduction of the radiation dose by eliminating the contribution from CT is of value for young patients. Although FDG PET/MR has shown comparable ability to PET/CT for lesion detection and anatomical staging in terms of Ann Arbor stage in a few studies of adult lymphoma populations [4][5][6], these studies have included highly heterogeneous lymphoma populations imaged at different time points of the imaging evaluation.
Only one recent study [7] has assessed a large, homogeneous lymphoma population (HL) at baseline, but it lacks response assessment evaluations. To establish whether FDG PET/MR is a reliable alternative to PET/CT in the care of lymphoma patients, there is also a need to compare the quantitative PET metrics between the two modalities. PET-detector technology and attenuation correction of the PET images differ between PET/CT and PET/MR, which can have a significant impact on the quantitative PET measurements. Baseline maximum standardized uptake value (SUVmax) was one of the first quantitative PET measurements used as a prognostic biomarker, and studies have found that SUVmax from FDG PET/MR and PET/CT correlates well [4][5][6][8]. The role of baseline SUVmax in predicting treatment outcome is however discordant, and other quantitative PET metrics, such as metabolic tumor volume (MTV), are reported in PET studies with increasing frequency. MTV is a promising and robust PET-based prognostic factor in HL [9,10] and DLBCL [11], but no studies have yet compared MTV from FDG PET/MR and PET/CT in lymphoma patients. Deauville score and PET-based criteria of response are other strong prognostic factors used in FDG PET/CT-based response assessment that guide treatment decisions. The Deauville score (5-point scale) is based on visual assessment and on the SUV in residual lymphoma lesions compared with the SUV in the mediastinal blood pool and liver [12]. PET-based criteria of response [13] use the Deauville score and the change in SUV from baseline. The correlation of Deauville score between FDG PET/CT and PET/MR has only been evaluated in one pediatric HL study [14], where excellent agreement was found, while no studies have compared criteria of response. The aim of this prospective study was to investigate the diagnostic performance of FDG PET/MR with DWI compared to today's standard PET/CT during first-line treatment in a patient cohort including classical HL, DLBCL, and high-grade B-cell lymphoma. Primary endpoints were region-based and patient-based (Ann Arbor) agreement. Secondary endpoints were correlation of prognostic biomarkers in terms of Deauville score, criteria of response, SUVmax, SUVpeak, and MTV, using FDG PET/CT as the reference standard.
Study population
Patients were enrolled from the lymphoma section at St. Olavs Hospital, Trondheim University Hospital, from June 2016 to February 2019. Sixty-four patients fulfilled the following inclusion criteria: 18 years or older and histologically confirmed DLBCL, classical HL, or high-grade B-cell lymphoma. Exclusion criteria were contraindications to MRI or pregnancy. Three patients were excluded due to missing PET/MR raw data at baseline. This left 61 patients included in the study (Table 1). All 61 patients were imaged with FDG PET/CT directly followed by PET/MR at baseline. Interim (after 2 cycles of chemotherapy, only for HL) and end-of-treatment (3-6 weeks after chemotherapy, for both HL and aggressive non-Hodgkin lymphoma (NHL)) examinations were performed on a subgroup of the patients when PET/CT was clinically indicated. A total of 108 (61 baseline, 13 interim and 34 end of treatment) FDG PET/MR and PET/CT examinations were therefore included in the study. The study was approved by the Regional Committee for Ethics in Medical Research (REK-Midt #2014/1289). All participants gave written informed consent before participation.
Image acquisition
PET/CT and PET/MR data were acquired using a single intravenous injection of 18F-FDG (4 MBq/kg).
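Because the analyses in this study rely on SUV-based metrics (SUVmax, SUVpeak, MTV), a minimal sketch of the body-weight SUV computation is given below (Python); the function and the example values are illustrative assumptions, with the image activity concentration taken as decay-corrected to scan start.

```python
import math

F18_HALF_LIFE_MIN = 109.8  # physical half-life of 18F (standard value)

def suv_bw(conc_kbq_per_ml, injected_mbq, uptake_min, weight_kg):
    """Body-weight SUV = tissue activity concentration / (injected activity / body weight).
    The injected activity is decayed over the uptake time so that it is consistent
    with an image that is decay-corrected to scan start."""
    decayed_mbq = injected_mbq * math.exp(-math.log(2) * uptake_min / F18_HALF_LIFE_MIN)
    # (kBq/mL) * kg / MBq is dimensionless when 1 g of tissue is taken as ~1 mL.
    return conc_kbq_per_ml * weight_kg / decayed_mbq

# Illustrative example: 75 kg patient, 4 MBq/kg injected, 60 min uptake (as for PET/CT here).
example_suv = suv_bw(conc_kbq_per_ml=5.0, injected_mbq=4.0 * 75, uptake_min=60, weight_kg=75)
```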
PET/CT and PET/MR were acquired at a median of 60 minutes (range 59-64) and 100 minutes (range 87-150) after injection, respectively. All patients fasted for at least 6 h before injection of 18F-FDG, and blood glucose levels were measured prior to radiotracer administration. None of the patients had hyperglycemia (> 10 mmol/L).
PET/CT
PET/CT was acquired on a Siemens Biograph mCT (Siemens Healthcare, Erlangen, Germany). Patients were examined with their arms up in 4-9 bed positions (depending on body height), 2.5-3 min per bed position (depending on body weight), covering the top of the skull to the upper thighs. Non-contrast-enhanced, low-dose CT with 120 kV, 0.5 s rotation time, pitch 0.95, and 40 mAs was performed for attenuation correction and morphological correlation.
PET/MR
A PET/MR system (Siemens Biograph mMR, Erlangen, Germany) was used for simultaneous PET and MRI acquisitions. Patients were examined with their arms down in 5 bed positions covering the top of the skull to the upper thighs, 5 min for each bed position. Simultaneous MRI was acquired with the following MRI sequences: coronal T1 Dixon-Vibe, transversal diffusion-weighted MRI (DWI) (b values 50 and 800), transversal T2-HASTE, and coronal T2-TIRM. Breath-hold imaging was used for bed positions 2-4, covering the thorax and abdomen. Attenuation correction maps were calculated from the T1 Dixon-Vibe sequence, segmenting four tissue types (air, soft tissue, fat, and lung) with predefined linear attenuation coefficients.
PET reconstructions
PET image reconstruction was performed with iterative reconstruction (3D OSEM algorithm, 3 iterations, 21 subsets, and a 4-mm Gaussian filter) with point spread function (PSF) modelling and decay, attenuation, and scatter correction. Time-of-flight was used on PET/CT but was not available on the PET/MR system. A 400 × 400 matrix was used on the PET/CT, while a 344 × 344 matrix was used on the PET/MR (this corresponds to a relatively similar pixel size on the two scanners).
Image analyses
The PET/CT and PET/MR images were read by two pairs of nuclear medicine physicians (7 and 24 years of experience) and radiologists (13 and 14 years of experience) using the same standardized reading protocols. The nuclear medicine physicians interpreted the PET images and the radiologists interpreted the CT/MRI images separately, followed by a joint report for the PET/CT and PET/MR by each reading team. The readers were blinded to clinical status but aware of the histology. To avoid recollection, a period of 4 weeks was required between readings from different modalities. Baseline images were available when reading interim and end-of-treatment scans. In case of disagreement between the joint PET/CT or PET/MR reports of the two reading teams, a final consensus was reached by a third group consisting of a clinician with access to biopsy results, primary staging, and follow-up results, together with one reader from each reading team. Standard clinical software, Syngo.Via (Siemens Healthineers) and AW server (GE Healthcare), was used. Anatomical staging, in terms of the extent of lymphoma disease with the modified Ann Arbor staging system [13], was performed separately by a lymphoma oncologist based on the PET/CT and PET/MR readings in the study.
PET reading
The Lugano Classification [13] criteria for staging and response assessment were used for the PET reading. Diffuse uptake in the spleen without focal lesions had to be higher than 150% of the liver SUV for the spleen to be classified as diffusely involved [10].
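As an illustration of how these SUV-based reading rules translate into simple cut-offs, the sketch below (Python) encodes the diffuse splenic involvement criterion above and a simplified Deauville grading that uses the ≥ 3 × liver threshold for score 5 given in the response-assessment reading below. It is a hedged simplification: score 1 and the "moderate versus marked" distinction are partly visual judgements in practice, and the function names are illustrative.

```python
def diffuse_splenic_involvement(spleen_suv: float, liver_suv: float) -> bool:
    """Diffuse splenic uptake counts as involvement if it exceeds 150% of the liver SUV."""
    return spleen_suv > 1.5 * liver_suv

def deauville_score(lesion_suvmax: float, mediastinum_suvmax: float, liver_suvmax: float) -> int:
    """Simplified Deauville grading from SUVmax values (scores 2-5 only).
    Score 5 follows this study's definition of uptake >= 3 x liver."""
    if lesion_suvmax >= 3.0 * liver_suvmax:
        return 5
    if lesion_suvmax > liver_suvmax:
        return 4
    if lesion_suvmax > mediastinum_suvmax:
        return 3
    return 2
```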
For response assessment, a Deauville score of 5 was defined as uptake ≥ 3 times greater than the liver. SUVmax and SUVpeak were recorded in all disease regions at baseline, and in response assessment SUVmax was measured in the tumor with the highest uptake.
Metabolic tumor volume
MTV was computed using the research software ACCURATE, a semi-automatic software tool for quantitative analysis of PET/CT [16]. Both nuclear medicine physicians segmented MTV independently on baseline PET/MR and PET/CT scans for the patients with aggressive NHL. Initially, an automated analysis was done with a fixed SUV threshold of 4.0 [17], before physiological uptake was excluded from the volume.
CT and MR reading
On CT and MRI, a lymph node > 15 mm in largest diameter on axial sequences was defined as pathological for lymphoma involvement. Morphological criteria for splenic involvement were focal lesions or a craniocaudal diameter of more than 13 cm on coronal CT or MRI. Bulky tumor was defined as ≥ 10 cm in largest diameter. The different MRI sequences were read simultaneously, with no fixed order, to combine morphological and structural information.
Statistical analyses
Inter-observer agreement between the two reading teams was assessed by kappa statistics for nominal categorical variables and weighted kappa scores for ordinal categorical variables. Kappa values were considered indicative of poor (κ < 0.2), fair (κ 0.2-0.4), moderate (κ > 0.4-0.6), good (κ > 0.6-0.8), and excellent (κ > 0.8) agreement [18]. The kappa values and 95% confidence intervals were calculated for nodal and extranodal sites (lymphoma involvement or not) combined and for disease stage (Ann Arbor I-II versus III-IV). Weighted kappa statistics were used for Deauville score (1-3, 4, or 5) and criteria of response (complete metabolic response, partial metabolic response, no metabolic response, or progressive metabolic response). Intraclass correlation coefficients (ICC) were used for inter-observer agreement for the continuous variable MTV. ICC estimates less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.9 indicate poor, moderate, good, and excellent reliability, respectively [19]. Concordance for lymphoma involvement between consensus FDG PET/MR and the reference standard consensus PET/CT was assessed using kappa statistics, observed agreement, positive predictive values (PPV), negative predictive values (NPV), sensitivity, and specificity at baseline and on response assessment scans (interim and end-of-treatment response). ICC estimates and their 95% confidence intervals were calculated using absolute agreement and a two-way mixed-effects model for the continuous variables (SUVmax, SUVpeak, and MTV). ICC measurements showed no correlation (below 0.1) between measurements from multiple lesions in the same patient on either PET/CT or PET/MR; thus, no correction was performed when including measurements from multiple lesions in the same patients. Additionally, Bland-Altman plots [20] were made to evaluate the bias between the mean differences of SUVmax and MTV measured on FDG PET/MR and PET/CT. The difference in SUVmax between PET/MR and PET/CT was not normally distributed, and a nonparametric approach was therefore used, employing the median and inter-quartile range to estimate the limits of agreement (LoA) (± 1.45 × IQR) in the Bland-Altman plot.
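A minimal sketch of this nonparametric Bland-Altman approach is shown below (Python), assuming paired SUVmax arrays from the two modalities; the function name and data handling are illustrative, not the study's own analysis code.

```python
import numpy as np

def bland_altman_nonparametric(pet_mr, pet_ct):
    """Nonparametric Bland-Altman summary: per-pair means, differences,
    and limits of agreement defined as median +/- 1.45 * IQR of the differences."""
    pet_mr, pet_ct = np.asarray(pet_mr, float), np.asarray(pet_ct, float)
    diff = pet_mr - pet_ct
    mean_pair = (pet_mr + pet_ct) / 2.0
    q1, q3 = np.percentile(diff, [25, 75])
    center = np.median(diff)
    loa = (center - 1.45 * (q3 - q1), center + 1.45 * (q3 - q1))
    return mean_pair, diff, center, loa
```

The 1.45 × IQR factor roughly matches the parametric ± 1.96 SD limits under normality (IQR ≈ 1.35 SD), which is why the two variants are interchangeable when the differences happen to be normally distributed.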
The difference in MTV was normally distributed, and a parametric method was used, in terms of the mean difference and the standard deviation, to estimate the LoA (mean ± 1.96 SD of the difference) in the Bland-Altman plot. All statistical analyses were performed using SPSS version 26.0.
Consensus FDG PET/MR vs. PET/CT at baseline
When comparing consensus baseline FDG PET/MR with PET/CT as the reference standard (Table 3), the sensitivity and specificity were 92.5% and 97.9% for nodal sites and 80.4% and 99.5% for extranodal sites, respectively. An example of a false-positive extranodal site on PET/MR is shown in Fig. 2, where PET/MR demonstrates a compelling, FDG-avid lesion in the pancreas that was scored as a paraaortic lymph node on PET/CT. Eleven extranodal sites were false negative on PET/MR compared with PET/CT [bone marrow (2), adrenal glands (2), GI tract (1), pancreas (1), thyroid (1), and soft tissue including pleura (4)]. Figure 3 shows an example of a false-negative lesion in the small intestine on PET/MR compared to a distinct FDG uptake on PET/CT, which was also histologically confirmed by biopsy from the ileum. We found concordance in disease stage (Ann Arbor 0-IV) in 57/61 patients (93%), with a weighted kappa value of κ 0.92 (95% CI 0.85-1). When comparing limited disease (I-II) versus extended disease (III-IV), 59/61 patients were staged similarly on PET/MR and PET/CT.
Consensus FDG PET/MR vs. PET/CT for response assessment
Table 4 shows consensus interim and end-of-treatment FDG PET/MR compared with PET/CT as the reference standard. Observed agreement for nodal sites combined was 798/799 (99.9%), with a sensitivity of 100%, specificity of 99.9%, PPV of 87.5%, and NPV of 100%. The discrepancy was a false-positive axillary node on PET/MR. For extranodal sites, the observed agreement was 751/752 (99.8%), with a sensitivity of 100%, specificity of 99.9%, PPV of 80.0%, and NPV of 100%. One PET/MR examination graded the bone marrow as false positive compared to PET/CT. All of the 47 consensus interim and end-of-treatment examinations were scored similarly on FDG PET/MR and PET/CT in terms of criteria of response. Deauville score grading showed good agreement on weighted kappa, κ 0.72 (95% CI 0.54-0.89). Differences in Deauville score grading between PET/MR and PET/CT were seen in 11 patients, and only in those with complete metabolic response (Deauville score 1-3). Figure 4 shows a patient with partial metabolic response and Deauville score 5 at end-of-treatment response on both imaging modalities, where the FDG-avid lesion was scored as soft tissue on PET/CT but correctly identified in the scapula on PET/MR.
Metabolic tumor volume at baseline
MTV was measured in 33 patients with DLBCL or high-grade B-cell lymphoma at baseline. Two patients were not included in this analysis because of no detectable disease on either modality, and one was excluded because it was impossible to delineate lymphoma tissue from the kidney and bladder on both PET modalities. The ICC estimate (95% confidence interval) for MTV was 0.99 (0.98-1), showing excellent reliability (p < 0.001). A Bland-Altman plot of MTV (Fig. 6) showed a slightly higher MTV with PET/CT than with PET/MR.
Discussion
In this prospective study, we have compared FDG PET/MR to today's standard PET/CT at baseline and response assessment in 61 patients with classical HL, DLBCL, or high-grade B-cell lymphoma. Excellent kappa agreement was found for lymphoma staging of nodal sites combined when comparing consensus FDG PET/MR versus PET/CT, with a sensitivity of 92.5% and a specificity of 97.9%. The nodal regions with most discrepancies were the infraclavicular, hilar, and mediastinal lymph nodes.
Previous studies comparing MRI with FDG PET/CT for lymphoma staging have also found these nodal regions the most challenging [21,22]. In all cases of discrepancy in the current study, both modalities showed the same FDG-avid lesions, but the readers scored the lymph node region differently between FDG PET/MR and PET/CT (Fig. 1). Motion artifacts and poor quality of the DWI sequences were reported in one of the misclassifications of the hilar and mediastinal nodal regions. A few of the discrepancies can be explained by a higher SUV value on FDG PET/CT than on PET/MR, or vice versa, with the node therefore classified as a reactive lymph node on one of the modalities. For staging of extranodal sites combined, excellent kappa agreement between consensus FDG PET/MR and PET/CT was also found, but with a lower overall sensitivity and specificity for PET/MR. The extranodal regions with most discrepancies were the pancreas, bone marrow, and soft tissue/pleura. All of the lesions, except one (Fig. 3), were visible on both modalities, and the main reason for the discrepancies was different interpretation of the location of the lesions, as illustrated in Fig. 2. Other lymphoma studies have found a higher concordance of extranodal disease between FDG PET/MR and PET/CT [6,8,14]. However, compared to these studies, our patient population had a higher burden of extranodal disease, and in contrast to other study designs [22], we did not correct reader errors before the statistical analyses. This approach was chosen to reflect clinical image reading.
Table 4. Consensus PET/MR versus consensus PET/CT response assessment (13 interim and 34 end-of-treatment scans) in terms of nodal and extranodal sites, separate and combined, and in terms of Deauville score and criteria of response. *Skin, genitalia, brain, and bladder: no disease. **Weighted kappa. NA, not applicable due to no true-positive lymphoma lesions on PET/CT and no false negatives on PET/MR, or because one of the variables was constant.
Concordance in disease stage (Ann Arbor) was found in 57 of 61 patients. Three patients were understaged and one upstaged with FDG PET/MR versus PET/CT. The reason for understaging on FDG PET/MR was two patients with soft tissue or pleural involvement on PET/CT classified as nodal lesions on PET/MR, and one periaortic node with a lower SUVmax on PET/MR that was therefore interpreted as a reactive node. The discrepancy that led to upstaging on FDG PET/MR was a cervical lymph node with a higher SUVmax on FDG PET/MR, interpreted as a reactive node on PET/CT. In our cohort, this would have had clinical treatment consequences in one HL patient if staged with FDG PET/MR instead of PET/CT. In response assessment scans, we found a sensitivity of 100% and a specificity of 99.9% for FDG PET/MR for both nodal and extranodal sites combined. Seven of the 43 response assessment scans had FDG-avid disease present on both modalities. Among these patients, one patient with partial metabolic response on both examinations also had an axillary node that was graded positive on FDG PET/MR due to a higher SUVmax than on PET/CT. In addition, one bone marrow was graded positive on FDG PET/MR and labeled as soft tissue on PET/CT (Fig. 4). A recent study of pediatric HL patients found excellent agreement on Deauville score grading between FDG PET/MR and PET/CT [14]. Our study also showed perfect agreement on Deauville scores 4 and 5 between the two modalities. Differences were only found in patients with complete metabolic response (Deauville score 1-3) and would not have altered any treatment decisions.
The reason for this difference is difficult to determine, as there was no observable trend in the results. In addition, all consensus response assessment scans were graded similarly on FDG PET/MR and PET/CT in terms of criteria of response, meaning that none of the included patients would have been treated differently based on FDG PET/MR response assessment examinations versus PET/CT. When studying SUVmax and SUVpeak, both showed excellent reliability, but a slightly higher median SUVmax was found on FDG PET/MR compared to PET/CT. These findings may relate to the prolonged uptake time for PET/MR in our study (median 100 min after injection), as increased SUV has been reported in lymphoma patients up to 2 h after FDG injection [23]. Previous lymphoma studies acquiring PET/CT before PET/MR have, however, demonstrated both higher SUVmax [5] and lower SUVmax [8,14] on PET/MR, indicating that the differences in SUV between the modalities are caused by other factors, such as heterogeneity in the included patients or technical differences between the sites. To our knowledge, this is the first study to compare MTV between FDG PET/CT and PET/MR in lymphoma patients. Our results showed excellent reliability for baseline MTV in 33 patients with aggressive NHL. MTV was slightly higher with FDG PET/CT than with PET/MR, with a mean difference of 7.4 cm3. When considering the scale of MTV values in our population, both the mean difference and the LoA indicate good agreement for most of the patients and no systematic bias of clinical importance. There was no misclassification in the attenuation correction maps or other external explanation in the images with the largest differences. MTV has previously been compared between FDG PET/CT and PET/MR retrospectively in a lung and pancreatic cancer population [24], and the authors found that a threshold of SUV2.5 was more robust against imaging modality and protocol than SUV50%. Our study did, however, only use one method for calculating MTV, which could be a limitation. There is controversy regarding which method to use for calculating MTV, and ongoing work aims to standardize MTV measurements worldwide [25]. However, although the use of various contouring thresholds, such as SUV2.5, SUV4.0, and 41% of SUVmax, results in significantly different MTV values for the same PET data, the various MTV methods have been shown to predict prognosis with similar accuracy [25,26]. For that reason, only one method was used to calculate MTV in the current study. A recent study found that SUV4.0 was one of the two best methods for calculating MTV in lymphoma patients [17]. Together with our results, this study supports the use of SUV4.0 as a robust method for calculating MTV. A notable limitation of our study is that all the FDG PET/MR scans were performed after PET/CT. Future studies may scan half of the patients with PET/MR first to even out the difference in time from administration of FDG to time of imaging. Furthermore, the lack of a diagnostic contrast-enhanced CT for the radiologists to use for anatomical correlation when interpreting FDG PET/CT may have affected the interpretation of extranodal lesion localizations on the reference standard. In conclusion, our study shows an overall excellent agreement between FDG PET/MR and PET/CT in terms of lesion detection at baseline staging and response assessment for adult patients with classical HL, DLBCL, and high-grade B-cell lymphoma.
The discrepancies between the modalities were mainly due to misclassification of region and not to differences in lesion detection. The PET-based quantitative prognostic imaging biomarkers also showed good agreement between the two modalities, demonstrating that FDG PET/MR is a reliable alternative to PET/CT in this patient population.